Friday, 29 June 2012

'Dr' Richard Stallman...

...has wasted an hour of my life that I will never get back, and I am miffed!

I went along to a lecture given by the eminent founder of GNU and the free software movement, which was timetabled for an hour and a half... or an hour, depending on which site you read. If I am being kind, I would have to agree with the critics calling for the free software movement to find a new voice.

Introduction to Merchandising

'Dr' Richard Stallman, with honorary doctorates from several universities, began the talk by selling free software badges and memorabilia, such as "don't SaaS me" badges, for between £2 and £8. OK, he is taking advantage of the capitalist system to further his cause for free software. Fine, I have no problem with that, since that is roughly the way I would fund a crusade too. He went on to espouse his already familiar belief that software should be free to anyone and that software should be inclusive.

As part of his request that videos or audio of his talk be put only on sites running free software and in a free format (such as .ogg files instead of .mp3), he mentioned that Facebook is a surveillance engine.

An interesting point of view, I thought. To me, Facebook is a site offering a service which gathers data. One of the ways that data 'could' be used is 'surveillance', but you could equally argue it is also building useful marketing trends and usage statistics to improve Facebook and optimise specific areas of the site, amongst a host of other things. I happen to agree with the marketing edict that "if you are getting a service for free, then you are the product, not the service", so I can see where this could be pertinent. But in reality, in its basic form, it is just 'data' (and remember kids, data is just data until meaning is attached to it; in that case, and that case only, does it become 'information').

He also went on to criticise Windows for destroying the resource that is you: it destroys people and the freedom of the people. His reasons for stating this were not 100% clear, but he appeared to argue that it blocks that resource's access to software and mass computing, and so spoils the resource that is you.

To me, this was the point at which I figured this was going to be an abysmal talk. But I did the respectful thing and stayed for an hour and ten minutes before having to walk out in disgust.

Thought: Cultural Tech Knowledge As A Sliding Window

I have had many a discussion on various types of technology over the years and have come to the conclusion that technology shifts cultural knowledge along as a kind of 'sliding window' over time. For example, the introduction of the calculator made the populace as a whole less able to do arithmetic, but we gained the ability to use a calculator, or anything with a numeric keypad... including the numeric keypad itself. The introduction of computers with WIMP/WYSIWYG editors meant people lost their understanding of formatting syntax, their understanding of command consoles and, to some degree, their typing skills. Computers sold with GUIs meant people programmed less. In all cases, feeding the apparent human need to find the path of least resistance meant we learned these 'labour-saving' devices and forgot the harder ways of doing the same job, sometimes to our detriment in the 'intermediate years' of each.

Stallman started to talk through nine threats to freedom and free software. Surveillance was a recurring theme throughout the talk; he cited Facebook's compliance in handing over your data to the authorities on request, the propaganda campaign against sharing initiated by the copyright lobby, human rights abuses in the iPhone/iPad factories, and so on.

He mentioned that proprietary software logs usage data on people, so companies can keep tabs on what you do (there was certainly disassembled machine code in the Windows 3.x environment indicating that, if a network was present, certain information could be passed back to a host).

4 Levels of Freedom

He stated that free software allows you four freedoms (numbered 0 to 3): freedom 0, running the software as you wish, applies, like freedom 1, to individuals; freedoms 2 and 3 allow you to build a community with people "if they cooperate" (which I thought was a very authoritarian stance to take for someone with his standpoint).

He claimed 'the community' would tell you if there was something wrong, and 'the community' would give you support and help. He acknowledged that not everyone is a programmer or has the skills to program, so the community could do that for them.

Please Sir Linus, can I have my ball back?

Stallman pointed out that, whilst working on the kernel for his free software OS, he discovered that he and his team were in for a long haul and thought they would be at it for years before the kernel was done. Then along came Linus Torvalds, who slotted his Linux kernel into the middle of all the other things Stallman and his team had created, and so the platform became 'Linux'. So he would like us to call it GNU/Linux and give him and his team equal credit.

This happens to be a story I have heard from other sources, so I am not actually miffed about it.


The Digital Divide

Stallman went on to state that proprietary software creates a society which is divided and helpless: people either can or can't program, and can't modify the software. Aside from that being complete rubbish (you can modify almost any software at the machine-code/IL level if you work hard enough at it; let's not forget this is how crackers make it happen), using free software doesn't solve this problem at all. In fact, it makes the divide much worse, as fewer people now, or at any time in computing history, would be able to program their own software, so most would be both divided from the experts who can program and helpless to deal with a problem without their support. If he is arguing that writing software should be part of the fabric of society, then making free software available in the sense he means would be wholly counter-productive.


He criticised Steve Jobs on his passing last year and drew a lot of criticism for it. Indeed, as other bloggers have already pointed out, Steve Jobs brought computing to the masses and changed the game fundamentally, something Stallman has failed to do since 1983. I agree that very little of Apple/Jobs' work was new; however, what he did was identify the segment of society which needed a particular device, create a market for it, and then sell to that market. Stallman has failed to do this at any point, preferring to frame his stance as a purely technocratic crusade, when all people want is the labour-saving device to save them time, keep them in touch, save them space, get them online, let them share things and so on. Stallman fundamentally failed to show the world there was a problem and offer a solution the way Apple (and indeed a lot of proprietary software vendors) did. Apple happened to identify the problem at the right time and marketed it in the right way. Even though I personally don't like their products much at all, I have to commend the marketing skills Apple had under Steve Jobs. They had their finger on the pulse at all times, their market and brand awareness were exemplary, and very few companies have matched them since. Maybe Samsung, but they obviously were not the first.


The supply of proprietary desktops to the classroom was another issue he went on to target. My counter-argument is that schools are woefully under-prepared should anything go wrong. UK public sector ICT jobs are generally very low paid relative to the private sector and, as such, won't appeal to the very highly skilled, who can earn six times as much elsewhere. So the support isn't there. People often purchase support for peace of mind, and IT retail businesses know that; it is why the "extended 3-year warranty" is so often purchased by those not in the know. They want that peace of mind.


Similarly, schools need the support contract, and they need it with people who know the infrastructure in detail (usually having fitted it), understand the platform and are reliable. Free software doesn't have that one person or organisation they can turn to, so understandably they are worried. After all, if 999 didn't exist (it celebrates its 75th birthday this month), who would you call in a major emergency? Your mum? Your mate the badge-selling Dick-Stall man?... sorry, typo :-S


Hypocrisy 

He said words to the effect of "Proprietary software is supplied to schools in the same way drugs are supplied to children!" and then, in his pitch about "the war on sharing" (copyright and legislative frameworks designed to stop it) several minutes later, made his comment about the propaganda campaign being waged against the anti-copyright movement.


"WTF!?!?!" I hear you ask "Did he honestly make the connection between school kids and drugs, then complain about a propaganda campaign against him and his organisation?" Yes, I can confirm, he definitely did! That was the point at which the man lost all credibility with me and reduced him to a a giant, hairy blob of hypocrisy.


Don't SaaS Me!!

His nine threats to freedom also included a criticism of SaaS services such as file storage apps (implying the likes of Dropbox and G-Drive), and he referred to how the "Pat Riot Act" (Patriot Act) gives the US authorities access to your data from a provider without needing a court order. He also criticised PaaS/SaaS environments because the user effectively has to upload their data onto the service to run anything... which in my mind is the same as the punch-card/mainframe systems of days of yore. Mainframes can still store data or pipe it to a PC to be stored on disk, and yet he kept exclaiming that any system the user has no control over is a threat to [the] free software [movement]. In any case, there is no difference in security risk, as both mainframes and SaaS introduce a system the 'resource' has no control over.


In reality, people would struggle. For example, how many people in your friends list are programmers? Given you are reading this blog, there is a good chance the figure you came to overstates the norm, as we nerds/geeks tend to stick around our own ilk. We are the people too many others turn to when they have computer problems. Indeed, some geeks have developed coping strategies, so that when asked what they do for a living the reply is "erm... I am a refuse collector. I collect refuse!"


Don't Vote On Computer!

Cool! Thanks! I won't vote for you, Stallman. Whomever the Free Software Foundation decide should be their next voice, I will attempt to vote for them. I don't care who it is! Torvalds, Gates, Ballmer, Cook, whoever! Just get the chair from under that guy!


I used to have a certain respect for the Free Software Foundation. The FSF/OSS movement brought the battle to the Windows/UNIX platforms in the enterprise, at one point making up 60% of company servers and causing Microsoft to really look again at its server platforms. Indeed, I was in a focus group in London in 2001 where Linux was brought up in a question by the facilitator.


Summary

The open source community were right to splinter off from him and his ethos. Having myself held the free software movement in fairly high regard for its achievements in pushing well into proprietary software territory in the server space, I was sorely disappointed with Stallman's contradictory, hypocritical and nonsensical rantings, which seemed somewhat detached from the way market dynamics have actually worked. It is not that a lot of his sourced statements were wrong, but the meaning he attached to that 'data' was so far off it bordered on, dare I say, lunacy!


I finished my working day 30 minutes early to drag myself and a poor unfortunate along to see this guy. I lost income due to this, and I am very definitely not going to recommend seeing Stallman talk. I should have heeded the advice of others who had experienced his one-man 'cult' at work (I am afraid that is how I see it). The FSF need to find a better voice to take them to the next level. This has to happen to keep their crusade alive and give consumers options, as the more Stallman talks, the worse it will get.


I can imagine some free software veterans saying "God MAN! Shut up, SHUT UP, SHUT UUUUPPP!" and frankly, 70 minutes into the lecture, I really wished he would too!

Sunday, 24 June 2012

Windows Azure, my first look

I started writing this post from the first class lounge at Euston station, having finished my busman's holiday in London. I was at the London Windows Azure User Group's showcase of the new features of the Windows Azure platform.

To demonstrate the platform, a number of the Windows Azure evangelist team arrived at Fulham Broadway's Vue cinema (all screens booked for the event) to present across a number of tracks, with supporting acts from the likes of SolidSoft, ElastaCloud and Derivitec, some of which were headline sponsors of the event.

The one and only Scott Guthrie, Corporate VP of the Windows Azure Application Platform, started the presentations by outlining the new functionality released on the cloud platform in the last couple of weeks, coinciding with the release of the new SDK. Unfortunately, despite his stated 99.95% Azure SLA availability, he had no control over the availability of the internet connection within the cinema itself. So, predictably perhaps, the network connection went down, and despite the efforts of some brave souls handing over 3G dongles and their phone data plans, it took a long while to get back online. In the end it was done with the help of a huge 100BASE-TX cable, so long that the limits of that standard were breached; an impromptu break was organised while a laptop which could cope with such a poor wired signal was found to run the rest of the presentation.

Scott Guthrie, before the network connection went down.

I went in there with my architecture hat on, to see what the platform had to offer, how it could be used to lower costs and deliver better value, and to take away how to approach decisions on whether or not to support or use Azure in the enterprise.

An Introduction to/Recap of Cloud Computing

To recap on the phenomenon that is cloud computing: the idea behind hosting on cloud infrastructure is to provide potentially infinite scaling of computing power to meet the demands of the services you wish to provide to your customers, whilst lowering the total cost of operating the platforms you would traditionally run in house.

This includes the ability to deliver extra computing cores, extra memory and the like at a linearly scaling cost, through a 'pay-as-you-go' business model which allows you to pay only for what you use.
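To make that linear scaling concrete, here is a minimal sketch of how a pay-as-you-go bill behaves. The hourly rate is a made-up placeholder of mine, not a quoted Azure price.

```python
# Illustrative pay-as-you-go model: the bill scales linearly with both
# the number of compute units and the hours they run, with no upfront cost.
# The rate below is a hypothetical placeholder, not an Azure price.
HOURLY_RATE_PER_UNIT = 0.12  # assumed $/hour per compute unit

def usage_cost(units: int, hours: float, rate: float = HOURLY_RATE_PER_UNIT) -> float:
    """Pay only for what you use: units * hours * rate."""
    return units * hours * rate

print(usage_cost(units=2, hours=720))  # two units for a whole month: 172.8
print(usage_cost(units=2, hours=200))  # same units, working hours only: 48.0
```

Doubling the units doubles the bill and halving the hours halves it; contrast that with in-house kit, which costs the same whether busy or idle.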

The general provision of cloud services takes three main forms:
  • Infrastructure as a Service (IaaS) - This provides the 'bare-bones' platform for the end-user. The services are operated and maintained by the Azure platform. This often takes the form of the provision of a virtual machine with a guest operating system under Azure.
  • Platform as a Service (PaaS) - On Azure, in addition to the services provided for IaaS, platform services such as database servers, e-mail services and web services are provided, operated and maintained for the end-user by the data-center. In addition, Azure offers the ability to provision services such as the Azure Service Bus and WebSites through this service model type.
  • Software as a Service (SaaS) - In addition to the provision of the PaaS services, SaaS provides software services for the end user. On the Microsoft platform, the MS Office 365 environment is an example of a Software as a Service provision model. 
Below is a diagram depicting the relationship between the three types of service provision above.

Cloud service provision types.
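As a rough companion to the diagram, and only as a sketch of the conventional responsibility split (not an official Azure matrix), the three models can be thought of like this:

```python
# Conventional view of who manages what in each service model.
# This is the general IaaS/PaaS/SaaS split, not an official Azure matrix.
RESPONSIBILITY = {
    "IaaS": {"provider": ["hardware", "virtualisation", "networking"],
             "customer": ["OS patching", "middleware", "applications", "data"]},
    "PaaS": {"provider": ["hardware", "virtualisation", "networking",
                          "OS patching", "middleware (DB, web, service bus)"],
             "customer": ["applications", "data"]},
    "SaaS": {"provider": ["everything up to and including the application"],
             "customer": ["data and configuration"]},
}

for model, split in RESPONSIBILITY.items():
    print(f"{model}: you still manage {', '.join(split['customer'])}")
```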

IaaS was available in the previous release of Azure. However, what interested me, with my enterprise solution architect hat on, was the provision of PaaS infrastructure. The Service Bus element especially ties in very nicely with the ESB platforms currently making the rounds as the latest fad. So over the next few weeks, I am going to spend some of my included MSDN Azure minutes finding out about that part of the platform.

Other benefits include the implicit 'outsourcing' of the management of the platform to the in-house Azure data-center staff, of whom there are not many at all. The data-centers are designed to be operated and managed remotely, with racks of 25,000 servers set into a modified shipping container which is simply hooked up to power, network and cooling before being let loose on the world.

Scott showed a short montage of clips showing how the containers are put in place in the data-centers. When servers fail, they are simply left in place until a large enough number of servers in that container have failed, before the entire container is disconnected and shipped out to be refurbished/repaired.

Azure Cloud Availability 

Windows Azure claims 99.95% availability per month for their cloud infrastructure. This is their SLA commitment.

Now, as was made clear in other presentations on the day, there are no guarantees. The 99.95% SLA commitment is just a reflection of their confidence in the Azure platform. Those of us who have any experience with infrastructure, or an understanding of terms such as 'three nines', 'four nines' and 'five nines', will appreciate the sentiment, and also the costs involved in claiming any more. Their SLA puts them at the same level of availability as Amazon EC2, but higher than Google's cloud service offering.
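For anyone who hasn't done the 'nines' arithmetic before, here is a quick sketch of the downtime each level permits; this is simple percentage maths over an average month, nothing Azure-specific:

```python
# Downtime budget implied by an availability percentage.
HOURS_PER_MONTH = 730  # average month: 8760 hours / 12

def downtime_minutes_per_month(availability_pct: float) -> float:
    """Minutes of downtime allowed per month at the given availability."""
    return (1 - availability_pct / 100) * HOURS_PER_MONTH * 60

for pct in (99.9, 99.95, 99.99, 99.999):
    print(f"{pct}% -> {downtime_minutes_per_month(pct):.1f} minutes/month")
# 99.9%  -> 43.8   99.95% -> 21.9   99.99% -> 4.4   99.999% -> 0.4
```

In other words, the 99.95% SLA allows roughly 22 minutes of downtime a month; each extra nine (or half-nine) shrinks that budget, and the engineering cost of honouring it grows accordingly.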

The service provision of worker processes or VM instances is kept at that level by distributing 3 instances of your image (whether that be a PaaS, SaaS or IaaS website offering, or whatever) across three servers which share no single points of failure, thereby reducing the probability that your entire platform would be affected by an outage in any one of them.

This makes perfect sense: distributing the load across diverse servers distributes the risk across a wider set of failure points, reducing the chance that any single failure takes out more than one server. In addition, the Azure data centers replicate their server data across at least 500 miles of geographical space into another Azure data center. There are allegedly secure links to do this, so we were assured that the channels used to replicate the data are uncompromisable.
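The arithmetic behind the three-instance strategy is worth a line or two. Assuming each instance fails independently (a simplification on my part; real failures are often correlated), the chance of all three being down at once is the single-instance failure probability cubed:

```python
# Simplified model: three replicas with no shared single points of failure.
# If each fails independently with probability p over some window, the
# whole service is down only when all three fail at once: p ** replicas.
# Independence is an assumption; correlated failures make this optimistic.
def all_replicas_down(p: float, replicas: int = 3) -> float:
    return p ** replicas

print(all_replicas_down(0.01))  # 1% per replica -> 1e-06, i.e. 0.0001%
```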

Cloud Services Available

Azure services are divided into 4 main streams:

  • Websites - A PaaS option which allows you to host up to 10 websites for free. This applies to anybody using the Azure platform, but bandwidth out of the data-center is chargeable: 2GB of data is provided at 24 cents per month. Again, you can increase this limit if you wish, but be aware that it is an opt-out service and not an opt-in, so you will be charged should you not change the default.
  • Virtual Machines - An IaaS provision which allows for the creation of a number of virtual environments in Windows, different flavours of Linux or both. Again, georedundant storage is available.
  • Cloud Services - Additional computing functions, such as Service bus, worker role assignments, storage and the like.
  • Data Management - Different types of computing storage, such as BLOB storage, DB storage and management on platforms such as MS SQL Server and now MySQL.
A number of additional cloud services across the three layers are available, and more are being added each month. Unfortunately, I didn't see the Service Bus elements in any detail, but cloud services can be added to standard packages. These can be any or all of:
  • Web and Worker role instances - Sold in the computing unit sizes of XS, S, M, L and XL. Having had a more detailed look at the website: apart from the extra-small computing unit (a single 1GHz CPU, 768MB memory and 20GB storage), the options are based around a 'single computing unit' of 1.6GHz, 1.75GB memory and 225GB storage space. These scale linearly in the two dimensions of computing unit size and number of units (see the sketch after this list).
  • Storage - Extra georedundant storage elements (where the data is stored in a different regional data-center) can be purchased, up to 100 terabytes for each processing unit. We were told that this could amount to petabytes of data for some services.
  • Bandwidth - Same as usual
  • SQL Database - Unlike the other cloud services, this is the only one whose pricing doesn't scale linearly across the whole model. The first GB is $9.99 per month, but after that, and especially as you get towards the 150GB mark, it is dirt cheap.
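Putting the unit sizes from the list above into a sketch: the specs are as quoted on the day, but the per-tier multipliers are my own reading of 'scales linearly', not published Azure pricing, so treat them as an assumption.

```python
# Compute tiers as multiples of the 'single computing unit'
# (a 1.6GHz core, 1.75GB memory, 225GB storage), per the session.
# XS (1GHz, 768MB, 20GB) is the odd one out. The doubling per tier is
# my assumption of what "scales linearly" means, not official data.
SINGLE_UNIT = {"cores_1_6ghz": 1, "memory_gb": 1.75, "storage_gb": 225}
TIER_UNITS = {"S": 1, "M": 2, "L": 4, "XL": 8}  # assumed multipliers

def tier_spec(tier: str) -> dict:
    n = TIER_UNITS[tier]
    return {resource: amount * n for resource, amount in SINGLE_UNIT.items()}

print(tier_spec("M"))  # {'cores_1_6ghz': 2, 'memory_gb': 3.5, 'storage_gb': 450}
```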
A lot of these options are replicated in the Data Management part of the cloud service delivery model. You can choose not to have your data stored georedundantly, as Azure effectively mirrors your data across three Azure storage units anyway. The presentations around the data storage elements indicated that the data-centers contain both SSDs and mechanical drives, with the SSDs being used to cache data. I asked Scott what the SSD-to-mechanical ratios were and whether the SSDs were shared across all computing units for all users, but he couldn't give me the ratios; he was halfway through trying to fix the internet connection at the time, so the question was never answered fully.

Multi-platform Support

Various demonstrations were set up to show multi-platform deployment. These included the use of open source as well as Microsoft platforms, with the aim of showing how these run out of the box without any extra configuration.

Scott showed examples of Node.js and PHP code running straight out of the box, whilst other tracks saw Java used on the Azure platform, and Brady Gaster showed the open source track a multi-language site using classic ASP and PHP as well as the standard .NET toolkit.

Whilst I can't imagine any of this being incredibly difficult on Windows, given that it only requires an ISAPI library to be able to run any of these as things stand, it is useful not to have to do the configuration yourself.

There was also a demonstration centered around the HPC capabilities in Azure, using the Azure HPC SDK elements. However, again, due to the internet connection having problems, the demonstrations were left a little lacking in response times.

Yosi Dahan explained the use of Hadoop to the uninitiated (somewhat including myself), though the information presented didn't include much I didn't already know. There was no demo for this one, despite the billing, but given the problems with the internet that day, it probably wouldn't have fared well anyway.

Microsoft are aiming to embrace the Hadoop platform for use in the cloud. Yosi stated that the standard OSS version of the code is not enterprise-ready, given there is hardly any security surrounding it. Microsoft are working to improve this and other aspects of the platform before giving the changes back to the Hadoop community. This was the second of two open source presentations showcasing the Azure platform as a place to host OSS sites. It is an interesting tack, and one which Yosi himself stated Microsoft has not always been good at (...not at all, I would say ;-)


Clouded Architecture Considerations

The 'Thinking Architecturally' presentation by Charles Young from SolidSoft highlighted that the cloud offers a unique way in which to provision infrastructure and platform services to end-users. Charles asked the audience if anyone could bill themselves as an enterprise architect, or as working in or with enterprise architecture. Given I have a TOGAF certification and am a member of the Association of Enterprise Architects, I figured I could just about raise my hand... and was the only person to do so... cringe city! A similar question to the floor for solution architects got a much better response, including my second vote ;-)

He presented the two sides of the architecture domains to the audience. Starting with enterprise architecture, he used the cloud costing models to illustrate typical investment forces which could lead down one path or another. However, Charles didn't sing the praises of every bit of the cloud infrastructure, in either the enterprise or solution architecture domains. I happened to like that, as it showed the balanced viewpoint I was there to see. Note that architecture is often about trade-offs, and in order to make them, you need to know what those trade-off points actually are.

Charles referred to cloud computing as a 'game changer', which I certainly agree with, as the costing structure will influence the financial forces at work in the migration planning stages of any enterprise architecture strategy, though I would append the words 'once it reaches critical mass'. It will most certainly apply across the board and across the industry. The usual question with such innovation is when it will reach the critical mass necessary to spread like wildfire, taking it into all facets of the industry and making it the de facto standard for deployment.

Charles used some extreme examples of costings from his client list, one of which appeared to show that the operational costs of a 10-year cloud deployment would be 0.33% of those of a traditional in-house hosted solution. He did indicate that these were extreme examples of the money-saving effects and that most cases would be much closer; but even then, the savings would be big enough to be a 'no-brainer' for most accounting functions or investment committees, so there would be no concern from that set of functions within an enterprise.

Security

Despite the insistence of SolidSoft and others that the network infrastructure is secure (and I have no doubt it is), the traditional in-house functions responsible for the day-to-day operations of a company's infrastructure still seem to win out on the 'safety' aspect. Security managers/architects still tend to have problems with the idea of cloud infrastructures, and the security mitigations the Azure data centers have put in place do not cover all of their concerns. For one, development and infrastructure teams will have to become more adept at dealing with security issues outside their control, and make more use of secure channels to and from these data centers.

The worry, which I certainly think is a legitimate one, is how to ensure compliance with the legislative frameworks we currently have in place in some architecture landscapes. The organisations that stand to benefit the most from cloud computing are the very ones who can invest in it, pushing the market simply by their numbers, and also the very ones who stand to be hit hardest by any legislative data security issues.

Unfortunately, my question to Charles about the PCI-DSS standard was not answered, as the SolidSoft representative had no experience of implementing it in the cloud. Given I was told that a delegate before me in the queue had already asked a similar question, it is certainly something that will have to be addressed before companies in the higher levels of the standard, who stand to lose the most should they be found in violation, could sensibly take this on. For all the ease of scaling that cloud services provide, the trade-off is that companies will have to place greater emphasis on securing the channels needed to make it work realistically, against the backdrop of said legislative frameworks.

Sky High Costs?

For those who pay (or will pay) for cloud services, what is interesting about the costs of cloud models is the way they scale.

A traditional data center setup involves an enterprise buying its own hardware and running its own operations. Imagine a badly paid IT manager with 25 servers running 24/365, even though requests to them only arrive during the working day, plus the electricity for the cooling and the servers. The servers and cooling infrastructure alone are an upfront payment towards resources which may or may not be fully utilised; so too are the fixed cost of the poor, badly paid IT manager's salary and the cooling and electricity for servers which are on all night when very little is being processed.

Contrast this with the per hour model of Azure's cloud service. 

In their PaaS/IaaS model, the cost is linear in the processing resource you require. If you need a single 1GHz processor core, that is their extra-small processing unit and costs very little; so little, in fact, that you (or anyone else, regardless of whether they have purchased Azure time or not) get this model for free in their Websites environment. Getting a single dedicated 1.6GHz processor requires their small computing unit. This can be either a Windows or, now, a Linux virtual machine, which affords the purchaser two ways to target and distribute their services.
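To illustrate the contrast with the in-house model above (using entirely made-up figures of mine, since I don't have real quotes to hand):

```python
# Entirely illustrative comparison: in-house kit is an upfront purchase
# plus fixed running costs whether busy or idle; cloud units are billed
# per hour actually used. Every figure below is invented for the sketch.
INHOUSE_UPFRONT = 25 * 2000   # 25 servers at a notional $2,000 each
INHOUSE_MONTHLY = 4000        # notional power, cooling and salary share
CLOUD_RATE = 0.12             # notional $/hour per small compute unit

def inhouse_cost(months: int) -> float:
    return INHOUSE_UPFRONT + INHOUSE_MONTHLY * months

def cloud_cost(months: int, units: int = 25, hours_per_month: float = 200) -> float:
    # Only pay for the working hours the units actually run.
    return units * hours_per_month * CLOUD_RATE * months

for months in (6, 12, 24):
    print(months, inhouse_cost(months), cloud_cost(months))
# 6  74000  3600.0
# 12 98000  7200.0
# 24 146000 14400.0
```

The absolute numbers are meaningless, but the shape is the point: the in-house line starts high and climbs at a fixed rate regardless of load, while the cloud line starts at zero and tracks actual usage.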

Additionally, the WebSites cloud service offering can provide 10 'small' websites for your business, including pre-created templates (such as WordPress), potentially with a 100MB SQL Server database for free (very definitely big enough for a lot of small business needs). Both ANAME and CNAME records can now be used on Azure; there were previous concerns that the platform had trouble linking to domain name registrations which would override the 'mysitename.azurewebsites.net' style of naming, and this will go some way towards appeasing them.

Summary

On the whole, the day provided a useful insight into cloud computing on the Azure platform. There were a number of presentations and it was not possible to catch all talks from all tracks, so there will no doubt be others who will enlighten us with their different viewpoints.

The latest version of Azure certainly offers a richer environment to work in, plus the rolling, potentially monthly, deployment of further cloud services, templates, platforms and so on. I am looking forward to jumping into the service bus elements of the platform, to see how it stacks up and what functionality it has (or has not) got in comparison to an in-house ESB offering.

Watch this space...