by Derek Morrison, 9 September 2008 (addendum added 26 September 2008)
The recent release of Google’s Chrome is not intended to offer just another web browser to a marketplace already largely divided among Microsoft’s Internet Explorer, Mozilla’s Firefox, and the eponymous Opera (plus a few other minor players). No, Chrome is meant to advance the concept of ‘cloud’ computing.
Google and other would-be cloud providers postulate that an inevitable transition is underway from desktop applications, in which the processing is undertaken by a personal computer and/or local server(s), to a more distributed approach in which applications and their processing are offered and undertaken by vast external pools of computers hosted by one or more putative ‘cloud’ companies such as Google, Amazon, and Microsoft.
In the cloud model the user’s or a team’s data is hosted in the internet cloud rather than stored locally, and so it can be accessed at any time from any location with internet access. Even the applications and the processing could be occurring in different places. To some degree existing services already demonstrate the potential of such a cloud approach, e.g. Google Apps, Google’s YouTube, or Yahoo’s Flickr service, with users remaining largely untroubled by details of the origin and architecture of the services as long as they just work. The BBC’s iPlayer development for television and its “listen again” and podcast services for radio are also, arguably, representative of this transition to the network. There are even sites dedicated to the promotion of web applications, e.g. WebWare, which proclaims:
Say No to boxed software! The future of applications is online delivery and access. Software is passé. Webware is the new way to get things done.
Ironically, the WebWare site also offers us the useful 10 Worst Web glitches of 2008 (so far), an item perhaps worth reading before the rest of this post.
The theoretical affordances associated with cloud computing, particularly at company level, could be attractive, e.g. no local capital outlay for machines and the environments to host them, apparently infinite scalability, and the ability to access and share data globally from a variety of internet devices. Some major ‘cloud’ contributors, like Google, even make free application development toolkits available, with the quid pro quo being that developers and users are effectively signing up for what is currently a free service … but may not always be so. For example, here are some clauses from the Terms of Agreement for Google Apps (as of 3 September 2008).
16. Termination. Customer may discontinue use of the Service at any time. Except as provided in Section 18, Google reserves the right at any time and from time to time to modify the Service (or any part thereof) with or without notice. Customer agrees that Google may at any time and for any reason terminate this Agreement and/or terminate the provision of all or any portion of the Service. Notwithstanding the foregoing, Google will provide at least thirty (30) days notice to Customer prior to terminating or suspending the hosted email service (if provided to Customer); provided that such hosted email service may be terminated immediately if (i) Customer has breached this Agreement or (ii) Google reasonably determines that it is commercially impractical to continue providing such hosted email service in light of applicable laws. Customer agrees that Google shall not be liable to Customer, any End User, or any third party for any modification, suspension, or termination of the Service. Sections 8 (Confidentiality), 9 (Ownership; Restricted Use), 12 (Representations and Warranties), 13 (Warranty Disclaimer), 14 (Indemnification), 15 (Limitation of Liability), 16 (Termination), 19 (Information Requests) and 20 (Miscellaneous) shall survive the expiration or termination of this Agreement.
18. Fees. Provided that Google continues to offer the Service to Customer, Google will continue to provide a version of the Service (with substantially the same services as those provided as of the Effective Date) free of charge to Customer; provided that such commitment: (i) does not apply to the Domain Service described in Section 4 above; and (ii) may not apply to new opt-in services added by Google to the Service in the future. For sake of clarity, Google reserves the right to offer a premium version of the Service for a fee.
What the ‘cloud’ could easily represent is another variant of the “locked in” model I’ve waxed lyrical about in so many Auricle posts over the years. Free services will be good enough for recruiting a user base, but the premium service will always be much better. Users could end up paying for the amount of computing capacity utilised, the data stored, or the transfers made. Of course, from an individual user’s perspective, as long as the cloud service or application allows a local copy of the data to be stored in a standard format there would always be the opportunity to migrate to another service that accepted that format; although the hassle factor in such a migration, particularly for teams, could matter more than the data format.
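To make that point concrete, here is a minimal sketch (in Python, against an entirely hypothetical export endpoint; no real cloud provider’s API is being described) of the kind of routine export a cautious user or team might run to keep a local, standard-format copy of their cloud-hosted documents:

```python
import json
import os
import urllib.request
from datetime import date

# Hypothetical endpoint and response shape, for illustration only.
EXPORT_URL = "https://cloud.example.com/api/documents?format=odf"
BACKUP_DIR = os.path.join("backups", date.today().isoformat())

def export_documents():
    """Fetch a listing of hosted documents and save each one locally."""
    os.makedirs(BACKUP_DIR, exist_ok=True)
    with urllib.request.urlopen(EXPORT_URL) as response:
        # Assumed response shape: [{"name": ..., "url": ...}, ...]
        listing = json.load(response)
    for doc in listing:
        target = os.path.join(BACKUP_DIR, doc["name"])
        urllib.request.urlretrieve(doc["url"], target)
        print("saved", target)

if __name__ == "__main__":
    export_documents()
```

The point is less the code than the precondition: none of this is possible unless the service exposes the data in a standard format in the first place.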
And what about the nature of that data? Who owns it? Who can access it and for what purposes?
For example, in a previous Auricle post, BBC iMP has rights but at what cost? (Auricle, 26 May 2005), I pointed out how perceptions of the iPlayer’s affordances need to be balanced against the loss of freedoms regarding the ability to make recordings in order to watch material when you like; freedoms that ‘old’ technologies like VCRs and HD/DVD recorders allow. No. It’s watch within seven days or it’s gone, gone, gone … The cloud giveth and the cloud taketh away 🙂
Restricted access to audio and video services is a mere irritant in comparison to other scenarios.
Medical/health records represent an example of “mission critical” data of extreme sensitivity and importance to both patients and their clinicians. Co-ordinating and processing such records on a national scale is a major undertaking. Another of Google’s putative cloud services is Google Health, which enables individuals to upload their medical records and share them with their medical practitioners. From one perspective this would appear to place the locus of control with the patient and his/her clinician rather than with other parties, with Google, of course, establishing the necessary partnerships with parts of the US health system. It will be interesting to see if this gains traction in the US and elsewhere, although the different culture and the phenomenal level of existing IT investment in UK NHS information systems, via the at times troubled national Connecting for Health initiative, may militate against any significant take-up of Google Health in the UK. However, entering Google Health into the search field of the Connecting for Health site shows that it and the rival Microsoft HealthVault are at least on the radar of those planning and designing the NHS initiative.
An open source alternative to Google Health (and one which, ironically, is partially supported by Google) is OpenMRS, whose underlying design mission appears to be the provision of a self-sustainable medical records system for developing countries that lack a sophisticated network infrastructure. Nevertheless, we can learn a lot from its design goals, because OpenMRS can interact with both aggregated reporting and patient-specific clinical systems. It also has a decision-support architecture, and so is intended not just to provide patient information but also to advise doctors on the treatment of patients. OpenMRS is intended to become an open source platform which can be adapted and adopted without the licensing constraints of proprietary systems, yet can still underpin the business models of companies focusing on its implementation and support. The demonstration on the project web site conveys some of the potential of grassroots systems like OpenMRS.
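The decision-support idea is worth a moment’s illustration. The sketch below is plain Python rather than anything resembling OpenMRS’s actual (Java-based) implementation, and the clinical thresholds are invented for illustration only; but it shows the general shape of a records system that does more than store observations:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    concept: str   # e.g. "systolic_bp" or "cd4_count"
    value: float

@dataclass
class Patient:
    patient_id: str
    observations: List[Observation] = field(default_factory=list)

def decision_support(patient: Patient) -> List[str]:
    """Apply simple rules to a patient's observations and return advice.
    Thresholds here are invented for illustration, not clinical guidance."""
    advice = []
    for obs in patient.observations:
        if obs.concept == "systolic_bp" and obs.value > 140:
            advice.append("Elevated blood pressure: consider follow-up.")
        if obs.concept == "cd4_count" and obs.value < 350:
            advice.append("Low CD4 count: review antiretroviral therapy.")
    return advice

patient = Patient("P-001", [Observation("systolic_bp", 152),
                            Observation("cd4_count", 290)])
for line in decision_support(patient):
    print(line)
```

The interesting design choice is that the rules sit alongside the records rather than in a clinician’s head, which is precisely what makes such a system useful where specialist expertise is scarce.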
There are also other concerns regarding the security and confidentiality of data when it is effectively not under the direct control of the data generators/owners. Cloud data dispersed across the internet could easily be hosted within the jurisdiction of countries able to apply legislation that gives them rights to access, and make decisions based on, that data. Consider, for example, British Columbia’s outsourcing of the administration of its health service to a Canadian subsidiary of the large US company Maximus. The arrangement has been criticised by its opponents for making BC residents vulnerable to the sweeping terms of the US Patriot Act: Canadian medical records could theoretically be accessed by US government agencies such as the FBI or Homeland Security, even though such US companies are operating in other countries. For example, it was only in July 2008 that President George W Bush signed the legislation to begin the process of repealing the statutory ban on entry to the US of all HIV-positive non-citizens. In March 2008 an HIV-positive Canadian seeking to enter the US brought the impact of this legislation into the spotlight. Although said HIV status is based on self-declaration at the border, it is interesting to reflect on the implications of data storage across national borders, or where countries assert such trans-national legislative rights, particularly when they perceive a threat to their own national security and may, consequently, be prepared to tolerate censure from the countries so affected.
The ultimate disaster scenario, of course, is the collapse or takeover of one or more ‘cloud’ companies, which may not even be based in the same country, leaving access to critical data impossible. Such distributed computing may on the surface appear to offer a robustness not possible locally, but from another perspective the interdependencies between the different components (applications, data, processing) may actually create multiple points of failure. Many of us, however, are already participating in the ‘cloud’ through our use of multiple online services, e.g. Flickr, Facebook, Blogger etc, but were these services to be withdrawn, for the majority it would at worst be a serious inconvenience rather than the compromise of some mission-critical activity. There can, however, be painful consequences arising from contracting to a cloud service that is later withdrawn. For example, prior to Google’s acquisition of YouTube it ran its own service, Google Video, which as well as running a free upload/download service also had an alternative commercial offering in which customers could buy or rent digital videos for download. Said videos were protected from copying by a digital rights management system which validated itself online. Google Video was suddenly withdrawn from service, which meant that customers could no longer access the videos they thought they had purchased; users assumed they owned such videos in the way they would a physical artefact, but this proved not to be the case. The issue attracted considerable censure at the time (e.g. example 1 and example 2). This case perhaps indicates the risks associated with a value-added service dependent on remote validation to function; a validation which may one day no longer be there, for commercial or other reasons.
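The general shape of that failure mode is easy to sketch. The following is not how Google Video’s DRM actually worked; it is a hypothetical illustration (in Python, with an invented licence-server URL) of playback being gated on a remote check that can simply vanish:

```python
import urllib.request

LICENSE_SERVER = "https://drm.example.com/validate"  # invented URL

def can_play(video_id):
    """Gate playback on a remote licence check. If the server is retired,
    the local file is intact but permanently unplayable."""
    try:
        url = "{}?video={}".format(LICENSE_SERVER, video_id)
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except OSError:
        # Company folded, service withdrawn, or network unreachable:
        # the 'purchase' evaporates along with the server.
        return False

if can_play("purchased-video-42"):
    print("playing video")
else:
    print("licence check failed: video unavailable")
```

The moment drm.example.com disappears, every “purchased” video fails the check at once; the single point of failure is baked into the product.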
Reinforcing the concerns about ‘cloud computing’ and new forms of ‘lock-in’, Jonathan Zittrain, Professor of Law at Harvard Law School and, until recently, Professor of Internet Governance and Regulation at Oxford University, postulates in his book The Future of the Internet: And How to Stop It (May 2008) that the rise of tethered devices threatens to end the ‘free’ internet. Why? Because our current inability to solve the problems, e.g. spam, viruses etc, associated with the very flexibility and openness of what he calls ‘generative’ (or innovation-generating) systems could easily drive users into the arms of the makers of tethered devices, platforms, and services, as well as other would-be gatekeepers (such as government legislators).
Tethered devices such as the Apple iPhone and Microsoft’s Xbox are characterised by the high level of control they give to the device developers rather than to the users, in a way similar to the control exercised by Google and Facebook in the provision of their platforms and services. Depending on popular platforms such as Facebook or Google means relying on business models which will not always be consonant with the strategies or values of external developers or companies. For example, third-party developers for the Apple iPhone must submit their artefacts via the App Store, and it is Apple which decides whether to make them available and what percentage of any income accrued it will deduct if they are offered for sale. Similarly, applications developed by third parties for the iPhone may prove very attractive to users, but if they are perceived to undermine Apple’s agreements with, say, mobile telecoms companies then they may never be offered to end-users. Nullriver’s iPhone application Netshare provides an exemplar of this type of issue. Netshare was an instant but very short-lived hit when uploaded to the Apple App Store. It enabled an Apple laptop to connect to the internet using the iPhone as a 3G modem, but was pulled from the App Store in short order because of the perceived freedom it granted iPhone users to access the internet using carriers other than those Apple had contracts with. A good summary was provided in the Guardian’s piece Can I use my iPhone as a modem with Netshare? (Guardian, 7 August 2008).
If you want to hear the words of the master himself, there are a couple of podcast interviews with Zittrain about his book that are really worth listening to. First there is The Future Perfect (On the Media, 18 April 2008) and then the LegalTalkNetwork podcast of 16 July 2008.
This post was not meant to be an anti-cloud polemic, because the reality is that many of us are already users of the cloud, attracted by the ease of access from anywhere and the ‘free’ or relatively low-cost services.
However …
Some data could, of course, be considered transient and disposable, as with an application like Twitter (for an example of Twitter use, listen to Micro Reporting, On the Media, 22 August 2008). Otherwise, it would seem essential that even humble individuals at home should have a data backup and recovery plan in the event of their cloud service disappearing … into the clouds. The very ease of use of these cloud applications and services is, however, what may attract the very individuals who know little (and care even less) about such apparently technical distractions. As regards the would-be corporate cloud user, the most watertight contract in the world cannot protect them from all eventualities. Ironically, a sound backup and recovery plan may well require investment in the very local expertise and infrastructure they had hoped to escape.
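For the humble individual, such a plan need not be elaborate. Here is a minimal sketch (Python; the directory names are placeholders) that takes a timestamped snapshot of a locally mirrored copy of one’s cloud data, recording per-file checksums so that a later restore can be verified:

```python
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

MIRROR = Path("cloud_mirror")   # placeholder: a local copy of cloud data
SNAPSHOTS = Path("snapshots")   # placeholder: where snapshots accumulate

def snapshot():
    """Copy the mirror into a timestamped folder with per-file checksums."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = SNAPSHOTS / stamp
    shutil.copytree(MIRROR, dest)
    # A manifest of SHA-256 digests makes a later restore verifiable.
    with open(dest / "MANIFEST", "w") as manifest:
        for path in sorted(dest.rglob("*")):
            if path.is_file() and path.name != "MANIFEST":
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                manifest.write("{}  {}\n".format(digest,
                                                 path.relative_to(dest)))
    return dest

if __name__ == "__main__":
    print("snapshot written to", snapshot())
```

Run from a scheduled task, even something this crude means the disappearance of a cloud service costs you a restore rather than your data.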
So I remain unconvinced by the argument that the future will be a world of terminals connected to a distributed ‘cloud’. That world is less the future than an updated analogue of the past world of terminals connected to a mainframe computer, albeit this time a distributed mainframe with potentially multiple points of failure. It was that tethered world that the personal computer, in its various forms, enabled users to escape from. Instead, the personal computer and other devices should enable us to dip in and out of a range of services and applications, some local and some cloud-based, without ever having to depend totally on either. As Zittrain has identified, however, we will have to take great care that we do not lose these freedoms, whether through being seduced by the apparent promise of tethered devices, or through the inadvertent consequences of government legislation introduced with the declared intention of protecting us from the “bad guys”.
Addendum (26 September 2008)
Tim Anderson’s Is it all clear skies ahead for cloud computing? (Guardian, 26 September 2008) gives an interesting account of both the risks and the potential of cloud computing. Note particularly how the recent Amazon S3 service problem is being used as an example of a “single point of failure”, but also what the solutions to this may be.