Corporate Lock-in vs. Open Clouds

Lately there has been publicity claiming that major corporate cloud computing offerings are really just a play to lock you into vendor-specific solutions while the vendors collect information about you and your customers.

Richard Stallman says cloud computing is ‘stupidity’ that will ultimately result in vendor lock-in and escalating costs.

Oracle’s Larry Ellison says cloud computing is basically more of what we already do. I take him to mean that Oracle will continue business as usual, jump on the bandwagon, and adopt the term.

Tim Bray blogged about cloud computing vendor lock-in, defining freedom from it as: “deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.”

Even Steve Ballmer seems to be against cloud computing, citing that consumers don’t want it. I’m not sure I understand his argument, other than that it essentially requires some proprietary software to run in the context of someone’s cloud.

Tim O’Reilly wrote an excellent blog post on Open Source and Cloud Computing. He provides this bit of laudable advice: “if you care about open source for the cloud, build on services that are designed to be federated rather than centralized. Architecture trumps licensing any time.”

While some of the paranoia about being locked in and left vulnerable to a corporation is warranted, there is also an undercurrent of revulsion at the marketing. This stems from the fact that the term ‘cloud computing’ has already achieved a high silliness factor through its use to brand everything (à la 2.0). Also, this computing model is not yet sorted out and should evolve into something better with input and guidance from those who are technology-savvy.

A well-constructed architecture for a distributed execution platform will provide a truly open and scalable solution for clouds and distributed computing in general. By Distributed Execution Platform I mean primarily a platform which can, among other things:

  • dynamically discover resources on a network
  • dynamically provision software services where execution is most efficient
  • manage services as needs dynamically change
  • detect failures and automatically reconfigure itself to accommodate them
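
To make the list concrete, here is a minimal sketch of how such a platform might track services: registration, heartbeat-based failure detection, and automatic eviction of dead entries. This is an illustrative toy, not any real platform’s API; all names and the timeout value are assumptions.

```python
import time

class ServiceRegistry:
    """Toy in-memory registry: discovery plus heartbeat-based
    failure detection. All names and values are illustrative."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout   # seconds of silence before a service is presumed dead
        self.services = {}       # name -> (endpoint, last_heartbeat)

    def register(self, name, endpoint, now=None):
        self.services[name] = (endpoint, time.time() if now is None else now)

    def heartbeat(self, name, now=None):
        if name in self.services:
            endpoint, _ = self.services[name]
            self.services[name] = (endpoint, time.time() if now is None else now)

    def discover(self, now=None):
        """Return live services, evicting any that missed the heartbeat window."""
        now = time.time() if now is None else now
        dead = [n for n, (_, last) in self.services.items() if now - last > self.timeout]
        for n in dead:           # "reconfigure": drop failed services from the live set
            del self.services[n]
        return {n: ep for n, (ep, _) in self.services.items()}

reg = ServiceRegistry(timeout=5.0)
reg.register("billing", "10.0.0.7:8080", now=0.0)
reg.register("search", "10.0.0.9:8080", now=0.0)
reg.heartbeat("billing", now=4.0)   # "search" never checks in
print(reg.discover(now=6.0))        # {'billing': '10.0.0.7:8080'}
```

The point is that clients only ever ask the registry for the current live set; when a service goes silent past the timeout it simply disappears from the answers, and nothing a client depends on is hard-wired to a particular host.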

A non-proprietary platform with these types of capabilities must be architected to execute in data centers, across individual computers connected to the internet, and out to the edge. UIs access the remote services and data independent of where they may be running, much like browsers accessing web sites. Distributed data is accessible across peers in a p2p model and available to all services. Data is accessed through common APIs and through proprietary interfaces used by collections of collaborating services.
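
The location-independent data access described above can be sketched as a common get/put API with interchangeable backends, so a caller never cares which peer actually holds the data. The class and method names below are my own, hypothetical constructs, not a real platform’s interface.

```python
class LocalStore:
    """One peer's storage, fronted by the common get/put API."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

class ReplicatedStore:
    """Same get/put API over a set of peers: writes fan out to every
    peer and reads fall back across peers, so a caller never needs
    to know which node actually holds the data."""
    def __init__(self, peers):
        self.peers = peers
    def put(self, key, value):
        for peer in self.peers:
            peer.put(key, value)
    def get(self, key):
        for peer in self.peers:
            value = peer.get(key)
            if value is not None:
                return value
        return None

a, b = LocalStore(), LocalStore()
store = ReplicatedStore([a, b])
store.put("users/42", {"name": "Ada"})
print(store.get("users/42"))   # {'name': 'Ada'}
```

Because both backends present the same interface, a service written against the common API runs unchanged whether its data lives on the local machine or is spread across peers.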

The payoff for using services executing in an open distributed execution platform will be for:

  • small companies needing to exist on strict budgets,
  • individual developers looking to create the next killer application and
  • large corporations who run virtualized services (for free or for a fee) in their own data centers.

These characteristics enable large corporations and individuals to compete on essentially the same playing field.

Note: I was hoping to coin the phrase ‘Distributed Execution Platform’, but it has been used occasionally elsewhere, most commonly at the moment by the Dryad project. Maybe it will become the next overused buzzword.


2 thoughts on “Corporate Lock-in vs. Open Clouds”

  1. I agree that most lock-in fears stem from not being able to move your OS instance to a new vendor. In the current data center architecture (can we call it the legacy data center?) there is no easy way to move your operating system installation from one machine to a new vendor’s machine. But just because you can move your OS instance doesn’t mean that should be a requirement.

    Instead of focusing on being able to move an instance, everyone should focus on designing dynamically deployed software and services: bring up an instance, then run a small script that loads, installs, and configures all your software.
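
The “small script” approach works best when provisioning is idempotent, i.e. safe to re-run on a fresh or half-built instance. A minimal sketch in Python; the step names and bookkeeping are illustrative, not any real tool’s API.

```python
def provision(steps, applied=None):
    """Apply named provisioning steps in order, skipping any already
    applied, so the script is safe to re-run. Illustrative only."""
    applied = set() if applied is None else applied
    log = []
    for name, action in steps:
        if name in applied:
            log.append(f"skip {name}")
            continue
        action()                 # e.g. download, install, write config
        applied.add(name)
        log.append(f"done {name}")
    return log

installed = []
steps = [
    ("install-jdk", lambda: installed.append("jdk")),
    ("deploy-app",  lambda: installed.append("app")),
]
# Re-running on an instance that already has the JDK only deploys the app.
print(provision(steps, applied={"install-jdk"}))  # ['skip install-jdk', 'done deploy-app']
```

With provisioning expressed this way, “moving to a new vendor” is just running the same script against a blank instance somewhere else, rather than migrating a whole OS image.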

    Nice article.

  2. Thanks. I agree that development of services should focus on being independent of an OS and built so that each service is agnostic of its surroundings, in the sense that it doesn’t need to know about the OS or the underlying CPU or storage hardware. It is the execution platform that handles the provisioning of service software and the mechanisms for services to discover and communicate with each other. If required, a service can be bundled with scripts for custom installation and configuration; those scripts are run automatically after provisioning. (There is an example of provisioning Tomcat and configuring it to cluster automatically with other Tomcat instances in the Assimilator examples.)

    With the use of virtual machines and virtualization architectures, dependencies on the OS and hardware are lessened. The Assimilator (http://assimilator.sourceforge.net/) execution platform provides some of these capabilities. It is my hope that it becomes an example of an open-source execution platform for service execution and management across the internet.
