Corporate Lock-in vs. Open Clouds

Lately there has been publicity about how major corporate cloud computing offerings are really just a play to lock you into vendor-specific solutions while the vendors collect information about you and your customers.

Richard Stallman says cloud computing is ‘stupidity’ that will ultimately result in vendor lock-in and escalating costs.

Oracle’s Larry Ellison says cloud computing is basically more of what we already do. I take him to mean that Oracle will continue business as usual while jumping on the bandwagon and adopting the term.

Tim Bray blogged a working definition of cloud computing vendor lock-in: “deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.”

Even Steve Ballmer seems to be against cloud computing, citing that consumers don’t want it. I’m not sure I understand his argument, other than that it essentially requires some proprietary software to run in the context of someone’s cloud.

Tim O’Reilly wrote an excellent blog post on Open Source and Cloud Computing. He provides this bit of laudable advice: “if you care about open source for the cloud, build on services that are designed to be federated rather than centralized. Architecture trumps licensing any time.”

While some of the paranoia about being locked in and vulnerable to a corporation is warranted, there is also an undercurrent of revulsion at the marketing. This stems from the fact that the term ‘cloud computing’ has already achieved a high silliness factor through its use to brand everything (à la 2.0). Also, this computing model is not yet sorted out and should evolve into something better with input and guidance from those who are technology savvy.

A well-constructed architecture for a distributed execution platform will provide a truly open and scalable solution for clouds and for distributed computing in general. By ‘distributed execution platform’ I mean primarily a platform that can, among other things:

  • dynamically discover resources on a network
  • dynamically provision software services where execution is most efficient
  • manage services as needs dynamically change
  • detect failures and automatically reconfigure itself to compensate
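To make the first and last of those capabilities concrete, here is a minimal sketch of the behavior I have in mind: a registry where nodes announce the services they can run, send heartbeats, and are automatically dropped from lookups when they go silent. All the names here (ServiceRegistry, register, heartbeat, lookup) are hypothetical illustrations, not part of any existing platform.

```python
import time

class ServiceRegistry:
    """Toy registry sketching dynamic discovery and failure detection."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout   # seconds of silence before a node is presumed dead
        self.services = {}       # service name -> {node id: last heartbeat time}

    def register(self, name, node):
        """A node announces it can run the named service (dynamic discovery)."""
        self.services.setdefault(name, {})[node] = time.monotonic()

    def heartbeat(self, name, node):
        """Nodes periodically confirm they are still alive."""
        if node in self.services.get(name, {}):
            self.services[name][node] = time.monotonic()

    def lookup(self, name):
        """Return live nodes for a service, silently dropping any that
        missed heartbeats (failure detection + automatic reconfiguration)."""
        now = time.monotonic()
        nodes = self.services.get(name, {})
        live = {n: t for n, t in nodes.items() if now - t < self.timeout}
        self.services[name] = live
        return list(live)

registry = ServiceRegistry(timeout=5.0)
registry.register("image-resize", "node-a")
registry.register("image-resize", "node-b")
print(registry.lookup("image-resize"))  # both nodes are currently live
```

A real platform would do the discovery over the network rather than in-process, but the essential contract is the same: callers ask for a service by name and the platform decides, moment to moment, which nodes can serve it.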

A non-proprietary platform with these capabilities must be architected to execute in data centers, across individual computers connected to the internet, and out to the edge. UIs access remote services and data independent of where they are running, much as browsers access web sites. Distributed data is accessible across peers in a p2p model and available to all services. Data is accessed through common APIs, and through proprietary interfaces shared by collections of collaborating services.
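The key property in that paragraph is location independence: a client names the service it wants, never the host it runs on. A minimal sketch of that idea, with entirely hypothetical names (catalog, resolve, call) and in-process stand-ins for what would really be remote peers:

```python
# Local handlers standing in for remote peer endpoints in this sketch.
handlers = {
    "peer-1": lambda req: {"peer": "peer-1", "result": req.upper()},
    "peer-2": lambda req: {"peer": "peer-2", "result": req.upper()},
}

# Service name -> peers currently offering it; in a real platform this
# mapping would be maintained by the discovery layer, not hard-coded.
catalog = {"text-service": ["peer-1", "peer-2"]}

def resolve(service):
    """Pick a peer offering the service. Callers never hard-code a host,
    so the service is free to move between data center, desktop, and edge."""
    peers = catalog.get(service, [])
    if not peers:
        raise LookupError(f"no peer offers {service!r}")
    return peers[0]

def call(service, request):
    """Route a request to whichever peer the platform resolves."""
    peer = resolve(service)
    return handlers[peer](request)

print(call("text-service", "hello"))
```

Because the client is bound only to the service name, the platform can reprovision or relocate the implementation without the caller changing a line, which is exactly what keeps this model open rather than vendor-locked.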

The payoff for using services executing in an open distributed execution platform will go to:

  • small companies needing to exist on strict budgets,
  • individual developers looking to create the next killer application and
  • large corporations that run virtualized services (free or for a fee) in their own data centers

These characteristics let large corporations and individuals compete on essentially the same playing field.

Note: I was hoping to coin the phrase ‘distributed execution platform’, but it has been used periodically elsewhere, most commonly at the moment by the Dryad project. Maybe it will become the next overused buzzword.