Scrum – Not Just For Developers

On many projects using Scrum (especially in large organizations), key stakeholders don't believe they need to be part of the Scrum process. Stakeholders in this case are Project Directors and the Business Development or Marketing executives who represent customers. They want to have a voice but don't commit to participating in the process. They expect results and deliveries, then are disappointed when they don't get what they expect.

The key is communication! Continuous, ongoing communication through participation. Stakeholders must be part of the process because they define what gets built, as much as the users of the system and the engineers do. Participation is critical to project success. Some of the keys to success include:

  • Stakeholders must be included in more than just Sprint Reviews.
  • Stakeholders must represent and help define the goals and direction for each sprint, potentially for each backlog item as well.
  • Engineers must clearly articulate to the Stakeholders both the feasibility of identified system capabilities and the level of effort they require.
  • Assigning a Stakeholder proxy to participate is a dubious approach. Adding an extra level in the communication channel usually doesn’t help much.

Most of the time, Stakeholders are willing to participate once it's made clear they are needed. If not, it should be made clear that what gets delivered, and when, is estimated strictly from the Stakeholders' direct input. It should also be made clear that without their communication and participation the project is essentially bottlenecked or on hold. Making well-directed progress through good communication throughout the entire process applies to every member of the team.

Everyone on the team, especially the Scrum Master, needs to insist (or force) that all members participate to drive the project forward. Scrum is not just for developers.

Scaling Social Networks in a Cloud

Usually we hear about web sites and web applications becoming popular, failing to scale to handle the growing load, then crashing and burning. There are plenty of examples. Large news web sites still struggle to handle load when major world events happen. Successful social networks struggle as well; Twitter has struggled in recent months to maintain performance. Due to their usefulness or coolness, some of these sites hang on to their user base, the users hoping things will get better in a reasonable amount of time. If problems are corrected quickly, the site survives. Twitter appears to be recovering.

Building networked applications is a bit of an art. However, if you anticipate (or hope) that your application or service will grow to accommodate millions of users, you have to architect it to scale from its inception. If you don't, you'll likely be added to the list of casualties.

There are published reference architectures and lectures discussing how companies with a large internet presence have handled scalability.

During the past couple of weeks there have been success stories about popular services scaling successfully in Cloud Computing environments. Notable is BumperSticker, a Facebook application built at LinkedIn that has scaled to millions of users. It was built using Ruby on Rails and deployed in Joyent's Cloud; Joyent has posted a video describing how the application was built.

Sun Microsystems and Joyent announced a collaboration to offer free hosting (for a limited time) of social applications built for the Facebook and OpenSocial environments. This is great news for developers and small companies without access to the resources needed to scale applications to huge numbers of users. We are witnessing the advent of companies offering new grid computing services as part of a business model and marketing them under one of the latest hot buzzwords, 'Cloud Computing'.

This should make it possible for more developers to implement and share new social applications where before it would be resource prohibitive. Some of the benefits of cloud computing offerings include:

  • low cost of entry into resources required to deploy scalable applications
  • no need to worry about hardware and support software maintenance

There are downsides as well. Some of them include:

  • lock-in to vendor agreements and configurations
  • it doesn't necessarily solve the scaling issues

Granted, these types of services help host and scale applications to levels not previously possible for many individuals and small organizations. But they are not a panacea for scaling social networks or individual services. They are not a substitute for architecting your software properly, nor do they guarantee performance and reliability. Also, when deploying your creation into one of these corporate citadels, be sure you understand the SLAs being provided and the costs associated with the resource usage of the deployed software (e.g. can you afford the resources used to scale to 5 million users if that happens?).
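Before signing up, it's worth running the numbers on that 5-million-user scenario. The sketch below is a hypothetical back-of-the-envelope cost model; every rate in it (users per server, server price, bandwidth per user) is an invented placeholder, not actual pricing from Joyent, Sun, or anyone else.

```python
# Hypothetical cost model; all rates below are invented placeholders,
# not real cloud-provider pricing.
def monthly_cost(users, users_per_server=10_000, cost_per_server=250.0,
                 bandwidth_gb_per_user=0.5, cost_per_gb=0.10):
    """Estimate monthly hosting cost for a given user count."""
    servers = -(-users // users_per_server)   # ceiling division
    compute = servers * cost_per_server
    bandwidth = users * bandwidth_gb_per_user * cost_per_gb
    return compute + bandwidth

# What would these assumed rates imply at 5 million users?
print(f"${monthly_cost(5_000_000):,.2f} per month")
```

Even a crude model like this makes the question concrete: if usage spikes, the bill spikes with it, so know your break-even point before the spike arrives.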

If you are building a new Social Network application and are looking for users, join the crowd.

If you think you have a new social application that can draw 1 million+ users, make sure you're prepared for it to scale, whether you use resources walled in the citadel of a large corporation or host it yourself. Design the application to scale first, no matter where it is deployed. Then decide how it will be deployed: in a data center, a hosted web server farm, or on a distributed execution platform such as Assimilator. If a Cloud is a corporate-controlled data center, then there will be many isolated clouds. If somehow all computing resources connected to the internet are applied to deploying social networks, many more clouds will dynamically form, providing a far greater pool of available resources.

World Community Grid

Tackling the world's problems with technology. At least that's the goal of the World Community Grid. The organization uses BOINC as the infrastructure to run software on your computer, using idle CPU time for processing power. Remember running SETI@Home on your computers to process chunks of data looking for radio signals? Same technology. So there isn't much new here, but BOINC is still going strong, and the World Community Grid is a worthy cause and a simple model for utilizing idle compute resources in a worldwide grid. Among the list of active projects, their latest, launched in May 2008, aims to determine the structure of the proteins in rice. With this knowledge, better strains of rice can be developed with higher yields, stronger resistance to disease and pests, and higher nutritional value, hopefully relieving some of the pains of world hunger.

Network Overlays

Basically, a network overlay is a network built on top of another network. It uses the underlying network as a support infrastructure, without changing it, and defines its own protocols for communication between nodes. This adds capabilities to the underlying network. Many peer-to-peer networks are built this way.

The nodes in a peer-to-peer network are defined logically, change dynamically, and have their own protocols for discovering nodes and transferring data, all while using internet protocols underneath. They are overlays on the internet.
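The idea is easier to see in code. Here is a minimal sketch of an overlay, simulated in one process: the node names and topology are invented for illustration, and direct method calls stand in for the underlying internet links that a real overlay would use (TCP or UDP connections between hosts).

```python
# Minimal network-overlay sketch, simulated in a single process.
# In a real overlay, each logical link would be a TCP/UDP connection
# riding on top of the internet; here a method call stands in for it.
class OverlayNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbors = []   # logical overlay links, not physical ones
        self.seen = set()     # message IDs already handled (stops loops)

    def connect(self, other):
        """Create a bidirectional logical link between two nodes."""
        self.neighbors.append(other)
        other.neighbors.append(self)

    def flood(self, msg_id, payload):
        """Forward a message once to every reachable overlay node."""
        if msg_id in self.seen:
            return
        self.seen.add(msg_id)
        for peer in self.neighbors:
            peer.flood(msg_id, payload)

# A tiny overlay: the chain A-B-C-D defines its own topology,
# independent of how the hosts are physically connected.
nodes = {name: OverlayNode(name) for name in "ABCD"}
nodes["A"].connect(nodes["B"])
nodes["B"].connect(nodes["C"])
nodes["C"].connect(nodes["D"])

nodes["A"].flood("m1", "hello")
print(sorted(n.node_id for n in nodes.values() if "m1" in n.seen))
```

The flooding protocol here is the overlay's own; the underlying network neither knows nor cares about it, which is exactly the point.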

Overlays create the potential for new and disruptive network and service architectures. They are easily deployed on existing hosts connected to the internet. They can provide a mechanism for services to migrate across the internet, perform dynamic discovery of remote services, and adapt to handle failures and distribute load. These capabilities could be available across any and all computers connected to the internet.

There are a handful of network overlay software packages available as open source today. They provide features beyond what the internet offers. Some of the packages include:

  • RON – to improve the availability and reliability of data packet routing
  • Chord – used to build scalable distributed peer-to-peer systems
  • Bamboo – implementation of a DHT algorithm for use in peer-to-peer architectures

If the following can be combined:

  • a good DHT algorithm to manage the mapping of hosts across the internet as they dynamically join and leave
  • software that discovers and hosts services at each node in the network

the result would be a very powerful architecture supporting many different kinds of dynamic services and social networks, executing not just in the isolation of data centers but across any and all computers wishing to join the network.
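One way to picture that combination is a service registry layered on a DHT: the service name is hashed to the node responsible for it, the provider registers its endpoint there, and any peer can discover it by hashing the same name. This is a hypothetical sketch; the node names, service name, and endpoint are all invented, and a real system would route each lookup node-to-node over the overlay rather than through one in-process object.

```python
import hashlib

# Hypothetical sketch of a DHT-backed service registry. Node names,
# the service, and its endpoint are invented; a real deployment would
# route lookups hop-by-hop across the overlay.
class DhtNetwork:
    def __init__(self, node_names):
        self.nodes = {name: {} for name in node_names}   # per-node key store
        self.ring = sorted(node_names, key=self._id)

    @staticmethod
    def _id(name):
        return int.from_bytes(hashlib.sha1(name.encode()).digest()[:4], "big")

    def _owner(self, key):
        """First node clockwise of the key's position on the ring."""
        kid = self._id(key)
        for name in self.ring:
            if self._id(name) >= kid:
                return name
        return self.ring[0]   # wrap around

    def register(self, service, endpoint):
        """A host announces that it provides a service."""
        self.nodes[self._owner(service)][service] = endpoint

    def discover(self, service):
        """Any peer can find the provider by hashing the same name."""
        return self.nodes[self._owner(service)].get(service)

net = DhtNetwork(["host-1", "host-2", "host-3"])
net.register("chat", "203.0.113.5:9000")   # placeholder endpoint
print(net.discover("chat"))
```

Because registration and discovery both hash to the same owner, no central directory is needed; the mapping survives as long as the responsible node (or, in a real system, its replicas) stays reachable.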