Agile Software Development Misconception

This is a follow-up to one of my previous posts, “Scrum not just for developers”. In the past couple of weeks, in discussions about software development, a number of people have told me they believe Agile development applies only to the tasks engineers perform. It’s not that these people are opposed to having managers, stakeholders, and users involved in the development process; rather, they believe the process doesn’t apply to them. This assumption is absolutely incorrect.

This misconception comes primarily from

  • not being familiar with Agile methodologies, and
  • not knowing how or when this communication should take place.

The communication most often missing from the process is with stakeholders and end users. A typical anti-pattern is a development team drifting away from interaction with stakeholders and users except at pre-scheduled meetings spaced too far apart. Stakeholders and users are critical for defining the desired functional capabilities of the system throughout each development cycle: capabilities are refined in short cycles, and new requirements arise that must be addressed as soon as possible.

It’s true some methodologies focus more on programming techniques. XP, for example, focuses on Pair Programming, Test-Driven Development, and Refactoring. But even XP depends on constant communication among all team members. Among other places, this is referenced in

It can’t be stressed enough that no matter which agile methodology your team applies, the communication and involvement of all project team members is critical to project success. A simple overview of Agile development principles can be found in the Manifesto for Agile Software Development, where one of the key principles is:

“Business people and developers must work together daily throughout the project.”

Even better, I would include end users, or people who represent them, for constant, valuable feedback throughout the development of the project.


Open Source and Cloud Computing

These are some comments I have on Tim O’Reilly’s insightful post about open source and cloud computing.

There are interesting thoughts in the post about clouds becoming monolithic, and about how control of data by a few privileged companies will drive the development of the services that access and manipulate our information. Market entry for smaller organizations with new and better ideas becomes more difficult. This is all probably true.

One of the main contributing factors is that we all let it happen. Most people are not technical and are primarily concerned with an application performing some function adequately for their needs. If it happens to be a service built and hosted by a monopoly, most people don’t care; at least not until they grow weary of the application, perceive there may be better alternatives, and then want them. So the evolution of monopolies with monolithic systems arises from organizations pushing their services for profit (which is fine) and the majority of service users being concerned only with their own satisfaction. Open source, open APIs, and standards don’t solve this problem.

Open source does make it easier for the technically savvy to build new software systems and services. It doesn’t solve the issue of easily publishing services for wide usage. It doesn’t solve the problem of having access to the network, server, and storage resources those services may need. If you have a choice of services that represent these resources, you begin to solve the problem. If those services can be discovered dynamically, rather than referenced at static locations, the solution is better still. The process becomes:

  1. I decide I want a type of service (maybe storage)
  2. I look up what my choices might be
  3. I discover which ones are available
  4. and select one.
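The four steps above can be sketched in code. This is purely a hypothetical illustration: the registry, service names, availability set, and selection policy are all invented here, and no real platform API is implied.

```python
# Hypothetical sketch of the lookup/discover/select steps above.
# The registry contents and service names are invented for illustration.

# A registry mapping service types to candidate providers.
REGISTRY = {
    "storage": ["alpha-store", "beta-store", "gamma-store"],
}

# Availability would normally come from dynamic health checks; a
# static set stands in for that probe here.
AVAILABLE = {"beta-store", "gamma-store"}


def lookup(service_type):
    """Step 2: list the candidate providers for a service type."""
    return REGISTRY.get(service_type, [])


def discover(candidates):
    """Step 3: filter candidates down to the ones currently reachable."""
    return [c for c in candidates if c in AVAILABLE]


def select(available):
    """Step 4: pick one provider (here, simply the first)."""
    return available[0] if available else None


# Step 1: I decide I want a type of service (storage), then
# look up, discover, and select.
choice = select(discover(lookup("storage")))
```

The key design point is that the caller never hard-codes a provider location; it names a service type and lets lookup and discovery resolve it at run time.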

Standards don’t necessarily help either. Many existing protocols are sufficient for communication and data transfer, and standard APIs satisfy groups of service providers that may share resources and software. But if everyone uses the same standard, doesn’t it become monolithic and antiquated once it no longer meets the needs of, or provides access to, newly emerging technologies? Having multiple standards and options is usually a better alternative. I wish I could credit the original author of this quote, which has been around for at least 15 years: “The great thing about standards is that there are so many to choose from.”

So the answer to keeping monolithic organizations from squeezing out smaller companies’ new ideas is not the use of open source and standards (although open source is beneficial). The answer lies in creating a platform that executes on compute resources across the internet, allowing, among other things:

  • looking up desired services
  • identifying whether they have the desired capabilities
  • discovering where these services may be available
  • selecting the desired services for use

By services, I mean software that represents a set of capabilities, implemented as:

  • a software component,
  • a component in conjunction with other components or services, or
  • a software component utilizing hardware resources such as CPU, storage, and network bandwidth.

These services are dynamically hosted wherever they can run most efficiently.
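One way to picture that composition is a service descriptor recording a service’s capabilities, the components it builds on, and the hardware resources it uses. Again, this is a hypothetical sketch; the post defines no concrete schema, and every field name below is invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical service descriptor; the field names imply no real
# platform schema and exist only to illustrate composition.

@dataclass
class Service:
    name: str
    capabilities: set = field(default_factory=set)
    # Other services/components this one composes with.
    depends_on: list = field(default_factory=list)
    # Hardware resources used, e.g. {"cpu": 2, "storage_gb": 500}.
    resources: dict = field(default_factory=dict)


# A software component utilizing hardware resources (storage)...
store = Service("block-store", {"storage"}, resources={"storage_gb": 500})

# ...composed with another component into a higher-level service.
backup = Service("backup", {"backup", "restore"}, depends_on=[store])
```

A dynamic host could then inspect `resources` and `depends_on` to place each descriptor where it can run most efficiently.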

The answer lies in giving everyone the opportunity to create and publish services for use on a platform accessible by everyone. Think of it as a layer on top of the existing internet: a network overlay within which everyone has access to services. There would be a large collection to choose from, with an ever-changing selection. This is analogous to the selection of services we choose in our everyday lives for food, auto repair, home services, and so on. The answer doesn’t lie in enforcing open source and standards; it lies in creating an open execution platform enabling all to create and provide services.