Success rate of agile projects

Agile projects are far more successful than waterfall projects. How much more successful are agile projects, and is that enough?

The CHAOS Reports have been published every year since 1994 and offer a snapshot of the state of the software development industry. The 2015 edition studied 50,000 projects around the world from fiscal years 2011 to 2015, ranging from tiny enhancements to massive systems re-engineering implementations.

For the 2015 report, the definition of "successful" was changed. Previously a project was considered successful if it was on budget, on time and on target (i.e. on scope). The Standish Group now admits that "on target" was never a good measure because it lacked any notion of customer outcome: a project could meet all three constraints and still leave the customer unsatisfied. "On target" has therefore been replaced in the success definition with a measure of customer-perceived value, and with other additions there are now six factors to consider in all: on time, on budget, on target, on goal, value and satisfaction. This resulted in a 7% drop in the rate of successful projects compared to the year before.

If we look at the agile vs. waterfall metrics, this is what the report gives us:

[Figure: Agile vs. Waterfall project outcomes by project size, CHAOS Report 2015]

What is very clear is that agile projects are a lot more successful than waterfall projects. Across all project sizes, agile projects are about 3.5 times (350%) as likely to be successful. For large projects, agile projects are about six times (600%) as likely to be successful. Another key takeaway is that regardless of approach you really should be doing small projects; waterfall in particular does not seem to scale well. Either way, choose an agile approach. These numbers have been consistent over time now, so abandon your waterfall approach!

Although this report tells us that agile clearly is the way to go, there is still plenty of room for improvement. The overall success rate for agile projects is 39%, which leaves 61% of projects challenged or failed. While this is clearly better than the 89% for waterfall, it is still a high number and an indication that there is a lot of bad agile out there.
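
As a quick sanity check of those multipliers, using only the numbers quoted here (waterfall's 11% success rate is simply 100% minus the 89% challenged or failed):

\[
\frac{\text{agile success rate}}{\text{waterfall success rate}} = \frac{39\%}{11\%} \approx 3.5
\]

So across all sizes, agile projects are roughly 3.5 times as likely to succeed, which is where the 350% figure comes from.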

I have also taken a look at The State of Agile Report 2015 and The 2015 State of Scrum Report. There are a lot of things to take from these surveys, and maybe I'll address more of my takeaways in later posts. For this post, however, I am going to focus on the success and failure rates of agile projects.

The State of Agile Report states that only 1% of the respondents admitted that their agile implementation was unsuccessful. In the email I got from VersionOne (yes, the survey is sponsored by a vendor) the wording is a little different: "only 1% had experienced agile failure". Up until the 2013 edition of this report, one of the answer options for the leading causes of failed agile projects was "None of our agile projects failed". It is a little confusing that this option has now been removed, because we no longer get any real indication of the number of failed agile projects (beyond that 1% figure). For 2012 and 2013 the numbers were 18% and 15% respectively for this option, so quite low. The State of Scrum Report says the overall success rate of projects delivered using Scrum is 62%, and that 93% of Scrum projects deployed and managed through a PMO were considered successful.

The numbers from the surveys seem to be miles away from the CHAOS Report numbers with regard to success rate. One explanation is of course that the criteria for calling a project a success are not the same, and are probably a lot more stringent in the CHAOS Report. The State of Scrum Report even has 52% of its respondents stating that the most common challenge is identifying and measuring the success of Scrum projects. Still, I have a bit of trouble believing that only 1% have actually experienced agile failure and that 93% of Scrum projects deployed and managed through a PMO are considered a success. Having said that, these surveys are by no means scientific, and the respondents may not be entirely objective about the projects they themselves worked on.

So despite the numbers in The State of Agile Report 2015 and The 2015 State of Scrum Report being very encouraging for everyone who loves agile, they are probably a bit over-optimistic. The CHAOS Report numbers are likely more credible, but the overall takeaway should be that agile works and that waterfall should already have been abandoned. That said, there is a lot of work left before we can be happy about the overall situation: 61% of agile projects are not considered a success, and that is a lot of waste in the long run. We can always do better, and that is exactly what we must do by continuously improving ourselves and our agile practices.

Continuous Integration, Continuous Delivery and Continuous Deployment

Continuous integration, continuous delivery and continuous deployment: what are the differences, and why should you adopt these practices?

An engineering practice that is more or less a given if you are using any agile approach is continuous integration (CI), where a designated server or cloud-based service polls your repository and kicks off a build whenever a change is detected. The main aim of CI is to prevent problems when integrating new code into the codebase by shortening the feedback loop, so that developers get almost instant feedback on whether they broke something with their last commit. One kind of feedback is whether the code still builds after integration; the other comes from the suite of unit tests that should also run, showing whether the new code conflicts with existing functionality or has unwanted side effects. Compare this approach to integrating only at release time, a situation that begs for integration problems, or "integration hell" as it is called in the early descriptions of XP. When you also consider that a bug found three weeks from now can take up to 24 times longer to fix than a bug found today, you quickly realize that there is a lot to gain from using CI in your project.
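
To make the polling mechanism concrete, here is a minimal sketch of a CI server's core loop in Python. It is illustrative only: the `make build` and `make test` commands and the polling interval are assumptions standing in for whatever your project actually uses, not any particular CI product's behavior.

```python
import subprocess
import time

POLL_INTERVAL_SECONDS = 60  # assumed polling interval

def current_head() -> str:
    """Return the commit hash at the tip of the checked-out branch."""
    result = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def build_and_test() -> bool:
    """Build the code, then run the unit tests; True means everything passed."""
    if subprocess.run(["make", "build"]).returncode != 0:  # assumed build command
        return False
    return subprocess.run(["make", "test"]).returncode == 0  # assumed test command

def main() -> None:
    last_seen = current_head()
    while True:
        time.sleep(POLL_INTERVAL_SECONDS)
        subprocess.run(["git", "pull", "--ff-only"], check=True)  # poll the repository
        head = current_head()
        if head != last_seen:  # a new commit was detected
            ok = build_and_test()
            # A real CI server would notify the team here (mail, chat, dashboard).
            print(f"commit {head[:8]}: {'passed' if ok else 'BROKE THE BUILD'}")
            last_seen = head

if __name__ == "__main__":
    main()
```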

If you take CI a step further you get continuous delivery: a set of practices designed to ensure that code can be rapidly and safely deployed to production, by delivering every change to a production-like environment and verifying through rigorous automated testing that the business applications and services function as expected. By deploying the newest changes to a staging environment in a fully automated way, you will most probably get comfortable with the idea of doing the same to production. Deploying to production is not itself part of continuous delivery, though; it remains a manual decision, unless you have continuous deployment.

So once you are doing continuous delivery you can take the next step: continuous deployment. With continuous deployment you automatically deploy every new change that passes all the automated tests (unit tests, acceptance tests and so on) to production. A minimal sketch of such a pipeline follows the list below. Both continuous delivery and continuous deployment have similar benefits:

  • Reliable Releases: You learn by doing and get good at what you do over and over again. With continuous delivery your deployment process and scripts are tested repeatedly before any deployment to production, and with continuous deployment you also get a lot of practice deploying to production itself. Compare this to the very unreliable process of deploying to production maybe 3-4 times a year; infrequent deployments almost always bring complications. So many things can have changed since the last deployment: not only code and configuration, but staffing as well. Half the developers could have moved on to other teams by the time the deployment to production happens, and if anything goes wrong (as it usually does in this setting), you are in a world of hurt.
  • Shorter Feedback Loop: By releasing more frequently, the development team gets user feedback more quickly. This lets the team focus on the features that matter to users and helps build the right product. Techniques like A/B testing also become a lot easier, so the team can really learn what the user needs and wants.
  • Accelerated Time to Market: Continuous delivery/deployment enables teams to quickly deliver the business value inherent in new software releases to customers. This capability can help the company stay a step ahead of the competition and create a competitive advantage.
  • Improved Productivity and Efficiency: Automation saves time for developers, testers and operations staff alike.
  • Improved Quality: Since new changes are continuously delivered to staging after passing a rigorous set of automated tests, the number of bugs that slip through the cracks decreases. The mentality of developers who are held accountable and see their changes deployed to production continuously is different from that of developers whose changes just sit in the codebase waiting to be released at a (often much) later time.
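
Here is the promised sketch. It is a hedged illustration, not a real tool: the `make` targets are placeholders for your actual build, test and deployment commands, and a single flag marks the only difference between the two practices, namely whether the final promotion to production is automatic or a human decision.

```python
import subprocess

# False gives continuous delivery (production deploys stay a manual decision);
# True gives continuous deployment (production deploys happen automatically).
AUTO_DEPLOY_TO_PRODUCTION = False

def run_stage(name: str, command: list[str]) -> bool:
    """Run one pipeline stage; True means the stage passed."""
    print(f"--- {name} ---")
    return subprocess.run(command).returncode == 0

def pipeline() -> None:
    # Placeholder commands for each stage of the pipeline.
    stages = [
        ("build", ["make", "build"]),
        ("unit tests", ["make", "test"]),
        ("deploy to staging", ["make", "deploy-staging"]),
        ("acceptance tests", ["make", "acceptance-test"]),
    ]
    for name, command in stages:
        if not run_stage(name, command):
            print(f"pipeline stopped: {name} failed")
            return  # a red stage blocks this release candidate

    # Every change that reaches this point is releasable.
    if AUTO_DEPLOY_TO_PRODUCTION:
        run_stage("deploy to production", ["make", "deploy-production"])
    else:
        print("release candidate ready; deploying to production is a manual decision")

if __name__ == "__main__":
    pipeline()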

To sum up, I would say CI really is more or less mandatory if you follow any agile approach. It does not take much effort to set up, so not having CI is really not an option if you want to succeed with agile. Continuous delivery, on the other hand, can be trickier to set up, although a lot of out-of-the-box cloud-based services exist. You also really have to put an effort into automated tests, not only unit tests but also feature-level acceptance tests; continuous delivery is simply not possible without a suite of automated tests. Once you have mastered continuous delivery and push every new change to a staging environment, you are mostly ready for continuous deployment. Continuous deployment should perhaps be the goal for most companies that are not constrained by regulatory or other requirements. It should not be IT limitations that decide whether your company does continuous deployment, but whether it is right for your company based on business needs.

Definition of Done

Driving your work to the done column is important for all Scrum teams, but what does "Done" mean?

What "done" means for a Product Backlog item varies a lot from Scrum team to Scrum team; it does not necessarily mean the same thing for your team as it does for another team, even one within your own company. The most important thing about "Done" is that everyone must understand what it means. Every Scrum team should have a clear Definition of Done (DoD) that it can use to decide whether work on a story really is complete. This makes it transparent exactly what is expected of everything the team delivers, and it also ensures quality fit for the purpose of the product and the organization.

A typical DoD in the software world could be something like:

  • the code is well written and follows our standards
  • the code has been reviewed or pair-programmed
  • it has been tested with 100% test automation at all appropriate test levels
  • it has been integrated and documented
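
Parts of such a DoD can even be checked automatically. As a hedged illustration (every command here is a placeholder for whatever your team actually runs), a CI job could refuse to mark a story done until the automatable criteria pass:

```python
import subprocess

# Hypothetical mapping from automatable DoD criteria to commands; the
# make targets are placeholders, not a real tool's interface.
DOD_CHECKS = {
    "code follows our standards": ["make", "lint"],
    "all test levels are green": ["make", "test-all"],
    "documentation is up to date": ["make", "check-docs"],
}

def dod_satisfied() -> bool:
    """Return True only if every automatable criterion passes."""
    all_passed = True
    for criterion, command in DOD_CHECKS.items():
        passed = subprocess.run(command).returncode == 0
        print(f"{'PASS' if passed else 'FAIL'}: {criterion}")
        all_passed = all_passed and passed
    # Criteria like "reviewed or pair-programmed" still need a human check.
    return all_passed

if __name__ == "__main__":
    raise SystemExit(0 if dod_satisfied() else 1)
```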

Over time, as teams mature, their DoD is expected to become more stringent, raising the quality of what they deliver. Adding to the example DoD above, the team could for instance add "deployed to production". This extra criterion is not an easy one for a lot of teams to meet, but speed matters, and continuous delivery is definitely a competitive edge for many organizations nowadays.

According to Jeff Sutherland, data analysis of a large number of Scrum teams through OpenView Venture Partners has shown that teams with a stringent definition of done, often called "Done, done", double their velocity. "Done, done" means the work is code complete and fully tested at the feature level. It is less stringent than the example given above, but enough to drive velocity up significantly.
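
To make "fully tested at the feature level" concrete, here is a small hedged illustration of the two test levels in Python; the discount function and checkout scenario are invented for the example (run with pytest):

```python
# Unit level: exercises a single function in isolation.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after subtracting a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_unit():
    assert apply_discount(100.0, 10) == 90.0

# Feature level: exercises a whole user-visible behavior end to end.
# checkout() stands in for your real application entry point.
def checkout(cart: list[tuple[str, float]], discount_percent: float) -> float:
    """Total a cart and apply an order-level discount."""
    subtotal = sum(price for _, price in cart)
    return apply_discount(subtotal, discount_percent)

def test_checkout_feature():
    cart = [("book", 25.0), ("pen", 5.0)]
    assert checkout(cart, discount_percent=10) == 27.0
```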

Code complete in Sutherland's case includes all bug fixes, which means that bugs found in a sprint should be fixed in that sprint. This is actually a question I have seen a lot lately: should bugs found in a sprint be fixed within that sprint, or should they be estimated and prioritized for later sprints? Of course bugs found in a sprint should be fixed within that sprint, given that you have a DoD that demands the code be complete!

To sum up: a DoD is important for creating transparency, so that all team members know what is expected of their delivery. Having a DoD is a lot better than not having one at all. If in addition your DoD includes the "Done, done" criterion, meaning code is complete (with bug fixes) and the work has been fully tested at the feature level, your velocity could double. And who doesn't want that?

[Comic: Dilbert, "almost done"]