Blog Post - Craig Harper-Ashton, Apr 18 2016

The Death of the Project Management Triangle

By Craig Harper-Ashton

How modern techniques fundamentally change project thinking

You’ll almost certainly have seen the Project Management Triangle. It’s a seminal project management tool that explains the key drivers of a project and demonstrates their effect on each other. The triangle is equilateral, with one constraint labelled on each side: cost, time and scope, with quality in the middle.

In theory, each of these ‘sides’ has an effect on the other two. Pull scope in one direction, and quality and time may take the strain. The theory certainly stands up to basic scrutiny: if you want software of higher quality, allocate more build and test resources. Equally, if you have a very tight deadline, you may expect quality to suffer and costs to increase as you add resources to complete all the work on time.

Yet as project development techniques evolve, this triangle is becoming less accurate. An Agile sprint offers a choice: limit the time spent, which may result in reduced scope and/or quality, or let scope lead, which naturally tends towards a more relaxed schedule. Although this is a distinct improvement on previous methods, I would argue that the same basic premises still hold, with subtle variations.

Newer methods such as DevOps and Continuous Delivery, however, are challenging this traditional ethos, asserting that with the adoption of some key techniques these primary drivers need not be so inversely proportional.

Automated Detection of Faulty Code

Now, we aren’t suggesting that a silver bullet has appeared that mysteriously increases coding speed or materially reduces human error. And we are still suggesting there’s an intrinsic link between cost, scope and quality.

But not in the way that you might expect.

One of the most important concepts in these new techniques is maintaining high-quality code by automating testing as much as possible. It’s critical that we detect regression issues or faulty code almost immediately, as this highlights the problem to the developer when they are most familiar with that part of their code.
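As a minimal sketch of the idea (the function and test names here are hypothetical, not drawn from any particular project), an automated regression test in Python might look like this; run on every commit by a CI server, it flags a breaking change within minutes:

```python
# test_discount.py - a minimal, hypothetical regression test.
# Run automatically on every commit so that a change which breaks
# existing behaviour is flagged while the developer is still
# familiar with the code they have just written.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rounded to two decimal places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_standard_discount():
    assert apply_discount(200.0, 10) == 180.0


def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(99.99, 0) == 99.99


def test_invalid_discount_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

The point is not the tests themselves but when they run: on every commit, so a regression points straight back at the change that caused it.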

More traditional approaches detect problems much later in the overall process. This leads to two critical problems:

  • The developer (and we’re assuming it’s the same developer) has to re-learn that area of code
  • More importantly, they have to try to retro-fit quality back into the code. Retro-fitting quality is not just difficult; it’s not really done at all.

Towards the end of a project, when deadlines are looming, testing and fault resolution become more of a numbers game than a quality game. Senior management and customers like to see defect counts go down, and this perception of quality is inherently dangerous.

Is the quality of the software really increasing, or are sticking plasters being applied?

The Power of Continuous Deployments

How often have you heard, “Well… it works fine in my environment”?

There’s nothing wrong with that statement, and it’s usually correct. But if software doesn’t work in the target deployment environment, the fact that it works beautifully in a local or test environment is irrelevant. It still has no value.

Continuous Deployments (or very high frequency deployments) are a core concept in modern techniques. The longer software resides only in local environments, the more problems can build up, especially when the software is not integrated with other components.
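As a rough sketch of the concept (the stage names and commands below are illustrative assumptions; real pipelines usually live in a dedicated CI/CD tool such as Jenkins or GitLab CI), a continuous deployment pipeline chains build, test and deploy stages and halts at the first failure:

```python
# deploy_pipeline.py - an illustrative continuous deployment pipeline.
# The stages and commands are hypothetical; the point is that every
# commit flows through the same automated gates, so integration
# problems surface within minutes rather than weeks.
import subprocess
import sys

# Each stage must pass before the next runs.
STAGES = [
    ("unit tests",        ["pytest", "tests/unit"]),
    ("build artefact",    ["python", "-m", "build"]),
    ("integration tests", ["pytest", "tests/integration"]),
    ("deploy to target",  ["./scripts/deploy.sh", "production"]),
]


def run_pipeline() -> None:
    for name, command in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: the commit that broke this stage is still fresh.
            sys.exit(f"Stage '{name}' failed; halting pipeline.")
    print("All stages passed; deployment complete.")


if __name__ == "__main__":
    run_pipeline()
```

Because deployment happens little and often, ‘it works in my environment’ is tested against the target environment continuously rather than at the end of the project.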

Crucially, the word ‘complete’ takes on a new meaning, too. It used to have a wide spectrum of meanings, especially where ‘hand-offs’ were common. A back-end developer’s interpretation of the word may be accurate from their perspective, but when a component still requires front-end development, testing, performance testing, vulnerability analysis and integration, it’s far from what a customer would consider to be complete.

DevOps teaches us that hand-offs are not only incredibly inefficient, but they also lead to confusion and loose ends. These all require management time and intervention.

Self-managing, cross-functional teams work together to complete components and tick all the boxes, and our back-end developer shares responsibility for the automated tests being complete (just as they do for their own work). As well as giving a more robust and comprehensive view of progress, this promotes discussion, collaboration and teamwork, and significantly reduces needless management intervention, so that real issues can be dealt with more proactively.

Is It Time to Reinvent the Triangle?

A number of other techniques and concepts build on these efficiencies and principles, but I hope this demonstrates that we need to reform some of our traditional views of project management, and of the triangle model in particular:

  • To increase quality, you do not necessarily, or automatically, need to increase time and cost
  • Decreasing time does not always require more money, and quality doesn’t necessarily have to be compromised
  • A developer whose code has introduced a problem in the application will resolve the issue significantly faster whilst they are most familiar with their code. If the same developer has to revisit and relearn their own code, this can slow progress by a few minutes, a few hours, or even several days.

As I’ve mentioned, these techniques don’t represent a silver bullet. You can’t simply apply them and hope that all project problems will go away, and they don’t necessarily make the development of software applications any easier.

However, they highlight problems much more quickly, remove needless complexity, and reduce administrative burden.

Having adopted many of these techniques on real projects, I’ve seen first-hand a significant improvement in a team’s efficiency, quality of deliverables and velocity in a relatively short time. We’ve increased the frequency of major, large-scale deployments from months to weeks, interim testing to ensure real quality from weeks to days, pre-deployment standard quality checks from days to hours, and problem fixes from hours to minutes. Small efficiency gains add up to significant amounts of time that can be spent developing more features and value for customers and end users.

The key, as with most things, is not to rush the change in and then sit back to watch the results. Practices should be adopted steadily and allowed time to bed in, with good, communicative feedback and tweaking as necessary.

With some time, a little patience and a positive attitude, you will see some quite amazing results.