Red (R)AG to a Bull?

All (or almost all) programs are, as we know, rated RAG (Red Amber Green) via a weekly or bi-weekly Status Report. RAG heatmaps are also used across project and program portfolios, and within workstreams. Most would agree that RAG status is one of the most important ways, if not the most important way, in which a program (and its team) is judged.

And yet… this is a colour… it’s not scientific, well defined, or consistent.

A huge amount of PM and management time and emotional energy is often expended on RAG debates, so we wanted to capture a few thoughts based on many years of RAG status meetings at many different banks…

Firstly – how do we define a colour?

Of the many, many definitions we’ve seen, two stand out as being useful both to project teams and to their sponsors and stakeholders.

At the business sponsor level, a good top-down definition is:

  • Red – the program Business Case is under threat or no longer holds
  • Amber – various issues and challenges, being managed; Business Case holds
  • Green – on track per the approved Business Case (cost, benefit, IRR, etc.)

This approach is more powerful than it sounds, and is particularly helpful when senior sponsors are looking at large change portfolios and making top-down funding or exception management decisions. Assuming the original business case (for discretionary programs) has a financial payback or return, then scope changes, material delays, or client experience issues will force the Business Case to be re-run. If the revised case becomes uneconomic or falls below the Hurdle Rate, that should trigger a program review and either a re-casting or a killing of the initiative…
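To make the top-down definition concrete, here is a minimal, purely illustrative sketch (the function names, cash flows, hurdle rate, and tolerance threshold are our own assumptions, not a prescribed method) of how a revised business case might be tested against the Hurdle Rate to suggest a colour:

```python
# Illustrative only: suggests a top-down RAG colour for a revised business case.
# Cash flows, hurdle rate and tolerance below are hypothetical assumptions.

def npv(rate, cash_flows):
    """Net present value of cash_flows (year 0 first) at the given discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def top_down_rag(approved_cash_flows, revised_cash_flows, hurdle_rate, tolerance=0.05):
    """Red   - revised case is uneconomic at the hurdle rate (Business Case no longer holds)
    Amber - still economic, but materially worse than the approved case
    Green - broadly on track per the approved Business Case"""
    approved_npv = npv(hurdle_rate, approved_cash_flows)
    revised_npv = npv(hurdle_rate, revised_cash_flows)
    if revised_npv <= 0:
        return "Red"
    if revised_npv < approved_npv * (1 - tolerance):
        return "Amber"
    return "Green"

# Example: extra spend plus a one-year slip in benefits pushes the case below the hurdle.
approved = [-1000, 400, 400, 400, 400]   # year 0 spend, then annual benefits
revised  = [-1100, 0, 400, 400, 400]
print(top_down_rag(approved, revised, hurdle_rate=0.12))   # -> Red
```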

At the Program Management level, the best definition we have seen is simply this:

  • Red – we, the project team, need assistance from Sponsors to resolve issues which we cannot fix within the team. Issues could be scope, time (funding), vendor management, Operations or Client Readiness, or technology infrastructure provision, to name but a few.
  • Amber – we have the usual issues and challenges but we are managing these within the team and within our budget/scope/time envelope (some more thoughts on tolerances below)
  • Green – no material issues to report (and no, we didn’t pad the budget or the timeline…)

Too much PMO, not enough PM….

One of the most frustrating misuses of time a PM can experience is interminable internal debate around RAG definitions, often catalysed by central PMO functions. Central PMO teams like to drive consistency and hence create definitions, templates, multi-level milestone trackers, heatmap summaries, and shared drives which (in many cases) are an offline, unlinked copy of the PM’s weekly status report. The amount of copy/paste effort burned can be truly remarkable.

Is there a function that adds less shareholder value than central PMO RAG definition guideline development and discussion? Probably, but we can’t think of many. Add to this the truism that most central PMO staffers are not experienced PMs (yet believe they are qualified to tell the accountable PM how to report status) and you have a recipe for frustration, tension, or worse…

Whilst central PMO groups have their genesis in senior management wanting an independent view of red flags in particular, the reverse often results: the number of programs which run into serious problems is highly correlated with the relative size of the PMO function compared to the core change headcount. A PMO headcount above 5% of the total change cohort is, in our experience, a warning sign.

Secondly – the introduction of Game Theory

This is where the fun starts. The trouble with RAG (which is inherently a simplistic mechanism to describe the status of a complex program) is that a number of unintended consequences and unplanned behaviours tend to arise…..

Scenario A: high-level milestones, or overall status, are reported centrally and upwards to a senior level where Red ratings are taken seriously.
Response: Sponsors and PMs do not really welcome the invasive questions which arise when Red is reported, and therefore tend to underestimate problems or delay reporting Red flags. And to be fair, the additional attention which can result is not only a distraction for the team (thereby leading to more delays) but can also cause a loss of morale or momentum.

Outcome: genuinely Red situations are reported as Amber, avoid senior management attention, and it is only when the ship hits the iceberg that the genuine status becomes clear. As a result, senior management demand more centralised reporting, and the cycle continues…

Scenario B: it’s nearing annual appraisal and bonus cycle time.
Response: it is common for programs to move from Amber to Green, or Red to Amber, in this window and then to revert in the following quarter…
Outcome: 4-12 weeks during which status is artificially upgraded, and hence remedial action is not taken where it might have been needed.

Scenario C: different personalities judge status differently.
Response: optimists may want to report Green or Green/Amber throughout, whereas pessimists might prefer Red/Amber unless Green can be 100% proven.
Outcome: since we cannot perfectly predict the future, or unseen roadblocks, we cannot definitively prove On Track or prove 100% Red, and so we rely on the mindset of the PM or Sponsor…

Scenario D: central definitions of RAG, and tolerances between ratings, are required.
Response: the PM may move effort between phases or workstreams, or funding between fiscal periods, or people between teams, in order to stay below a particular time or budget threshold.
Outcome: where on-budget becomes paramount, the business objective may become obscured, or the timeline extended to a point where the business purpose becomes redundant, per our first definition of RAG above.

Any of these sound familiar?

To conclude

There is no perfect answer to defining a colour in the context of program status; RAG remains, however, a ubiquitous and simple means by which to judge status and relative status.

Our experience in the financial markets domain across dozens of change programs has taught us the following lessons:

  1. Simple definitions (which might even be quoted on every status report) are helpful.

  2. Reporting RAG per workstream and in a matrix of Scope/Time/Resources is a more granular reflection of true status than overall RAG (a minimal sketch of such a matrix follows after this list).

  3. Avoiding a regular rehash of status reporting via a central PMO function is efficient; just send the PMO team a copy of your regular status report and let them copy and paste as they see fit.

  4. Above all – an experienced and accountable Program Manager should be permitted to judge the status and report honestly – if you’ve hired them to cook the dinner, then let them choose the ingredients and let them report on progress.
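On lesson 2 above, here is a minimal, purely illustrative sketch (the workstream names, ratings, and the worst-colour roll-up rule are hypothetical assumptions, not a prescribed template) of a per-workstream Scope/Time/Resources matrix:

```python
# Illustrative only: a per-workstream RAG matrix (Scope / Time / Resources)
# with a simple worst-colour roll-up. All names and ratings are hypothetical.

SEVERITY = {"Green": 0, "Amber": 1, "Red": 2}

status_matrix = {
    "Client Onboarding": {"Scope": "Green", "Time": "Amber", "Resources": "Green"},
    "Core Technology":   {"Scope": "Amber", "Time": "Amber", "Resources": "Red"},
    "Operations":        {"Scope": "Green", "Time": "Green", "Resources": "Green"},
}

def overall_rag(matrix):
    """Roll the matrix up to a single colour by taking the worst cell."""
    cells = (cell for ratings in matrix.values() for cell in ratings.values())
    return max(cells, key=SEVERITY.get)

for workstream, ratings in status_matrix.items():
    print(f"{workstream:18} {ratings}")
print("Overall:", overall_rag(status_matrix))   # -> Red, driven by a single cell
```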

Bon appetit.

Simon Bennett
Partner

www.domain-matters.com
