Define repeating patterns which account for much of the failure to deliver intended value from programmes
Recognise underlying causes of the failure patterns
Apply archetypal solutions to each pattern
Record of Failure
There is abundant research documenting the failure of programmes to deliver value, together with views on the reasons. Research commissioned by Computer Associates1 concludes that the main cause of budget overspend is poor forecasting. The report also identifies scope creep, together with problems arising from interdependencies and conflicts between multiple projects, as significant factors.
These problems also reflect a lack of visibility and effective control by executives over programmes. According to the same research, 40 per cent of IT directors lack adequate visibility regarding the projects that they are implementing. This is blamed on the failure to utilise appropriate tools to manage and measure initiatives. Compounded by the complexity of today’s programme portfolios, poor measurement undermines attempts to identify strategic projects. The report found that for 60 per cent of the firms surveyed, less than 50 per cent of their programmes were considered to be strategic.
A significant finding was that only a quarter of companies surveyed appeared to carry out an ROI-based calculation in order to link projects to the business value they expected to deliver. This observation is echoed by recent research from IAG Consulting,2 which concluded that firms with poor business analysis capability have three times more project failures than successes.
A major factor in programme failure is inability to link programmes to business value and then manage them effectively to deliver the value.
When we combine research and best practice with our own experience, spanning hundreds of programmes and applications across both the private and public sectors, two disciplines stand out as both the source of, and solution to, the failure of programmes to deliver intended value:
Baseline business case: prior to implementation, gaining approval based on precise causal linkage between new business functionality delivered by the programme and value outcomes to stakeholders
Value realisation: managing the realisation of actual value delivered during and post implementation against the baseline
Repeating Failure Patterns and Solutions
A central message of this book is that we cause programmes to fail in delivering intended value through inadequate application of these two key disciplines. However, in order to find effective solutions we need to apply principles covered in previous chapters to define precisely and specifically how we cause this failure. To this end, we have identified six repeating failure patterns. The first five relate to the baseline business case and the last concerns value realisation:
Failure pattern 1: inadequate specification of stakeholder outcomes
Failure pattern 2: unrealistic quantification of benefits
Failure pattern 3: poor causal linkage between programme phases and benefits
Failure pattern 4: poor value alignment
Failure pattern 5: imprecise criteria for success and inadequate provision for risk
Failure pattern 6: inadequate tracking of benefits and overall programme value
Failure Pattern 1: Inadequate Specification of Stakeholder Outcomes
Despite advances in processes and tools, programme management is still often narrowly focused on programme outputs rather than the stakeholder outcomes that the outputs are intended to deliver. Emphasis remains on delivering against functional requirements, on time and within budget. Two vital elements are omitted: delivery of benefits, in the form of stakeholder outcomes, and the overall value of the programme. Both elements should be captured in a robust business case that sets out the financial case for undertaking the programme.
There are two parts to this repeating failure pattern. First, requirements capture is poor and technically, rather than business, driven. Although business users are consulted, their involvement is often insufficient, typically due to other commitments or lack of ownership, and the process is not performed effectively. This leads to a set of requirements that have not been fully reviewed by key users, that is, the beneficiaries who are subsequently expected to realise the benefits. As a result, programmes are progressed against requirements that are poorly defined, loosely owned or just plain wrong.
Secondly, even if the requirements are clearly defined, with full involvement and ownership from the business community, and subsequently delivered, this still may not lead to intended stakeholder outcomes. This is because the functional requirements result in outputs that do not deliver the required outcomes. To illustrate, consider a programme which increases the number of surgical operations conducted in a health authority responding to government efficiency targets. The focus on number of operations leads to a boost in productivity, achieved by performing more simple procedures whilst pushing the more demanding operations to the back of the queue. Despite meeting the target by increasing the number of operations performed, that is, its outputs, the programme does not necessarily achieve the desired outcomes of the neediest stakeholders, such as people waiting for complex and expensive operations.
Functional outputs, even when adequately defined and achieved, often fail to deliver stakeholder outcomes.
The inability to specify stakeholder outcomes, and the business capabilities needed to cause them, is a failure of both process and mindset. When a new programme is set up there is usually a scoping study to determine the feasibility of the change programme. If approved, the next phase is often detailed requirements capture comprising workshops and interviews. This generates a user requirements document from which the functional specification and system design can be derived. Despite increasing use of methodologies for defining programmes and benefits, it is still rare for requirements to be challenged rigorously against their ability to deliver stakeholder outcomes. This mindset failure is manifest in a premise that if the requirements are correctly specified, benefits will follow automatically. Often, they do not.
What are intended stakeholder outcomes, what new business capabilities are required to deliver them and who is accountable for success?
The solution to this recurring problem is to define precise intended stakeholder outcomes, that is benefits, specify the new business capabilities, that is programme deliverables, needed to cause the benefits and assign clear ownership to both deliverables and benefits. Outcomes are expressed as objectives within the Balanced Scorecard and cause and effect chains which link the objectives, called themes, define the strategy for achieving the vision. The shift in thinking from a functional focus to outcome-centred awareness and the definition of strategic themes is expanded in Chapter 5.
Failure Pattern 2: Unrealistic Quantification of Benefits
Another reason that benefits are not realised in practice is because we have not determined precisely the drivers of value which must change in order to cause the intended outcomes. These drivers translate into business performance measures in the Balanced Scorecard, the poor definition of which results in unrealistic quantification of benefits.
As discussed in Chapter 2, DCF techniques measure programme value in the context of achieving acceptable NPV, yield and payback. However, often in an attempt to meet predefined criteria, projected benefits are manipulated, even fabricated, because they are difficult to define and/or quantify. As DCF analyses require benefits to be quantified financially, business cases tend to include only hard benefits, such as reduced headcount, and omit benefits perceived to be intangible. Consequently, only a proportion of the total benefits related to any programme are accounted for. For example, consider a business case for an automated loan system within a bank. It is likely that reduced manual labour is central to the financial case. Less likely to be included is the extra revenue generated from customers by virtue of significantly reduced response times, or the effect on staff who, no longer engaged in manual processes, can redirect effort into value-added activities.
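To make the DCF mechanics concrete, here is a minimal sketch of the automated loan system case. All figures (discount rate, investment, benefit levels) are invented for illustration, not taken from the text; the point is how including even a conservatively quantified 'intangible' benefit changes the NPV.

```python
# Illustrative only: all figures are invented, not taken from the text.
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time 0, the rest annually."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.10                 # assumed discount rate
investment = -500_000       # up-front cost of the automated loan system
hard = 150_000              # annual saving from reduced manual labour
soft = 60_000               # annual revenue from faster response times

hard_only = [investment] + [hard] * 5
with_soft = [investment] + [hard + soft] * 5

print(f"NPV, hard benefits only:          {npv(rate, hard_only):>10,.0f}")
print(f"NPV, with quantified intangibles: {npv(rate, with_soft):>10,.0f}")
```

The arithmetic is trivial; the discipline is the point. Once an intangible benefit is given even a conservative figure, it can be measured, tracked and managed rather than squandered.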
The problem is that often tangible benefits alone are insufficient to make the business case viable. There may be awareness of potential intangible benefits but it is assumed that they cannot be meaningfully quantified. Consequently, there is a tendency to inflate the estimated tangible benefits and leave the intangibles unquantified. This leads to a double whammy: inflated tangible benefits are not delivered and potential intangible benefits are squandered because they are not measured and tracked. ‘What is not measured cannot be managed.’
Poor quantification of benefits leads to manipulation and failure to capture full potential programme value.
What business performance drivers must change in order to cause intended stakeholder outcomes?
The solution is to ensure that strategic themes in the Balanced Scorecard are defined explicitly through the linkage between drivers and benefits using cause and effect mapping and subsequent dynamics modelling of the business. Causal mapping will be explained, and dynamics modelling introduced, in Chapter 6.
Failure Pattern 3: Poor Causal Linkage Between Programme Phases and Benefits
This problem is encapsulated in the most feared question from a CEO, ‘OK, so precisely how much money will I get and when will I get it?’ Suppose that stakeholder outcomes are defined, together with the causal linkage to key business drivers. There is another level of linkage that is frequently neglected, the explicit causal connection between specific programme phases and the business drivers that result in benefits. The term phase is used to include any programme breakdown, such as projects, work packages, stages and so on, which output partial or complete deliverables. Although phase costs may be monitored closely, the business benefits attributable to each phase are not generally measured and managed with anything like the same rigour, if at all.
This raises two issues. The first relates to the magnitude of benefits attributable to each phase. Some phases deliver more benefits than others and without quantifying the contribution of each phase, it is not possible to optimise the programme structure around value. The second problem concerns benefits timing. Later, we will demonstrate that even modest slips in a programme schedule can have greater impact on value than significant cost escalation. What might appear as a relatively immaterial slip in a milestone may have critical consequences for the programme value as a whole, due to dependencies between phases. This is especially true of large, high-risk programmes with substantial front-end investment and slow build up of benefits; the CFO’s nightmare.
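The disproportionate impact of slippage can be sketched numerically. The figures below are invented; the key assumption is that the benefits window is fixed, for example a market opportunity or contract term that ends in year 5, so a one-year slip forfeits a year of benefits rather than merely deferring them.

```python
# Illustrative only: invented figures. Benefits are assumed to cease at
# year 5 regardless of when they start, so a slip forfeits a year of inflows.
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time 0, the rest annually."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.10
plan    = [-1_000_000] + [400_000] * 5       # benefits flow in years 1-5
overrun = [-1_200_000] + [400_000] * 5       # 20% cost escalation, no slip
slip    = [-1_000_000, 0] + [400_000] * 4    # benefits in years 2-5 only

for label, flows in [("plan", plan), ("20% overrun", overrun), ("1-year slip", slip)]:
    print(f"{label:12s} NPV = {npv(rate, flows):>10,.0f}")
```

On these assumptions the one-year slip destroys more value than a 20 per cent cost overrun, illustrating why benefits timing deserves at least the scrutiny given to phase costs.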
What level of benefits and costs are attributable to each specific programme phase and when will the value be delivered?
The solution to this problem has two parts. We first link deliverables to drivers and benefits through quantified causal threads. Then we attribute benefits across programme phases in a process called benefits attribution. At all times, we ensure that the causal linkage is traced to the strategic themes in the Balanced Scorecard. These techniques are covered in Chapter 7.
Failure Pattern 4: Poor Value Alignment
Alignment refers to an optimum condition whereby all business resources are directed to achieving the vision. It has two aspects: business alignment and programme value alignment. Business alignment refers to convergence of objectives and measures at all levels within the organisation, providing a clear line of sight on the vision from any viewpoint. Programme value alignment is concerned with directing this business convergence to programmes, which are optimised for realising greatest benefits most quickly at least cost and minimum acceptable risk.
Poor business alignment results in the different parts of the business pursuing local objectives and targets which oppose each other and counter the vision. This is often caused by conflicting criteria for success. For example, a procurement department may have measures focusing on minimising price, which drive demanding contracts with suppliers. In response, suppliers cut corners on quality in order to maintain their own margins, and this conflicts with quality measures within operations.
Poor programme value alignment is manifested as sub-optimal programme design from a value perspective. This is often the result of a technical rather than a value focus. For example, suppose a retail bank is implementing new products across its branch network. From a technical perspective, the entire communications infrastructure would be completed before training staff and launching the products for sale. However, there may well be opportunities to implement key branches first or launch products using partially manual processes, which bring forward significant cash inflows.
What is the optimum business and programme alignment that delivers greatest value, most quickly at acceptable risk?
The solution for business alignment is to cascade the Balanced Scorecard to management, operations and even individuals, across the business and extend strategic themes through all these levels. The solution for programme value alignment is modelling to ensure that programme deliverables cause positive changes in key drivers and benefits through themes. Changes in drivers and benefits are then attributed to phases so that we can determine the optimum structure and sequence of phases for value. The key output from the programme value alignment process is an implementation strategy. We explore value alignment in Chapter 8.
Failure Pattern 5: Imprecise Criteria for Success and Inadequate Provision for Risk
The next two patterns are very closely related. Failure pattern 5 concerns the definition of precise criteria for success, together with assessment of risk, and the final pattern concerns tracking the degree to which value is delivered against these criteria. To put them into the appropriate context, it is useful to compare best practice for managing programme outputs with a typical approach to ensuring that benefits are delivered.
For functional outputs, well-managed programmes follow a structured method built around two disciplines, ensuring that defined requirements are fully incorporated and that they are implemented correctly. The first discipline is verification, the second is validation. Requirements are defined and translated into a design, which is built and tested against the defined requirements. Any changes are, or should be, properly assessed and documented as part of a formal change control process. As individual modules are built they are tested against test scripts which can be traced back to approved requirements. Once complete, the system is tested as a whole for both technical compliance and user acceptance. Test scripts contain precise criteria by which compliance with requirements is measured.
Now consider how programme benefits are typically managed. The scrutiny applied to technical requirements is rarely extended to benefits and business cases do not contain the level of precision afforded to functional requirements. Also, it is even rarer for benefits to be assigned measures against which the business case can be tested.
A similar comparison can be made in relation to risk. Where technical functionality is concerned, models and/or prototypes are often built in order to test compliance and reduce inherent risk. In aerospace, for example, this is taken to extremes; simulation and prototyping are used to test new aircraft to destruction. Not only is performance validated under normal operation but, more importantly, destruction testing determines how to deal with extraordinary conditions and what it takes to break all the fail-safes. This same rigour is generally not applied to benefits and value in change programmes. Why not, when value is business-critical and the tools are there?
What are the precise criteria through which we can be certain of delivering intended value, and what would it take to destroy the financial viability of the programme?
The solution to this failure pattern is to define very specific measures by which to assess actual and forecast benefits and overall value at all stages of programme development and implementation. These measures take three forms: measures that tell us whether programme deliverables are on track, measures that track drivers and benefits, and measures that track overall programme value. We use models with dynamic causality to destruction test the financial case and build a Balanced Scorecard comprising the most critical measures. The ‘test script’ for benefits is the Balanced Scorecard.
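A crude sketch of destruction testing the financial case: sweep a key benefit driver downwards until NPV turns negative, revealing how much deterioration the programme can absorb before its viability is destroyed. A real exercise would use the full dynamic causal model described above; the single driver and all figures here are invented for illustration.

```python
# Illustrative "destruction test": invented figures, a single driver swept.
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time 0, the rest annually."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate, investment, years = 0.10, -1_000_000, 5

# Degrade the annual benefit driver in steps and watch for the break point.
for annual_benefit in range(400_000, 149_999, -50_000):
    value = npv(rate, [investment] + [annual_benefit] * years)
    verdict = "viable" if value > 0 else "DESTROYED"
    print(f"annual benefit {annual_benefit:>8,}: NPV {value:>10,.0f}  {verdict}")
```

The break point, here somewhere between 300,000 and 250,000 per year, becomes an explicit criterion for success: the Balanced Scorecard measure for that driver must never be allowed to approach it unnoticed.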
Failure Pattern 6: Inadequate Tracking of Benefits and Overall Programme Value
Closely linked to the previous failure pattern is inadequate monitoring and assurance of success, which relates to the inability to track value delivered during and post implementation. Oakland and Tanner3 explain, ‘The research indicated that this is an area where there is scope for improvement within many organisations. In particular, the area of setting clear measurable objectives for the change and evaluating their achievement may be singled out for attention.’ In the previous pattern we stated that appropriate measures for programme success were often lacking. However, even if measures are in place, they are only of use if supported by a rigorous value realisation process which monitors actual and forecast status against the baseline business case. This is rarely done.
More commonly, after approval of the business case, the emphasis shifts to delivery of compliant technical outputs on time and within budget. The implicit assumption is that projected benefits will happen automatically. However, as we stated earlier, delivery against requirements is no guarantee that intended benefits and overall value will be realised. The business case is seldom, if ever, revisited. Sometimes a Post Implementation Review (PIR) is conducted to determine whether benefits were actually achieved, but by this time the horse has bolted. What is absent is the perpetual tracking of programme benefits and overall value, together with the corrective action needed to remain on purpose and on value.
During and post implementation, how do we define the status of the programme value, correct negative variances and exploit positive changes?
Tracking for value is incorporated as an integral part of the entire programme management process, which involves the monitoring and correction of three aspects of the programme – deliverables, drivers and benefits and overall programme value. Deliverables are tracked using advanced earned value techniques to complement best practice project management. Drivers and benefits are tracked using the Balanced Scorecard. Programme value is tracked using DCF analysis. Deliverables, drivers, benefits and programme value are all linked dynamically using causal models, which are vital in assessing the impact of changes and directing effective action.
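As an illustration of the deliverables leg of this tracking, here is a minimal earned-value sketch. The figures are invented and the formulae are the standard cost and schedule performance indices, not the advanced techniques the text refers to.

```python
# Minimal earned-value sketch: invented figures, standard EVM formulae.
planned_value = 500_000          # budgeted cost of work scheduled to date
earned_value  = 400_000          # budgeted cost of work actually performed
actual_cost   = 450_000          # actual cost of that work
budget_at_completion = 1_200_000

cpi = earned_value / actual_cost      # cost performance index: <1 = over cost
spi = earned_value / planned_value    # schedule performance index: <1 = behind
eac = budget_at_completion / cpi      # simple estimate at completion

print(f"CPI {cpi:.2f}  SPI {spi:.2f}  forecast cost at completion {eac:,.0f}")
```

In the scheme described above, a falling CPI or SPI would not stop at a cost forecast: it would feed the causal model so that forecast drivers, benefits and overall programme value are revised at the same time.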
Research, best practice and direct experience converge on the need for excellence in two disciplines to redress the failure of programmes to deliver intended value: building baseline business cases and value realisation
Inadequate specification of stakeholder outcomes
Unrealistic quantification of benefits
Poor causal linkage between programme phases and benefits
Poor value alignment
Imprecise criteria for success and inadequate provision for risk
Inadequate tracking of benefits and overall programme value
Through analysis of these causal patterns we can define archetypal solutions, which are now explored in greater depth in Part II.