Chapter 4 of Customer-Centric Project Management (978-1-4094-4312-4) by Elizabeth Harrin and Phil Peplow

Measuring Project Performance


There is a lack of consensus among practitioners and academics on the way to assess project performance and on the elusive concept of value. In this chapter we will look at current measures of project performance and explain why these approaches are no longer robust enough for projects that face the challenges mentioned in Chapter 3.

Traditionally, project performance measurement methods fall into two groups: economic and pragmatic (Aubry and Hobbs 2011). Economic measurement models are based on financial metrics and consider whether or not a project has achieved the expected financial value. Examples of economic measurement criteria are return on investment (ROI), return on capital employed (ROCE) and the use of balanced scorecards.

The challenge with these metrics is that they are all retrospective. You can forecast what the ROI for your project is likely to be (in fact you should: this will help inform the business case). However, the true cost and the true return will only be known once the project is complete, and in many cases only several months after project completion. In complex projects operating in an environment with a fluid internal political landscape, things change fast. If ROI is a key measure, you could spend a lot of time recalculating your forecast in response to a fluctuating organizational environment.
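As a simple illustration (the figures here are hypothetical): ROI is commonly calculated as (net benefit − cost) ÷ cost. A project forecast to cost £200,000 and deliver £260,000 in benefits has a forecast ROI of 30 per cent. If shifting requirements push the forecast cost to £230,000, the forecast ROI drops to around 13 per cent – and the business case may need revisiting – before a penny of actual return has been realized.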

Pragmatic measurement models consider elements beyond economic returns. These look at whether or not a project has delivered against specified success criteria. Typically, these success criteria will be defined at the beginning of the project. When the project is in the closure phase, they will be resurrected from a forgotten file and compared against what the project achieved. Again, this is not practical for complex projects that may have their requirements tweaked as they progress.

With pragmatic measurement models, project managers are not encouraged to deviate from the success criteria agreed at the outset. They – and the project team – expect to be assessed against those criteria. There is little room, if any at all, to revisit and amend the success criteria as the project progresses. This creates artificial boundaries for the project manager to work within, limiting the opportunities for creative thinking and for applying professional judgement to the challenges the project presents as it progresses.

While there are a number of different models to determine both success and value, there is little agreement on a clear definition of what success or value looks like in a project environment. ‘There does not seem to be a particular value component that is recognized consistently from any one project management implementation or context to another,’ conclude Thomas and Mullaly (2008: 301) after researching more than 65 organizations around the world. Aubry and Hobbs (2011) agree. ‘There is no consensus on the way to assess either performance or the value of project management,’ they report.

The Elusiveness of ‘Success’

Customer-centric project management posits that for any given project, the ‘real’ definition of success and value will change as that project evolves, especially in project environments addressing the challenges of fluid organizations, outsourcing, distributed and multi-generational teams, complex projects and low customer engagement. While academics will continue to spend years debating common definitions for success and value, we ask the question: why should there be only one way to determine success?

Success means different things to different people, something that as individuals we have known and accepted in our professional, educational and family lives for years. When it comes to projects, however, researchers are keen to pinpoint those elusive criteria that create ‘success’ or ‘value’, both of which are highly subjective terms.

The fact that success varies from project to project and customer to customer is to be expected in complex organizations, as things are only ‘worth’ what someone thinks they are worth. Compare this to buying a house: the house is only worth what someone will pay for it. Knowledge and experience go into estimating the value, but in the end, one individual will consider a country cottage with wisteria around the door to be worth more to them than a similar-sized property in a residential area close to a good school.

Interpreting Success Criteria: The Scottish Parliament Project

The construction of the Scottish Parliament building is an example of how different stakeholder groups interpret success differently. Donald Dewar was Secretary of State for Scotland in 1997 as work began on the new building to house the Scottish Parliament. It was decided that an architectural competition would be held to select the design of the new building. During the announcement, Dewar commented that architectural quality, value for money, accessibility and a design ‘worthy of the hopes and aspirations of the Scottish people’ [1] were important. In other words, these could be interpreted as the critical success factors for the project. The contest entrants were issued with a building user brief which also stressed the importance of a stunning architectural design, saying it must ‘reflect the Parliament’s status’, ‘promote good environmental practice’ and ‘be an important symbol for Scotland’ while offering value for money. [2] From the beginning, there was a focus on the design and the budget, and the potential for conflict between the two.

When the procurement process started, architects were bidding to deliver something within a specification of 20,740 m² and a budget of £50m. At this point, there were no guidelines on quality or timescale, although by the time programme plans had been produced at a strategic level, the team was forecasting a practical completion date of June 2001.

The project suffered cost over-runs and delays, and the Holyrood Inquiry, led by the Rt Hon Lord Fraser of Carmyllie, was set up to investigate. It concluded that the difficulties lay in establishing priorities. What was more important: the cost of the project, or the quality of the building? Or something else, like the delivery date or the size of the building? From the behaviour of the project team (in its widest sense), Lord Fraser concluded that time and quality were the two real priorities, and cost was not a significant concern. The procurement process had gone ahead without a realistic budget estimate.

The inquiry found that revisions to the design should have been carried out with user requirements in mind, but these were missing from the project documentation. The design team was trying to deliver something, but had no clear idea of what the client actually wanted. The principles of collaborative project management were not evident and there was no focus on customer centricity. The main problem seemed to be lack of communication. This took several forms:

  • Messages were filtered for political reasons before being passed on to management. For example, a quantity surveyor produced an estimate of £89m, including a margin to allow for risk; after all, this was a one-of-a-kind project, and it was highly likely there would be some unforeseen circumstances. The estimate actually passed up the line did not include this risk allowance, and was given as £62m. It is possible that the person passing the information up the line saw adherence to budget as one of the key criteria for this project.

  • Unclear vocabulary used in project reports. Terminology (project management jargon and all other business vocabularies) only works when everyone using it has a common understanding of what it means. Is ‘estimate’ the same as ‘forecast’? The misunderstandings (whether intentional or not) that this inconsistent vocabulary created did not help advance the project and may have led to different groups holding different views about what the project was actually going to deliver.

  • Absence of project reports. As early as November 1998 there were concerns over whether or not the project would be completed within budget. However, it took until March 1999 before Donald Dewar was given formal warning of any potential cost rise. If budget was a key success criterion for the project, it seems it was not communicated adequately to those responsible for reporting against it – or they were providing the information to the wrong stakeholders.

  • Lack of general communication at all levels. The project manager, Bill Armstrong, led the project until December 1998. He then resigned, but the senior stakeholders were not aware of his departure until it was covered in the media in January 1999. This is symptomatic of how the relationships between the stakeholders on this project worked overall.


The Members of the Scottish Parliament met in their new building for the first time in September 2004 – three years after the initially planned practical completion date. If delivery date was a measure of success for this construction programme, it certainly failed. Businesses in the locale suffered disruption for longer than expected during the build, and Members of the Scottish Parliament who had expected to be sitting in their new building three years earlier could have seen the delays as an inconvenience and the construction project as a failure.

The cost of the building rose significantly from the initial figure quoted in the procurement process. In 2004 the estimated final cost reported to the Finance Committee was £431m – more than eight times the £50m budget used during procurement. Value for money is inherently subjective, but it is clear that if cost was a measure of success for this construction programme, the debate between project team members about whether success was achieved would go on late into the night.

However, the building had attracted 100,000 visitors by November that year. [3] It has won nine design awards. [4] The three main parts of the building have been rated as ‘excellent’ for environmental performance and the development has increased biodiversity in the area. [5] The carbon footprint of the Holyrood complex has been reduced by 12 per cent in five years. [6] It is clear that the success criteria of accessibility, stunning design and good environmental practice have been met. To the local community who visited the building, the architectural community who have praised it and the citizens who benefit from the decreasing impact the building has on the environment, the project has succeeded.

Project success looks different to different project stakeholders.

The Problem of Post-Implementation Reviews

Not every project has a full-scale public inquiry to assess how the deliverables turned out. The vehicle for determining success in most cases is the post-implementation review (PIR), also known as a project post-mortem or post-project review. This is often the only opportunity to assess success, although some organizations adopt a more robust method of benefits tracking.

There are two issues with PIRs: they happen only at the end of the project, and they concentrate on the project management principles and methods used. They do not make the distinction between the success of the project and the success of the project management effort – and they mainly focus on the latter.

‘Postmortems are the central mechanism for continual improvement of project processes,’ writes Moore (2010). ‘Without a feedback mechanism, such as a postmortem, any process improvement is little more than informed guesswork … If you are using a workflow to underpin projects, the postmortem should be one of the final steps in that workflow.’

Whether you call the meeting a PIR, a post-mortem or a post-project review, the thing that all the terms have in common is the word ‘post’. [7] In other words, they come after the project has completed, or very near the end. Atkinson (1999) even argues that you should delay assessing success until well after the project is completed, so that the longer-term benefits can be included in the discussion.

Sometimes customers will be asked to feed into the project evaluation process (Littau et al. 2010), but at that point it is too late to do anything practical about their comments. If they complain that they weren’t kept up to date, you cannot go back in time and provide more information on a regular basis. It is a case of, ‘How can I help you now it is too late?’ In fact, research from South Africa shows that project sponsors prefer a proactive approach to feedback over the PIR process, choosing to work collaboratively with the project manager during the project to ensure that their expectations are met (Sewchurran and Barron 2008).

Another team of academics from the UK, Norway and Australia carried out a research project in 2009 into the early warning signs that indicate that complex projects are going off course. ‘Post-project review and assessment cannot change what has already happened and hence does not provide any useful early warning signal to the completed project,’ comment Klakegg et al. (2010). They also conclude that ‘human concerns can be a valuable source of early warning signals’ and advocate for discussions with stakeholders to uncover issues around the project’s health. A customer-centric approach builds on this by making these discussions a regular part of stakeholder engagement, even if the team is split across many locations.

Of course, PIR discussions are immensely valuable for continuous process improvement, and we are not advocating that you stop using this technique. Focusing on project management principles and methods used is essential to improve organizational project management processes. Could we have done better risk management? What scheduling lessons were learned? A good PIR meeting should discuss what went well and what did not go so well with this project. However, this approach takes the customer, and often outsourcing partners, out of the equation. There is no room in the process improvement discussion for whether the project team delivered a result that the customer thought was valuable, regardless of the processes used, or whether they were satisfactorily engaged throughout the project lifecycle.

Aside from process topics, a PIR is also an opportunity to discuss statistics and metrics related to the project. Where PIR discussions do include metrics, these are normally backward-looking. What was the percentage of effort spent on testing? How many days did it take the quality team to audit the deliverables? These metrics and calculations can then be incorporated into future projects so that initiatives going forward have the benefit of experience and hindsight. However, that does not help the customer of this particular project. The ‘now’ moment – the moment that the customer most cares about – is over for their project. The customer’s role in post-project discussions is simply to help you improve your working practices so that other people can benefit. And in many cases the minutes from the PIR meeting are filed away and never looked at again. Nobody benefits from this kind of routine post-mortem, as the organizational knowledge is not appropriately shared.

Customer-centric project management helps your current customer to benefit from incremental, tailored improvements to the project along the way, and it can be done alongside traditional post-mortem meetings. We will see an example of how this works in practice in Chapter 6.

Beyond the PIR: Defining Success Differently

‘There is no unequivocal definition of project success,’ writes Wake (2008). ‘Cost, schedule, specification are often used to declare it. Think about it. This isn’t success, it’s compliance. If you want success, then it is normally expressed in the opinions of yourself, your peers, subordinates or bosses, but not necessarily at all levels at the same time.’ We could spend a lot of time debating success criteria. In fact, many academics already have. The definition of project success has widened over the years. Geoghegan (2008), Dvir et al. (1998), White and Fortune (2002), Hyväri (2006), Shenhar and Dvir (2007) and others have all undertaken insightful research into success criteria. However, we share the view of Herzog (2001) that the definition of success is a variable defined by the person you interview.

As we saw, the PIR process can include both the discussion of successes and failures (euphemistically called ‘deltas’ at some of the places we have worked) as well as project metrics. In our experience, it is rare for customer satisfaction to be one of those metrics, but we believe it should be. To the customer at least, nothing else matters. Therefore we need a framework to help define what value and success look like to an individual project customer, and a way to measure that consistently and provide worthwhile metrics on an ongoing basis.
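To make the idea concrete, here is a minimal sketch of what ongoing measurement could look like, written in Python. It is purely illustrative: the 1–5 scoring scale, the alert threshold and the SatisfactionTracker name are our hypothetical assumptions, not a prescribed method, and a real team would capture richer feedback than a single number.

    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class SatisfactionTracker:
        """Hypothetical tracker: one customer satisfaction score (1-5) per reporting period."""
        alert_threshold: float = 3.0  # illustrative assumption: scores below this prompt a conversation now
        scores: list[tuple[str, float]] = field(default_factory=list)

        def record(self, period: str, score: float) -> None:
            # Capture the score in the customer's 'now' moment, not at project close.
            self.scores.append((period, score))
            if score < self.alert_threshold:
                print(f"{period}: score {score} is low - talk to the customer while it can still be fixed")

        def running_average(self) -> float:
            # A crude in-flight metric, available throughout the project rather than only at the PIR.
            return mean(score for _, score in self.scores)

    tracker = SatisfactionTracker()
    tracker.record("Reporting period 1", 4.0)
    tracker.record("Reporting period 2", 2.5)  # triggers an in-flight conversation, not a line in a PIR report
    print(f"Running average so far: {tracker.running_average():.1f}")

The point is not the code itself but the cadence: satisfaction is recorded, and acted upon, while the project is still running.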

The idea of caring about and measuring customer satisfaction is not new. As we have seen, other industries have done this for some time, and there is already some movement towards incorporating customer satisfaction metrics into project management. These metrics include measuring return business, the management of controlled disruption, adherence to quality standards and the number of customer objectives met (Bowles 2011). These are good measures, but they mainly focus on the consumers who buy the end product.

Dvir et al. (1998), Atkinson (1999) and Shenhar and Dvir (2007) have discussed customer satisfaction as a criterion for project success, and Zhai et al. (2009) discuss its positive contribution to the perception of project management value. The PMBOK® Guide (2008) defines the success of projects as product and project quality, timeliness, budget compliance and ‘degree of customer satisfaction’. We assume that these are internal customers, although the Guide provides no further guidance on how to establish or assess customer satisfaction. Managing Successful Projects with PRINCE2 (2009) says that a key success factor is that the user finds the deliverable acceptable, but concludes that the only way to achieve this is to be clear up front about expectations on both sides so that success can be assessed at the end.

This is the fundamental problem: customers do not assess success at the end. They assess it as they participate in the project lifecycle, in their ‘now’ moments. All the talk of measuring customer satisfaction is a step in the right direction, but until project management takes on board the fact that customers do not assess success in the same way that we do, the methodologies available will continue to advocate using metrics that are only available at the end of the project, and therefore cannot contribute to shaping the project management effort.

Working with the new paradigm of customer-centric project management means accepting that levels of satisfaction change as opportunities and risks present themselves during the project lifecycle. Assessing customer satisfaction only at the end of the project is not adequate.

The Exceed process, as we will see in the next chapter, helps teams define success for internal customers so that they can act in a more customer-centric way. Project customers are the group that needs to be happy before the end product ever gets to a consumer.

Key Points:

  • ‘Value’ can be an elusive commodity.

  • Success is a variable defined by the people you ask, when you ask them.

  • Collaborative project management is not enough to deliver a successful result in an environment where the definition of success is not limited to measures of time, cost and quality.

  • PIRs and project post-mortems focus more on process, and less on what was delivered.

  • Customer satisfaction is rarely assessed during a PIR, and even if it is, this retrospective view is not adequate. Customer-centric project managers should accept that satisfaction levels change as opportunities and risks present themselves during the project.
