Chapter 4 of Project Reviews, Assurance and Governance (978-0-5660-8807-0) by Graham Oakes

Chapter 4

The Review Parameters

Chapter 3 introduced a simple model to help think about the way we set up reviews. This chapter discusses the parameters of that model in more detail, and looks at their influence on the way we conduct reviews. It then explores how these parameters vary across the project lifecycle, informing the type of review that we might conduct at any given point.

Outputs

We undertake reviews in order to achieve some outcome, so it is useful to start with outputs – the information that the review is intended to produce. Reviews typically produce two types of output:

  • Information to inform a decision about the project: This may be a decision as to whether to continue investment in the project, for example a gate review that informs a go/no-go decision. It may be a decision about when to announce a new product, or about whether to initiate contingency actions on a dependent project. This information is primarily intended to help stakeholders outside the project team.

  • Information to inform the way we run the project: This may be recommendations to improve project management processes, or identification of risks that need to be managed, or escalation of issues that need attention by the sponsor, and so on. This information is intended primarily for people within the project team.

 

As the saying goes, no-one can serve two masters. Reviews that try to support both internal and external stakeholders risk serving neither well. As we set up reviews, therefore, it pays to identify which stakeholders represent our primary audience and what information they need to inform their decisions and actions. This then determines the type of information we gather, the way we analyse it and the way we present our findings. (Of course, much of the information we produce may turn out to be useful to both sets of stakeholders. That shouldn’t prevent us focusing on our primary audience. Being clear about this primary audience doesn’t just help us focus our energy: it also helps us manage the expectations of other people we talk to during the course of the review.)

Reviews may also produce other outputs. They may, for example, trigger escalation processes if they come across certain types of information. Evidence of fraud or criminal activity is the obvious (although, fortunately, uncommon) escalation point – if we uncover such evidence, we need to notify the appropriate authorities. Likewise, if we find evidence of substantial problems with a project, we may need to escalate outside the ‘standard’ reporting lines. Again, it helps to be clear up front about what sort of issues we will need to escalate, and how we will escalate them, both for our own effectiveness and for the sake of managing expectations.

Finally, the visibility into project status and operations that reviews provide may help instil confidence that the project portfolio is under control. This may be less tangible than the other outputs, but if it helps the organization to address additional opportunities, then it’s an invaluable outcome.

Control Parameters

Chism (2007) notes that three things are at the heart of an effective peer review process: criteria, evidence and standards. Criteria and standards appear in the systems model as control parameters. Standards are subdivided into baseline (that which is agreed for this project prior to the review) and reference models (which are agreed more widely, for example by organizational policy). Together, these define the reference points against which we will review the project. Criteria then define how we will assess whether the project remains aligned to these standards. We will come to evidence gathering in Chapter 6.

Baseline

The baseline defines those parts of the project that have been agreed prior to the review. The baseline may be axiomatic or it may have been covered by earlier reviews. For example, the organization’s business strategy is typically axiomatic: we don’t question it during a project review, but rather we review the project’s business case to confirm it’s aligned to the overall strategy. Likewise, once the business case has been reviewed and approved, subsequent reviews don’t need to re-examine it in detail: they can focus on questions such as whether the project plans can feasibly deliver the objectives it defines.

These examples illustrate how we use the baseline as a point of reference: we review the current state of the project to confirm it’s consistent with (‘aligned to’, ‘can feasibly deliver’) the baseline. We can’t necessarily ignore the baseline completely as we do this (in the above example, we may want to confirm that any assumptions contained in the business case still hold at subsequent reviews), but understanding our baseline helps us focus our energy elsewhere.

Reference Models

Reference models contain the body of standards, policies, commonly agreed practices, and so on, that apply to projects of this type in this organization. For a construction project, they might include elements such as building regulations, planning guidelines and organizational procurement policies. For a software engineering project, they may include architectural principles, coding standards, user interface design guidelines, security policies, lifecycle models, and so on.

We typically review the project to assess whether it conforms to these reference models. This can engender a lot of debate. Often we have multiple, overlapping standards to choose from: people will have different opinions as to which ones apply. Likewise, every project is different and people will have opinions as to how the general standards need to be varied to suit the circumstances of this specific project.

These debates eat time. It generally doesn’t help to get caught up in them. The best way I know to maintain focus during the review is to agree up front which reference models apply for this project, and how and why they have been varied. This is done either when setting the initial terms of reference with the review sponsor, or when having the initial meeting with the project leadership. A project that can’t agree its reference models probably has other communication problems to contend with too, so failure to agree on reference models is itself useful information for reviewers. (Audits, by the way, generally don’t have this problem – the auditors get to define which reference models they’re going to assess the project against.)

Reviewers also have their own toolkit of reference models. We use checklists to capture useful practices for reviews – useful questions to ask, or common problems that occur on projects in our organization, or such like. These are a type of reference model. Chapter 5 discusses how we can use these reference models as an element of organizational memory, capturing common issues and good practices that we can disseminate to other projects through the review process.

Criteria

The baseline and reference models can represent a large body of information in themselves. Trying to assess a project against every applicable standard, for example, would probably take longer than the project. Criteria tell us which elements of the baseline and which reference models we want to assess the project against, and how we want to make that assessment. For example:

  • The review may focus on assessing the project’s progress against the baseline schedule and budget.

  • It may focus on assessing whether the project’s outputs remain aligned to the objectives contained in the baseline business case.

  • We may be more interested in whether particular project management processes and controls are consistent with ‘best practice’. In this case, we’re probably not interested in every possible process – trying to evaluate a project against the entire PMBOK is pretty well intractable. In thinking about our criteria, we’re making a judgement as to which are most likely to be important for this project. If it’s building a complex system for a rapidly changing environment, we may be most interested in configuration management and change control, for example.

  • Or we may be interested primarily in whether deliverables are likely to meet applicable quality standards. Again, we probably can’t assess against every possible standard, rather we choose which ones are most important to this phase of the project.

 

The number of possibilities is endless. The important thing is that we think about where we want to focus up front, so that we can plan our resources appropriately and manage expectations with our stakeholders. My experience is that reviews add most value when they focus on no more than two or three criteria. Beyond that, they start to skate across the surface, never gathering sufficient depth of information to create actionable findings in any area.

Where does this leave us when someone asks us to ‘just do a general review of the project’? As I noted earlier, this can generally be reframed to focus on two criteria: are the objectives well defined and understood, and are the primary risks to those objectives being managed appropriately? In other words:

  • Our baseline is the organization’s business strategy and the business case for the project. Our review aims to confirm that the business case identifies clear objectives, and that these objectives are driving stakeholder actions.

  • Our reference models focus on risk management processes and associated tools (such as checklists). Our review aims to check that these are being used effectively on the project.

 

In the course of our initial interviews, we may uncover risks that aren’t being managed. In that case, we might choose to initiate another iteration of reviewing to drill down into the most threatening risks. The criteria for these latter reviews would then be tailored to each specific risk.

This example also illustrates how we can cover more than two or three criteria: break the problem down into several reviews, or several iterations within a single review, each of which focuses on a small number of criteria. This pattern of an initial broad-ranging review with follow-up activities to drill into specific risks is common. An alternative pattern of deep reviews to assure the quality of specific deliverables followed by a broad review to assess the overall progress and status of the project is also common.

As well as agreeing the criteria and their associated reference models and baseline, it is worthwhile thinking about the degree of rigour with which we will apply the criteria. For example, what tolerances can we live with when deciding if delays or budget overruns are material to our assessment? Do we expect the project to comply rigidly with a standard, or is there latitude for variation? How much latitude? Tightly defined standards generally make the review easier: it will be easier to come to an objective assessment of whether the standards are being met. However, if the project is trying to do something innovative, it may need more latitude to adapt its approach as it learns. We need to adapt our review approach accordingly.

In the wider context, if reviewers get a reputation for rigidly applying standards without adaptation to specific project circumstances, they are likely to experience resistance from project managers and teams. Conversely, reviewers who are seen to be helpful in tailoring standards to match the project’s situation will be welcomed as useful advisers. This opens up lines of communication and makes it much easier to conduct effective reviews.

Inputs

Once we’ve established what we’re trying to achieve and our overall approach, we can start to identify the specific inputs we’ll need. Inputs may include:

  • Documents and supporting materials: These are especially useful for preliminary reading to understand the project and develop an interview protocol.

  • Interviews: These are often our main tool for probing and understanding what is actually happening on the project.

  • Status reports, risk registers, self-assessment checklists and associated artefacts: These are all useful for telling us about the perceived current state of the project.

  • Workshops: As an alternative or supplement to interviews, workshops can be a good way to gather and analyse information.

  • Prototypes and other examples: A good way to get a handle on project outputs and deliverables.

  • Observation: You can learn a lot about what is happening on a project simply by observing how people are working, their communication patterns, the ad hoc meetings they are calling to address issues, and so on.

 

The bulk of our planning will revolve around identifying which mix of these inputs we need to assess the project against the agreed criteria. Understanding the criteria helps us narrow down the inputs to a manageable subset. (This is often an iterative process: the inputs available to us may influence what we can realistically achieve.)

Review Execution – the Analysis Loop

The main phase of the review is about gathering and analysing information. This can be framed as a process of generating hypotheses and then gathering the information needed to confirm or refute them (e.g. see the case study Review Techniques in the Education Sector).

Our criteria drive the initial set of hypotheses that we will investigate. For example, where we are assessing the project’s progress against the baseline schedule and budget, we might start with the hypothesis that the project is on schedule and then look for evidence to suggest that it isn’t. Chapter 6 discusses this process in more detail.
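This confirm-or-refute cycle can be pictured in a few lines of code. The sketch below is purely illustrative and is not part of any review toolkit: the hypothesis texts and the hard-coded evidence are invented for the example.

```python
# Illustrative sketch of the hypothesis-driven analysis loop.
# Hypotheses and evidence are invented; in practice evidence comes
# from interviews, documents, workshops and observation.

hypotheses = [
    "The project is on schedule against the baseline plan",
    "The risk register is complete and actively maintained",
]

# Evidence that contradicts each hypothesis, keyed by hypothesis.
evidence = {
    hypotheses[0]: ["Milestone M3 delivered two weeks late"],
    hypotheses[1]: [],  # nothing found that contradicts this one
}

findings = []
for hypothesis in hypotheses:
    contradictions = evidence.get(hypothesis, [])
    if contradictions:
        # Refuted: record a finding; a real review might spawn a
        # follow-up iteration to drill into the specific risk.
        findings.append((hypothesis, "refuted", contradictions))
    else:
        findings.append((hypothesis, "not refuted", []))

for hypothesis, status, notes in findings:
    print(f"{status}: {hypothesis} {notes}")
```

Refuted hypotheses become candidate findings; hypotheses that survive the evidence simply narrow the remaining territory for the review.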

Factors such as the following will also influence the way in which we plan and execute this analysis loop:

  • Are there any constraints on our access to the project team and documentation? Geographically distributed teams or tight security requirements, for example, can have a substantial impact on scheduling. Even simple logistical details such as availability of meeting rooms can have a strong influence on our plans.

  • Is this a one-off review or health check, or is it part of an ongoing programme of reviews or assurance? In the latter case, outputs from earlier reviews may influence our investigations, for example, to check that issues identified during those reviews have been addressed.

  • Will we need specialist expertise for certain parts of the review and, if so, how will we coordinate with the relevant experts?

  • Will we hold intermediate checkpoints with the review sponsor and/or project manager? These are a useful way to confirm that we are covering the right territory and are making reasonable interpretations of the information we’re gathering. However, we need both to schedule time for them, and to schedule preparation time to analyse and structure our findings so that we can make best use of them.

 

It’s easy to underestimate the amount of time that needs to be invested in analysis, report writing and communicating with stakeholders. My rule of thumb is that every hour of interviewing generates about two hours of analysis and other activities. (That means, for example, that 12 one-hour interviews imply a total effort of 36 hours, or three intense days during the main phase of the review plus additional time to write the final report. Every time I try to cut corners, perhaps reducing my time to analyse interview notes in order to schedule additional interviews, I end up regretting it.)
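The rule of thumb is simple enough to sanity-check in a few lines. The ratio and the twelve-interview example come straight from the text; the function name is just for illustration.

```python
# Rule of thumb: each hour of interviewing generates roughly two
# further hours of analysis, report writing and communication.
ANALYSIS_HOURS_PER_INTERVIEW_HOUR = 2

def review_effort_hours(interview_hours):
    """Total effort: the interviews plus the follow-on work they generate."""
    return interview_hours * (1 + ANALYSIS_HOURS_PER_INTERVIEW_HOUR)

# Twelve one-hour interviews imply 36 hours in the main phase of the
# review, i.e. about three intense days, before the final report.
print(review_effort_hours(12))  # → 36
```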

Feedback Loops

Figure 3.1 (page 57) shows two feedback loops. The first of these, ‘Recommendations to improve project’, is the standard loop whereby the review team delivers recommendations to the project team and other stakeholders. The second loop, ‘Feedback to improve reference models’ is the organizational learning loop that we will discuss in Chapter 5. Given that the reference models effectively constitute our organization’s memory of good practices, this feedback can be especially powerful: when reviewers improve and disseminate these models, they can affect a wide range of projects.

Revisiting Review Types

Chapter 2 discussed the different types of review we can undertake. This section revisits that discussion to explore how the above parameters interact with the type of review.

Timing of Reviews

As we move through the project lifecycle, we review and agree additional elements of the project – plans, specifications, designs, and so on. Thus successive reviews build from the baseline established by earlier reviews. Table 4.1 illustrates the series of event-based reviews that this might lead to. (This is only illustrative: the specific sequence of reviews will be tailored to an organization’s lifecycle, policies and business strategy. The gates described in the Formal Gateways in the Public Sector case study illustrate another sequence of reviews.) Reviewers might also use a subset of a checklist such as that in Table 5.2 (pages 100–117) to guide these reviews, choosing the questions relevant to each stage of the project.

In the phase between the Architecture and Release gate reviews in Table 4.1, the organization might then conduct periodic status and risk reviews. These would use the project plan as a baseline, gathering evidence such as completion of milestones and deliverables to assess whether progress is consistent with this baseline. (Separate quality reviews of the deliverables might also inform these assessments.) Likewise, they might aim to assess whether the project’s risk identification and management processes are operating effectively, as evidenced by a complete and actively maintained risk register. (Of the case studies, both Earthquakes in the Project Portfolio and Assuring Fixed Price Implementation used approaches similar to this.)

If concerns arise about the project or about changes in the external environment at any point, the organization might also initiate one-off health checks. The parameters for these would be tailored to the specific circumstances.

 
Table 4.1 A sequence of event-based gate reviews

Business case review
  • Objectives: Confirm business case is viable and aligned to overall strategy
  • Baseline: Organizational strategy
  • Reference models: Standard for return on investment (ROI) calculations and sensitivity analysis
  • Criteria: ROI appropriately calculated and above hurdle rate; Alignment to overall strategy

Project definition review
  • Objectives: Confirm project can deliver the agreed business case
  • Baseline: Business case
  • Reference models: Project planning guidelines; Risk management guidelines
  • Criteria: Alignment to business case; Conformance to planning guidelines; Completeness of risk analysis

Specification review
  • Objectives: Confirm systems can deliver project objectives
  • Baseline: Business case
  • Reference models: Systems specification guidelines and standards
  • Criteria: Consistency with objectives in business case; Conformance to guidelines

Architecture review
  • Objectives: Confirm systems can deliver project objectives
  • Baseline: Specification
  • Reference models: Enterprise architecture
  • Criteria: Consistency with specification; Alignment to enterprise architecture principles

Release review
  • Objectives: Confirm systems can be safely released into operations
  • Baseline: Specification; Business case
  • Reference models: Operations manual
  • Criteria: Consistency with specification; Continued viability of business case; Conformance to operations principles and standards
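A sequence like Table 4.1 can also be held as structured data, for example to drive checklists or review-planning tools. This sketch is an assumption of mine rather than anything the book prescribes; the class and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class GateReview:
    """One row of a gate-review sequence (field names are illustrative)."""
    name: str
    objective: str
    baseline: list
    reference_models: list
    criteria: list

# The first two rows of Table 4.1, expressed as data.
sequence = [
    GateReview(
        name="Business case",
        objective="Confirm business case is viable and aligned to overall strategy",
        baseline=["Organizational strategy"],
        reference_models=["Standard for ROI calculations and sensitivity analysis"],
        criteria=["ROI appropriately calculated and above hurdle rate",
                  "Alignment to overall strategy"],
    ),
    GateReview(
        name="Project definition",
        objective="Confirm project can deliver the agreed business case",
        baseline=["Business case"],
        reference_models=["Project planning guidelines", "Risk management guidelines"],
        criteria=["Alignment to business case",
                  "Conformance to planning guidelines",
                  "Completeness of risk analysis"],
    ),
]

# Successive reviews build from the baseline established by earlier ones:
for review in sequence:
    print(review.name, "<-", ", ".join(review.baseline))
```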

Degree of Independence and Formality

These are largely independent of the parameters in Figure 3.1 (page 57); they are set more by the degree of assurance required and the amount that we are prepared to invest to obtain it.

Attributes being Reviewed

Table 4.2 illustrates baseline and reference models that might apply when reviewing the project attributes discussed in Chapter 2. (Again, this is only an illustrative list. Every organization will build its own list of assets and standards over time.)

 
Table 4.2 Examples of standards applying to different attributes of the project

Objectives
  • Baseline: Organizational strategy; Business case
  • Reference models: Methods such as sensitivity and options analysis

Status
  • Baseline: Project plan, schedule and budgets; Earlier status reports
  • Reference models: Organizational standards for status tracking and reporting; Standard techniques and metrics such as Earned Value

Risk
  • Baseline: Business case; Project plan; Earlier versions of risk register
  • Reference models: Organizational standards for risk management; Standard approaches and models for risk management, such as Management of Risk (M_o_R; OGC, 2002); Checklists of lessons learned from earlier projects

Quality
  • Baseline: Quality plan; Specifications; Test plans
  • Reference models: Relevant ISO, IEEE and other standards; Relevant regulations and legislation; Organizational policies, standards and guidelines

Process
  • Baseline: Project plan
  • Reference models: Relevant quality or process standards and policies from above lists

Compliance
  • Baseline: (none specific)
  • Reference models: Organizational policies and standards; Relevant regulations and legislation

Assurance or Audit?

The separation between baseline and reference models also helps clarify the distinction between assurance and audit. We noted in Chapter 2 that audit focuses on compliance with policies and standards. Thus, as a rough and ready rule, we might expect auditors to focus on assessing the project against reference models. Assurance teams are likely to give more weight to the baseline – is the project still doing what it set out to do, and is that still worth doing? (This is by no means absolute: the two functions will be interested in both types of standard. They simply have a different overall focus.)
