Chapter 6 of Project Reviews, Assurance and Governance (978-0-5660-8807-0) by Graham Oakes

The Importance of Evidence


Evidence is at the heart of a review. Unless we can gather clear evidence of what’s actually happening on the project, we will have little hope of persuading people to act on our findings. And rightly so: if we don’t have clear evidence, then how can we be sure our findings are well founded? Of course, clear objectives, agreed reference models, relevant checklists, and so on, are all important, but only because they guide the evidence-gathering process. This chapter looks at that process and some of the techniques it employs.

The Need for Evidence

Evidence backs our findings and recommendations in three important ways:

  • Understanding: People need sufficient information to understand what we’re saying. To do this, we need to build a clear picture of the project’s background and current state, of the issues that it’s facing and of the likely impact of these issues. We also need to build a clear chain of reasoning from these observations to the underlying root causes and hence to our recommendations. (We may also need to clarify terminology and provide support for non-specialists to understand it, but that’s a separate issue.)

  • Acceptance: People may need additional information in order to believe what we’re saying. For example, they may need independent verification of certain details, or sufficient data to demonstrate that trends are statistically meaningful, or confirmation by accepted experts that recommendations are well founded. Providing this information is especially important when our findings are contentious or when they challenge fundamental assumptions or expectations within our audience.

  • Action: Finally, people may need different information again in order to act on what we’re saying. For example, whereas our findings and recommendations may be based on trends and clusters of issues, people may need a detailed picture of the specific issues and where and when they are occurring in order to address them, and to track the effectiveness of their actions.


In each case, different people may need different levels of detail and different types of information. Some people only want the executive summary, while others thrive on as much detail as we can give them. Some people find numbers and metrics compelling. Others are more likely to believe diagrams. Others again will pay most attention to quotes and personal testimony from people on the project team. As we interact with stakeholders during the review, we need to look for signs as to what sort of information they prefer: this will help us to gather evidence and frame our findings appropriately. (This is also an area we should discuss with the review’s sponsor as we negotiate our terms of reference.)

Several other factors influence the amount and type of evidence that we need to gather:

  • The reputation of the review team: If reviewers are trusted by other stakeholders, the burden of evidence they require is likely to be lower. (Reviewers need to maintain their reputation, however. This means gathering sufficient evidence to assure themselves that their findings are well founded. It’s easy enough for any team to overlook important facts or slip into groupthink under deadline and other pressures.)

  • The size, complexity and importance of the project: It’s easier to overlook or misunderstand important facts on large, complex programmes. The impact of mistakes is greater on mission-critical projects. In such cases, it makes sense to require a higher standard of evidence.

  • The significance of the issues identified by the review team: Recommendations for substantial or contentious changes need to be backed by appropriate evidence.

  • The degree of urgency of the issues: Gathering detailed evidence takes time. In some cases it may be necessary to act on partial information. (The case study Weeding the Project Portfolio touches on the dangers of trying to act with insufficient evidence.)

  • The political situation: Some review teams operate in politically charged environments (see the case study Review Techniques in the Education Sector: school inspections are a highly charged subject and hence inspection teams must gather evidence carefully) or in situations where litigation is pending or threatened. This raises the standard of evidence which is required.


Review teams need to balance these factors against the available budget and the potential benefits of investing resources elsewhere. In some cases, for example, we may have bad feelings about a project but it’s simply not important enough to justify gathering the evidence needed to clarify our intuition. Instead, we might let the project proceed until the issues become clearer of their own accord.

The Analysis Loop

How do we gather this evidence? Figure 3.1 (page 57) shows an analysis loop at the core of the review process. Figure 6.1 (page 126) expands this loop to show three stages to gathering evidence:

  1. Gathering raw data: We read documents, conduct interviews, analyse plans and deliverables, and so on. This gives us the basic data to build a picture of what is happening on the project.

  2. Structuring this data: We search for patterns and trends in the data. We look for inconsistencies both within the data itself and by comparison to our reference models. As we do this, we identify issues and begin to classify and cluster them.

  3. Generating hypotheses: We postulate underlying problems and root causes that might be driving the observed issues and trends. This in turn may lead to additional data gathering in order to clarify these underlying factors.


This model frames evidence gathering as a process of hypothesis testing: as we go around the loop, we generate hypotheses about the project and gather the data needed to confirm or refute these hypotheses. In doing this, we automatically gather the evidence needed to back up our findings and recommendations. (Hammersley and Atkinson (1995) discuss the relationship between observation and hypothesis testing in the context of ethnographic studies. Framing a review as an ethnographic study of the project gives many useful insights into the evidence-gathering process.)


Figure 6.1 The analysis loop



The process is drawn as a loop, with no explicit starting point. Sometimes we will start with hypotheses. For example, the review may have been initiated because the sponsor is concerned about some aspect of the project. We will then be seeking data to test the reality underlying this concern. Or our organization may regularly experience certain issues on projects, and we will be checking whether any of these issues apply to this particular project. Every question in a checklist embeds hypotheses about issues that may be affecting the project. (OFSTED inspections, discussed in the Review Techniques in the Education Sector case study, are explicitly framed to begin with hypotheses.)

Other times we will start with data gathering. We try to keep our minds open as we observe the project and gather data about its progress, processes, communication patterns, deliverables, and so on. Hypotheses then emerge from this data. (In practice, observing ‘everything’ is difficult: we will probably focus in certain areas. Framing the process as hypothesis testing at least forces us to be explicit about, and hence to manage, these potential biases.)

Iteration Planning

The analysis loop also emphasizes that reviews tend to be iterative. As we learn from the initial data we gather, we may identify other areas for investigation. For example, during our initial interviews we may hear of additional people we need to talk to in order to understand an issue and clarify the facts behind it.

This being the case, it helps to plan for iteration from the outset. Considerations here include:

  • Balancing breadth and depth: When conducting ‘general’ reviews (e.g. focusing on objectives and risks, as discussed earlier), we need to gain an overview across the entire project while also going into sufficient depth to convincingly describe the key issues. This can be handled by scheduling an initial broad iteration to assess the project and identify potential issues, followed by one or more focused iterations, each addressing a specific issue.

  • Managing dependencies: There is often a natural order in which to review aspects of the project. For example, requirements might be reviewed in order to establish the baseline for design. Or we may want to review the quality and completeness of key deliverables before we review the status of the overall project. These dependencies will inform our iteration planning.

  • Clarifying and confirming data: There is almost always scope to follow up our initial data gathering – we need to clarify things people say to us during interviews, talk to other people our interviewees refer us to, examine metrics and logs to explore the reality behind people’s perceptions, and so on. It makes sense to schedule some time to do this.


On a major review, these iterations would probably be fairly formal, perhaps with scheduled checkpoints to review each iteration and plan the next. On smaller reviews, they might be less formal. Either way, considering the number and type of iterations helps us plan our time and resources.

Issues Log

I find it useful to record the information gathered during the analysis loop into a central issues log (see Table 6.1 for an example), typically managed as a spreadsheet or small database for each review. As we examine documents and conduct interviews, we record any questions, inconsistencies, risks or other issues into this log, with a reference back to their source (e.g. the original document or interview notes). This structure helps in several ways:

  • It brings all the potential issues into a single place, making subsequent analysis and pattern recognition easier.

  • It helps trace classes of issues back to the original raw data. As we classify and cluster issues, we record this into the relevant column (or columns – it is sometimes useful to classify issues against multiple axes). Thus we have a trail from cluster to issue and hence to the original source.

  • It can be extended easily. On more complex reviews, we may build models of the relationships between clusters of issues, perhaps capturing causal relationships and interdependencies. These can be captured in additional columns, retaining traceability to the original data.

  • In environments where protecting the confidentiality of interviewees is important, it helps anonymize the issues. We can strip off the leading columns before delivering the log to the review’s sponsor (while retaining the Issue ID for any future follow up with the original source).


A central log is particularly useful on large reviews where reviewers may be operating in parallel strands. However, I find the discipline of structuring my analysis in this way useful even on small reviews: it helps me organize my notes and separate raw data from inference and hypothesis.
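As a sketch of such a log, the clustering and anonymization steps described above might look like the following. The column names follow Table 6.1; the `description` and `classification` columns, and all the sample data, are illustrative assumptions of the kind of extra columns the bullet points mention, not the book's prescribed schema.

```python
# Minimal sketch of an issues log held as a list of rows (a spreadsheet
# or small database in practice). The "leading" columns identify the
# original source and are stripped before delivery; the Issue ID remains
# so the reviewer can still follow up with that source later.
LEADING_COLUMNS = ["date", "raised_by", "source"]

issues = [
    {"date": "2024-03-01", "raised_by": "AB", "source": "Interview notes #4",
     "issue_id": "I-017", "description": "Design baseline not yet agreed",
     "classification": "Requirements"},
    {"date": "2024-03-02", "raised_by": "AB", "source": "Risk register v7",
     "issue_id": "I-018", "description": "Top risk has no mitigation owner",
     "classification": "Risk management"},
]

def cluster(rows):
    """Group issue IDs by classification, keeping the trail from cluster
    to issue and hence back to the original source."""
    clusters = {}
    for row in rows:
        clusters.setdefault(row["classification"], []).append(row["issue_id"])
    return clusters

def anonymize(rows):
    """Strip the leading source-identifying columns before delivering the
    log to the review's sponsor, retaining the Issue ID."""
    return [{k: v for k, v in row.items() if k not in LEADING_COLUMNS}
            for row in rows]

delivered = anonymize(issues)
```

Classifying against multiple axes, as the text suggests, would simply mean adding further classification columns and clustering on each in turn.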

Table 6.1 Issues log structure

  • Date the issue was identified
  • Who identified it (Raised by)
  • Source from which it was identified (e.g. interview notes, document ID)
  • Issue ID (ID #)

Information-Gathering Techniques

Evidence comes from a variety of sources – documents, interviews, observation of the team and environment, and so on. This section discusses some of the most common information-gathering techniques.

Document Review

Most reviews probably start with documents. If we can use them to gain some understanding of the project’s context and history before we meet with the project team, we waste less of their time as we get up to speed with the project. Documents can also help us identify which areas of the project we want to focus on, who we need to talk to in each area, and what questions we want to ask them. Table 6.2 illustrates some of the purposes for document review.

The value we can gain from document review depends on factors such as the following:

  • How much documentation is the project producing? Some projects produce extensive documentation trails; some produce very little. This may or may not be an issue in itself, but it certainly influences the amount of time we should put into reviewing documents.


    Table 6.2 Document review

    Purpose: Understand project context and objectives
    Useful documents:
    • Project Charter
    • Project Brief
    • Business Case
    • Project Inception Document

    Purpose: Understand project approach
    Useful documents:
    • Project Plan and associated documents
    • Specification and design documents

    Purpose: Understand progress and current state of project
    Useful documents:
    • Current schedule and associated items
    • Status reports
    • Risk register
    • Issue log
    • Outputs from earlier reviews
    • Outputs from quality assurance activities

    Purpose: Understand what actually happened on a project (e.g. for a retrospective)
    Useful documents:
    • All the above items
    • Project diaries
      (Logs of daily events, recorded as they happen, can be useful inputs to retrospectives. The difference between people’s recollection of events and the actual events can be very instructive.)


  • Why are the documents being produced? If the project is producing them merely to tick the boxes in a methodology, for example, then they may tell us little about how the project is actually being run. (However, if a lot of effort is being expended to create documents that no-one is using, that says something about the project.)

  • Are the documents still current? If the project team is not maintaining them, they tell us little about the current state of the project. (Again, signs like this tell us something about the project.)

  • Were the documents ever an accurate reflection of the project? Even if the documents are up to date, they are only likely to capture the generally accepted view of the project. There may be issues that people are overlooking, or that they aren’t prepared to raise in documentation. (Otherwise, why would we conduct reviews?)

  • How else is the project recording status and decisions? All projects have a variety of formal and informal mechanisms for recording and communicating information. We probably want to focus our attention on the mechanisms that are actually being used.


When reviewing documents, it is worthwhile considering:

  • Inconsistencies within and between documents: How did they arise – is it simply that the situation changed as the documents were written, or are there more fundamental differences of perspective? How material are the differences – could different parts of the project be working from a different understanding of objectives or interfaces, for example?

  • Gaps and omissions: What documents would you expect for a project of this type – are they all available? Are there aspects of the project that aren’t covered in the business case and plans? Are there organizational standards or guidelines that should have been referenced? Are there risks that you’d expect to see but that aren’t documented?

  • Underlying rationale and assumptions: People often only record the final results of decisions. Were other options considered and why were they rejected? What assumptions underpinned these decisions? Do these assumptions still hold? Even if the authors of the documents are no longer on the project team, it may be worth talking to them to understand this rationale. Likewise, it can be worth talking to people who conducted previous reviews – even if they wrote comprehensive reports, there may be general impressions and concerns that they couldn’t easily document.

  • Change history: How has the document changed over time? What does this tell you about the project and external influences on it? How stable is the document now – if it’s still changing rapidly, how might this affect other parts of the project?

  • Comments and marginalia: What do these tell you about different perspectives and opinions within the project team? What do they tell you about how thoroughly the document has been reviewed?


Document review is a great way to improve the quality of the documents themselves (see Gilb and Graham, 1993, or Wiegers, 2002). As a way of assessing the status and viability of a project, it provides a useful starting point, but we generally need to go to interviews to probe more deeply. Before we look at interviews, however, I want to discuss a couple of underlying skills.


Listening

Listening and observation are fundamental skills for reviewers. They’re the main ways we gather information about the state of the project. Good listening skills help us in a number of ways.

For a start, active listening is fundamental to interviewing (which we’ll discuss below). Thompson (2002) suggests that the most successful interviews are those where the interviewer talks for about 25 per cent of the time. Interviewers need to say things to keep the conversation flowing and on course, but the bulk of their time is spent listening.

Beyond that, listening adds substantial value in its own right. One of the biggest challenges for many project managers is creating time to think through their status and options. A reviewer who is prepared to sit and listen to them for a while, providing a sounding board for their thinking, can help them understand and solve their project’s issues for themselves. (And project managers are more likely to implement their own solutions than those recommended by reviewers, no matter how brilliant.) As a by-product of being a sounding board, reviewers will also probably get most of the information they need to conduct an effective review.

Conversely, weak listening skills can seriously undermine a reviewer. I’ve seen interviews degenerate along the following lines:

Reviewer: Have you tried X?

Interviewee: Yep – it didn’t work.

Reviewer: Have you tried Y?

Interviewee: Yep – it didn’t work.

Reviewer: Have you tried Z?

Interviewee: Thought about it, but it won’t work for the following reasons …

By now, the interviewee is convinced that the reviewer has nothing to offer and is wasting their time. They’re much less likely to share information with this reviewer than they were at the start of the interview.

Over time, if you prove to be a good listener, people will become more comfortable about inviting your ideas and suggestions. By this time you will probably have gathered enough information to be able to make meaningful recommendations. People will listen to these recommendations because they feel you’ve listened to them. Until then, a reviewer’s role is to listen and observe, and hence to identify what is really happening on the project.

Listening skills can’t be learned from a book. However, here are some pointers to think about:

  • Eliminate distractions: Wherever possible, plan to conduct meetings and interviews in a space where you won’t be distracted by interruptions, ambient noise, visual clutter, and so on.

  • Concentrate on what the other person is saying, not on your next question or suggestion: Focus on the facts of the situation they’re describing – try to understand who was involved, when it happened and how it relates to the rest of the project, for example. If you start to think about how to ‘solve’ this situation, then you cease listening. Likewise if you begin to think about the next question on your checklist.

  • Reflect back: Paraphrase key points and ask clarifying questions. This shows you are listening, confirms you’ve heard correctly, and keeps you focused on what the other person is saying.

  • Prepare, but go with the flow: If you’ve prepared and internalized the interview protocol, as discussed below, then you’ll be able to find questions that both follow from what the other person is saying and further your overall objectives. If you focus too much on a checklist of questions, then you’ll keep breaking the flow of the conversation and give them the sense they’re not really being listened to.

  • Don’t be afraid to pause: Let people catch their breath and think of the next thing to say. It often happens that someone responds to a question, then pauses to gather their thoughts, then says something significant in a second burst of information. The first response says what’s at the top of their mind or what they think they should be saying. The second burst is about underlying concerns or things that are harder to express. If you ask a fresh question at the first pause, you miss this deeper information.

    (The pause probably feels longer to you than to them. Three seconds of silence while you’re waiting for them to say something can seem like an eternity. From their perspective, three seconds to collect their thoughts on a complex subject feels like no time at all. Pausing to assimilate what they’ve said and collect your own thoughts is fine too.)

  • Take notes: This demonstrates that you value what they’re saying. It can also create thinking time: a couple of seconds to collect your thoughts as you finish the notes. However, don’t let note taking get in the way of making eye contact and other signs that you’re listening. (I find it hard to take notes on a computer for this reason: the screen and keyboard demand more attention than a pen and paper. I also find that transcribing handwritten notes gives me an opportunity to reflect on what was said. It’s the beginning of my analysis.)

  • Observe their body language: Tone of voice, facial expression, nervous twitches – these are all part of the message. If you don’t understand them, it may be worthwhile asking: ‘You grimaced then – why’s that?’

  • Manage your own body language: I’ve been interviewed by people who looked out the window as they asked questions: it didn’t encourage me to give more than the barest bones of information. Make appropriate eye contact with the person you’re interviewing (‘appropriate’ means different things in different cultures). Give non-verbal signs that you’re listening: nod your head, chuckle at their jokes. When pausing to collect your thoughts, look reflective.

  • Casual conversations matter too: These pointers apply to formal interviews and meetings, but also to informal encounters and corridor conversations.



Observation

Observation extends the information we gain from document review and interviews in a variety of ways. Body language gives insight into people’s answers to questions. Workspaces reflect the pressures the project is experiencing. Interaction patterns illuminate the actual, as opposed to documented, governance and communication structures.

During interviews, it’s worth noticing people’s reactions to questions. Which ones surprise them? Do some topics make them angry, or frightened, or excited? Reading body language can be difficult, especially on multicultural projects, but questions that elicit strong responses of any kind are probably worth following up. You can at least ask why they’re having that response: this may clarify the significance of their answers.

As you move through the project environment, look for signs of bottlenecks in support structures and processes: people queuing to access equipment, for example, or having trouble finding tools. Flow diagrams pinned to walls may suggest processes that are under stress. It can be worth asking someone to explain the diagram, and why they put so much effort into developing it. Cramped office space might indicate that the project team has grown well beyond the planned complement. Pizza boxes could be signs that people are working late nights. For some people, untidy desks and overflowing rubbish bins are the norm; for others they drain morale. None of these things may be problems in themselves, but they all suggest areas to explore further.

Likewise, the way the project team works together may give indicators as to how decisions are really made. Are there alliances or rivalries? Whose opinions are respected and whose are disregarded? Who is given time to talk during meetings, and who is silenced? Most teams have internal tensions. As reviewers, we need to make a judgement as to whether these tensions are impairing effective decision making and communications. Observation alone may not answer this question, but it may suggest where to probe further.

It’s also worth observing what isn’t happening. What activities would you expect to happen on a project of this sort? Are people doing all the things they claim to be doing in interviews and project documentation? Are they doing things that they haven’t mentioned in those sources?

Finally, are you being shielded from any areas of the project? For example, are there people it’s difficult to get access to? Is this because they’re overloaded, or because they have unorthodox opinions?


Interviews

Most project reviews use interviews and related meetings as their primary information-gathering tool. This is because, at some level, people generally know what is happening on their project. Sometimes this knowledge is dispersed across several people and no-one has had time to integrate it. Sometimes someone can see a looming issue but doesn’t fully realize its significance, or doesn’t know how to articulate it effectively. By talking to people, we aim to gather their knowledge and build it into a clear picture of what is happening.

On a small review, we may do this within a single meeting. On a large review, we may undertake several dozen interviews. Either way, a similar process and skills apply. In particular, the listening and observation skills discussed above are crucial. This section looks a little more deeply at the interview process. For more detail on this subject, it is worth going to books such as Thompson (2002) or Hammersley and Atkinson (1995).

The Interview Process

Interviews can be seen as informed, structured conversations about the project. If they’re well conducted, people will feel at ease and even be glad of the opportunity to structure their own thinking and convey important messages to key stakeholders. They’ll be happy to provide the information that reviewers need. Conversely, if interviews are poorly managed, people will feel they’re wasting their time and will give minimal information. Likewise, if we try to interrogate people, they’ll resist. This just makes it harder to gather information.

Such a conversation doesn’t simply happen: we need to make it happen. Table 6.3 describes eight stages for planning and conducting an interview. Naturally, the amount of effort we need to put into each stage depends on the scope of the review, but a successful interview will probably go through most of the following eight stages to some degree:

  1. Planning: Identify overall objectives for the interviews, and hence plan their coverage and logistics. Begin to develop the interview protocol.

  2. Interview preparation: Use document reviews, self-assessment checklists and other techniques to build a deeper understanding of the project and hence refine the interview protocol. Brief interviewees.

  3. Pre-interview: Prepare for each specific interview, for example set up the room and review the objectives.

  4. Opening: Set context for the interview, for example make introductions and explain objectives.

  5. Mid-interview: Conduct the conversation itself, guided by the interview protocol.

  6. Closing: Close and sum up. Agree next steps.

  7. Post-interview: Record interview notes and other observations. Schedule follow-up activities. If interviewing in pairs or as a team, debrief with other interviewers. Confirm accuracy of the interview notes.

  8. Analysis and reporting: Integrate notes across all interviews. Hence develop final analysis and reports.


This process may suggest that interviewing is complex. It is. Good interviewers bring a range of skills to bear in order to manage themselves, the environment and their interviewees. It’s worthwhile seeking as many opportunities as possible to practice and develop these skills. (This is one area where dedicated review teams have an advantage over part-time reviewers: they have ample scope to practice these skills.)

Table 6.3 Interview process



1. Planning


Objectives:

  • Ensure that interviews focus on the overall objectives for the review, for example by asking relevant questions.

  • Ensure that interviews make good use of everyone’s time, for example that they gather the information needed without undue need for follow-up interviews, and that logistics run smoothly.

  • Minimize stress on interviewers and interviewees, for example by using an appropriate environment and projecting a sense that the interviews are being conducted professionally.

  • Begin to establish good working relationships with interviewees and gatekeepers, and hence gain their cooperation.




Activities:

  • Define objectives for the interviews, and how these contribute to the overall objectives for the review. Objectives may be relational as well as informational: we may need to see certain people simply to establish a relationship or otherwise manage politics.

  • Identify coverage for the interviews – who we want to interview and how. Factors to consider here include:

    • – The range of people to interview (see Chapter 3).

    • – The order in which to interview them. (e.g. do we see the senior stakeholders first, in order to understand overall context? Are there specialist areas we’d like to cover at an early point?)

    • – The length of each interview. (We may only get very short slots with senior executives: how do we make the most of them? Likewise, we may only need short slots when checking facts with specialists. For other people, we may want longer sessions. However, 60 or 90 minutes is the pragmatic maximum for most interviews: beyond this, people tend to run out of energy.)

    • – The number of people in each interview (see ‘Interviewing in Groups’, below).

    • – The time needed to transcribe and analyse interview notes, confirm and clarify facts with interviewees, schedule follow-up interviews with people identified by the initial analysis, and so on. I find that if I try to schedule more than a dozen interviews in a three-day period, I run out of time to perform these other activities effectively.


  • Begin to develop the interview protocol. (This defines the questions you will ask in order to achieve the interview’s objectives, and how you will ask these questions. See the discussion below.)

  • Establish contact with administrators and gatekeepers. You will probably need to work with office or project administrators to book rooms, schedule interviews with project team members and arrange facilities. Access to some interviewees may also be mediated by ‘gatekeepers’, for example, personal assistants who control the diaries of senior executives or account managers who control access to external stakeholders. It is worth establishing good relationships with these people, for a number of reasons:

    • – They know how to make the logistics happen in this environment.

    • – They may be able to make it easier to access some stakeholders.

    • – They may influence the perception of stakeholders. The way a personal assistant or account manager introduces you can do much to colour the subsequent relationship with an executive.

    • – They may know a lot about what is really happening on the project. Most communications and gossip get channelled through these people at some point.

    • – Beware, however, that some gatekeepers can have a negative influence. For example, they may steer you towards the people they have the best relationship with, rather than the most knowledgeable people.


  • Arrange logistics for the interviews. These include:

    • – Booking rooms. Interviewing in a dedicated room gives you control over privacy and interruptions, and allows you to assemble any materials you might need – flip charts and pens, for example. Going to the interviewee’s office gives you less control, but people may be more relaxed in their own space, and it provides an opportunity to observe their work environment. Each has its advantages.

    • – Arranging room layout and furniture. You’ll need a table to write on, and an appropriate number and configuration of chairs. A visible clock helps manage timekeeping. Do you want to provide coffee or water or other refreshments? What else might you need?

    • – Arranging other facilities. Do you need special equipment for the interviews (perhaps network connections or a projector)?

    • – It’s surprising how much effort it can take to arrange these things (see Chapter 7), especially when working outside your own office environment. However, smooth logistics and a well-managed interview environment help set interviewees at ease, signalling that you are managing the process and won’t waste their time.


  • Schedule the interviews. Again, this can take a lot of effort as you negotiate diaries. Be clear about where you can be flexible, and where you need to be firm. (e.g. you may need to see the project manager at an early point, but can be more flexible about the order in which you meet technical members of the project team.)

  • Avoid back-to-back interviews if at all possible. You’ll need time to debrief one interview and collect your thoughts for the next. You may need to manage interviewees who overrun or arrive late. You will also need meal and comfort breaks.


2. Interview preparation


  • Understand the project in order to ask relevant questions.

  • Ensure that interviewees are prepared for the interview so that, for example, they have relevant information to hand.

  • Convey professionalism, so that interviewees will be at ease and will be more likely to accept the review’s findings.




  • Review project overview documentation.

  • Review relevant detailed documentation.

  • Prepare and disseminate a briefing pack for the project team and other interviewees. This may cover:

    • – objectives of the review;

    • – overview of the approach being used;

    • – review schedule;

    • – objectives and agenda for individual interviews;

    • – protocols for confidentiality, following up interview results, communicating the review’s outputs, and so on;

    • – anticipated outputs from the review.

    • – It’s often helpful to provide the briefing pack when you book time in people’s diaries, so they know what the meeting entails. However, you may not be able to finalize the briefing until all interviews are scheduled. In this case, it may be worthwhile to break the briefing into an introduction (review objectives and approach) and details (interview schedule and objectives).


  • Send self-assessment questionnaires to the project team, then assemble and analyse their responses. Factors to consider here include:

    • – Self-assessments are a good way to gather information about the project and the team’s perceptions of it. They also help interviewees to prepare, for example, by alerting them to the type of information you are seeking. This means they can come to interviews ready with all the necessary information.

    • – It’s unlikely that everyone will fill in a self-assessment (unless you have power to mandate its completion). Some people will appreciate the opportunity to use a self-assessment to gather their thoughts. Others won’t. You may be able to encourage the latter to complete the assessment as a follow-up to the interview.

    • – Long questionnaires are less likely to be completed than short ones. It may be useful to send different questionnaires to different people – overview questions to the project manager and more focused questions to technical specialists, for example.

    • – Another possible strategy is to circulate the questionnaire in advance, then to complete it together at the interview. (This reduces the time available for more wide-ranging discussion during the interview.)

    • – People will need time to fill in the questionnaire. However, if it is circulated too far in advance, they may forget some of the details by the time they get to their interview.


  • Confirm logistics for interviews (time, location, agenda). If people need to travel to the interview location, it may be worthwhile to assemble maps and directions. Likewise, if conducting a conference call, ensure that the dial-in details are correct and that there are no misunderstandings about time zones and such like.

  • Use the information gathered during this stage to refine the interview protocol. For example, some questions may be adequately answered by the self-assessments, while new questions come into focus.


3. Pre-interview


  • Be prepared to conduct the interview. (This preparation both helps us to respond to whatever might be said during the interview, and signals to the interviewee that we respect their time and contribution.)




  • Review objectives for this interview, and how they contribute to the overall objectives for the review.

  • Review the interviewee’s role and background, and any information they provided during the Interview Preparation phase.

  • Ensure the room is tidy and arranged appropriately.

  • Ensure you have all the necessary facilities to hand (pens, paper, checklists and discussion materials, and so on).

  • Clear your mind and be alert and ready to conduct the interview.


4. Opening


  • Set the interviewee at ease.

  • Establish objectives and overall direction for the interview.




  • Interviewers introduce themselves and their roles (for the interview, for the review, and within the wider organization).

  • Describe objectives for this interview, and how these relate to the review’s overall objectives. (It may be worth exploring the interviewee’s reaction to these objectives: do they see the need for such a review, for example?)

  • Check whether the interviewee has any questions. (This helps set them at ease. Their questions may also tell us something about the project.)

  • Confirm logistics. How long will the interview last? How will you follow up afterwards?

  • Confirm note-taking and confidentiality protocols. How will you review your notes with the interviewee to confirm you’ve heard correctly? Are their responses confidential, or will they be shared with other people?

  • Ask some simple questions to set them at their ease. (e.g. how long have you been on the project? What is your role?)

  • Be prepared to deal with common issues, for example:

    • – People arrive late and flustered. You will need to set them at ease. You may also need to re-plan the interview to focus on key questions, or reschedule it.

    • – Problems with the room or other logistics – extraneous noise or double bookings, for example. Do you take time to find another room, or continue the interview despite the imperfections? It helps to have options up your sleeve.

    • – Don’t be afraid to take a couple of minutes to calm down after dealing with such things. It may be better to take five minutes to have a coffee together, rather than to start the interview while you’re still stressed.



5. Mid-interview


  • Gather information




  • Ask questions, as per the interview protocol (see below).

  • Record notes (see below).

  • Listen actively and observe (see above).


6. Closing


  • Respect the interviewee’s time by closing on schedule.

  • Confirm we’ve heard their main messages.

  • Take a final opportunity to gather information that hasn’t yet been covered.

  • Open up the opportunity to continue the conversation, perhaps by email or other channels.

  • Close on a friendly note, so the interviewee conveys a positive message to their colleagues.





  • I like to recap the main messages I’ve heard from the interview so far, then ask a few meta-questions, such as:

    • – Is there anything else we should be asking?

    • – What else would you ask if you were reviewing this project?

    • – Have any of the questions we’ve asked surprised you?

    • – Is there anyone else we should be talking to?


  • Explain how the interviewee can contact you to pass on any additional information that occurs to them.

  • Explain next steps: how you will confirm your interview notes, how findings from the review will be disseminated.

  • Thank them for their time and contribution.

  • Effective closing is very important. If the interviewee feels their time has been respected and you’ve listened to their contribution, they’re more likely to accept and act on the findings from the review. It is better to cut the mid-interview short than to rush the closing stage. If necessary, schedule a follow-on interview to cover the remaining ground.


7. Post-interview


  • Confirm and record notes from the interview.

  • Perform any follow-up activities.




  • Debrief with other interviewers. Discuss your overall impressions and check whether they noted anything that you missed. Discuss differences and identify any activities needed to clarify them.

  • Log any artefacts (documents or other materials) received during the interview.

  • Transcribe interview notes (see the discussion on recording notes, below). It’s generally best to do this as soon as possible after the interview, while it is still fresh in your mind. Some interviewees remember important points after the interview, so be prepared to integrate these into your notes also.

  • Confirm your notes with the interviewee. You will probably have two types of information within your notes: what the interviewee said, and what you observed or interpreted during the interview. It’s worth checking that you heard and recorded the former accurately, and didn’t miss any key points. (This may also prompt them to recall further information.) You may wish to keep observations and interpretations separate at this point.

  • Schedule any follow-up activities, for example:

    • – Additional interviews to confirm or cross-check points raised by this interview.

    • – Interviews with additional people identified during this interview.


  • I generally reckon that each hour of interviewing will lead to two hours of recording and analysing notes, confirming and cross-checking facts, updating the issues log, and so on.
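The rule of thumb above translates into a simple effort budget. As a rough sketch (the function name and the one-hour default are illustrative assumptions, not from the book):

```python
def review_effort_hours(interviews, hours_per_interview=1.0, follow_up_ratio=2.0):
    """Total effort for a set of interviews, applying the rule of thumb that
    each hour of interviewing generates roughly two hours of follow-up work
    (transcribing notes, cross-checking facts, updating the issues log)."""
    interview_time = interviews * hours_per_interview
    return interview_time + interview_time * follow_up_ratio

# Twelve one-hour interviews imply roughly 36 hours of total effort --
# close to a full working week for a single reviewer.
total = review_effort_hours(12)
```

This is one reason the scheduling guidance earlier in the chapter caps interviews at about a dozen per three-day period.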


8. Analysis and reporting


  • Commence analysis of information received during the interview.

  • Integrate this information with that from other sources.

  • Develop final findings and recommendations.





  • Extract key points from the interview notes into the central issues log.

  • Consider how these points relate to those made in earlier interviews. Are trends or clusters of issues starting to emerge? Does this suggest additional questions you might want to ask, or additional people you may need to talk to?

  • Compare the points identified during interviews with those raised in the initial self-assessments and document reviews. How well does the project team’s perception of their status align to that of the review team?

  • Be prepared to explore different ways of classifying and clustering issues. There is generally more than one way to cluster a set of issues: different schemes may highlight different aspects of the project. Likewise, initial impressions may need to be revised in the light of subsequent interviews.

  • Be aware that thinking about clusters and classification can bias your interviews: you may begin to focus on seeking information that confirms your hypotheses, and hence miss other issues. However, you will probably begin to generate hypotheses no matter what you do, so it’s best to make them explicit. That way you can manage your biases and cross-check your thinking with other members of the review team.

  • There’s a natural tendency to focus on problems as we do this. However, it’s also worthwhile noting what is working well on the project: this will help gain buy-in from the project team, and may be worth disseminating to other projects.

  • On larger reviews, it is probably worthwhile scheduling a daily wrap-up meeting to integrate your findings and undertake this analysis. It may be worthwhile to invite members of the project leadership to this meeting. For example, they may be able to confirm the accuracy of facts and suggest additional people to interview to cross-check the analysis. Involving them in the process also helps build their buy-in to your findings and recommendations.

  • Reporting is discussed in more detail in Chapters 8 and 9.
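The central issues log and the idea of clustering issues under more than one classification scheme can be sketched as a simple data structure. The field names and example issues here are illustrative assumptions, not taken from the book:

```python
from collections import defaultdict

# Each logged issue carries its source and tags under several
# classification schemes, so the same set of issues can be
# re-clustered in different ways during analysis.
issues = [
    {"summary": "Baseline updates take too long", "source": "interview: dev lead",
     "tags": {"process": "configuration management", "impact": "schedule"}},
    {"summary": "Requirements sign-off unclear", "source": "self-assessment",
     "tags": {"process": "requirements", "impact": "scope"}},
    {"summary": "Build server frequently down", "source": "interview: tester",
     "tags": {"process": "configuration management", "impact": "schedule"}},
]

def cluster(issues, scheme):
    """Group issue summaries by one classification scheme,
    e.g. 'process' or 'impact'."""
    clusters = defaultdict(list)
    for issue in issues:
        clusters[issue["tags"].get(scheme, "unclassified")].append(issue["summary"])
    return dict(clusters)
```

Clustering by "process" highlights configuration management as a hotspot; re-clustering the same log by "impact" highlights schedule risk instead, which is the point of keeping classification schemes flexible.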


The Interview Protocol

The interview protocol captures our thinking about the information we want to gather and the questions we will ask in order to gather it. Time spent developing this protocol helps us to manage our priorities during each interview. For example, it helps us to steer interviewees towards the most important areas of the project and ensures that we don’t forget important topics. It also helps manage the timing of interviews: in a 60-minute interview, we probably can’t cover more than a dozen major questions (with supplementary clarifying questions).

As discussed in Chapters 3 and 4, the review’s Terms of Reference define the type of information we wish to gather. As we develop the interview protocol, we will combine techniques from five main classes to help us get at this information:

  1. Closed questions: These elicit a short, factual answer (‘yes’, ‘753’, ‘3rd of March’). They are a good way to gather and validate basic facts about the project. The response is generally unambiguous, and it is easy to compare and analyse responses from multiple sources. On the other hand, closed questions don’t give much scope to probe and explore.

  2. Open questions: These elicit a longer, narrative answer. For example, ‘Describe how people update the configuration baseline’. Such questions can generate a lot of information about what is happening on the project. However, they can also generate a lot of noise, for example as people wander off topic. The responses require more effort to clarify and analyse than those from closed questions.

  3. Clarifying questions: These follow up an earlier answer, for example to check you’ve heard correctly or to clarify specific details. They often restate or paraphrase the earlier response, for example ‘You mentioned that creating a configuration baseline takes too long – which parts of the process are particularly lengthy, please?’ Clarifying questions may be open or closed.

    By their nature, specific clarifying questions are hard to prepare in advance. However, it’s worthwhile thinking about the type of response you might get to other questions, and hence what type of follow-up questions might be needed. Useful clarification techniques include:

    • – Seeking other viewpoints, for example, ‘How would person X describe that process?’ (As well as drawing out fresh insights, this may probe people’s awareness of divergences within the team.)

    • – Seeking the data behind opinions, for example, ‘What are they doing that causes you to think they’re not committed to the schedule?’

    • – Seeking underlying assumptions, for example, ‘Under what circumstances does that process not apply?’


  4. Meta-questions: These are designed to elicit questions, either directly or indirectly. For example, ‘If you had the original project manager here now, what questions would you like to ask her?’ or ‘Are there any questions you expected us to ask?’ Such questions can be a good way to draw out areas of uncertainty and doubt. They also help capture useful questions for future interviews.

  5. Observation: It can be useful to ask people to demonstrate processes or systems. For example, ‘Could you show us how you create a new baseline, please?’ This may draw out issues that are so ingrained that people no longer notice them, or highlight divergences between the actual and the documented process.


Most interviews use a mix of question types. If you ask too many questions of the same type (e.g. a long succession of closed questions; or a succession of short, open questions followed by long answers from the interviewee), the interview starts to feel more like an interrogation than a conversation. A typical interview might:

  • open with a meta-question (‘Do you have any questions?’) to set the interviewee at their ease;

  • ask a couple of simple, closed questions to establish basic facts (‘When did you start on the project?’);

  • move into a sequence of broader, open questions interspersed with open and closed clarifying questions and the occasional observation question;

  • end with a couple of meta-questions.
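The typical interview shape above can be represented as a structured protocol, which also makes it easy to sanity-check the balance of question types. The question texts and type labels here are illustrative assumptions:

```python
# An interview protocol as an ordered checklist of typed questions,
# following the typical opening-to-closing mix described above.
protocol = [
    {"type": "meta",   "question": "Do you have any questions before we start?"},
    {"type": "closed", "question": "When did you start on the project?"},
    {"type": "closed", "question": "What is your role?"},
    {"type": "open",   "question": "Describe how people update the configuration baseline."},
    {"type": "meta",   "question": "Is there anyone else we should be talking to?"},
]

def question_mix(protocol):
    """Count questions by type -- a quick check that the protocol
    won't feel like an interrogation (e.g. a long run of closed questions)."""
    mix = {}
    for item in protocol:
        mix[item["type"]] = mix.get(item["type"], 0) + 1
    return mix
```

A long run of one type in the counts is the warning sign; clarifying questions are usually improvised in the interview itself, so they rarely appear in the written protocol.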


Our position in the analysis loop may influence this mix of questions. We noted earlier that we may enter the analysis loop with some initial hypotheses, or we may begin with more open-ended data gathering. This leads to two broad schools of interviewing: directive and non-directive.

In directive interviews, we adhere closely to a checklist of questions. If discussion strays from this list, we pull it back on course by asking the next question. This works well when we are clear about the ground we want to cover, and want to ensure we cover the same ground in multiple interviews. It’s a good way to gather statistical survey data, for example, or to gather the facts necessary to confirm or refute a particular hypothesis. Closed questions will predominate.

Non-directive interviews, by contrast, tend to employ more open questions, using each question as a trigger for conversation. Once the conversation has started, we use clarifying questions to maintain the interviewee’s flow, letting them tell us what is important. People often have a good sense of what they’re concerned about, even if they sometimes have trouble articulating it. Non-directive interviews are a good way to draw out these concerns. There is a risk that the interviewee will meander without clear direction, so we still need to use the protocol to keep ourselves broadly within the bounds of our objectives.

In practice, most interviews fall somewhere in between these two extremes. However, reviews may well start out with relatively non-directive interviews as they explore the general state of the project, then move towards more directive interviews as they seek to clarify and confirm their hypotheses.

Three final points about the interview protocol:

  • Although developing the interview protocol is an excellent way to prepare for interviews, we shouldn’t let it straitjacket us once we are in the interviews themselves. For a start, what we learn in the initial interviews may cause us to revise the protocol as we proceed. More importantly, the protocol shouldn’t get in the way of listening to each interviewee. If they are talking freely and giving us useful information about the project, it generally makes sense to go with the energy, occasionally nudging them back on course if necessary. I prefer to spend a lot of time developing the protocol, but don’t actually take it into interviews. That way I will have internalized the questions and can focus my attention on the interviewee.

  • Likewise, a good interview protocol is no substitute for basic listening and observation skills. Concentrate on what the interviewee is saying. Reflect back and clarify. Don’t be afraid to allow silence and pauses so they can collect their thoughts and get a second wind.

  • We noted earlier that you may need to interview some people for relational and political reasons. This doesn’t mean you should abandon the protocol: use these people to give additional perspectives on the project. The worst thing you can do in a politically driven interview is appear to not take their viewpoint seriously.


Working Together

Some interviews are one-on-one affairs. Frequently, however, interviewers work together in pairs. Sometimes we will interview more than one person at the same time. What are the considerations here?

Interviewing in pairs allows us to divide responsibilities – typically one person takes notes while the other asks questions. This offers a number of advantages:

  • The questioner can focus all their attention on the interviewee.

  • The note taker can keep track of inconsistencies and areas that need clarification. (This means that the questioner needs to hand control over to the note taker occasionally, so they can ask clarifying questions. The pair needs to develop a protocol for doing this.)

  • With two interviewers, we are also more likely to catch subtle nuances in the interviewee’s responses and body language.

  • Debriefing together at the end of the interview can bring out things that neither interviewer might have noticed separately.


The chief disadvantage is that we can cover fewer interviews than when working separately. In many cases, the increased effectiveness of our interviews outweighs this disadvantage.

Before conducting a pair interview, clarify how you will work together. Who is taking notes and who is asking questions? How will you deal with pauses and silences? Do you need to agree a signal to hand control from one interviewer to the other (when the note-taker has some clarifying questions, for example)?

Likewise, it can sometimes be useful to interview more than one person at the same time. When we have several people together, they may spark off each other – one person’s observations draw additional comments from other interviewees, or differences of opinion emerge that would have been difficult to elicit in individual interviews. We may also observe interaction patterns that tell us much about how people work together on the project.

Such group interviews also allow us to cover more people in a given number of interviews. Their principal disadvantage is that they sacrifice privacy: people may be more circumspect about some topics in the presence of their peers or managers. Or the more vocal people in the group may dominate, making it difficult to elicit the opinions of quieter team members.

Workshops are used less commonly on project reviews, but they are a natural extension to group interviews. They can be a good way to drill into complex issues where you need to bring multiple perspectives to bear. They also allow us to observe interactions amongst a broad group of people on the project team. Because a workshop involves many people, it wastes a lot of time if it’s not planned carefully. Likewise, running a workshop requires appropriate facilitation skills. (Retrospectives often make good use of workshops – see the case study Post-Project Reviews and Retrospectives.)

Electronic Interviews

Geographically dispersed projects are increasingly common. This means we often need to conduct electronic interviews – conference calls, videoconferences and online conferences. Most of the above discussion applies equally well to such interviews. There are a few additional points you may need to consider:

  • We typically use facial cues and body language to manage our conversations: for example, who talks when. If these cues are absent, as on conference calls, then we need to agree protocols for identifying who should talk next, handling interruptions, and so on.

  • Technical glitches can seriously reduce the effectiveness of the interview. It pays to set up the technology well in advance, and to spend some time practising with it. This is especially true for online conferences which require software to be downloaded or installed.

  • Likewise, transmission delays (especially common on videoconferences) and voice quality issues (e.g. when using mobile phones or cheap Voice over Internet Protocol (VoIP) services such as Skype) can affect the effectiveness of interviews. Again, it pays to practise with the service in advance.

  • If these technical factors combine with cultural factors (strong accents, need to work in non-native language, different norms for handling interruptions), the interview can be doubly difficult. It may be better to be less ambitious with the technology (conference call rather than videoconference, say), reschedule the interview until everyone can access a landline, and send out preparatory materials well in advance.


Most of this comes down to preparation. Electronic communications need more preparation than face-to-face communications because they make it harder to deal with any problems that might arise. In many organizations, these communication modes are now the norm and hence people accommodate them well, but there are still a significant number of organizations and projects where this isn’t yet the case.

Recording Notes

Note taking is an important part of interviewing. The way we take notes influences the interview itself, and well-organized notes make subsequent analysis much easier. In situations where a strong evidential chain is required (especially when a project is subject to litigation), poor note taking can seriously damage the utility of our findings. As we develop the interview protocol, therefore, it’s worth thinking about what notes we will take and how we’ll take them.

Looking first to what notes to take, I favour taking copious notes, recording as much as possible of what I hear and observe during the interview. There are a number of reasons for this:

  • It can be very hard to know what is important. What sounds like a throwaway comment in one interview can become significant in the light of subsequent interviews. So it’s worth recording the comment.

  • Quotes from members of the project team are often very powerful in the final report: they make issues seem more real. Extensive notes make it more likely we’ll have relevant comments to back our analysis.

  • Detailed notes make it easier to identify small inconsistencies within and across different people’s stories. Many of these inconsistencies will be unimportant, but some of them may provide a lead into important issues. (This is especially true if people are trying to withhold information.)

  • Recording selectively can reinforce biases. If we record some things and not others, interviewees may start to notice and steer their answers towards the things we record. This can mean we don’t hear about certain issues.

  • Taking notes can be a way to buy thinking time. As I record the information, I also gain some time to think about it and identify follow-on questions.


Of course, this increases my subsequent transcription and analysis effort, and can make it difficult to engage fully with the interviewee, but I generally find this note taking to be invaluable.

Coming to how we take notes: there are many ways to do this – within boxes on checklists, in notebooks, direct into a computer, into a tape recorder. Here are some of the factors that affect the way we take notes:

  • Recording against a checklist works well when conducting directive interviews. I find it much less useful for non-directive interviews, where we rarely follow the anticipated order of questions.

  • Recording electronically (tape, MP3 or otherwise) allows us to go back to the raw data to clarify exactly what was said. However, transcribing and analysing recordings entails substantial effort (far more than transcribing handwritten notes), and even then the recording won’t necessarily capture body language and facial expressions. Finally, some people may be concerned or self-conscious about being recorded in this way.

  • Recording notes direct to a computer reduces subsequent transcription effort. That said, I find that trying to use a keyboard and screen pulls my attention away from the interviewee.

  • That brings us back to pen and paper, my preferred medium. With practice, it’s possible to capture the gist of what’s said during the interview as well as supporting observations and interpretations. I’d then transcribe these to electronic format as soon as possible after the interview, both to create a record that can be shared with the interviewee and review team, and because my rapidly taken notes tend to be fairly illegible. This transcribing takes time, but it helps me to reflect on what was said during the interview and hence commence my analysis.

  • While taking notes, it’s worth separating facts from interpretations. A common way to do this is to separate our notes into two columns. In one column, we record what people say and what we observe. In the other, we record our corresponding interpretations. For instance, that someone hesitated frequently while they described a process is a fact; our interpretation might be that they don’t appear to understand the process. By separating fact from interpretation, we make it easier to confirm the former with the interviewee. We also make it easier to keep the two separate during our analysis.
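The two-column separation of fact and interpretation can be sketched as a simple record when transcribing notes electronically. The class and field names here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class NoteEntry:
    fact: str                 # what the interviewee said, or what we observed
    interpretation: str = ""  # our reading of it, kept clearly separate

notes = [
    NoteEntry(fact="Hesitated frequently while describing the baseline process",
              interpretation="May not fully understand the process"),
    NoteEntry(fact="Said sign-off takes 'about two weeks'"),
]

# When confirming notes with the interviewee, only the fact column is shared;
# interpretations stay with the review team for later analysis.
facts_to_confirm = [n.fact for n in notes]
```

Keeping the two fields distinct in the transcript makes it mechanical to extract just the facts for confirmation, which supports the post-interview confirmation step described earlier.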


One final point on note taking: if a review is legally sensitive in any way (e.g. performing a health check on a project where the client/contractor relationship has broken down; or if you suspect that illicit activity has taken place), you should take legal advice on how to record and retain your notes. They may be discoverable in any subsequent legal proceedings: failure to collect and retain them appropriately could prejudice those proceedings.
