GPM First
Chapter 7 of Communicating Projects (978-1-4094-5319-2) by Ann Pilkington

Research and Evaluation


Communication as a discipline has moved on considerably in recent years, from a ‘seat-of-the-pants’ soft skill to something that is planned strategically and contributes to business benefits. Without effective research and evaluation, communication is set to fail.

There are three roles for research in communication during the project lifecycle, which are also illustrated in Figure 7.1:

  • input research, which informs the communication strategy;

  • monitoring research during the implementation of communication activity, to check whether it is working;

  • close-out research: evaluation of the communication, feeding into lessons learned.


Being able to evidence the success of communication activity is what sets good communication practice apart from bad, and good communication starts with research.

This chapter makes the case for research and sets out best practice in research techniques. It acts as a guide to selecting the most appropriate research approach for the task in hand and addresses some of the blockers to conducting research in project communication. Research can be highly scientific and many excellent books have been written about it, so this chapter can only provide an introduction to the topic. Nevertheless, there is enough guidance given to enable the project communicator to carry out effective research and evaluation.

Figure 7.1 Different stages of research for project communication


Input Research

Research needs to happen right at the start of strategy formation. It is hard to set a sensible measure within an objective if the current position is not known. Of course, sometimes the current position will be obvious. If the project is in the initiation phase it is probably safe to assume that awareness is minimal. This doesn’t mean, however, that research isn’t needed. It will be helpful to understand stakeholder attitudes before embarking on a communication programme. For example, if the project will be changing ways of working, is there a general acceptance that this needs to happen? Or have stakeholders already been vocal on similar changes? An objective to raise awareness alone isn’t appropriate if stakeholders are likely to be hostile; there will need to be objectives around attitude change as well.

Research at the input stage has two roles:

  • Understanding the starting position: this will be achieved through a communication audit.

  • Setting benchmarks: using research to inform objective setting.


Communication audits are a sensible and fairly essential first step for the project communicator. An audit should seek to understand factors including:

  • which existing channels work best for the stakeholders that have been identified;

  • any gaps – are there stakeholders for whom no channel exists at present?

  • what stakeholders presently think, do or feel;

  • the type of communication content that stakeholders are interested in;

  • the methods of communication preferred (for example electronic, face to face, print or social media).


The communication audit stage will draw on a number of research methodologies. Original research will probably be required, but existing knowledge – known as ‘desk research’ – should be exploited first, such as lessons learned logs and research done by other projects and programmes. Do this desk research first and then use original (‘primary’) research to fill any gaps.

Carrying out original research sounds daunting, but it doesn’t need to be: it can be as simple as calling up some stakeholders and asking them for their views, or drawing on the knowledge of colleagues on other projects or in central corporate communication. Where the project is part of a programme, check what knowledge exists at programme level to save duplication of effort.

Research for Monitoring and Evaluation

The communication function needs to be able to demonstrate return on investment. This doesn’t have to mean a financial return. On projects, the communication function is there to support the delivery of milestones and the achievement of benefits, so it needs to be able to demonstrate that it has done that. If the function can’t evaluate and prove its success why should anyone ever view it as more than a function that simply ‘sends out stuff’?

Across the communication industry there is much debate about the best way to evaluate communication activity. In terms of external communication – public and media relations – evaluation has often been based on measures such as ‘advertising value equivalent’ (known as AVE) or ‘opportunities to see’ (known as OTS). AVE is a tool that measures media coverage: for example, a press cutting’s physical size is measured and a formula applied to work out how much that piece of coverage would have cost to buy as advertising. Opportunities to see draws from advertising and sets out to estimate how many people may have seen a piece of coverage. The issue with both of these evaluation methods is that they say nothing about what was actually achieved. For example, did consumers look more favourably on a product or company as a result of reading an article or seeing a TV interview? Of course, being confident that the communication activity alone achieved a result is difficult to judge: many other influencing factors are in play, such as the experiences of friends and family, personal experience or associated advertising. While these debates are ongoing in the world of public relations, there are useful lessons here for the project communicator, who should avoid taking the same limited view.
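To make the limitation concrete, an AVE calculation is nothing more than space multiplied by the advertising rate for that space. The sketch below is purely illustrative – the cutting size and rate are invented, and real rates and units vary between publications:

```python
def advertising_value_equivalent(column_cm: float, rate_per_column_cm: float) -> float:
    # AVE: what the same physical space would have cost as paid advertising.
    # Note what it does NOT capture: whether anyone read the piece,
    # believed it, or changed their view as a result.
    return column_cm * rate_per_column_cm

# A hypothetical 25 column-cm press cutting at £40 per column-cm:
print(advertising_value_equivalent(25, 40.0))  # → 1000.0
```

The arithmetic is trivially easy, which is exactly why the measure is attractive – and exactly why it says nothing about outcomes.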

Project communication should not be judged only by how much activity is undertaken, how nice the posters look or how busy the team seems to be. While there is some value in knowing how much activity has been carried out, this is no indicator of quality or outcomes.

Evaluation, like the objectives it is designed to evaluate, should address both outputs and outcomes. An output measure might be how many newsletters were issued or how many visits there have been to a web page. Measuring outcomes involves looking at whether stakeholders thought or did something differently as a result.

Evaluation matters for a number of reasons:

  • When done on an ongoing basis, it will identify communication activity that isn’t working and enable a change to be made before it’s too late.

  • It feeds into project lessons learned activity, which is more meaningful and useful when based on evidence.

  • It proves the worth of communication activity, helping it to achieve the status that it deserves and needs in order to deliver more strategic activity leading to better outcomes.


However, a lack of evaluation is not surprising; there are a number of blockers to effective evaluation of communication and these are discussed in more depth below:

  • not knowing what to measure;

  • lack of skill in research methods;

  • a fear that if something is shown not to be working this will be looked on negatively by the project;

  • no interest or support from project leadership;

  • no time;

  • no budget.


Knowing What to Measure

The answer to the question ‘what to measure?’ is simple: head back to the objectives that were set in the communication strategy – these are what need to be measured. This emphasises the importance of setting SMART objectives. If the objective has been set properly, it will be clear what needs to be researched. If the objective is for a percentage of a stakeholder community to believe that the project is well managed, then the task is to find out whether this is what they think, both during and after the communication activity has taken place.

Getting Skilled in Research

Desk research may reveal that other projects have done something similar or a central corporate communication team may have data that can be drawn on – for example, from an organisation-wide staff survey or stakeholder and media research.

If nothing exists, then ‘primary research’ will need to be conducted, which essentially means doing original research specifically for the task in hand.

There are different types of research methodology that can be used (see Table 7.1). Each has its strengths and limitations and it is important to select the approach that is most appropriate to the situation. (Methodology is the term given to the broad approach that will be used for the research, for example qualitative or quantitative. The research method is the actual tool that will be used, for example a survey or a focus group.)

Table 7.1 Different types of research methodology

Type of research: Qualitative

Concerned with words and interpretation of what is said. The person doing the research is usually quite involved.

Good for: in-depth understanding, analysing feelings and attitudes. Can help to explain why something is happening.

Associated research methods: focus groups, interviews.


Type of research: Quantitative

Concerned with figures and facts. The person doing the research is usually distant from the subject being researched.

Good for: large-scale research, finding out how many people have done something, for example hits on a website. Can help to explain what is happening.

Associated research methods: surveys.

Water cooler research

Anecdotal evidence is useful too – there is often no substitute for the intelligence gathered at the water cooler! Treat it with caution, though, because the views expressed may not be representative of the majority. Nevertheless, this type of informal ‘temperature check’ can be helpful and provide themes to explore in more detail through more structured research.

Quantitative research: surveys

Surveys are good for:

  • capturing a lot of data quickly;

  • analysing data quickly and cheaply;

  • finding out what is happening.


With surveys, the most important work happens at the start, with the design of the questionnaire. Get this wrong and the data will have no value, either because it doesn’t provide the information needed or because people are confused and unable to complete it. Test the survey on some colleagues first and then on people who know nothing about the project; it is worth spending some time to get it right. Really think about what information is needed and make sure that the questions will actually provide it. Some demographic information can be collected, for example job title, location and age. This can help to identify differences between groups of stakeholders, which can be very useful when it comes to designing communication activities. Once the communication activity is under way, it will help to identify any areas where the communication may not be working as well, for example within a particular set of stakeholders or geographical location.

Surveys can be done on paper or using online tools. There are lots of free online tools available, although free versions will have limitations. IT compatibility will need to be checked, particularly if working in a secure environment where some outside systems may be blocked.

Tips for good surveys:

  • Keep it short – ten questions is probably the maximum.

  • Use closed questions as far as possible and provide answers from which people can select. Open questions (where people can write their own answer) take longer to analyse and aren’t as well suited to a survey method.

  • Write an engaging covering note that includes information on the number of questions and approximate completion time.

  • Use Plain English – avoid project jargon and business speak.

  • Let people know whether it is confidential and how the data is going to be used.

  • Offer to share the results with the people that you asked to respond.

  • Set a time limit so people know the date by which they must complete it.


A survey can go to everyone in a stakeholder group, but it doesn’t necessarily have to: a ‘sample’ of the population can be selected. However, if the population is small, it makes sense to go out to everyone. Best practice on surveys is that a sample should be a minimum of 30 people (Denscombe 2010). So, if the communication objective is to raise awareness of something – perhaps a new IT system – among finance staff within an organisation, it would make sense to issue a survey to all finance staff. However, if the objective concerned all employees, it may be difficult to get a survey to everyone, so the survey could be sent to a sample group of staff. Not everyone will respond: response rates to surveys vary and can be as low as 10 per cent. The response rate will influence the validity of the results – clearly the more responses received the better – so if response rates are low, be careful when drawing conclusions from the results; additional checks may be needed.
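The arithmetic here is worth doing before sending anything out. As a rough sketch – the 30-response minimum is Denscombe’s, while the 10 per cent response rate is only an assumed worst case – the number of invitations needed for a target number of responses can be estimated as:

```python
import math

def invites_needed(target_responses: int, expected_response_rate: float) -> int:
    # Invite enough people that, at the expected response rate,
    # the target number of completed surveys is still reached.
    return math.ceil(target_responses / expected_response_rate)

# Denscombe's minimum of 30 responses at an assumed 10 per cent response rate:
print(invites_needed(30, 0.10))  # → 300
```

In other words, a pessimistic response rate can turn a 30-person sample into a 300-person mailing list – which may itself argue for surveying the whole population when it is small.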

Qualitative research: interviews

Interviews are good for:

  • understanding why something is happening;

  • in-depth analysis of a situation;

  • exploring in depth something that may have been revealed in a survey;

  • revealing themes that can be tested through a survey.


Interviews work well when they are semi-structured. This means having a guide to the questions to be asked, but being prepared to explore other themes if they arise. An interview guide contains some key questions that should be asked in each interview to ensure consistency. However, if a new theme arises in one interview, it can be incorporated in the guide for future interviews.

Tips for good interviews:

  • Pick a suitable setting – in the office with the phone ringing and other distractions isn’t conducive to a good interview.

  • If conducted by phone, ensure that the time has been booked with the interview subject so that they have no distractions.

  • Ensure the interviewee knows in advance what the interview is about, how long it will take and how their views will be used. Offer and respect any requests for anonymity.

  • Record the interview if possible – taking notes will be difficult and get in the way of asking questions. Get the permission of the interviewee to do this; some may not be comfortable being recorded, and this should be respected.

  • Put personal views to one side and try not to lead the interviewee.

  • When the questions have been covered, ask the interviewee if there is anything additional that he or she would like to cover.

  • Offer to share the final findings.


Qualitative research: focus groups

Focus groups are good for:

  • getting a consensus view;

  • prompting debate;

  • hearing a number of different views.


Focus groups need careful facilitation to ensure that everyone’s voice is heard. That can mean managing those who are more confident to speak out and encouraging those who are less inclined to speak up.

Tips for good focus groups:

  • Think about having two people to run the session – one to facilitate and the other to take notes.

  • Eight to ten people is a good number.

  • Use a venue that is conducive to the session and that is away from office distractions.

  • Ensure that everyone’s voice is heard.

  • Plan in advance how you are going to capture the outputs – for example, will you record verbatim comments, or capture themes on flip charts as you go?

  • Put your own views and feelings to one side.


Desk and document research

Using existing research and documents is a cost effective way of doing the initial input research that is needed to inform the communication strategy. There are lots of possible sources:

  • lessons learned logs;

  • research from other projects or programmes;

  • research available from a central communication team such as staff survey results and opinion surveys conducted with stakeholders.


While this form of research is a valid and sensible approach, it needs to be treated with caution: the validity of the research and documents being used should be checked. When doing desk research ask:

  • Are the findings presented based on a well articulated and justified research strategy?

  • If just conclusions are presented, are the actual research findings available to review? Make sure that the findings are based on evidence.

  • Did the author of the document have a particular objective in mind when writing it? Was the research presented in a particular way in order to achieve a predetermined outcome? If so, this will have an impact on how useful the research is to others.

  • Did the original researcher acknowledge any limitations to the research? If so, consider whether this influences the usefulness of the findings.


The Chartered Institute of Public Relations Measurement Matrix for internal communication

A useful summary of measurement principles and practice for internal communication has been developed by the UK Chartered Institute of Public Relations special interest group, CIPR Inside. It provides a practical guide to measuring and evaluating communication. See Figure 7.2.

Figure 7.2 The CIPR Inside Communication Measurement Matrix


Presenting the Findings

Everyone wants to be seen to be doing a good job, so it is understandable that there can be concern about presenting research results that show something isn’t working. However, thoughtful presentation of the findings should help to avoid their being met with criticism. This means tackling head-on any problems that have been found, but also explaining how they are going to be addressed. It is always better to present solutions than problems.

Research findings should be presented honestly; after all, the whole point of the exercise is to ensure that the communication is successful. Reporting that everything is going well when it isn’t (or when nobody really knows) will undermine the credibility of the communication function when the project fails to hit its milestones or deliver its benefits because of ineffective communication. It also does a disservice to future projects that may want to draw on the lessons learned, because the same mistakes will be made and the success of those future projects put at risk.

It can be tempting to look for and present only the positives; after all, nobody wants what may be perceived as failure to be exposed. However, this will be of no help to the project. The objective of research is to help ensure that communication will be effective, and ignoring problems won’t achieve this. The key is to present solutions alongside any issues and to demonstrate how the intervention has been informed by the research. Of course, the communication being off track may be partly or entirely due to circumstances outside the communication function’s control. Picking this up through research and addressing the situation is good practice; consider whether what has been identified should be included in the issues log. When the communication function presents an evidence-based and planned approach it is more likely to be taken seriously, and can even set the standard by which other functions are judged.

When reporting the findings of research and evaluation there are three important factors to include:

  • explaining and justifying the methodology chosen;

  • explaining and justifying the sample – who was surveyed, interviewed and why?

  • the limitations of the research.


These points matter because they are areas where others could criticise the research. While criticisms may be entirely valid, covering these three points addresses any criticism head on and makes it more likely that the work will be respected, as will the author. There is an additional, wider benefit too in that the work will be of more value to others in the future as part of lessons learned.

Those reading the findings of the research may not be familiar with research methodology so it can be helpful to set out the characteristics of the approach chosen and explain why it was considered the most suitable methodology and/or method.

A discussion of limitations simply means acknowledging anything that may have influenced the results. For example, if the number of responses to a survey was quite low, this should be acknowledged so that those reading the findings understand that they need to be treated with some caution. The research method itself will also have limitations: with interviews and focus groups, for example, there will always be the possibility that the interviewer or facilitator has had an effect on what has been said, even if this wasn’t the intention.

Gaining Support for Research and Evaluation

Support for research and evaluation can be gained during the strategic planning stages; it is all part of objective setting. If the project leadership is in agreement with the objectives set, they should be encouraged to hold the communication function to account for delivering against them. This can seem daunting for communicators, but it demonstrates that communication is there to contribute to project success and is a measurable activity, rather than a soft skill that nobody really understands and that can be seen as dispensable.

Finding Time for Research

Although research takes time and other resources, it will actually save time in the long run. This is because research at the input stage prevents the project embarking on communication activity that is unlikely to deliver the required outcomes. As the communication strategy is implemented, research will help to identify the activities that are less effective or ineffective and these can be stopped and resources diverted into something more effective. So the question should not be whether the project can afford the time and money to carry out research, but whether it can afford not to.

Research on a Budget

Carrying out research and evaluation of communication need not be time consuming or costly, but to skimp on resources for this important area of communication practice is a false economy. What is the point in carrying on with a communication activity that isn’t working? That is where the real waste of resources lies. Setting good objectives for communication activity and measuring against them will often result in less communication activity being undertaken, but with the activity that is done being more likely to produce the desired results. That has to make good business sense.

The case for research can be made as part of the communication strategy with budget or resources being allocated to it. There are a number of ways of making research cost effective:

  • ‘Piggybacking’ on research being conducted by other projects, or centrally within the wider organisation (where relevant). It may be possible to have questions added to a survey that somebody else is doing, for example. For research outside the organisation this could be done through an ‘omnibus’ survey, which collates questions from a number of sources into one survey, making it much more cost effective.

  • Using free online tools for surveys. As already mentioned, there are a number of free online survey tools available. (Ensure that whatever is chosen is compatible with the systems that stakeholders are using.)

  • Surveys are a quick and cheap way to gather data – particularly when using free online survey tools. However, a research method should not be chosen based on cost alone – it has to be right for the research question.

  • Could social media be used? The key is to ensure that feedback is gathered in a structured way to make analysis easier.

  • Use project team colleagues to gather feedback as part of day-to-day stakeholder engagement.



This chapter has explained the important role that research plays in the communication planning process and made the case for effective evaluation of project communication. Evaluation needs to look at outputs (how much activity was carried out) and outcomes (what stakeholders did as a result). The results of any evaluation research should always be presented honestly to avoid wasting resources on ineffective activity and to ensure that lessons learned logs for future projects are genuinely useful.
