Chapter 3 of The Strategic Alliance Handbook (978-0-5660-8779-0) by Mike Nevin

CHAPTER 3

Benchmarking Alliance Relationships

There is no doubt that for many people the most pragmatic use of the alliance best practice framework is the ability it gives them to benchmark their relationships against an objective standard, answering such questions as:

  1. How is the individual relationship performing?

  2. What is going well?

  3. What is going badly?

  4. How do we compare with our industry?

  5. How do we compare with best in class?

  6. What would we have to do to improve?

 

For those organisations that benchmark their relationships regularly, additional questions can be asked and answered:

  1. What is the change (good or bad) since the last benchmark?

  2. What has been the commercial impact (on our bottom line) of the changes in the benchmark score?

  3. What effort have we had to expend to achieve the commercial results?

  4. Cost/value – was the effort worthwhile in commercial terms?

 

To help practitioners decide whether an alliance benchmarking programme would be useful for their organisations, this chapter discusses the following questions:

  1. Why is it important to benchmark your alliance relationships?

  2. What should the process of benchmarking include?

  3. What should the output look like?

  4. What are some of the practical issues in benchmarking alliances?

  5. What are the benefits of instigating an alliance benchmarking programme?

 

Why is it Important to Benchmark Your Alliance Relationships?

As far back as 2002, research was conducted by Ard-Pieter de Man and Geert Duysters (both at the time at the University of Eindhoven) to assess the impact that measuring alliances can have on their performance.[7] The research gave an insight into the most valuable tools alliance managers can use. Their list (in order of importance) is replicated below. Companies with alliance success rates of 60% or higher reported using these tools, whereas companies reporting success rates lower than 40% did not.

  1. alliance database;

  2. joint evaluation of alliances with the partner;

  3. standard partner selection approach;

  4. intranet – employees have access to company specific alliance resources;

  5. responsibility for alliances lies with the strategy function;

  6. transfer of knowledge about national differences to international alliances;

  7. alliance managers exchange experiences;

  8. alliance department;

  9. alliance managers;

  10. evaluation of individual alliances;

  11. alliance metrics.

You can see that joint evaluation of alliance performance with the partner is rated as the second highest factor contributing to success (incidentally, the highest-rated factor was the existence or otherwise of an alliance database). You might also notice that the definition of the measuring process includes the phrase ‘joint’ – in other words, it is important to discuss and agree the scores with your partner(s) before taking any remedial action.

The reason this is important is that the process elicits the hidden areas of misalignment in the relationship, where one partner has a markedly different view of the relationship to the other. Left unattended, these areas of misalignment will damage the relationship because both/all parties will have a different view of what constitutes reality.

An example might help to make this clearer. Imagine that two partners agree to assess their alliance relationship and that they both agree that trust will be an important criterion in this measurement process. What happens if one partner scores trust high (let’s say 80/100) and the other scores it low (let’s say 15/100)? Without an objective assessment of the factor, each partner might assume that the other sees the relationship as they do. In the present circumstance one partner is sharing information openly and trusting the other partner’s attitudes and behaviours, while the other is more suspicious and is sharing far less information with the trusting partner.

In such a situation the relationship will progress at the pace of the lesser score. What is required is a process by which both parties agree to make public their feelings and discuss the implications of the widely differing scores. Indeed, this is the first stage in assessing benchmark scores that ABP would recommend (see later in this chapter).
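To make the arithmetic of such a misalignment concrete, the short Python sketch below compares each partner’s scores and flags large gaps for joint discussion. It is illustrative only: the factor names and the 30-point threshold are assumptions, not ABP’s actual rules.

```python
# Illustrative sketch only: compare the two partners' 0-100 scores for the same
# success factors and flag large gaps for joint discussion. The factor names
# and the 30-point threshold are assumptions for illustration, not ABP values.

partner_a = {"Trust": 80, "Due diligence": 70, "Executive support": 55}
partner_b = {"Trust": 15, "Due diligence": 65, "Executive support": 60}

MISALIGNMENT_THRESHOLD = 30  # gap (in points) above which a joint review is needed

for factor, score_a in partner_a.items():
    gap = abs(score_a - partner_b[factor])
    status = "DISCUSS" if gap >= MISALIGNMENT_THRESHOLD else "aligned"
    print(f"{factor:20s} A={score_a:3d}  B={partner_b[factor]:3d}  gap={gap:3d}  {status}")
```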

Should any further evidence be necessary, even a cursory examination of the literature available on the Internet shows without doubt that benchmarking is an important tool in an ongoing business improvement programme.

Also referred to as ‘best practice benchmarking’ or ‘process benchmarking’, benchmarking is used in management, and particularly strategic management, whereby organisations evaluate various aspects of their processes in relation to the processes of best practice companies, usually within a peer group defined for the purposes of comparison. This then allows organisations to develop plans for how to make improvements or adapt specific best practices, usually with the aim of increasing some aspect of performance. Benchmarking may be a one-off event, but is often treated as a continuous process in which organisations continually seek to improve their practices.

In 2008 the Global Benchmarking Network, a network of benchmarking centres representing 22 countries, commissioned a comprehensive survey on benchmarking. Over 450 organisations from more than forty countries responded. The results showed that:

  1. Of 20 improvement tools, mission and vision statements and customer (client) surveys were the most frequently used (by 77% of organisations), followed by SWOT analysis (72%) and informal benchmarking (68%). Performance benchmarking was used by 49% and best practice benchmarking by 39%.

  2. The tools that were likely to increase in popularity the most over the next three years were performance benchmarking, informal benchmarking, SWOT analysis and best practice benchmarking. Over 60% of organisations that were not currently using these tools indicated they were likely to use them over the next three years.

 

There seems little doubt, then, that benchmarking processes and performance in general business areas is beneficial and leads to improved results. ABP’s contention is that the same applies in the alliance management field, and that companies should benchmark their relationships regularly (at least once a year).

What Should the Process of Benchmarking Include?

If we agree that benchmarking our alliances is a beneficial and productive exercise to complete, then naturally our attention as practitioners turns to the question, ‘How should we design and execute the benchmarking exercise?’

There is no single benchmarking process that has been universally adopted. The wide appeal and acceptance of benchmarking has led to the emergence of a variety of benchmarking methodologies. One seminal book is Robert Boxwell Jr’s Benchmarking for Competitive Advantage.[8] Robert Camp (who wrote one of the earliest books on benchmarking in 1989[9]) developed a 12-stage approach to benchmarking:

  1. Select the subject.

  2. Define the process.

  3. Identify potential partners.

  4. Identify data sources.

  5. Collect data and select partners.

  6. Determine the gap.

  7. Establish process differences.

  8. Target future performance.

  9. Communicate.

  10. Adjust the goals.

  11. Implement.

  12. Review and recalibrate.

 

Looking at the list above, you can see that it represents a considerable effort on the part of individual organisations. This is why many organisations turn to the ABP Database, in which steps 1, 2, 4, 5 and 6 have already been determined for them.

Looking at the list above also helps to highlight the distinction between conducting healthchecks and conducting benchmarks. Most organisations that are reasonably serious about conducting alliance programmes will perform some form of regular healthcheck of their alliances, but the critical factor in benchmarking is the capture and collation of external performance data – something that is very difficult for individual companies to accomplish.

Let’s now consider how the alliance best practice benchmarking process works with reference to the standard process identified above.

 

1 Select Subject

This is relatively easy because the subject is the alliance relationship in question. However, care must be exercised here to determine clearly the agreed scope of the relationship. For example, many organisations have alliance relationships in place in multiple geographic locations or individual countries; while it is possible to conduct a global benchmark for the relationship, the results are generally less valuable than those of a country-by-country benchmark. This is because the particular aspects of the relationship under review will vary dramatically from country to country.

 

2 Define the Process

Again, this is relatively easy if the company is using the ABP approach, in which case the process is as follows:

  1. Decide the scope.

  2. Decide the key stakeholders to provide data from both/all sides of the relationship.

  3. Decide the range of CSFs to be measured.

  4. Capture the data, either by interview or online questionnaires.

  5. Benchmark the results against the ABP Database.

  6. Produce the benchmarking report.

 

 

3 Identify Potential Partners

This should be simplicity itself, because presumably the exercise is being conducted partnership-by-partnership.

 

4 Identify Data Sources

The identification of suitable data sources is important in order to capture balanced data. It is likely that the data will be captured at three levels: Strategic, Managerial and Operational.

 

5 Collect Data and Select Partners

This can be done either online or by telephone or face-to-face interviews. Face-to-face interviews usually provide the highest-quality data but take the longest (and are therefore more costly), whereas online data capture is quick and easy (but note that interviewees completing online questionnaires may need guidance in advance to help them understand the answering process).

 

6 Determine the Gap

This gap represents the difference in scores between the relationship in question and the comparator set (either industry best in class or world class).
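As a rough illustration of this step (a sketch only, with invented factor codes and scores rather than real ABP Database values), the gap for each common success factor is simply the comparator’s score minus the relationship’s own score:

```python
# Sketch: the gap per common success factor against a chosen comparator
# (e.g. industry best in class). Factor codes and scores are invented.

our_scores = {"Co2": 55, "S1": 70, "Cu3": 40}
best_in_class = {"Co2": 75, "S1": 72, "Cu3": 68}

gaps = {csf: best_in_class[csf] - our_scores[csf] for csf in our_scores}

# Largest gaps first: these are the natural candidates for improvement planning.
for csf, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{csf}: {gap} points behind best in class")
```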

 

7 Establish Process Differences

This step involves a deeper understanding of ‘What is the comparator doing that we are not doing to achieve such higher scores?’

 

8 Target Future Performance

Given a good understanding of current resources and the ability to deploy those resources, it should be possible to target certain performance improvements and to plan accordingly. Obviously, the lower the relative score, the greater the degree of possible improvement.

 

9 Communicate

Communication should involve the whole team completing the benchmark questionnaire and those affected by the proposed improvement actions. A useful additional tool here is the RACI process or RACI charting.
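For readers unfamiliar with it, RACI charting simply records, for each action, who is Responsible, Accountable, Consulted and Informed. The sketch below shows a minimal example; the actions and role assignments are invented for illustration.

```python
# Sketch of a simple RACI chart for benchmark follow-up actions.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
# The actions and role assignments are invented for illustration.

raci = {
    "Share the benchmark report":   {"Alliance manager": "R", "Sponsor": "A", "Partner team": "I"},
    "Agree improvement targets":    {"Alliance manager": "A", "Sponsor": "C", "Partner team": "R"},
    "Embed the new review cadence": {"Alliance manager": "R", "Sponsor": "I", "Partner team": "C"},
}

for action, roles in raci.items():
    assignments = ", ".join(f"{who}: {code}" for who, code in roles.items())
    print(f"{action} -> {assignments}")
```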

 

10 Adjust the Goals

Obviously, part of the benchmarking process involves adaptation for future improvement, so this stage identifies and documents clear goals that both sides will strive to achieve. Clearly, these goals may need adjustment if they are either (a) achieved or (b) prove too difficult to achieve.

 

11 Implement

In the benchmarking context, this means that the new approach or process for the CSF in question needs to be incorporated into the working practices of the relationship.

 

12 Review and Recalibrate

Although companies do not always conduct benchmarking exercises on an ongoing basis (usually annually), there is evidence that it would be beneficial to do so. In essence, this creates a continuous improvement process that steadily raises alliance performance and focuses each year on the lesser-performing areas.

A NOTE ON SCORING

Obviously the accuracy of the scoring in benchmarks is extremely important. In the ABP scoring model, scores are allocated from ‘0 = Not in existence, zero, none’ and so on up to ‘100 = Perfection’. We have found that scoring in this way allows individuals to exercise a degree of sophistication and nuance between different aspects of the issue in question that is missing from more simplistic scoring systems (for example, ‘1–5’ or ‘Totally Unsatisfactory–Fully Satisfactory’).

Giving interviewees a set of texts to represent each of the scores also improves the objectivity of the scoring.

Finally, it is useful to have some way of assessing data integrity in the scores posted (it would be useful, for example, to know whether the benchmark questionnaire was completed in 30 minutes or three minutes!).

The ABP system incorporates a series of data validation techniques to produce a data integrity score. This score represents the degree to which the organisation can rely on the data captured, and is usually expressed as a percentage close to 100%. So, for example, a score of 100% data integrity (somewhat unlikely) indicates that there is no conflict in any of the answers given. A score of 85% data integrity indicates that some answers may be questionable based on a comparison with a paired set of control questions in the questionnaire. Experience has shown us that data with a score of greater than 80% can reliably be used for key decisions.
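The ABP validation rules themselves are not described here, but the following sketch shows one plausible way a data integrity percentage could be derived from paired control questions; the tolerance, question codes and scores are assumptions for illustration only.

```python
# Sketch only: one plausible way to derive a data integrity score from paired
# control questions, in the spirit of the approach described above. The 20-point
# tolerance and the question codes are illustrative assumptions, not ABP's
# actual validation rules.

def data_integrity(answers, control_pairs, tolerance=20):
    """Percentage of control pairs whose two answers agree within the tolerance."""
    consistent = sum(
        1 for question, control in control_pairs
        if abs(answers[question] - answers[control]) <= tolerance
    )
    return 100 * consistent / len(control_pairs)

answers = {"T1": 70, "T1c": 65, "S4": 80, "S4c": 30}   # 0-100 scores
pairs = [("T1", "T1c"), ("S4", "S4c")]

score = data_integrity(answers, pairs)
print(f"Data integrity: {score:.0f}%")                  # 50% in this toy example
print("Usable for key decisions" if score > 80 else "Treat the results with caution")
```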

For an example of an ABP benchmarking questionnaire, see Appendix 6, ‘Useful Additional Resources’.

What Should the Output Look Like?

We at ABP believe that to be useful, a measuring process (and set of questions) should:

  • be holistic and integrated, representing all aspects of the relationship;

  • be simple to understand (intuitive);

  • provide striking insights;

  • reflect multiple dimensions of the relationship;

  • reflect both/all sides’ views;

  • be objective and numeric;

  • be rigorously researched;

  • be applicable to action planning;

  • be capable of being used for benchmarking;

  • enable action to be taken linked to the insights;

  • be simple to operate;

  • encourage involvement and commitment from all key stakeholders;

  • be amenable to regular refreshing.

 

As a consequence, we have designed our most common output graph to demonstrate benchmark findings as shown in Figure 3.1.

 
Figure 3.1 Example of an ABP benchmarking chart


The chart can be produced simply enough using Microsoft’s Excel program, and it is commonly known as a spider chart or spider diagram due to its resemblance to a spider’s web.

What the chart shows is the 52 common success factors measured on a scale of 0–100, with 0 at the centre and 100 at the outer rim.

The different dimensions of the benchmark are shown by the coding letter:

  • Co = Commercial

  • T = Technical

  • S = Strategic

  • Cu = Cultural

  • O = Operational.

 

The two lines show the scores of the two partners in the same relationship, with the areas of alignment and misalignment clearly identifiable. The third, outer line shows a chosen comparator assessment representing best in class for that particular industry (the example in the ABP Database of the alliance type most closely comparable to the relationship under examination).
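For readers who want to reproduce this kind of view from their own scores, the sketch below draws a comparable spider chart with matplotlib; the factor codes and scores are invented, and the real ABP chart plots all 52 common success factors.

```python
# Sketch: a spider chart of two partners' scores against a comparator, in the
# spirit of Figure 3.1. Factor codes and scores are invented; the real ABP chart
# plots all 52 common success factors.
import numpy as np
import matplotlib.pyplot as plt

factors = ["Co1", "Co2", "T1", "S1", "Cu1", "O1"]
series = {
    "Partner A": [60, 55, 70, 65, 40, 75],
    "Partner B": [45, 50, 68, 30, 55, 70],
    "Best in class": [80, 78, 85, 82, 75, 88],
}

angles = np.linspace(0, 2 * np.pi, len(factors), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close each polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, scores in series.items():
    ax.plot(angles, scores + scores[:1], label=label)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(factors)
ax.set_ylim(0, 100)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.show()
```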

Areas for potential improvement can be clearly seen, and improvement action plans can focus on one or more of the five dimensions, or could incorporate a range of the factors involved.

In practice, teams looking to develop improvement action plans would typically focus on the lower-scoring areas first.

For a full example of a typical ABP benchmarking report, please see Appendix 6, ‘Useful Additional Resources’.

What are Some of the Practical Issues in Benchmarking Alliances?

A number of practical issues, challenges or questions usually arise when conducting an alliance benchmarking exercise.

Can you benchmark across industries?

One of the advantages of benchmarking alliances to a set series of predetermined criteria is that comparison across industries is possible. Thus an alliance executive in the high-tech sector can learn a lot from how his alliance colleague in the pharmaceutical industry establishes and develops trust, for example.

But care needs to be exercised when looking cross-industry because (of course) industry characteristics will have a major bearing on the appropriateness or otherwise of the comparison.

Take, for example, common success factor Co2, ‘Due diligence’. This is the method by which organisations pick their prospective alliance partners; while the process is valid in both the high-tech and pharmaceutical business sectors, the actual conduct of the due diligence exercise, the complexity and the length of time it takes will vary dramatically.

The Debate Between Subjective and Objective

A common discussion point that arises when conducting a benchmarking exercise is the distinction between subjective and objective data capture. Detractors of the approach will claim that all the results are ultimately based on subjective data, and are thus flawed.

While it is true that the data captured are necessarily subjective (they are the individual views of the interviewees on which the assessments are based), it’s not fair to say that the process is not objective. The inclusion of clear scoring guidelines and the request for examples of the behaviour, process, task or approach being examined mean that in practice the results are usually seen as valid and representative by the vast majority of interviewees who take part in these exercises.

Another important consideration in this area is encompassed by the phrase ‘perception is reality’ – this means that even though one partner might feel that a particular common success factor scores highly, if the other partner does not, then that partner will act accordingly and the lower score is ‘the truth’ as far as the second partner is concerned. This is a very important insight into managing alliance relationships, and alliance executives should be aware of its existence at all times. It is not enough to be doing a good job; your partner needs to feel that you are doing a good job.

Is There any Industry Standard Data That can be Used for Comparison Purposes?

This is a question clients frequently ask, the idea being that there might be a ‘correct’ or usual score for a given factor in a given sector – for example, 65/100 for ‘business-to-business alignment’ in the fast-moving consumer goods sector.

Unfortunately, it is almost impossible to create industry standard scores because the range of variable factors affecting the outcome is so broad. For example, it can encompass:

  • the type of alliance concerned;

  • the stage the alliance has reached;

  • the relative maturity of both partnering organisations;

  • the economic climate in the industry sector in question;

  • the prime purpose of the alliance;

  • the collaboration skills of the executives involved on both sides;

  • the degree of senior executive support on both sides;

  • the existence (or otherwise) of dedicated resources for the alliance;

  • the degree of trust that is present in the relationship.

 

However, average scores for certain common success factors can be identified for sectors as a whole, which gives companies an opportunity to match their internal scores against industry norms, while also giving them insights into the alliance maturity expressed in that relationship.

What Should we Benchmark Against?

Another common question that clients ask is: ‘Who or what should we benchmark our alliances against?’ This question is easier to answer. An appropriate comparator set would be a group of alliances with the following characteristics:

  • The alliances should be in your business sector.

  • The prime purpose of the alliances should be the same as yours (for example, business development, growth, innovation or cost reduction).

  • The alliances should be of a similar size and complexity to your own.

  • The alliance set you use for comparison should include examples that are both better and worse than your situation.

  • The alliances you use for comparison should be in the same geographic location or country, or at least the same region (for example, Asia-Pacific, the Americas, Africa, Latin America, Far East or Middle East).

 
