Chapter 16 of Intelligent Internal Control and Risk Management (978-0-5660-8799-8) by Matthew Leitch

Helpful Alternatives to Unhelpful Ideas


This book describes an approach to risk control that emphasizes ideas like creativity, value, distributed use of multiple methods, and human skill alongside corporate processes. In this chapter I will consider some of the unhelpful ideas that can delay progress, for example by making people feel that they cannot put numbers on risk or that they must do risk control one way and no other.

These unhelpful ideas are linked to each other, encourage poor practices, and block better practices. We need to understand them, recognize the problems they cause, and know how to counter them.

Contrasting practices

Here is a caricature of an organization that is approaching risk control in the spirit advocated in this book:

The organization’s vision is that its people learn to manage risk and control better and that their increasingly skilled behaviour is supported by a range of appropriate and ever-improving processes, most of which are decentralized.

Typically, these processes involve going from unstructured analyses and designs to increasingly well-structured views based on explicit models. They begin early and continue throughout projects and other activities. Time and skill go into controls design activities.

Numbers and pictures are used to clarify risk perceptions and make rational decisions about controls, taking into consideration economic, cultural, and strategic factors as well as risk.

The approach is continuously improved using data and open, honest conversations. A by-product of this is continuous monitoring of the performance and contribution of risk control.

All management information is presented with its uncertainty shown in one of a number of approved ways, so uncertainty is never forgotten. Board members see all risk, but structured at a level and in a way that makes sense for them, and informed in part by analyses from others in the organization.

 

This is natural, pervasive, hard to ignore, open to alternatives, and difficult to dislike. Over time we should expect to see substantial changes to behaviour and processes resulting in increased value.

Now consider a contrasting caricature.

The organization’s vision is that all risk control is driven by a single mechanism that generates controls and is known as the risk management system. It is one giant process stretching throughout the company and is identical in every respect wherever it operates. Its crude mechanics are prescribed and must be followed.

Workshops were once held at which people called out risks, which were listed on a risk register as they were suggested and then rated for their probability of occurrence and impact if they occurred, both on a scale of None – Low – Medium – High.

The risks were then displayed on a probability-impact matrix coloured red, yellow, and green, and any risks on a yellow or red square needed control actions to be planned and taken until they moved to green. Controls were selected from the four Ts (Terminate, Transfer, Tolerate, or Treat) with no explicit design work at all and no further learning about risks planned.

The risk’s position on the matrix was the only consideration in deciding whether a control was worthwhile, and risk was treated as the only source of control requirements.

Subsequently, progress with risk has been assessed using a self-certification database that asks hundreds of individuals to confirm that they have operated their allotted controls during a period and that things are under control. The status of risk has been reported on special risk reports, separately from other management information, and considered in meetings dedicated to risk and control. The board has concerned itself only with the top ten risks.

On projects, risk analysis begins once the project has clear objectives and a basic plan, but ends during feasibility work because it is believed that at that point the risks should all be fully understood.

With this combination of techniques I would expect to find that the risk register database is a mass of ill-defined, overlapping risks with potential gaps (but too messy to assess); that more advanced forms of risk-thinking have carried on in secret or been driven out; that few good new ideas for control system improvements have been generated; that risk assessments would be inaccurate if they were not meaningless; that areas for improvement have been quietly covered up; and that life goes on as usual with risk control sitting in a dusty box, unloved in a forgotten corner.

Six strategies to promote better practices

Having better, more exciting ideas for control mechanisms is a big part of improving value from risk control. However, it is not the full story.

I believe that unhelpful and incorrect ideas are part of the reason why organizations sometimes implement practices similar to those in the second caricature. Many of these beliefs come from textbook theory, but not all.

However, the link between beliefs and actions is not straightforward. There are inconsistencies between beliefs and between behaviours and beliefs. Theories about risk and uncertainty are often controversial and hard to understand.

After decades of largely futile arguments I have concluded that debates in risk control, like politics, are usually too complicated and fraught with misconceptions to be resolved by conversation. Therefore, directly tackling those unhelpful beliefs should usually be a strategy of last resort.

However, there are several other strategies to try first, usually relying on the fact that most people are rational and intelligent if given the right conditions.

First Strategy: Just Describe the Design

The first strategy to consider should be simply to describe the control design you have in mind. This cuts the risk of activating any unhelpful ideas that might be present.

Probably the main reason that people employ any of the unhappy practices in the second caricature is that they are not aware of better alternatives. For many people, simply explaining the alternatives is enough to make them want to do something better instead.

Sometimes the new practices you are suggesting will be more consistent with a person’s beliefs than their current practices so there is a chance that they will see your design and immediately feel happy with it.

For example, it is common in guidance on risk management to talk of ‘identifying’ risks and to make no comment on what makes a good breakdown of risk and how to create one. This is accepted by many people and reflected in the practices sometimes adopted by organizations.

However, there is a puzzle. The word ‘identifying’ implies that the risks exist already rather than being chosen subsets of the uncertainty attached to our thinking about the world. If risks exist already then there is no choice about how to divide our uncertainty into risks, which is also implied by the lack of guidance on how to make choices.

All this implies that people who talk and act this way believe that risks are givens and so there is only one way to write a correct/valid list of risks in a given situation and with a given set of objectives (if you ignore alternative descriptions and orders). But, in fact, nearly everyone [28] believes that alternative lists are possible and some will be more useful than others. They believe this for a variety of good reasons.

In this example beliefs and actions are not consistent and it is actions that are lagging behind, so introducing a new set of actions may be all that is needed.

Another example concerns audit evidence on the effectiveness of controls. The received wisdom, repeated in textbooks, training courses, and regulations, is that controls effectiveness is evaluated by doing two things:

  1. establishing that the design of the controls is such that if they were operated as designed the system would be effective; and

  2. testing that individual controls have operated as intended.

 

However, other forms of evidence are also relevant, such as data on the actual level of inherent risk and process health statistics. These powerful forms of evidence do not appear in the standard theory and yet almost everyone instinctively understands that they are relevant to evaluating controls effectiveness.

Once again beliefs are ahead of actions, but in this case there is an explicit, established body of theory that, if people think of it, could stand in the way. This is a reason for not mentioning that theory and just letting common sense take over.

Second Strategy: Put the Design in a Helpful Context

Sometimes beliefs in one context are inconsistent with beliefs in another. For example, some people, when asked to give a probability number for an event that has not happened before, feel very uncomfortable and believe they cannot do it because they don’t know the probability. And yet, in the context of betting on the outcome of sports events, those same people are able to evaluate odds and place bets. In principle this is the same task, but in a different context.

This leads to the second strategy, which is to mention a context that will help people select the appropriate theory from alternatives they have in their minds.

When it comes to putting numbers on probabilities it is a good idea to liken it to betting on sports because this helps people get in touch with their beliefs about situations where there is high uncertainty but numbers are still used, and to good effect. It is a mistake to refer to school-book situations like tossing a coin or dealing cards from a well-shuffled deck. For most people these examples cue ideas that are quite different. Business situations are much more like betting on horse races than betting on dice.
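
To make the connection concrete, here is a minimal sketch, not from the book and with invented odds, of the arithmetic that links bookmakers’ odds to probabilities; anyone who can weigh up odds is already putting a number on a one-off uncertain event.

def implied_probability(odds_against):
    # Convert fractional odds 'against' (e.g. 3.0 means 3-to-1 against)
    # into the probability those odds imply for the event happening.
    return 1.0 / (odds_against + 1.0)

# A horse quoted at 3-to-1 against carries an implied probability of 0.25,
# even though this particular race has never been run before and never will
# be run again, so no long-run relative frequency is available.
print(implied_probability(3.0))   # 0.25
print(implied_probability(0.5))   # 1-to-2 against (2-to-1 on): about 0.67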

A variation on this second strategy is to point to other behaviours that people already perform that reflect supportive theory.

Third Strategy: Mention Supportive Beliefs

Many good designs for controls are based on simple observations and beliefs that most people agree with but that are inconsistent with the theory behind some competing controls.

If people are interested in the theory behind a design then another way to avoid activating unhelpful beliefs is to explain that theory, but avoid mentioning competing theories.

For example, the observation that risk control requirements vary enormously between different applications of control-generating controls implies that the controls used to generate controls in different applications will probably need to be different in some ways. No problem there, and who could disagree? However, if you start instead by saying that standardization of controls is not all it’s cracked up to be then theories about standardization will awaken and the conversation can soon be sidetracked into a maze of misconceptions about standardization, communication of risks, and building risk analyses at different organizational levels.

Similarly, the observation that people at different levels of an organization look at things from a different perspective implies that the ideal analysis of risk at each level will also be different (e.g. different levels of detail, different concerns, different models). Again, this is common sense. However, if you start instead with criticism of the idea of rolling up risks from one level for presentation to the next then the debate can quickly get bogged down in the assumption that the only thing different about the perspective at different levels is the amount of detail shown.

Fourth Strategy: Ask Questions to Firm up Weak Theory

On many issues related to risk and uncertainty most people do not have an opinion. They may be going along with some poor practice because they have been told it is expected without ever questioning seriously whether it makes sense. Not everyone needs to be an expert so there is little wrong with this.

It also gives us another way to get people thinking from helpful beliefs instead of incorrect or irrelevant theories. We can just ask them to think carefully and choose from a range of alternatives.

For example, according to most guidance and textbooks on risk management and internal control the purpose of these activities is to ensure that organizations achieve their objectives (which are givens, even though they may need some clarification). Indeed risk management and internal control can be designed on that limited basis, but if you have an idea for managing risk and uncertainty during the shaping of objectives then you need to argue otherwise.

Most non-specialists do not have a clear view on what they think risk control is for but you can help them form a view by giving them a free choice between options such as these:

  • Achieve original objectives: The role of risk/uncertainty management is to help the organization achieve the objectives/targets it set at the start of a year or a project. Uncertainty/risk management has no role in setting or revising those objectives.

  • Achieve given objectives: The role of risk/uncertainty management is to help the organization achieve its objectives/targets, though these may change during a year or project. Uncertainty/risk management has no role in setting or revising those objectives/targets.

  • Perform well: The role of risk/uncertainty management is to help the organization perform well, and that includes helping to set and revise objectives/targets.

 

Most people [29] choose the last option, perform well, and hardly anyone is tempted by the first, which is the literal interpretation of the textbook answer. So, in this case, there is no need to give arguments in favour of going beyond the traditional textbook view because, given a free choice, that’s what most people decide for themselves.

Here’s another example. People often write and talk about risk as if risks are almost the same as physical objects: singular things with their own existence much like an apple, orange, or number 85 bus. They talk of ‘identifying’ risks (like identifying a rare bird perhaps), do little to define the boundaries of a risk and separate it from others, and think it appropriate to rate the impact if the risk happened as if that impact has just one possible level.

Does this make sense? Does anyone genuinely believe that risks are like physical objects in these ways? Probably not, and this might be another situation where it is worth giving people a free choice between options such as:

  • Risks are external objects: Risks are a fact of nature, part of the world around us, and as such they exist already for us to discover, have their own natural definitions, and exist no matter how much information we have.

  • Risks are given uncertainties: Risks are a product of our uncertainties, and these arise from lack of knowledge as well as the inherent practical difficulty of predicting the world. However, we have no choice about how risks are defined.

  • Risks are uncertainties we can define: Risks are a product of our uncertainties, and these arise from lack of knowledge as well as the inherent practical difficulty of predicting the world. We can choose how we structure our uncertainty into risks.

 

I do not know what the popular answer would be but strongly suspect that the first, and most unhelpful, option would not be popular when subjected to concentrated thought.

Fifth Strategy: Direct Comparison

Suppose that, despite giving a positive rationale for a design and mentioning contexts in which that rationale is familiar to people, there is still resistance from someone on unfounded theoretical grounds. What then?

Or, suppose someone is arguing for a design and has given a rationale that includes misconceptions that could also block another, better design. What can be done?

Quite often these debates are not about whose idea is the one that should be implemented, but whose idea is even fit to be evaluated against the others with a view to implementation.

In this situation a reasonable approach is usually to argue for fair consideration of the alternative design options available. This might be done by pointing out that a design has worked elsewhere, or is liked by some reasonable people, or has been worked out carefully, or is worthy of consideration for some other reason. Often it involves pointing out that the thinking behind it is different but still a legitimate alternative opinion.

For example, someone who believes that ‘risk appetite’ is an inbuilt part of corporate personality may be reluctant to consider techniques for weighing risk in decisions that do not involve a risk appetite statement or similar system of thresholds. For them risk appetite exists naturally in the form of an upper limit and must be articulated if meaningful risk management is to be done. From this point of view working with a risk appetite is logically essential.

However, for others ‘risk appetite’ is a synonym of ‘risk limit’ and means a system of limits that is as much an invention as budgetary control (and works in basically the same way). From this point of view working on risk appetite is not essential and other techniques can be considered [30].

Controls designed from both perspectives deserve consideration. This is perhaps best done by looking at what would happen if the alternative control procedures were implemented because this approach side-steps the theoretical difference.

Another example concerns probability impact matrices (PIMs). These are two-dimensional grids where one axis represents the probability of a risk happening and the other axis represents some measure of the impact of the risk if it does happen.

PIMs have become so common and are so often promoted in books and regulations that some people have come to the point where they see PIMs as the only way to characterize risks, even arguing that other techniques are just PIMs in disguise, which they are not.

If you have an alternative to PIMs that you would like to try then the first stage is simply to get acceptance that alternatives could exist and that they could be at least as acceptable to users as PIMs. My research has shown that well-designed alternatives put up against PIMs in a fair trial are about equally acceptable to users, despite the greater familiarity of PIMs; cumulative probability statements, for example, proved about as acceptable to users as PIMs and were more popular with risk experts. New graphical forms of cumulative probability statements may achieve even higher usability, but that research still needs to be done.
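
As a rough sketch of what such an alternative might look like (the outcome figures below are invented, and this is not the exact format tested in my research), a risk can be characterized by cumulative probability statements of the form ‘there is an X per cent chance the loss will be Y or more’:

# One area of uncertainty described by an invented outcome distribution.
outcomes = [            # (loss in pounds, probability)
    (0,         0.60),
    (50_000,    0.25),
    (200_000,   0.10),
    (1_000_000, 0.05),
]

for threshold in (50_000, 200_000, 1_000_000):
    p_exceed = sum(p for loss, p in outcomes if loss >= threshold)
    print(f"Chance the loss is {threshold:>9,} or more: {p_exceed:.0%}")

# Chance the loss is    50,000 or more: 40%
# Chance the loss is   200,000 or more: 15%
# Chance the loss is 1,000,000 or more: 5%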

Sixth Strategy: Contradiction

The last resort is to engage in direct argument.

Good arguments exist to refute many unhelpful and incorrect ideas that block improved risk control but using them is not easy.

Common problem areas

Here are some common practical problem areas with suggestions on beliefs that may be contributing to them, and ideas on how to tackle them.

Messy Risk Registers

Risk registers are a very common way to format information about risks and controls but many have become a confusing mess of ill-defined items where the extent of unintended gaps and overlaps is impossible to determine.

This is the typical result of trying to analyse risk without having a model to which uncertainty attaches, but many people are unaware that using models is a valid alternative. I also suspect that another reason for not bothering more with clear definitions of risk items and not trying harder to progress towards structured models is a cluster of hazy, half-conscious beliefs about risks based on the notion that they are like physical objects.

The cluster of beliefs looks something like this: risks are like physical objects. They exist already. They have their own obvious, natural boundaries (like an orange has a skin) that separate them from each other. Therefore, beyond putting risks into categories, there is no need for structuring or decision-making about how to structure them. Nor is there a need to write down clear definitions of risks. As long as a good enough name is used other people will know what was intended.

As I suggested earlier, and illustrated with research, these beliefs are reflected in language and in the content of guidance but most people, if asked directly if they agree with them, will say they do not. Therefore just inviting people to think carefully for themselves about the true nature of risk/uncertainty should be helpful.

If we knew everything, would we ever face risk or uncertainty? Some believe that at least some aspects of the world are inherently impossible to predict. In other words, randomness is real. However, this applies at the sub-atomic, quantum level, and the debate is almost impossible to follow unless you understand the mathematics and principles of quantum physics very deeply.

In everyday life the reason we don’t know what will happen is just that we don’t know enough and even if we did we could not use all that information.

Even if we can’t agree on whether risk is purely a matter of ignorance it is certainly true that ignorance is the big part of it and something we can often work to reduce.

If you say ‘area of uncertainty’ instead of ‘risk’ it tends to help people to think differently about risk. It is a reminder that ignorance is a key part of the problem and that the areas listed are one choice out of many, each area needing clear definition.

Systematic Understatement of Risk Levels

Underestimating risk is nearly always a problem because of our tendency to view the future with mental blinkers. The techniques we use to make estimates should help to counter that bias. Unfortunately, some common techniques actually make things worse.

The problem, again, is those probability impact matrices. The probability of some event happening is fairly well defined, but the impact needs care. What is the impact of ‘Loss of market share’ or ‘Injury due to accident’ or ‘Client initiated design changes’? Obviously it depends on how much market share is lost, how many injuries there are and how severe each one is, and how many client changes there are and how far reaching.

An event is really a set of outcomes, each with an associated (and probably uncertain) impact. Therefore the ‘impact’ of such a set needs to be specified properly. One good interpretation is that it is what mathematicians call the ‘expected’ impact, which is the probability-weighted average of the impacts of all the possible outcomes.

When weighing risk in decisions it is usually necessary to summarize the risk down to one value, perhaps as a number, so the weakness here is just that the summarizing happens too early, at a stage when we should still be considering all the consequences and potential responses.

However, there is another, much more serious problem and this is where the systematic understatement of risk levels comes in. Most instructions to people when making impact judgements with PIMs do not mention the need to reach some kind of average. They just ask for ‘the impact’ as if there is only one to consider. Consequently, we just think of whatever comes to mind first and judge that. All other outcomes get left behind.

For example, if the risk is ‘Loss of market share’ and this has been judged to be a risk with low impact, then the possibility of losing a lot gets removed from consideration completely, even though it remains possible, if less likely than a small loss.
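
A small worked sketch, with invented figures, shows how much is lost by rating only the outcome that comes to mind first instead of the probability-weighted average:

# Possible severities if market share is lost at all, with probabilities
# conditional on some loss happening. All figures are invented.
severities = [        # (share points lost, probability)
    (2,  0.85),       # the small loss most people picture first
    (15, 0.15),       # a severe loss: less likely, but entirely possible
]

expected_impact = sum(points * p for points, p in severities)
rated_impact = 2      # what a typical PIM rating would record

print(f"Probability-weighted impact: {expected_impact:.2f} share points")  # 3.95
print(f"Impact as typically rated:   {rated_impact} share points "
      "(the 15-point possibility silently drops out of the analysis)")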

Why are the instructions worded as they are and why do we accept them without question? Again, the main reason is probably that the technique has been recommended and the job is just to get on with it.

However, the technique implies that risks are singular things with only one impact. That is why there is no need to explain what to do about risks with multiple potential impacts.

As with the idea that risks are like physical things, this is a belief that only needs to be thought about clearly and put up against other ideas to be dispelled. It is obvious that each risk relates to multiple possible futures and so needs to be characterized with that in mind. The instructions must be changed or a different method used.

Low Quantity and Quality of New Control Ideas

The main reason that people struggle to come up with good new ideas for controls is lack of knowledge of control techniques. Simply learning about good alternatives should help most people to produce more valuable ideas for their own purposes.

However, there seems to be more to it than that. The major guidance documents on internal control and risk management have very little material on the design of controls, which suggests an underlying belief that design is either not important or too easy to need attention. If mentioned at all it is portrayed as a simple matter of selecting from a small number of obvious options.

This belief that design is trivial may be a legacy of the audit influence on risk control, but wherever it has come from it is now entrenched in several influential documents. Most old school risk management processes are some variation on the picture shown in Figure 16.1, with boxes representing activities and arrows representing data flows or the main direction of inferences.

As you can see the word ‘design’ does not appear at all and we have to assume it is taken care of somewhere in the ‘Plan risk responses’ box. COSO’s influential framework for internal control is typical in that design is not mentioned at all, even though the framework clearly sets out to be comprehensive.

In the daily life of risk control specialists a lot of time and energy goes into design work, and shopping around for existing designs, because it is complex and important. Ultimately the value of any risk control exercise is limited by the value of the best controls that anyone can think of.

If Figure 16.1 is redrawn to give space to activities in proportion to their importance and resource consumption it ends up looking more like Figure 16.2, in which ‘Plan risk responses’ has been replaced by a cluster of new boxes, which are tinted here to highlight them.

In Figure 16.2 design and planning are separated, design is top down, and there is research into design options (which includes shopping).

Clearly, controls design is not a trivial matter. It requires skill, time, and supportive processes and tools. There’s always a way to improve if we want to.

The second strategy – that of introducing a helpful context – can be used to encourage a more energetic interest in good design. While few people are expert professional designers, most of us have some understanding of creative design and problem solving, and recognize it as part of some activities. Mention controls design in the same sentence as architecture, systems development, and the design of consumer products.

Another reason for weak ideas is a tendency to think that risk is the only source of requirements on controls. (Alternatively, this may be stated in terms of control objectives, though they are just the flip side of risks.) Look at many methods for generating controls and you will see layouts and procedures that progress from risks to controls with no explicit consideration of any other factors that might influence the design of controls. This may be another legacy of the audit influence.

 
Figure 16.1 Risk management process with no design of controls

graphics/fig16_1.jpg

 
Figure 16.2 Risk management process with design of controls expanded

graphics/fig16_2.jpg

These methods make it too easy to think of a control that would address a risk and then think the job is done, even though the control idea would be costly, take too long to implement, and conflict with the organization’s culture. The more other factors are taken into consideration the more challenging and rewarding design becomes.

A third reason for weak ideas is the belief that individual controls either work or they do not. This is prevalent in auditing and has become part of the lore of reviews under Section 404 of the Sarbanes-Oxley Act. It encourages people to think that one control per risk is ideal and that two controls for one risk represents duplication.

One way to tackle this is to refer to the well-established principle that no control system provides a guarantee, and illustrate this by mentioning collusion and human error. Similarly, no individual control is totally reliable but we can do better using multi-layered designs. Often it is more efficient to fix minor deficiencies in the performance of one control by adding another layer than by gold plating one control.
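
A back-of-the-envelope sketch (invented figures, and assuming for simplicity that the layers fail independently) shows why adding a second, imperfect layer often beats gold plating the first:

def detection_rate(miss_rates):
    # Probability that at least one layer catches an error, assuming the
    # layers miss errors independently of one another.
    p_all_miss = 1.0
    for miss in miss_rates:
        p_all_miss *= miss
    return 1.0 - p_all_miss

print(f"One control missing 10% of errors:         {detection_rate([0.10]):.1%}")
print(f"Add a cheaper layer missing 20% of errors: {detection_rate([0.10, 0.20]):.1%}")
# One control missing 10% of errors:         90.0%
# Add a cheaper layer missing 20% of errors: 98.0%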

Reluctance to Use Numbers

Quantification is one of the most controversial topics in risk control. Skill with numbers and mathematical techniques varies so much from one person to another that what is obvious to one person is baffling to another. While one person is busy mastering Lebesgue integrals another may feel that multiplication using a calculator is the height of mathematical sophistication and best avoided if possible.

To some extent, debates about how far to take quantification are really debates about how much mathematical skill is to be required of people and how much that skill is to be rewarded. In some jobs mathematical skill is highly rewarded but in most work settings the majority prefer to see mathematical skill as a bad thing, suggesting lack of social skills and inadequate practical knowledge.

Risk control can be done better using the right mathematical techniques, if you have the skill, and is almost always better done using numbers regardless of skill. Some of the reasons that people reject numbers are based on misconceptions.

My research on this agrees with that of others. It shows that people generally find numerical descriptions of risk more informative and want to receive them. The problem lies more with giving descriptions in numerical terms.

One reaction is to say ‘I can’t put a number on the probability because I don’t know what it is’. This can probably be traced back to our experiences of probability in school, which are usually based on situations involving tossing coins, throwing dice, and drawing coloured balls ‘at random’ from large bags.

In these settings prediction is made impossible by design, so that the best that can be done is to work with the proportion of heads, or sixes, or red balls that would be expected if you repeated the experiment a vast number of times. For convenience the proportions are assumed to be known without requiring any research. For example, the coin is described as ‘fair’ which means by definition that the probability of heads is the same as tails and is 0.5.

Situations where the long-run relative frequencies of different outcomes are not known in this way are not discussed, which is a pity because they are much more common in everyday life.

Philosophers are still thrashing this out but from a practical point of view betting on sport shows that people can and do work with risk numbers for situations where long-run relative frequencies are not given, and that people who do this with more skill do better (e.g. keep more money) than others.

It works; we just can’t agree on why.

As suggested earlier in this chapter it is helpful to use betting on sports as a context to cue relevant beliefs about putting numbers on risk and to avoid mentioning dice.

Experiments with prediction markets suggest that, in future, organizations may be able to harness the views of employees on defined risks of interest by subsidizing betting on probabilities of different outcomes. The technology exists and performance can be excellent in the right circumstances. The setting helps people get over their theoretical worries.

When asked to give probabilities many people feel that giving an exact number suggests they have more knowledge than they really do. There are several ways to deal with this including asking for ranges and using separate ratings of confidence.

Another alternative is to allow freely chosen words or phrases such as ‘fairly certain’, ‘possible’, and ‘unlikely’ or to provide a menu of such phrases so that people can express whatever view they have without using numbers directly. (People do not have consistent meanings for these phrases across different situations, so it is not the most reliable technique.)

One technique that does not solve the problem is to make people select words on a scale such as ‘high, medium, low’ where the range is divided into non-overlapping buckets. Contrary to most perceptions, this technique still requires accurate knowledge of probabilities because it requires people to make choices near the fixed boundaries of the buckets.

The logic of this is hard to follow and explain so in practice it is best just to describe a design with numbers that allows people to express their doubts honestly.
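
For readers who want the boundary problem spelled out, here is a small illustration; the bucket boundaries and the estimator’s state of knowledge are invented.

# Invented scale: low < 10%, medium 10-50%, high >= 50%.
buckets = [("low", 0.00, 0.10), ("medium", 0.10, 0.50), ("high", 0.50, 1.01)]

def bucket_for(probability):
    for name, lower, upper in buckets:
        if lower <= probability < upper:
            return name

# The estimator's honest view is only 'somewhere between 5% and 15%'.
print(bucket_for(0.05))   # low
print(bucket_for(0.15))   # medium
# The honest range straddles a bucket boundary, so choosing a single word
# demands precision the estimator does not have, whereas '5% to 15%' states
# exactly what is believed.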

Disappointing Levels of Embedding

Most people believe that embedding is a good thing and recognize embedding when they see it. For example, which of these is a better example of successful embedding of risk control?

  • Scenario planning is used to think about possible futures and develop plans.

  • Plans are made then lists of risks are written and possible control actions considered.

Most people see the first approach as more embedded.

Unfortunately, what often happens is that procedures invented to meet regulatory requirements for assurance are made more frequent, delegated further, and by familiarity become seen as business as usual, from which it is a short step to describing them as ‘embedded’.

As usual the main reason for this is probably lack of alternative ideas, but there can also be unhelpful beliefs lurking beneath the surface. There seems to be a tendency to see risks as a special category of thing with a separate existence rather than as uncertainty attached to other knowledge.

For example, occasionally organizations try to merge their risk register with a list of their critical success factors (or objectives, or metrics). The source documents will have been developed separately but are surprisingly similar. One might say ‘CSF 21: low churn’ while the other says ‘Risk of high churn’. In the merged document these two will usually appear as separate items, though under the same heading.

An embedded way to look at this is to say that churn is something the organization is interested in and wants to reduce, but it is uncertain what future churn will be. The risk is an attribute of the construct of churn and inseparable from it.

An appropriate way to deal with this belief is usually to describe control designs and introduce supportive theory. People already recognize and appreciate proper embedding when they see it and just need to hear practical proposals for action.
