There is no future without risk.
Swiss Re (2004: 5), The Risk Landscape of the Future
He either fears his fate too much or his deserts are small,
Who dare not put it to the touch to win or lose it all.
James Graham (1612–1650), Scottish general
In introducing one of finance’s most important papers, William Sharpe (1964: 425), recipient of the 1990 Nobel Prize in Economics, wrote:
One of the problems which has plagued those attempting to predict the behaviour of capital markets is the absence of a body of positive microeconomic theory dealing with conditions of risk.
From a slightly different perspective, Lubatkin and O’Neill (1987: 685) wrote: ‘Very little is known about the relationship between corporate strategies and corporate uncertainty, or risk.’ Although these and other authors have illuminated their topic, there is still no comprehensive theory of firm risk. This work is designed to help fill part of that gap.
In firms, ‘risk’ is not one concept, but a mixture of three. The first is what might occur: this is a hazard and can range from trivial (minor workplace accident) to catastrophic (explosive destruction of a factory); it is the nature and quantum of a possible adverse event. The second component of risk is its probability of occurrence: for some workplaces such as building sites, the chances of an accident may be near 100 per cent unless workers watch their step. The third element of risk is the extent to which we can control or manage it. Trips are largely within individuals’ control. But what about destruction of the workplace? Avoiding that is perhaps only practicable by forming some judgement about operations and standards, and keeping out of facilities which look lethal. To further complicate risk, these elements can be overlain by environmental influences, and the way risk is framed or presented.
Although risk has no established theoretical place in corporate strategy, its influence on firm performance is intuitively so great that it should be an important factor in decision making. Every year, for instance, it seems that a global company is brought to its knees in spectacular fashion by an unexpected event: Barings Bank and Metallgesellschaft, Texaco and Westinghouse, Enron and Union Carbide are huge organizations that were pushed towards (or into) bankruptcy by strategic errors.
Even without a disaster, corporations are extremely fragile: the average life of an S&P 500 firm is between 10 and 15 years (Foster and Kaplan, 2001); and the ten-year survival rate for new firms that have listed in the US since 1980 is no more than 38 per cent (Fama and French, 2003). Company fragility seems to be rising. Of the 500 companies that made up the Fortune 500 in 1955, 238 had dropped out by 1980; in the five years to 1990, 143 dropped out. Thus the mean annual ‘death rate’ of the largest US companies rose from about 2 per cent in 1955–1980 to 5 per cent in 1985–1990 (Pascale, 1990). My own research has shown that of the 500 companies in the S&P 500 Index at the end of 1997, 129 were no longer listed at the end of 2003 due to mergers (117 firms), bankruptcy (7) and delisting (5): the annual death rate was 4.3 per cent.
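The attrition figures above reduce to simple arithmetic. As a hedged sketch, the calculation can be reproduced as follows; the annualization method (exits divided by firms and by years, without compounding) is my assumption, chosen because it reproduces the 4.3 per cent figure in the text:

```python
# Sketch of the annualized "death rate" arithmetic; figures from the text.
# The simple (non-compounded) annualization is an assumption consistent
# with the 4.3 per cent result reported for the 1997-2003 S&P 500 cohort.

def annual_death_rate(exits: int, firms: int, years: float) -> float:
    """Mean annual exit rate: exits / firms / years (no compounding)."""
    return exits / firms / years

# S&P 500 cohort, end-1997 to end-2003 (6 years): 117 mergers +
# 7 bankruptcies + 5 delistings = 129 exits out of 500 firms.
rate = annual_death_rate(117 + 7 + 5, 500, 6)
print(f"{rate:.1%}")  # → 4.3%
```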
Risk is so pervasive and important that its management offers the prospect of great benefit. To set the scene, Figure 1.1 shows my model of firm risk and its key working assumptions. In brief, observable corporate outcomes – such as profitability and risk – are impacted by relatively stable structural features of the firm that are largely determined exogenously (such as size, ownership and operations); and by more fluid strategic factors that are largely endogenous (such as markets, products and organization structure). These factors, in turn, are shaped by the influences of Politics, Economy, Society and Technology (PEST), and by competitors, consumers and the investment community. External pressures encourage firms to change their tactics and strategies and thus induce moral hazard; and performance – particularly below an acceptable level – can feed back to induce changes in strategy. This is a resource and knowledge-based view of the firm, with risk as a key variable in determining shareholder value.
This framework will be followed in developing a body of theory behind firm risk, and in showing how risk can be managed strategically to add value. The figure gives a new perspective on corporate performance through its contention that risk and return are determined by a set of common underlying factors in the firm’s structure and strategy. This explains the conventional assumption, found across most disciplines, of a direct causal link between risk and return. The model here goes further, however, in proposing that risk shares the same drivers as other performance measures, including returns. As we will see, this makes any return-risk correlation spurious because each arises independently from shared underlying drivers. Most importantly, it negates a common assumption by managers and investors that taking greater risk will lead to higher returns.
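The ‘shared drivers’ argument can be illustrated with a toy simulation: generate return and risk for a cross-section of hypothetical firms from a common structural factor plus independent noise, with no causal link between the two, and a strong cross-sectional correlation still appears. All coefficients below are illustrative assumptions, not estimates:

```python
# Minimal simulation of the "shared drivers" contention: firm return and
# firm risk each load on a common structural factor, with no direct causal
# link between them, yet they correlate across firms. Loadings and noise
# levels are illustrative assumptions only.
import random
random.seed(1)

n = 5000
common = [random.gauss(0, 1) for _ in range(n)]          # structural driver
ret = [0.8 * c + random.gauss(0, 1) for c in common]     # return: driver + noise
risk = [0.8 * c + random.gauss(0, 1) for c in common]    # risk: driver + noise

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# In expectation the correlation is 0.64 / (0.64 + 1) ≈ 0.39,
# even though neither variable causes the other.
print(f"observed return-risk correlation: {corr(ret, risk):.2f}")
```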
The concept of risk throughout the book is firm risk. Although measured in different ways, it is a probability measure that reflects the actual or projected occurrence of unwanted or downside outcomes arising from factors that are unique to a firm or a group of firms. Thus it excludes systematic risks that arise from market-wide factors, such as recessions or equity market collapse. Firm risk incorporates idiosyncratic risk (sometimes called non-systematic or diversifiable risk) from finance, which is the standard deviation of returns that arise from firm-specific factors and can be eliminated by diversification. Firm risk is also measured by loss in value, and by the frequency of incidents that contribute to value loss.
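The diversifiability of idiosyncratic risk follows from standard portfolio arithmetic: for independent, identically distributed firm-specific shocks held in equal weights, portfolio volatility falls as one over the square root of the number of holdings. A minimal sketch, with an assumed (hypothetical) firm-specific volatility of 30 per cent:

```python
# Why idiosyncratic risk is diversifiable: with N independent firm-specific
# shocks of volatility sigma and equal weights 1/N, portfolio variance is
# N * (sigma/N)^2 = sigma^2/N, so portfolio volatility is sigma/sqrt(N).
# The 30 per cent firm volatility is an illustrative assumption.
import math

sigma_firm = 0.30  # assumed firm-specific return volatility
for n in (1, 10, 100, 1000):
    port_sigma = sigma_firm / math.sqrt(n)  # i.i.d. shocks, equal weights
    print(f"N = {n:4d}: portfolio idiosyncratic sigma = {port_sigma:.3f}")
```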
The analysis combines the perspectives of corporate finance and corporate strategy, and uses ‘risk’ with its dictionary meaning as the possibility of an adverse outcome. This is the way that risk is understood by managers, and also by key observers such as the Royal Society (1992) which defined risk as ‘the chance that an adverse event will occur during a stated period’. However, we will see that the exact definition of risk is not critical because its various guises – operational risk, downside risk, variance in accounting measures of performance, and so on – are correlated (as would be expected from Figure 1.1).
The topic of corporate risk is important economically. I agree with Micklethwait and Wooldridge (2003) that the world’s greatest invention is the company, especially as companies generate most of the developed world’s wealth and produce most of its goods and services. Understanding how risk acts on their output offers the promise of a quantum improvement in economic performance, which – in keeping with Sharpe – has languished without explanatory theory.
Risk in firms is the downside of complex operations and strategic decisions: modern companies run a vast spectrum of risks, with many of such magnitude as to be potential sources of bankruptcy. Because the nature and quantum of risk shift along with changes in a firm’s markets, technology, products, processes and locations, risk is never stable or under control, and hence is challenging to manage. Fortunately, though, risk shares many of the attributes of other managerial concepts that focus on value, including knowledge, quality and decision efficiency. Improving any one of these often improves the others, so risk management is a powerful tool in improving firm strategy and performance.
Conventional risk management techniques assume that companies face two general types of risk. The first is operational and arises in the products and services they produce, and within their organization and processes. These risks are managed mechanically using audits, checklists and actions that reduce risk; the best approaches are broad ranging and strive for Enterprise-wide Risk Management which DeLoach (2000: xiii) in a book of that name defined as:
A structured and disciplined approach: it aligns strategy, processes, people, technology and knowledge with the purpose of evaluating and managing the uncertainties the enterprise faces as it creates value.
The second type of risk that companies face is financial and is sourced in the uncertainties of markets, and the possibility of loss through damage to assets. These are typically managed using market-based instruments such as insurance or futures.
This leads to four approaches to managing business risks that were first categorized by Mehr and Hedges (1963): avoidance; transfer through insurance, sharing or hedging; retention through self-insurance and diversification; and reduction through Enterprise Risk Management (ERM). These techniques have proved immensely successful against identifiable hazards – such as workplace dangers and product defects – which might be called point-sourced risks. As a result of tougher legislation and heightened stakeholder expectations, the incidence of these risks has fallen by between 30 and 90 per cent since the 1970s. Conversely the frequency of many firm-level strategic risks – crises, major disasters, corporate collapses – has risen by a factor of two or more over the same period. This suggests that conventional risk management techniques have reached a point of diminishing returns: they are effective against point-sourced risks, but unable to stem the steady rise in strategic risks.
Despite this, corporate risk management remains unsophisticated. For a start, finance and management experts have shown little interest in any cause-and-effect links between risk and organizations’ decisions. Finance ignores risks that are specific to an individual firm on the assumption that investors can eliminate them through portfolio diversification, and thus ‘only market risk is rewarded’ (Damodaran, 2001: 172). But this argument is not complete. First, market imperfections and frictions make it less efficient for investors to manage risk than for firms to do so. Second, firms have information and resources superior to those of investors, which make them better placed to manage risk. Third, many risks – particularly what might be called business or operational risks – have only a negative result and cannot be offset by portfolio diversification. Risk management should be a central concern of investors, and exactly this is shown by surveys of their preferences (for example, Olsen and Troughton, 2000).
Similar gaps arise within management theory, which rarely allows any significance to risk and assumes that decision making follows a disciplined process which defines objectives, validates data, explores options, ranks priorities and monitors outcomes. When risks emerge at the firm level, it is generally accepted that they are neutered using conventional risk management techniques of assessing the probability and consequences of failure, and then exploring alternatives to dangerous paths (Kouradi, 1999). Systematic biases and inadequate data are rarely mentioned; and the role of chance is dismissed. For instance, the popular Porter (1980) model explains a firm’s performance through its relative competitive position, which is an outcome of strategic decisions relating to markets, competitors, suppliers and customers. Risk has no contributory role, and is dismissed by Porter (1985: 470) as ‘a function of how poorly a strategy will perform if the “wrong” scenario occurs’. Oldfield and Santomero (1997: 12) share a similar view: ‘Operating problems are small probability events for well-run organizations.’ Even behavioural economics, which has recently been applied to firm performance (for example, Lovallo and Kahneman, 2003), generally attends only to the shortcomings of managers’ cognitive processes. This, however, completely ignores the huge amount of effort that most firms and managers put into risk reduction.
Another important weakness in risk management is that theory and practice are largely blind to the possibility that managing risks can lift returns. As a result, sophisticated risk management practices are uncommon in firms. Consider banks and insurance companies, the archetypal risk machines. Neither considers the full spectrum of business risks faced by their customers and shareholders: both look to statistics and certainty, rather than to what can be done to optimize risk, and concentrate on either risk avoidance or market-based risk management tools. Thus an otherwise comprehensive review of risk management in banking by Bessis (1998) devoted just a few pages to operational risks.
Corporate risk management is equally weak, largely because it tends to be fragmented. Take, for instance, a typical gold mining company, whose most obvious exposures are operating risks to real assets. Control is achieved qualitatively, through some combination of reduction, retention and sharing that becomes a reductionist process of loss prevention. Responses are managed by health and safety professionals, through due diligence by senior management, and via inspections by employees and auditors. Staff are trained like craftsmen through an apprenticeship and rely on tacit knowledge because the firm’s operating risks are often complex, hidden and uncertain.
The miner also faces another set of risks that affect its capital structure and cash flow, but they are managed quite differently. A treasury expert will quickly identify and quantify them, accurately estimate costs associated with alternative management strategies, and make clear recommendations on tactics. Well-recognized quantitative tools are portfolio theory, real options and risk-return trade-offs. These risk managers have formal training, apply explicit knowledge and use products as tools for risk management.
Shareholders, however, do not distinguish between losses from a plant fire or from an ill-balanced hedge book. They know from bitter experience that badly managed business risks can quickly bring down even the largest company. HIH, One Tel and Pasminco were well-regarded firms with blue riband directors that failed due to weak risk management. BHP, Fosters, Telstra and other firms did not go broke, but have destroyed billions of dollars of shareholders’ funds through poorly judged risks. Risk should be managed holistically, not in silos.
An important trend, which we develop at length in the next chapter, is that strategic risks – crises, major disasters, strategic blunders – are becoming more common, with growing impacts on shareholder value. A good example, summarized in Table 1.1, is one-time market darling Dell Inc. In the year to August 2006, as the Dow Jones Industrial Average rose, the share price of Dell fell 45 per cent after the company’s profits dropped, market share slipped and criticisms of its neglect of customer service became widespread. In the following months the company was hit by the largest ever electronics product recall, a probe by the SEC into its accounting practices, and the threat of suspension following late filings of financial reports. A stream of executives left, and company founder Michael Dell was forced to defend the CEO before firing him in January 2007 and dramatically resuming the position of Chief Executive.
Recurring crises of the type that enmeshed Dell are occurring under three sets of pressures. One is the growing complexity of systems: economies of scale, integration of manufacturing and distribution systems, and new technologies have more closely coupled processes so that minor incidents can readily snowball into major disasters. The second cause is deregulation and intense competition which have forced many firms to take more strategic risks in order to secure a cost or quality advantage. Given executives’ poor decision-making record, an increase in the scale and frequency of strategic risks leads to more disasters.
A third contributor to more frequent strategic risks is corporate re-engineering and expanding markets, and the need to maintain returns in an era of low inflation. As society and industry become networked, intangibles such as knowledge and market reach become more important and so usher in new exposures. Moreover, areas of innovation involving environment, finance and information technologies tend to be technically complex and enmeshed in their own jargon, which promotes yuppie defensiveness that deters even the most interested outsiders from exploring their risks. Too many high-risk areas and processes in firms fall by default to the control of specialists (often quite junior ones) because senior management – which is best equipped to provide strategic oversight – is unable to meaningfully participate in the evaluations involved.
The confluence of these and other contributors to strategic risk explains why conventional risk management techniques have reached a point of diminishing returns: they can be effective against point-sourced risks, but are unable to stem the steady rise in firm-level risks. In short, conventional risk management techniques are simply inadequate to manage contemporary risks, and a new paradigm is required for risk management. That in a nutshell is the rationale for this book.
With ‘risk management’ so popular as a buzzword, it is timely to consider what it actually is. The concept was first introduced into business strategy in 1916 by Henri Fayol. But it only became formalized after Russell Gallagher (1956) published ‘Risk Management: A New Phase of Cost Control’ in the Harvard Business Review and argued that ‘the professional insurance manager should be a risk manager’. As new technologies unfolded during the 1960s, prescient business writers such as Peter Drucker (1967) garnered further interest in the need for risk management, and it became so obvious that US President Lyndon Johnson could observe without offence in 1967: ‘Today our problem is not making miracles, but managing miracles. We might well ponder a different question: What hath man wrought; and how will man use his inventions?’
Inside firms, elements of risk management emerged in the goals of planning departments, and its philosophy was fanned among students of operations research and strategic management. But not until the early 1980s did risk management become an explicit business objective. Then neglect evaporated with a vengeance. In a popular book, Against the Gods, Peter Bernstein (1996: 243) argued: ‘the demand for risk management has risen along with the growing number of risks’; Daniell (2000: 4–5) agreed: ‘We are now living in a world of rising risk and increasing volatility. Everywhere we seem to encounter increasing and intensifying risk … Poor management and lack of leadership have unnecessarily increased risks.’ Since reading these works it has intrigued me that neither author felt the need to offer evidence to support their contention. Certainly they would have had difficulty in finding relevant data, as its absence is one of the most prominent gaps in our understanding of risk. Perhaps they simply assumed that we are in what Ulrich Beck (1992: 12–13) called the Risk Society, where risk is a key trait of modern life:
The productive forces [of modern industrial society] have lost their innocence in the reflexivity of modernization processes. The gain in power from techno-economic ‘progress’ is being increasingly overshadowed by the production of risks.
As risk management became better codified, it established core processes of observation (hazard identification), extrapolation (what could happen) and judgement (the likelihood of occurrence). Moreover it proved immensely successful in reducing point-sourced risks in virtually all aspects of our personal and business lives. Every well-run organization can claim credit for contributing to this remarkable transformation by pointing to efforts made in recruiting and training, risk identification and procedure development, approval processes and exceptions reporting, and so on.
Significantly, though, traditional risk management has focussed on individual sources of risk. Risks have been pigeonholed and each tackled alone, with little heed to systems and to how risks might combine and interact. Risks are not managed strategically in light of firms’ full exposures to markets, investments, operations and processes. This has led to two paradoxical features of risk: most corporate disasters now come from either conscious decisions or well-recognized risks; and most serious process failures involve well-studied technologies and ‘old’ risks such as fires and explosions. Management had either neglected or ignored the risk being run (which is typical of financial and strategic disasters), or did not appreciate the potential for a succession of small risks to trigger catastrophe (typical of operational and marketing disasters).
When such catastrophes do occur, the contributing factors are evident with hindsight, and this leads to post-audits using a process called root cause analysis. As the name suggests, it looks not just at what happened, but why: what were the underlying causes that can be identified and managed better in the future? The technique involves a systematic process to identify each human or system deficiency that led to an adverse outcome, and thus eliminate factors that may precipitate a future adverse event. It is analogous to the nursery rhyme (usually attributed to Benjamin Franklin, but probably a much earlier proverb) warning of the outcomes that are inevitable in the relentlessly logical progression of events:
For want of a nail the shoe was lost
For want of a shoe the horse was lost
For want of a horse the rider was lost
For want of a rider the battle was lost
For want of a battle the kingdom was lost
And all for want of a horseshoe nail.
A similar perspective comes in the domino theory of Heinrich (1959) which argues that accidents are part of a chain of events involving characteristics of the victim and environment, a human error that leads to emergence of a hazard or unsafe act, followed by an accident and possible injury. He believed that 98 per cent of accidents were ‘avoidable’.
This perspective can build a compelling case for the predictability of catastrophes. Bazerman and Watkins (2004), for example, describe the 9-11 terrorist attacks on New York and Washington, Enron’s $US93 billion collapse in 2001, and other shocking events as ‘disasters you should have seen coming’.
In the case of September 11 they document (page 21) a series of clear warning signs through escalating attacks on the US by al Qaida after the 1993 World Trade Centre bombing; and growing ‘evidence that Islamic terrorists intended to use commercial airlines as weapons’ including a foiled plot in 1995 by an al Qaida linked terrorist group to crash an Air France jet into the Eiffel Tower.
There were also ‘gaping holes … in airport security’ that led to clear and strident warnings from reputable, independent observers. In testimony during 1996 to the House Subcommittee on Aviation, Assistant Comptroller General of the General Accounting Office Keith Fultz reported:
The terrorists’ threat in the United States is more serious and extensive than previously believed … Nearly every major aspect of the [nation’s aviation security] system – ranging from the screening of passengers, checked and carry-on baggage, mail, and cargo as well as access to secured areas within airports and aircraft – has weaknesses that terrorists could exploit.
Available online at www.gao.gov/archive/1996/rc96251t.pdf
Security in September 2001 was the responsibility of the airlines which – in a highly competitive industry – predictably took every possible action to contain their costs. Proposals such as those of the GAO to introduce tighter security standards were actively resisted by airlines, and the regulator charged with supervising airport security – the Federal Aviation Administration – was slow to enforce existing regulations, much less introduce new ones. Looking back it seems fairly obvious why 9-11 occurred.
On the other hand, the case that root cause analysis builds for recurring ‘predictable surprises’ has a number of serious deficiencies. It is reductionist: part of the process, for instance, involves charting the chain of contributing events, which inevitably assembles a pattern that fits the data and its sequence. In fact, though, this only explains how something could have happened, not why. The other defect in conventional analysis of risks and incidents is that it can all too easily identify what philosopher Arthur Koestler (1972) termed ‘confluential events’, where complex outcomes arise from essentially unconnected parts. Even though the pieces seem to fit, they may not actually have done so in the lead-up to disaster.
Illustrations of how seemingly ‘obvious’ facts do not fit together ahead of time can be seen in the occurrence of major events such as wars. In the case of the First World War, many historians (see, for example, Tuchman, 1962) paint a clear picture of steadily escalating German militarism that ultimately led its leaders to engineer a strike against the encircling powers of Britain, France and Russia. The inevitability of this conflict was, however, far from self-evident before the War, and an excellent illustration is provided by Sir Norman Angell, later a member of the British Parliament and recipient of the Nobel Peace Prize, who wrote shortly before the Great War’s outbreak (Angell, 1912: viii–x):
[Traditional views of military power] belong to a stage of development out of which we have passed … Military power is socially and economically futile … International finance has become so interdependent and so interwoven with trade and industry [but] political and military power can in reality do nothing for trade … The diminishing role of physical force in all spheres of human activity … has rendered the problems of modern international politics profoundly and essentially different.
In short the economic interdependence of modern states means that military aggressors suffer equally with those they defeat and starting a war is economically illogical. Thus war between the world’s major economies was unthinkable in 1912. It should have been even more unthinkable in 1938–9.
More basically, every accident, disaster or collapse that involves man-made or human-operated equipment, facilities or processes inevitably stems from human error. Each follows a wrong decision by a person who failed to correctly locate, maintain or operate whatever proximately contributed to the loss. This failure can be explained simply as a moment of inattention or an error by a normally reliable individual. Conversely it can be given a systematic cause: poor recruitment and training which put the wrong person in a critical position; heavy workload and inadequate support which hampered the person in making the correct decision; or external pressures, including moral hazard, that promoted error. If an operator makes a mistake because he was tired, hung-over or day-dreaming, is that operator error or a systematic problem?
Sadly, the extent of the explanation of any incident is all too often related to its scale, not its real cause. The death of larger-than-life Princess Diana cannot involve a drunken driver and illegally high speeds, but must be due to a conspiracy involving MI5 and Prince Philip. Just 19 men with few resources could not possibly wreak the destructive horror of September 11. Loss of a shuttle and seven photogenic astronauts cannot be due to a single engineer’s misjudgement, but must involve all of NASA and much of Congress. Only the combination of mafia, Cuba and oil barons could explain a modern Presidential assassination. Pearl Harbor was such a disaster that it must have been orchestrated by President Roosevelt, and could not be explained by the inability to translate, piece together and act instantly on a huge volume of intercepted Japanese intelligence. This kind of thinking has surrounded so many incidents that the BBC developed a series entitled ‘The Conspiracy Files’ [transcripts are available at http://news.bbc.co.uk/1/hi/programmes/conspiracy_files].
Bazerman and Watkins (2004: 35–39) also conjoin the magnitude of a disaster and its causes by suggesting a variety of systemic causes of 9-11, including the apparent unwillingness of those with authority to act on risks that should have appeared obvious. They argue that an important contributor was cognitive biases such as positive illusions that downplay risks, a heavy discount for future costs, reluctance to change and a myopic focus only on immediate problems. This leads to a natural tendency to defer preemptive actions that are certain to bring an unpleasant reaction – legislation by Congress, higher airline ticket costs – when they are directed at threats whose occurrence is highly uncertain. In social terms this is what Thurow (1981) called the ‘zero-sum society’ where change brings both winners and losers, but the costs of any policy action tend to be concentrated whereas the benefits are diffuse. These factors militate against alignment of decision frameworks and lead to active protest by highly motivated opponents of change – which are especially likely to emerge in the case of imposed solutions – that can paralyze steps to reduce risks, particularly those associated with public goods.
In the case of 9-11, the criticisms above miss two key points. The first is whether it is ever realistic to safety-proof a system as complex as that of aviation across the US. Certainly it had gaps: every passenger during the 1990s had tales of security incompetence. But – absent huge expenditures – could the gaps really ever be closed? Or would there always be unsafe operations and hence casualties, just as there are in other transport modes such as road and rail? Intuitively it seems a daunting challenge to eliminate risks from people who are prepared to die and can hide themselves amongst the millions of passengers who fly each year out of scores of major airports. At the least, tightening security to detect the few terrorists would impose cost, delay and a flood of false alarms that may be quite unacceptable. This, of course, is Perrow’s (1984) argument that some complex systems simply cannot be made safe and hence accidents are ‘normal’. Even if the cause of a specific incident can be traced to system-wide problems, this is hardly helpful. Vague, sweeping generalizations – ‘the system failed us’ – may satisfy a public seeking easy explanations, but they do little to prevent recurrence of the problem.
The second issue is whether it would ever have been realistic to anticipate the specific chain of events leading to 9-11. Certainly most of its core elements – including the tactics and targets – had featured in previous terrorist plots; and the scenario was chillingly laid out by Tom Clancy (1994) in the thriller Debt of Honor which ends with a commercial airliner flying into Washington DC’s Capitol Building and wiping out the US President and political leadership. However, it is one thing to know a broad-brush method (that is, hijack an airliner and fly it – or force the pilots to fly it – into an attractive target), and quite another to anticipate the many possible signs in advance of one specific attack and stop it. Moreover many novels contain plots against government and national symbols, so al Qaida’s attack could have taken numerous forms. So, are post-audits simply being wise after the fact as they effortlessly connect events in hindsight that could not have been anticipated? A caution on the impracticality of predicting disasters comes from studies of frauds which show that half are detected by accident and another quarter are reported by ‘disgruntled lovers’; relatively few are picked up by audit and management (Comer, 1998: 11). If this figure is even remotely true, it bodes ill for the prospects of pre-emptive risk management.
Because risk management can never eliminate every adverse consequence, it must settle for a more modest goal: keeping as many outcomes as possible within acceptable levels. This ‘unmanageable’ aspect of risk is an important issue because severe risk outcomes have crippling costs for firms. Material risks of major loss that cannot be avoided mean that every organization needs to face up to and acknowledge the risks it runs, and scope out what could occur and how the resulting situations could best be managed. There can be little doubt that management and control of risk is the most important challenge in business today (Kendall, 1998).
With the scene now set, let us consider the objectives of this book in terms of closing gaps in the theory of risk and its management.
The first involves an understanding of the psychology of human behaviour and the dynamics of organizations. An important assumption is that as much as half of corporate risk taking can be attributed to the characteristics and behaviour of firms and their managers. This is termed risk propensity, and I am convinced that understanding behaviour is critical to managing risk.
A second goal is to apply modern theories and techniques that have proven successful in managing financial risks to management of the full range of risks attached to firms’ real assets and business operations. Options theory, for instance, has much to offer risk managers. As examples: insurance can be considered as a put option; and risk affects decision making because high uncertainty increases the value of any optionality and defers commitment. Another approach is to analyze firms as systems whose risk is related to their entropy; or as collections of assets that behave as a portfolio. In related areas, joint ventures can be seen as leverage and there are optimum return-risk trade-offs from diversification.
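The insurance-as-put analogy above can be made concrete with a short numerical sketch. All figures and function names here are illustrative assumptions, not drawn from the text: the point is only that an insurance payout and a put option share the same payoff shape.

```python
# Illustrative sketch: an insurance payout has the same payoff profile as a
# put option. All figures below are hypothetical.

def put_payoff(strike: float, asset_value: float) -> float:
    """At expiry, the holder of a put receives max(K - S, 0)."""
    return max(strike - asset_value, 0.0)

def insurance_payout(insured_value: float, salvage_value: float) -> float:
    """An insurer restores the asset to its insured value: max(V - S, 0)."""
    return max(insured_value - salvage_value, 0.0)

# A factory insured for 10m burns down, leaving 2m of salvage value:
print(insurance_payout(10.0, 2.0))   # pays out 8.0
# The same payoff as a put struck at 10 on an asset now worth 2:
print(put_payoff(10.0, 2.0))         # also 8.0
```

On this view, the premium a firm pays is the option price, and the insured value plays the role of the strike; if the asset survives intact, both the put and the policy expire worthless.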
A third gap lies in the way risk is treated by different disciplines: each largely ignores the others’ contributions to the topic. There is very limited exchange of knowledge: most modern insurance textbooks have little on finance, whilst most corporate finance texts have little on firm risk or even insurance; and most texts on business risks have little on either. Similarly, organization and strategy theory sees risk management as a process dominated by checklists and due diligence with a firm-specific focus, whereas financial risk management is all about products, particularly insurance and market-based instruments.
In bridging this last gap, I seek to accelerate what Culp (2002: 9) sees as a ‘confluence of risk management and corporate finance’ in the last decade. Despite a lack of formal integration, risk management and corporate finance are becoming more strongly linked at the business level. Every firm’s capital structure is path dependent and a function of its history, especially investment decisions, judgements about capital market moves and operational performance; this means that any firm’s risk is interwoven with its historical decisions. A strong rationale for more closely linking corporate finance and risk management is that designing a firm’s capital structure without recognizing its risk profile will lead to sub-optimal outcomes for both.
Intuitively, the integration of risk management and corporate finance should be easy because their tools are economically fungible. Consider a firm that wishes to actively manage its risks. It can adopt one of the standard strategies of avoidance, transfer and insurance, or retention and reduction. However, as shown in Table 1.2, each of these strategies has economic consequences that are identical to those of financial risk management products.
The most flexible risk management strategy is retention, in which the firm accepts a particular risk as an inherent part of day-to-day operations. Many firms consciously or unconsciously retain much of their risk because the alternative is too hard and has limited financial benefit. From a process perspective, retention acknowledges the difficulty in identifying all risks, quantifying their costs, and insuring or transferring each one. To compensate for the risks they are retaining, firms build organizational slack in the form of additional controls, inspections and specialist response capability that can cope with any risks that emerge. This self-insurance is economically equivalent to retaining a flexible balance sheet so that new equity or debt can be issued to cushion the costs of uncertainties.
Insurance is another flexible risk management strategy and is economically equivalent to debt. Each involves an annuity to meet the payout from a specific risk, either by offsetting the cost after the event by drawing down debt, or prepaying the cost of expected exposures through insurance premia. That is one reason why companies with large or unquantifiable risks – such as R&D, mining and technology firms – have low gearing: they build up as much slack capacity as possible to cope with a serious loss.
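The insurance–debt equivalence can be sketched in present-value terms. The rates, probabilities and loss amounts below are assumptions chosen for illustration; the sketch simply shows that, before taxes and frictions, prepaying an actuarially fair premium and drawing down debt after the event cost the same in expectation.

```python
# Hypothetical comparison of two ways to fund a possible 100-unit loss in
# year 5: prepay annual insurance premia, or draw down debt after the event.
# All rates and probabilities are assumed for illustration.

def pv(cashflows, rate):
    """Present value of a list of (year, amount) cashflows."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

rate = 0.05
loss, loss_year, prob = 100.0, 5, 0.2

# Insurance route: an actuarially fair premium spreads the expected loss
# over the years before it can occur.
expected_loss_pv = prob * loss / (1 + rate) ** loss_year
annuity_factor = sum(1 / (1 + rate) ** t for t in range(1, loss_year + 1))
fair_premium = expected_loss_pv / annuity_factor
premia_pv = pv([(t, fair_premium) for t in range(1, loss_year + 1)], rate)

# Debt route: borrow the loss amount if and when it occurs; the expected
# present value of that drawdown is prob * loss discounted back.
debt_pv = pv([(loss_year, prob * loss)], rate)

# Before taxes and frictions, the two routes have the same expected PV cost.
print(round(premia_pv, 6), round(debt_pv, 6))
```

The equality holds by construction here; as the following paragraphs note, taxation and timing are what drive the two routes apart in practice.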
Similarly transfer of risk is economically equivalent to securitization or hedging. Consider a company that outsources a hazardous or polluting process such as tanning, and pays a premium to isolate itself from any unwanted exposures. This eliminates risks and locks in returns, just as a bank might securitize a portfolio of loans, or a gold company might sell its production forward.
Even though these strategies may be economically equivalent, they do have significant differences after taking into account factors such as taxation and timing. In the case of insurance and debt, for instance, insurance is a prepayment for loss that is immediately deductible from taxable income; whereas debt represents a payment that is delayed until the event and only then becomes deductible as interest expense and depreciation. Moreover the impacts can differ according to tax regime. Equity, for instance, can be a less attractive store of wealth under a classical tax regime where dividends are subject to double taxation.
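The tax-timing difference just described can be illustrated with a rough after-tax calculation. All rates, years and amounts below are hypothetical, and the debt side is simplified to the loss drawdown plus a few years of deductible interest; the sketch only shows the mechanism, not a definitive comparison.

```python
# Illustrative after-tax timing difference between insurance and debt.
# All figures are assumed for illustration only.

tax_rate = 0.30
discount = 0.05

# Insurance: a 10-unit premium paid now is immediately deductible,
# so its after-tax cost is simply premium * (1 - tax_rate).
premium = 10.0
insurance_after_tax_pv = premium * (1 - tax_rate)

# Debt: a 10-unit loss is funded by borrowing in year 3; only the later
# interest payments (years 4-6, until repayment) are deductible.
loss, loss_year, repay_year = 10.0, 3, 6
interest_rate = 0.06
debt_drawdown_pv = loss / (1 + discount) ** loss_year
interest_after_tax_pv = sum(
    loss * interest_rate * (1 - tax_rate) / (1 + discount) ** t
    for t in range(loss_year + 1, repay_year + 1)
)
debt_after_tax_pv = debt_drawdown_pv + interest_after_tax_pv

print(round(insurance_after_tax_pv, 2), round(debt_after_tax_pv, 2))
```

Under these assumed figures the immediate deductibility of the premium lowers the insurance route's after-tax cost relative to the deferred deductions on debt, which is the timing effect the paragraph above describes.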
Perhaps the major difference between the management and finance perspectives on risk management is in their operational aspects. Financial risk products effectively separate risk management from operations, and they have a more directly quantifiable impact on firm value.
Summary of the Book’s Contents
This chapter has introduced the topic of managing firm risks by establishing an analytical framework and providing descriptions of key terms. The remainder of the book proceeds as follows.
The next four chapters provide a theoretical basis for risk management by describing the causes, processes and consequences of risk taking. Chapter 2 discusses the nature and sources of firm risk, showing the links between different concepts of risk and how they can be traced to a firm’s structure and environment. Chapter 3 discusses the behavioural and structural factors that lead managers and companies to take risks, and how risk taking is growing under the moral hazard imposed by government regulation and shareholder expectations. Chapter 4 examines the processes of decision making that incorporate attitudes towards risk and which can lead to more or less risky outcomes. It also provides a framework for decision making under uncertainty. Chapter 5 discusses the rationale for dialling up the right level of organizational risk to increase shareholder value.
Four chapters then set out different tools for managing risk. Chapter 6 covers conventional risk management techniques, most of them combined under the concept of enterprise-wide risk management, and Chapter 7 follows with an introduction to the concept and role of a Chief Risk Officer (CRO). The next chapter looks at financial techniques to manage risk, principally insurance and asset-liability management, whilst Chapter 9 discusses risks that arise in financial operations and how they can be managed.
Chapters 10 and 11 address strategic aspects of risk management. They first examine the risk consequences of ethics and governance in companies; and then take a high-level perspective by considering risk at the national level.
The next two chapters address practical issues with a detailed treatment of crisis management and a number of case studies to draw lessons from well-known risks. The book closes with a summary of its key themes and conclusions.