Chapter 1 of Game Theory in Management (978-1-4094-4241-7) by Michael Hatfield

Chess Openings and Risk Management

Chapter 1

He attacked everything in life with a mix of extraordinary genius and naive incompetence, and it was often difficult to tell which was which.

Douglas Adams

The Western Roman Empire fell at about the same time as the game of chess was introduced in Western India – around the fifth and sixth centuries. Chess, perhaps the earliest game that purported to represent elements of real-life conflict, and in which canned strategies and tactics could be compared or tested empirically, came to Europe in the ninth century. As a young man, I spent a great deal of time studying, practicing, and playing chess in order to become a proficient tournament player. This experience taught me a great deal about chess, life, and, though I didn’t know it at the time, Game Theory and its possible applications to real-life situations.

When I joined my first competitive chess team in high school, I hadn’t spent a lot of time studying the game; I just had a pretty canny sense of how to win. It didn’t take long for inferior players who had done just a bit of studying to wipe me and my canny sense all over the chessboard and take my United States Chess Federation points away. The studying they had done that enabled them to do this centered on chess openings.

Chess openings are an orchestrated set of moves – canned strategies, or a list of specific tactics, if you will – that are used in the first (up to 30) moves of a game. These sets of orchestrated moves have names, such as the Ruy Lopez or the Catalan, for the basic (first 4–9 moves) version of the opening, and then names for their variants (the versions that manifest after the initial set of moves). It is a very rare tournament player who has any chance at all without complete familiarity with at least four openings and their attendant variants: one for when the player is playing white and black’s first move is the one expected; one for when the player is playing white and black’s first move is other than the one expected; one for playing black against 1. P-K4; and one for playing black against 1. P-Q4. Since each opening usually has at least three key variants, the tournament player needs to memorize at least 12 games through the 20-somethingth move, and know how to punish a player who does not have the same knowledge. This is a key element: in an environment where people are executing canned strategies, the ability to recognize which strategy is being employed, and to counter it effectively, is the difference between winning and losing the “game.”

So, how did I move from chess team punching bag to the number one player on the team? By employing something that Von Neumann had described 45 years earlier as a “mixed strategy.”

The Queen’s Gambit was very popular at Eldorado High School’s Chess Club at the time I joined. The opening moves are:

1. P-Q4 P-Q4
2. P-QB4

Black now has a dilemma. If he takes the (apparently) unguarded pawn, he opens himself up to a variety of traps and setbacks in space, time, and pawn structure (more on leveraging advantages in one theater into another, seemingly unrelated one in Part III). But if he declines to capture the pawn, his own pawn is under attack, and deploying the defending Queen to the middle of the board all by her lonesome is an unattractive prospect, at best. My new teammates knew what to do in either case, and could very effectively outplay the opponent who did not know how to execute sufficiently to reach the middle game without a significant disadvantage.

My solution? Don’t answer P-Q4 with P-Q4. I studied the variations of Bobby Fischer’s favorite response to the Queen’s Gambit, the King’s Indian Defense. It answers 1. P-Q4 with N-KB3, followed by 2. P-QB4 P-KN3. For those of you who are reading this book without a chess board nearby, suffice it to say that this opening looks nothing like the scenario the Queen’s Gambit lovers were used to seeing. And with the games Fischer played with this opening right there in my copies of Chess Life & Review, I was suddenly in a position to punish the Gambit players who weren’t familiar with this opening through the 20-somethingth move. In short, I knew beforehand what their canned strategy was, recognized it when it first manifested, and was ready with a thoroughly robust response.

After memorizing the opening sequences I needed to survive the first 20-something moves, I moved up, but it wasn’t long until I hit another wall. The next set of canned strategies I needed to get ahead was contained in the book My System, by Aron Nimzovich.1 This book is an extraordinary aid to any chess player seeking to improve their game dramatically and quickly. Nimzovich lays out nine elements:

  • On the center, and development.
  • On open files (files are the up-and-down rows on a chess board).
  • The seventh and eighth ranks (ranks are the across rows).
  • The passed pawn.
  • On exchanging.
  • The elements of end game strategy.
  • The pin.
  • Discovered check.
  • The pawn chain.

I quickly discovered that the tournament player who had done her homework with respect to the openings, and had read Nimzovich’s treatment of these mid-game canned strategies, was very difficult to defeat, even in the upper levels of tournament play. I would encounter players whose only influence was My System, and they were tough to beat. It was proving very easy to associate the ability to select the most appropriate set of canned strategies in a given situation with ultimate success – and not just in the world of chess.

In several ways chess represents a superior real-life modeling game to the ones introduced by Von Neumann because of something the game theorists refer to as utility. Utility is essentially the payoff(s) a game’s participants receive, or are expecting to receive, and it profoundly influences the selection of strategies. The reason chess may prove superior to the games reviewed by Von Neumann is that, with very few exceptions, the utility offered by the Von Neumann games was one-dimensional. For example, in the Prisoner’s Dilemma, the utility involves minimizing or eliminating the amount of time each player must remain in prison. In the Ultimatum Game (and most of the others), the payoff is the amount of money that the participants stand to gain out of a fixed amount.

But chess is different in that not only are there multiple payoffs that can be pursued by employing different strategies, but those payoffs can also be leveraged to attain other types of payoffs. In New Ideas in Chess,2 Grandmaster Larry Evans argues that there are four main elements in chess: pawn structure, force, time, and space. Certain openings, or set strategies, can obtain advantages in, say, time and space, while (temporarily) giving up an advantage in force, which is what happens in the Queen’s Gambit Accepted. Chess comes closer to modeling real life than the single-utility games because it’s not always readily apparent which strategy the other player(s) will adopt, since it’s not always readily apparent which payoff they are seeking to attain.

Indeed, the experience of learning to play tournament chess at a fairly competent level ingrained in my head the idea that life was like a great game of chess. Tournament chess teams are arranged by “boards,” or one’s relative position on the team. The best player is “First Board,” the second-best player is “Second Board,” and so forth. Your team’s First Board plays your opponent’s First Board, Second Boards play Second Boards, and so on, usually down to Seventh Boards.3 As First Board, I was expected to fulfill the role of Head Coach (to a certain extent), and my favorite piece of advice was “If you can foresee your opponent’s next move, you will almost never lose.”

Acting as if life was a chess game writ large, I was continually trying to anticipate that “next move” that life was going to throw my way. Sometimes I was right, but often enough I had no clue. The behaviorist school of psychological thought teaches that a variable rate of reinforcement tends to entrench the reinforced behavior; hence the number of people who become addicted to gambling. The notion that expertise in chess could lead to an advanced ability to anticipate events, and thereby to respond more quickly and with a superior set of response strategies, was one that did not die easily.

Risk Management, as Currently Practiced

If patterns of strategy-consistent behavior couldn’t be reflexively recognized, leading to the most appropriate response on a consistent basis, could the odds of a certain strategy being implemented be evaluated? And could that lead to the adoption of the most appropriate response?

In project management circles risk management is a big deal. The 1996 edition of A Guide to the Project Management Body of Knowledge® (PMBOK® Guide)4 states:

Project Risk Management includes the processes concerned with identifying, analyzing, and responding to project risk. It includes maximizing the results of positive events and minimizing the consequences of adverse events.


The PMBOK® Guide then goes on to lay out four “major processes” of risk management:

  1. Risk identification;

  2. Risk quantification;

  3. Risk response development; and

  4. Risk response control.

Current (2011) risk management software tools primarily deal with the first two of these major processes: risk identification and quantification. The reason risk management is applicable to project management, and project management alone, is that the identification and quantification of risk events are predicated on project work that has been decomposed using a Work Breakdown Structure (WBS). The risk identification and quantification processes can then be applied at the lowest level of the WBS. Once each of the lowest-level WBS elements has been so evaluated, the quantified risks are summed into a value for overall project risk, which can then be used to approximate cost and schedule contingency reserve amounts.

Risk events are categorized as known-unknowns and unknown-unknowns. Known-unknowns are risk events that have some level of predictability and, presumably, quantification; unknown-unknowns, by contrast, cannot be anticipated at all. Known-unknowns are often further subdivided into categories such as internal and external, or technical, internal non-technical, insurable, and legal.

The identification and quantification processes generally use one of three techniques, which provide successive levels of detail and complexity while requiring progressively more effort in data gathering. The first (and most basic) of these techniques is Risk Categorization Bracketing, or just Bracketing. This involves establishing three risk brackets (low, medium, and high) and their associated percentages. For example, a given project may assign a 5% risk bracket to low-risk activities, a 25% risk bracket to medium-risk activities, and a 100% risk bracket to high-risk activities (meaning that, say, if something were to go catastrophically wrong with a high-risk activity, it could double the amount of time and resources needed to complete the scope of that activity).

Once these brackets are established, the risk management analyst goes through the WBS at its lowest level, determines whether each element represents a low, medium, or high risk, and assigns it an “L,” “M,” or “H.” After these assignments have been made, the analyst goes through the lowest level of the WBS again and multiplies the duration and the budget for each activity by the percentage associated with its classification. These calculations are “rolled up” through the WBS and summed for an approximation of the overall project’s contingency budget and schedule. An example is shown in Table 1.1.

Table 1.1 Risk Quantification Example

| WBS element/task | Budget ($K) | Duration (days) | Risk Category | Risk % | Budget/Schedule contingency |
| --- | --- | --- | --- | --- | --- |
| 1.1 Project Management | $98 | 210d | L | 5% | $4.9 / 10.5d |
| 1.2 Project Controls | $78 | 210d | L | 5% | $3.9 / 10.5d |
| 2.1 Design Creation | $55 | 40d | L | 5% | $2.7 / 2d |
| 2.2 Design Validation | $32 | 15d | M | 25% | $8.0 / 3.7d |
| 3.1 Construct Foundation | $340 | 34d | M | 25% | $85 / 8.5d |
| 3.2 Utilities | $46 | 65d | M | 25% | $11.5 / 16.2d |
| 3.3 Structure | $120 | 52d | H | 100% | $120 / 52d |
| 3.4 HVAC | $54 | 37d | M | 25% | $13.5 / 9.2d |
| 3.5 Interior | $103 | 68d | M | 25% | $25.7 / 17d |
| 3.6 Roof/Ceiling | $98 | 82d | M | 25% | $24.5 / 20.5d |
| 3.7 Landscaping | $19 | 14d | L | 5% | $0.9 / 0.7d |
| Total Project | $1,043 | 210d | | | $300.7 / 86.7d |

While this is admittedly a crude method for evaluating project risk, it does have two advantages: it’s quick, and it’s (relatively) easy. It also has the added benefit of allowing you to tell your customer that you have performed a risk analysis on your baseline, and are using that analysis as the basis for establishing cost and schedule contingency reserves, without being dishonest.
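
In software, the roll-up amounts to little more than a table scan and a couple of multiplications. The following sketch is a minimal illustration in Python, using a handful of rows from Table 1.1 and the 5%/25%/100% brackets described above; the data structures and names are mine, not those of any particular risk tool.

```python
# Sketch of the bracketing roll-up described above, using a few rows
# from Table 1.1. The 5% / 25% / 100% brackets and the task data are
# the chapter's illustrative figures, not values from any standard.

RISK_BRACKETS = {"L": 0.05, "M": 0.25, "H": 1.00}  # low, medium, high

# (WBS element, budget in $K, duration in days, assigned risk category)
tasks = [
    ("1.1 Project Management", 98, 210, "L"),
    ("2.2 Design Validation",  32,  15, "M"),
    ("3.3 Structure",         120,  52, "H"),
]

budget_contingency = 0.0
for name, budget, duration, category in tasks:
    pct = RISK_BRACKETS[category]
    budget_contingency += budget * pct
    print(f"{name}: ${budget * pct:.1f}K and {duration * pct:.1f}d of contingency")

# Budget contingency rolls up as a straight sum; schedule contingency
# should follow the schedule logic, since tasks running in parallel do
# not simply add their contingency days to the project finish date.
print(f"Rolled-up budget contingency: ${budget_contingency:.1f}K")
```

Note that the schedule roll-up at the project level must respect the schedule network rather than a straight sum of days, since activities running in parallel do not all push out the finish date.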

The next most sophisticated commonly-used technique in risk identification and quantification is the Decision Tree analysis, which will come up again in the discussions on Game Theory. As with Risk Bracketing, Decision Tree analysis is usually performed at the lowest level of the WBS, though it does not have to be. It’s also similar to Risk Bracketing in that it assigns odds of risk events occurring and calculates an impact, but does so in significantly more detail and with, presumably, more accuracy.

For each activity or task being evaluated in a Decision Tree Analysis, the person performing the analysis attempts to quantify the odds of an identified risk event occurring, and its impact in terms of cost and schedule. For example, from Table 1.1, WBS 3.3, Structure, is considered a high-risk activity. Its budget is $120,000 and it is scheduled to last 52 days. However, our fictitious task leader for WBS 3.3 is worried about several potential risk events, not the least of which is that the Design Validation (WBS 2.2) guys have no idea what they’re doing, and a design flaw may not become apparent until significant progress has already been made. He’s also concerned about the availability of steel workers with the necessary level of expertise, as well as the impacts of weather. These identified risk events are laid out as follows (Figure 1.1):

Figure 1.1 WBS 3.3 Structure (part 1)



Our manager then proceeds to assign what he believes are the odds of each of his concerns coming about, and what impact it would have (Figure 1.2):

Figure 1.2 WBS 3.3 Structure (part 2)



With these data elements in place, the calculation proceeds as follows:

Table 1.2 Decision tree analysis calculation

| Risk event | Odds of Occurrence | Cost Impact | Schedule Impact | Cost Contingency | Schedule Contingency |
| --- | --- | --- | --- | --- | --- |
| Design Flaw | 15% | $80K | 35 days | $12K | 5.25 days |
| Unavailable Expertise | 35% | $40K | 20 days | $14K | 7 days |
| Bad Weather | 50% | $15K | 10 days | $7.5K | 5 days |
| Task Total | | $135K | 65 days | $33.5K | 17.25 days |


A couple of things to note about this example. It’s known as a single-tier Decision Tree, because we only evaluated one set of non-mutually-exclusive risk events. If we were to evaluate the odds and impacts of, say, the design flaw being recognized early versus late in the task, the extra effort involved in recruiting versus a full-up strike from the union, or the disparate impacts of snow versus rain, we would have to quantify risk events subordinate to the ones already quantified, and that would require another tier. Also, when Decision Tree analysis comes up in later chapters on Game Theory, each alternative in a Decision Tree is considered to be mutually exclusive, meaning that you can only select one decision or path from the alternatives. Because of this exclusivity, the sum of the odds of each tier in a Decision Tree must add up to 100%. Note that the fact that each tier’s odds sum to 100% does not imply an awareness of every possible risk event that could occur. Those who maintain the contrary are failing to make the distinction between known-unknowns and unknown-unknowns. For example, if the sum of the odds of a given tier in a Decision Tree were to total, say, 83%, wouldn’t that indicate an estimated 17% chance of an unknown-unknown risk event occurring? And, if an event has a 17% chance of occurring, isn’t it now a (somewhat) known-unknown? No: the individual tiers adding to 100% simply implies that those are all of the known-unknowns being evaluated by the analysis. But the fact that the tiers sum to 100% without strict exclusivity being observed among the quantified risk events does introduce a level of error into the analysis, which I will address more fully in Part 2.
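
The arithmetic behind Table 1.2 is a straightforward expected-value calculation, sketched below in Python. The odds shown are the ones implied by the contingency figures in the table; this is an illustration of the calculation, not a reproduction of any particular risk tool.

```python
# Single-tier decision tree calculation from Table 1.2: contingency is
# odds-of-occurrence times impact, summed across the identified events.
risk_events = [
    # (risk event, odds of occurrence, cost impact in $K, schedule impact in days)
    ("Design Flaw",           0.15, 80, 35),
    ("Unavailable Expertise", 0.35, 40, 20),
    ("Bad Weather",           0.50, 15, 10),
]

total_cost = total_days = 0.0
for name, odds, cost_impact, schedule_impact in risk_events:
    cost_contingency = odds * cost_impact
    schedule_contingency = odds * schedule_impact
    total_cost += cost_contingency
    total_days += schedule_contingency
    print(f"{name}: ${cost_contingency:g}K, {schedule_contingency:g} days")

# Matches the task totals in Table 1.2: $33.5K and 17.25 days.
print(f"Task total contingency: ${total_cost:g}K, {total_days:g} days")
```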

The third and most sophisticated of the commonly-used risk analysis techniques is Simulation, most often performed via Monte Carlo analysis. Monte Carlo simulation requires not only a project WBS, but also all of the project’s activities captured in a Critical Path Method (CPM)-capable software package, in order for the analysis to be most useful. Monte Carlo simulators are often sold and installed in conjunction with CPM software for this purpose.

Each activity is evaluated for identification and quantification of risk events, and this data is entered into the Monte Carlo engine. The bare minimum data needed includes activity ID, duration, budget, worst-case cost and schedule impact, and best-case cost and schedule impact, with the original budget and duration taken to be the most likely values. The software then simulates a series of project performances modeled on the plan captured in the CPM software, using a random number generator to create a different outcome for each activity in each iteration. Monte Carlo simulations are vastly superior to other risk analysis techniques for capturing the total impact of risk events that cascade, that is, events that delay or otherwise negatively affect the risk-encountering activity’s successors, and those activities’ successors, and so forth throughout the CPM network.
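
A stripped-down sketch of the mechanics, in Python, is shown below. It samples each activity’s duration from a triangular distribution between its best-case and worst-case values (a common, though not universal, modeling choice) and treats the activities as a simple sequential chain; the best-case and worst-case figures are invented for the illustration, and a real CPM-based simulator would re-solve the whole schedule network on every iteration rather than simply summing durations.

```python
# Minimal Monte Carlo sketch (hypothetical figures). Each activity gets
# best-case, most-likely (baseline), and worst-case durations, sampled
# here from a triangular distribution. The toy network is sequential,
# so each iteration just sums the sampled durations.
import random

activities = [
    # (name, best case, most likely, worst case) -- durations in days
    ("Design Creation",      35, 40,  60),
    ("Design Validation",    12, 15,  30),
    ("Construct Foundation", 30, 34,  50),
    ("Structure",            48, 52, 117),
]

iterations = 10_000
outcomes = sorted(
    sum(random.triangular(best, worst, likely)
        for _, best, likely, worst in activities)
    for _ in range(iterations)
)

baseline = sum(likely for _, _, likely, _ in activities)
print(f"Baseline duration:        {baseline} days")
print(f"Mean simulated duration:  {sum(outcomes) / iterations:.1f} days")
print(f"Range across simulations: {outcomes[0]:.1f} to {outcomes[-1]:.1f} days")
```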

For example, in an article appearing in the Wall Street Journal on December 20, 1999, John J. Fialka describes how the National Ignition Facility (NIF) project came by its massive cost and schedule overruns.6 The NIF was a project designed to help researchers answer questions surrounding nuclear fusion. The concept was to surround a pellet of a hydrogen isotope with 192 high-powered lasers that could deliver sufficient energy to the target quickly enough to induce nuclear fusion, the same type of reaction that powers the Sun. The project was spearheaded by a scientist who swore to Senate Appropriations Committee staffers that, if Lawrence Livermore National Laboratory was selected to build the project, he would make sure it stayed on budget.

Unfortunately, cascading risk events rendered that promise impossible to keep. Work began in 1997 with an estimated budget of $1.1 billion. The final price was nearly quadruple that amount. The Wikipedia article on NIF states:

The Pulsed Power Conditioning Modules (PCMs) suffered capacitor failures, which in turn caused explosions. This required a redesign of the module to contain the debris, but since the concrete structure of the buildings holding them had already been poured, this left the new modules so tightly packed that there was no way to do maintenance in-place. Yet another redesign followed, this time allowing the modules to be removed from the bays for servicing. Continuing problems of this sort further delayed the operational start of the project, and in September 1999 an updated DOE report stated that NIF would require up to $350 million more and completion would be pushed back to 2006.7


More problems like this manifested, but perhaps the most damaging risk event had to do with dust. High-energy lasers do not react well to dust: if the various mirrors and lenses used to aim and manipulate the beams have so much as a speck on them, the damage can be explosive, immediate, and expensive. Insufficient consideration was given to the difficulties of performing an extremely large construction effort, with new technology and large, heavy, and expensive components, all in a clean-room environment. The early problems with design, construction, and dust cascaded to the point that NIF became the poster child for the perils of failing to perform the project management function adequately. I do not claim that a complete Monte Carlo analysis on this project could have predicted or prevented what happened; rather, I am using the example of NIF to demonstrate what can happen in cascading risk event scenarios, the same type of scenarios that Monte Carlo analysis is far more adept at predicting and capturing than other risk analysis techniques are.

What Comes Next?

Once risk events have been identified and quantified, they are assigned to one of four categories (according to the Defense Acquisition University):

  1. Avoid;

  2. Control;

  3. Accept; or

  4. Transfer.


These assignments, along with the identification and quantification of each task’s risks, are entered into a risk management plan. The risk management plan is updated as the project progresses. The more advanced risk management systems will recognize contingency release points for projects whose contingency budget was generated from the risk identification and quantification. For instance, in the example we have been using so far in this chapter, once the task manager for the utilities activity (WBS 3.2) has successfully completed his task on budget, the amount of contingency that had been set aside for that activity ($11,500) can be released to another reserve, added to the baseline for accomplishing additional work, or kept in the contingency reserve in the event that another activity is expected to draw down more funds than the risk identification and quantification had originally estimated.
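
A toy sketch of that contingency-release bookkeeping, in Python, might look like the following. The amounts come from Table 1.1; the “release back to a pooled reserve” rule is only one of the dispositions mentioned above, and the variable names are mine.

```python
# Toy contingency-release ledger. Amounts ($K) come from Table 1.1;
# releasing a completed task's set-aside back to a pooled reserve is
# only one of the possible dispositions described in the text.
contingency_set_asides = {"3.2 Utilities": 11.5, "3.6 Roof/Ceiling": 24.5}
released_pool = 0.0

# WBS 3.2 finishes on budget, so its $11.5K set-aside hits a release point.
released_pool += contingency_set_asides.pop("3.2 Utilities")

print(f"Released to pool: ${released_pool}K")
print(f"Still held against open tasks: {contingency_set_asides}")
```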

Another major function of the risk identification and quantification phases in current practice is to provide a confidence interval for the bases of estimate that serve as the underpinnings of original budgets. The aforementioned National Ignition Facility was by no means the first major overrun the United States Government has encountered in dealing with the contractors who provide vital goods and services to it. Management practices were developed to keep the Government from finding itself in the position where a project originally estimated to cost, say, $1.1 billion is so far along when the real price tag is revealed to be far higher that the only choices are walking away from the already-sunk costs or the equally distasteful option of scraping together the funds to cover the higher estimate. One of these techniques involves performing a Monte Carlo simulation with enough iterated modeled project performances that a contingency budget can be declared which would cover 80% of those modeled performances. If the proposed budget plus estimated contingency is considered too high a price, the project is not authorized.
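
In code, declaring an 80 percent confidence contingency amounts to reading a percentile off the simulated cost outcomes. The sketch below illustrates the idea with invented figures and a stand-in cost distribution; a real analysis would take its outcomes from the CPM-driven simulation described earlier rather than from a single triangular draw.

```python
# Illustrative only: set contingency at the 80th percentile of simulated
# total project costs. The figures are invented, and the triangular draw
# stands in for a full CPM-driven Monte Carlo roll-up.
import random

proposed_budget = 1_100  # $M baseline estimate (the "say, $1.1 billion")
iterations = 10_000

simulated_costs = sorted(
    random.triangular(1_000, 2_500, 1_150)  # low, high, most likely ($M)
    for _ in range(iterations)
)

p80_cost = simulated_costs[int(0.80 * iterations)]
contingency = p80_cost - proposed_budget
print(f"Cost covered with 80% confidence: ${p80_cost:,.0f}M")
print(f"Contingency to declare: ${contingency:,.0f}M on a ${proposed_budget:,}M baseline")
```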

The second way of reducing the risk of being committed to an overrunning project is to divide the project lifecycle into phases, separated by Critical Decision Points, also known as just Critical Decisions, or CDs. These points are:

  • CD-0, Identification of Mission Need.

  • CD-1, Initial Estimates and Baseline Creation.

  • CD-2, Approve Cost and Schedule Baselines.

  • CD-3, Approve Start of Construction.

  • CD-4, Completion of Construction, Turn Over to Customer.


At any one of these points the Government can cancel the project and reclaim any funds that had been previously committed but not yet spent.

This section of the book was not intended as a comprehensive assessment of the available risk management techniques or an in-depth analysis of the utility of using them; rather, it was intended as an overview of the common practices associated with risk management today, and to provide enough of a primer for when the discussion turns to the conflation of risk management theory with Game Theory, and the limits of risk management as an aid to informed decision-making (which will happen in Part 2). As we will see, these limits are far more confining than many risk management practitioners are prepared to recognize, much less admit.
