Chapter 11 of Game Theory in Management (978-1-4094-4241-7) by Michael Hatfield

Corner Cubes and Robustness


The wise are instructed by reason; ordinary minds by experience; the stupid, by necessity; and brutes by instinct.

Cicero (106 BC–43 BC)

The main obstacle to creating and implementing the information streams that allow both for tests of robustness within the organization and for optimally informed decisions is the misapplication of management information systems in areas where they are no longer relevant. Advocates of these systems push their techniques and practices beyond their nominal boundaries of relevancy because doing so is consistent with their internal (or organizational) narratives, and the very worst thing one can do in trying to influence others is to imply that their internal narratives are, in any way or in any part, irrelevant.

But the brutal fact is that management information systems require resources and time to set up and maintain, and pursuing information streams that are not timely, accurate, and relevant is to waste that time and those resources, as well as to remain ignorant of the very information required to achieve success in any even remotely competitive environment.

When confronted with evidence that their technique or MIS is woefully inadequate for the job it was intended to do, defenders of the irrelevant MIS will inevitably resort to the argument that, for this particular information need, there was nothing in place previously, and the current system, no matter how flawed, is certainly better than nothing. This is sophistry of Cecil B. DeMille proportions. An untimely, inaccurate, or irrelevant information stream is worse than useless, since it provides a facade of validity while actually misleading its consumers. It’s the equivalent of the World War II German high command going over the data and analysis that showed why the Allies would most likely land at Calais, and confidently basing their decisions on that analysis. There is an old Turkish saying: it’s never too late to turn back from the wrong road. I would adapt that to: it’s never too late to abandon or dramatically modify an irrelevant information stream.

Another disingenuous dodge that often comes up when an information stream is shown to be irrelevant presents as the question, “Well, why wouldn’t you want to know that information?” The answer, of course, is that information takes time and resources: time to collect the data, to process the data into information, and to deliver the information in a timely and accurate fashion, in a manner that the receivers can readily digest. Wasting time and resources on irrelevant information streams is, well, a waste. Nagumo needed to know the whereabouts of the American aircraft carriers; the number of barnacles on the bottoms of their hulls was irrelevant, and, therefore, a waste of time and resources had he sought it. And it was his not knowing this most relevant information, while Spruance was in command of the information on the Japanese fleet’s disposition, that spelled doom for the Imperial Japanese Navy at Midway.

On Useless MBAs

There apparently is a whole generation of market analysts known as “quants,” short for quantitative analysts, who are gifted mathematicians and statisticians. Their job is, generally, to go over massive amounts of data in order to tease out causal links among observable trends in market behavior. So coveted and prevalent have the quants become that traditional Masters of Business Administration degrees, with their emphasis on finance, accounting, and micro- and macro-economics, are increasingly viewed as undesirable in the field of advanced market analysis. This is, of course, ironic in the extreme. For starters, the very first lesson in every initial statistics class I’ve ever had was the notion that correlation is not causation. And yet, here are these analysts, poring over vast amounts of data in order to ascertain some sort of correlation, and immediately overlaying a causal loop onto it in order to provide a narrative that can be flipped from explaining how history unfolded over to predicting how future events will unfold. To the cognitive and confirmation biases that undermine our personal and organizational narratives we can now add another invalidating influence: the mistaking of correlation for causation. It’s as if thousands of modus ponens arguments have been unleashed onto millions of points of data, looking to capture them in their logical structure and produce a valid conclusion, or even a truth. Wall Street firms think it’s great, but Karl Popper is rolling his eyes.
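The statistical trap here is easy to demonstrate. The sketch below (pure Python, with invented data, not drawn from any actual market) builds two completely independent random walks and measures their Pearson correlation; trending series of this sort routinely show strong correlations despite having no causal connection whatsoever:

```python
import random

random.seed(7)

# Two completely independent random walks -- by construction, neither
# causes, or even influences, the other.
def walk(n):
    total, series = 0.0, []
    for _ in range(n):
        total += random.gauss(0, 1)
        series.append(total)
    return series

a, b = walk(500), walk(500)

# Pearson correlation coefficient, computed directly.
n = len(a)
mean_a, mean_b = sum(a) / n, sum(b) / n
cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
var_a = sum((x - mean_a) ** 2 for x in a)
var_b = sum((y - mean_b) ** 2 for y in b)
r = cov / (var_a * var_b) ** 0.5

# Trending series frequently correlate strongly by accident; the number
# below says nothing at all about causation.
print(f"correlation of two unrelated random walks: {r:+.2f}")
```

Rerunning with different seeds scatters the correlation across the whole range from strongly negative to strongly positive, which is exactly the point: the pattern is an artifact of trend, not of any causal link.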

In The Black Swan, Nassim Taleb likened this business behavior to snatching nickels from the near path of oncoming steamrollers, and I think he has a point. Without knowing the performance of the organization along the three types of management, or even acknowledging the different types, a certain organizational myopia sets in, and leads to vulnerabilities that invite Black Swan events. As Taleb asserted, we tend to view Black Swan events as having been predictable after the fact, even though, by definition, they are not. Now consider how that happens on a micro basis. The asset managers observe what transpires and, with the gift of perfect hindsight, point out how the organization could have handled the situation more efficiently. Their confirmation bias sets in and, when combined with the tendency to view Black Swan events as having been predictable, this aspect of management pseudo-science becomes even more entrenched in their narrative. In their 1975 book The People’s Almanac, Irving Wallace and David Wallechinsky included an article entitled “The Trillion Dollar Rat Hole.” [99] The article listed a series of public works and construction projects that could have been performed for $1,000,000,000,000 (one trillion US dollars), while still leaving a significant amount of money for defense – an amount that the author clearly believed to have been sufficient. Look at how the effects of after-the-fact analysis twist the author’s logic. The modus tollens structure is:

  • If the United States military is too weak, the nation will be attacked.
  • The United States has not been directly attacked since World War II.
  • Therefore, the military was not too weak.


This represents a valid argument (though the initial premise may not be completely sound). But the additional assertions point to a perspective that the United States had spent far too much on defense since the end of World War II, assumed to be true because the United States had not been directly attacked. The soundness of the initial premise evaporates, since there was absolutely no way of knowing which ship, airplane, missile, or even bullet represented the point at which potential attackers went from planning a direct attack to being dissuaded from doing so. For example, in 1940, the last full year of peace before the United States was pulled into World War II, defense spending was a suicidally low 1.7% of Gross Domestic Product (GDP). As discussed in Chapter 2, even Yamamoto knew that if more of America’s industry was devoted to defense, it would become a “giant” on the world stage. But, at 1.7%, America was clearly vulnerable in a rapidly militarizing world, with some pretty bad players coming to power overseas. By 1941 defense spending had increased to a still-inadequate 5.7%, with sharp increases during the war years. It never dropped below 5% again, and the United States did not endure a similar sneak attack until September 11, 2001. However, had the United States followed the advice of this article’s silly author, [100] defense spending between 1946 and 1975 would have remained below 7%, with many years below 5% and even 2%. This Almanac contributing editor looked at the (relatively) peaceful years following World War II, and lamented the allegedly wasted resources devoted to defense. He or she elected not to make the connection – the causal link of a strong military dissuading potential enemies from attacking – and integrated this non-connection into the narrative, leading to the conclusion that the United States could have spent more on social programs (automatically assuming such spending to be beneficial, thereby committing the fallacy of Begging the Question) without recognizing the concurrent lapse of robustness.

Similarly, asset managers watch how their organizations interact with outside market events and forces, and assume that the outcome would have been identical had the organization been less robust, and more efficient. It’s profoundly illogical, but it’s a meme that appears time and again in modern management science teaching.

The quants suffer from the same problem that the asset managers do, at the point where they attempt to flip their narratives from explaining the past over into predicting the future. And, on occasion, they’ll even admit it. Check this discussion on sensitivity analysis from Quantitative Models for Management:

Sensitivity Analysis

One of the assumptions we have made in our discussion of linear programming is that the values of the parameters of the problem are known with certainty. We assumed, for example, that in the extended Argo-Tech problem the objective function coefficients were 18.5, 20, and 14.5, respectively … But the values of these parameters are not always known with certainty. For example, changes in the cost of materials, cost of labor, or price of a product would cause changes in the coefficients. On the resource side, delayed shipments from suppliers, strikes, spoilage, and other factors all lead to changes in the supply of resources. Each of these changes could affect the optimal solution to the original LP problem. [101]


“But the values of these parameters are not always known with certainty.” [102] It would be more accurate to say that the values of all of the relevant parameters are never known with certainty, and yet the entire field of quantitative analysis in management is predicated on the idea that, in most instances, the values of all the parameters can be known with certainty and, in those instances where they can’t, some sort of “sensitivity analysis” can be invoked to save the analysis. I would argue that this is a myth, and I suspect that Taleb and Popper would agree with me.
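The textbook’s own caveat can be made concrete. The sketch below sets up a hypothetical two-product linear program (the Argo-Tech constraints are not given in the quoted passage, so the resource limits here are invented for illustration) and shows that a large enough shift in a single objective coefficient does not merely change the profit figure; it changes which production mix is optimal:

```python
# Hypothetical two-product LP, in the spirit of the quoted Argo-Tech
# problem (the real constraints aren't given, so these are invented):
#   maximize    c1*x1 + c2*x2
#   subject to  x1 + 2*x2 <= 100   (machine hours)
#               3*x1 + x2 <= 120   (labor hours)
#               x1, x2 >= 0
# Corner points of the feasible region, computed by hand:
VERTICES = [(0, 0), (40, 0), (0, 50), (28, 36)]

def optimum(c1, c2):
    """Return the profit-maximizing corner point and its profit."""
    best = max(VERTICES, key=lambda v: c1 * v[0] + c2 * v[1])
    return best, c1 * best[0] + c2 * best[1]

print(optimum(18.5, 20.0))  # nominal coefficients -> ((28, 36), 1238.0)
print(optimum(16.0, 20.0))  # modest shock: same mix, lower profit
print(optimum(9.0, 20.0))   # larger shock: the optimal mix itself moves
```

Sensitivity analysis can flag the third case after the fact, but only if the analyst already suspects which coefficient is wrong, and by roughly how much.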

Another trend that erodes the effectiveness of traditional MBAs has to do with the lack of boundaries to the knowledge and the techniques being taught. It’s as if the epistemology of management specialties has never even been considered. Recall the discussion of my classmates while I was pursuing my MBA, and how, after becoming familiar with a particular aspect of business, they would peel away to pursue that specialty, convinced of its primacy in the management world. Of course, we expected each of our professors to be a specialist in their field, and to be enthusiastic about their expertise. But never – not even once – did any of them step up and tell the class, “This is where on the management science map these ideas hold sway, and this is where they don’t add a thing to the decision-making process.” Newly-minted MBAs come off the assembly line knowing, say, how to read a balance sheet, but (probably) with no idea that Generally Accepted Accounting Principles bring next to nothing to the realm of project management. This ignorance is bad both for the MBAs and for the organizations they are attempting to support. Recall also the struggles I recounted about being a young project controls specialist and having to fight constantly with the accountants to get my project’s actual costs collected by Work Breakdown Structure instead of by Organizational Breakdown Structure. The accountants were simply so enamored of their knowledge and expertise that they could not be easily persuaded to accommodate the project managers’ most basic business information needs.

To be intellectually honest, though, I must now admit that I don’t perceive an outer boundary to Corner Cube theory as it pertains to management science, but there’s a difference here. When the Asset Managers assert that the whole point of management is to maximize shareholder wealth, they are actually reducing the realm of management to a certain set of confines – kind of like what happens in the whole Game Theory realm. When successful organizations make decisions that do not appear to have anything to do with maximizing shareholder wealth, the Asset Managers’ narrative cannot plausibly explain them. What Corner Cube theory does is to argue against such confines, and posit an almost infinite realm on the outside of the organization, one where the organization interacts with that wide-open realm in one of three ways. These ways mirror psychological models, with the several parallels noted in the previous chapter. Just as Game Theory sought to engineer circumstances so that options available to the players were reduced to a quantifiable amount, with the rewards and punishments also reduced via quantification in order to arrive at valid conclusions and truth, so, too, do the various specialties in management science seek to reduce that world by excluding the events and occurrences that cannot be reconciled via their theories. Recall also the results of the Ultimatum Game, when it was tried out with real people and real money. The actual outcomes virtually never matched the mathematically calculated predictions and, when they didn’t, the reason was ascribed to “cultural” differences that the model hadn’t taken into account. In short, the model was not sufficiently expansive to adequately reflect reality, just as much of today’s management science is insufficiently broad to reflect what goes on in the free marketplace. 
And yet management school graduates – and their professors – have profound confidence in the effectiveness of their techniques and knowledge, even though they do not appear ever to have sought to discover the limits of the arenas where their ideas hold sway. This is what makes the proposition of doing exactly that so unattractive: these people’s livelihoods and future income streams are predicated on the perception that their knowledge and techniques are valuable across broad applications, and any assertion to the contrary isn’t going to be welcomed. Ah, well, let’s just plunge in, and see how much trouble we can get into, shall we?

Limits of Asset Management Information Streams

The title of this section alone was fun to type. Here are the areas where Generally Accepted Accounting Principles, or GAAP, contribute to overall management information:

  • Number, amount, and value of the macro organization’s assets.
  • Amount and nature of the organization’s liabilities.
  • Value of the organization’s equity, and who holds it.
  • Amount of money coming into the organization.
  • Amount of money spent by the organization.
  • When the organization is likely to have states of higher or lower liquidity.
  • The profitability (or lack thereof) of the organization.

But unless GAAP information is combined with information from the other management types, that’s it.

Of course, much insightful management information can be gleaned from combining these elements. There’s no denying that these elements, both singly and combined, can be essential to such decisions as whether or when an organization should issue an initial public offering (IPO), or decisions on whether, when, or even how to expand or contract. But without input from the project or strategic management types, asset management information can’t perform many of the functions it claims to cover, such as determining how much inventory the organization should carry. Without some notion of Product/Project Management and Strategic Management, there’s no way to accurately anticipate demand for the organization’s products or services, and without knowing approximate demand, the inventory decision must be made with woefully incomplete information. Don’t misunderstand – I’m fully aware that many (if not most, or even all) of the decisions that managers make are made with incomplete information. But the use of untimely, inaccurate, or irrelevant information returns us to the problem of maintaining a facade of an informed basis for decisions. The issue here is that techniques exclusive to GAAP have been asserted far beyond their most appropriate range of relevancy. Consider this quote from Introduction to Management Accounting (Fourth Edition):

The accounting system is the major quantitative information system in almost every organization. An effective accounting system provides information for three broad purposes: (1) internal reporting to managers, for use in planning and controlling routine operations; (2) internal reporting to managers, for use in strategic planning, that is, the making of special decisions and the formulating of overall policies and long-range plans; and (3) external reporting to stockholders, government, and other outside parties.



Management accounting is concerned with the accumulation, classification, and interpretation of information that assists individual executives to fulfill organizational objectives as revealed explicitly or implicitly by top management. [103]

The first quote reveals that, like Corner Cube theory, Management Accounting believes it has three target customers. But, unlike Corner Cube’s triad, Management Accounting’s information customers are all internal to the organization – a highly limited grouping. The second quote contains the assertion that Management Accounting is “concerned with the interpretation of information that assists … executives to fulfill organizational objectives …” (italics mine). The collection and interpretation of the information streams that the organization’s decision-makers need and use to attain their objectives is certainly not confined to the internal domain of the organization. This is indisputable (using logic, anyway), and yet asset management techniques are often invoked outside of the asset realm.

For example, consider the Return on Investment, or ROI. For reasons that elude me, this calculation has been used to assess the value of things that have nothing to do with assets in-hand. As discussed earlier, PMI® has used it to support the creation and maintenance of Project Management Offices, or PMOs. In my view, to invoke the ROI calculation in the Project Management realm is to cede the intellectual high ground to the Asset Managers, at the very time Project Management specialists ought to be dominating the management science debate.

There are many derivative formulas used to calculate ROI, but common to them all is the central parameter of the most basic version: expected return. What is the expected return? On an asset like a corporate bond, it’s the stated return minus the impact of the odds of default. For a government bond, it’s the stated return minus both the odds of default and the inflation rate, since governments are in a position to print more money and thereby effectively default on their creditors. What are the odds of those things happening? Who knows? The Risk Managers? They can’t possibly know, but they will never admit as much. Consider commercial paper traders who become rich on so-called junk bonds. What’s the definition of a junk bond? It’s a corporate bond that offers a high rate of return, but is considered risky. Again, considered by whom? Risk analysts, of course – so if anyone should become adept at identifying high-yield bonds that the risk analysts have erred in classifying as high-risk for default, that person is almost guaranteed to become rich. It’s unfortunate that such insightful ones are somehow considered devious at best, and criminal at worst. Why the negative connotation? Because they profited by betting against those who derive bond default risk classifications, and won? What you have, in essence, are two scenarios: one where the expected value is known (the stated rate of return on the bond), but the odds of its being realized are uncertain; and the other, most often involved in the ROI calculation, where the expected value can’t be known, but the odds of its being realized (whatever “it” is) are pretty predictable. The junk bond dealer is excoriated, but the everyday ROI calculator is considered insightful. Amazing. Once again, there are simply too many parameters involved (and only one has to be mis-cast to utterly invalidate the entire analysis) to accurately calculate an expected value or odds of default that would enable a precise and accurate (read: usable) Return on Investment calculation, which would lead to truly informed decision-making.
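To see how much weight the unknowable parameter carries, consider a minimal expected-return calculation. The formula is a deliberate simplification, and all of the rates below are invented for illustration:

```python
def expected_return(stated_rate, default_prob, recovery=0.0):
    """One-period expected return on a bond, computed as if the odds of
    default were knowable -- which is precisely what they are not."""
    # With probability (1 - p): collect the stated return.
    # With probability p: lose the principal, net of any recovery.
    return (1 - default_prob) * stated_rate + default_prob * (recovery - 1)

# The same 11% "junk" bond under two different guesses at default risk
# (40% recovery on default assumed in both cases):
print(f"{expected_return(0.11, 0.08, recovery=0.40):+.3f}")  # +0.053
print(f"{expected_return(0.11, 0.02, recovery=0.40):+.3f}")  # +0.096
```

Everything turns on the default probability, and nothing inside the formula can tell the analyst whether 8% or 2% is closer to the truth.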

And yet, the ROI calculation has somehow achieved Litmus Test status as the key arbiter for all efforts undertaken by the organization (a Yahoo! search on “ROI” and “Project Management” returned 1.62 million hits in November 2010). This is a highly dubious meme perpetrated by Asset Management advocates throughout organizations – that is, organizations poised to be stricken by Black Swan events. I would love to attend a debate between the ROI advocates and the Six Sigma proponents, both of whom believe they should be the final arbiters of which business processes deserve consideration, and which do not. What are the top executives to make of information from the Asset Managers, who claim that a certain project or acquisition has a negative Return on Investment, when the Six Sigma guys are simultaneously asserting that the exact same project requires an infusion of resources in order to bring about a favorable end, from a Quality Management point of view? It would come down to whose input is considered more valuable and, based on traditional management science, the fall-back position always goes to the Asset Managers. Without the perspective provided by the Corner Cube model, what other choice can be expected?

The “expected future revenue” parameter makes a comeback when Asset Managers are calculating opportunity costs, except, in this instance, it is compared to “expected future costs.” It’s unfortunate that when politicians spend tax monies they obviously have absolutely no concept of opportunity costs, which are defined as “… the maximum available contribution foregone by using limited resources for a particular purpose.” [104] Otherwise they might be less inclined to advertise how those taxes are being spent. But, with no alternative to the new federal office building offered up, the new federal office building just looks like a gift – provided by the politician, of course. Nobody need know that the same amount of money spent at, say, the Centers for Disease Control could have brought about a breakthrough that made everybody’s life a few pennies better every day.

But in the business world, managers are (usually) acutely aware of opportunity costs, and this piece of information is often used in making decisions about the organization’s future. But should it be? The concepts of “expected return” and “expected costs” bear a striking resemblance to the Drake Equation analyzed earlier, which supposedly provided a positive value for the odds of extraterrestrial intelligence existing in the universe. As with the Drake Equation, both expected future revenue and expected future costs depend on little more than rank speculation. And rank speculation does not suddenly assume prescience just because it’s CPAs who are engaged in it.
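The Drake-style compounding is easy to quantify. If a calculation multiplies five parameters together and each is “known” only to within a factor of two (a generous assumption for the speculative inputs described above), the result spans three orders of magnitude:

```python
# Five multiplicative parameters, each "known" only to within a factor
# of two -- generous, given the speculative inputs described above.
point_estimate = 1.0
low = high = point_estimate
for _ in range(5):
    low *= 0.5
    high *= 2.0

# Modest per-parameter doubt compounds into a ~1000x overall spread.
print(low, high)  # 0.03125 32.0
```

The Drake Equation itself has seven such factors; the spread only widens.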

Look at how expected future costs are assessed – they are estimated. But, according to AACE (the predecessor to AACEI), the best possible estimate, performed by a professional estimator with off-the-shelf software, is accurate to within 15% of the project’s final costs. Decisions made using ROI or with considerations of expected costs often turn on far more precise parameters – a swing of a percentage point or two. Even if one concedes that the expected cost data point is not rank speculation à la the Drake Equation, the very best it could hope to return is 15% accuracy, and that’s with a considerable investment in resources and expertise. Leaving aside the scarcity of professional estimators in the typical organization’s accounting department, the use of these tools in an attempt to quantify future market conditions or events is far riskier than is generally accepted. They simply don’t work as advertised and, when they do, they are not nearly as precise or accurate as they purport to be.
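The arithmetic of that mismatch is straightforward. Using invented figures, a project whose point-estimate ROI looks comfortably positive can swing from clearly-accept to clearly-reject within AACE’s best-case 15% estimating band:

```python
def roi(expected_revenue, cost):
    """Basic Return on Investment: net gain divided by cost."""
    return (expected_revenue - cost) / cost

expected_revenue = 1_100_000   # invented expected return on the project
estimated_cost = 1_000_000     # professional point estimate, +/- 15%

# The point estimate looks comfortably positive...
print(f"{roi(expected_revenue, estimated_cost):+.1%}")   # +10.0%

# ...but the 15% error band spans both a clear "accept" and a clear
# "reject" for a decision that turns on a percentage point or two:
for err in (-0.15, +0.15):
    cost = estimated_cost * (1 + err)
    print(f"cost off by {err:+.0%}: ROI = {roi(expected_revenue, cost):+.1%}")
```

A decision rule tuned to single-percentage-point ROI differences is being fed an input whose best-case error band is an order of magnitude wider.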

Another highly irksome intrusion of the Asset Managers has to do with their interactions with the Product/Project Management folks. As discussed previously, when asked what the final costs of a particular project will be, they will employ devices such as performing regression analysis on the project’s cumulative actual costs, comparing original budgets to actual costs on a detailed basis, or re-estimating the remaining work and adding that figure to the cumulative actual costs – and pretend to present relevant information. All three of these analyses are profoundly flawed, and yet they continue to be incorporated in organizations even today.

Take a look at performing regression analysis on actual costs. This is as questionable a technique as it is popular. The script here would appear to be one of equating a given project team’s spending behavior with its performance against project objectives, and I would argue that such a script is utterly bereft of logic or reasonable causality. And yet the Asset Managers will confidently take this flawed structure for explaining how observed events unfolded, flip it over the time-now line, and pretend that the resulting data takes on the mantle of valid information. The data derived from performing a statistical regression analysis on a project’s cumulative actual costs exhibit two symptoms of uselessness: one, this is an asset management technique trying to perform in the project management realm, meaning it’s irrelevant; and, two, the narrative being projected forward includes fatal flaws in its function of explaining how past events unfolded, meaning it’s invalid. In addition to these two shortcomings, the technique shares the unattractive characteristics of appearing to provide usable information while actually misleading, and of consuming time and energy to collect the data and perform the analysis – time and energy which would have been better spent on truly usable information.
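A worked example (all figures invented) shows why the technique is irrelevant to project performance: two projects with identical cumulative spend curves produce the identical regression “forecast,” even if one has earned twice as much of its scope as the other:

```python
# Monthly cumulative actual costs for two hypothetical projects with
# IDENTICAL spending behavior (all figures invented for illustration).
months = [1, 2, 3, 4, 5, 6]
actuals = [100, 210, 330, 440, 560, 670]   # the same for both projects

# Ordinary least-squares fit of cumulative cost against time.
n = len(months)
mean_x, mean_y = sum(months) / n, sum(actuals) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, actuals))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

# The regression "forecast" of spend at month 12 is the same number for
# both projects -- even if Project A has by now earned 80% of its scope
# and Project B only 40%. Spending says nothing about accomplishment.
print(round(slope * 12 + intercept))  # 1359
```

The fit can be arbitrarily good (here it is nearly perfect) and still carry no information about what the spending bought.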

As for comparing budgets to actual costs, this technique actually does have a relevant function: that of evaluating the accuracy of the budgets. It’s also part of the equation used to calculate depreciation. But, again, this technique has been extended far beyond its areas of relevancy. The aforementioned Introduction to Management Accounting (Fourth Edition) can’t get seven pages into the first chapter without asserting that this is the method for evaluating cost performance. [105] As discussed using the Widget Project example, comparing budgets to actual costs simply can’t provide relevant information regarding cost performance. Earned Value is central to this piece of management information. Without Earned Value, there is no way to capture cost performance, as the Widget Project example showed so clearly. And yet the asset managers persist. Introduction to Management Accounting (Fourth Edition) was written by Charles T. Horngren, who is both a Ph.D. and a C.P.A., and is associated with Stanford University. [106] It’s hard to get more mainstream in the arena of management science principles than that. And yet, in my opinion, he’s simply wrong.
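The Widget Project’s lesson can be restated in a few lines (the figures below are invented here, not the book’s own). Budget-versus-actual comparison makes an over-running project look healthy, while the basic Earned Value measures expose the problem:

```python
# Invented figures in the spirit of the Widget Project example:
budget_to_date = 500   # planned spend by now (BCWS)
actual_costs   = 480   # actual spend to date (ACWP)
earned_value   = 400   # budgeted value of work actually performed (BCWP)

# Budget-versus-actual alone reads as mildly favorable...
print(f"spend vs. plan: {budget_to_date - actual_costs:+}")  # +20

# ...while the Earned Value measures show a genuine cost over-run:
cv = earned_value - actual_costs    # cost variance
cpi = earned_value / actual_costs   # cost performance index
print(f"cost variance: {cv:+}, CPI: {cpi:.2f}")  # -80, 0.83
```

The project has spent less than planned only because it has done less than planned; every dollar spent has bought about 83 cents of budgeted work, and budget-versus-actual cannot see it.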

Misapplication of the estimators’ skills makes another appearance here, though it could easily have been placed in the project management section. As we have seen, the Return on Investment and the calculation of opportunity costs require an evaluation of expected value or expected return which, in turn, relies on some form of estimate. These estimates are often inaccurate, even when performed by professionals using top-notch software – which, in my experience, is a rarity. Most often they are performed by low-to-mid-range analysts using spreadsheets, meaning that their accuracy rarely approaches the 15% level. The next level of estimate, known as a budget estimate, has an accuracy range of 25% to 40%, which means that it borders on useless. Recall that a valid information stream must have all three characteristics – timeliness, accuracy, and relevance – and the estimators’ product often fails to be accurate. And, harsh as this may be, it is, therefore, often invalid – except in some very specific areas, which we will come to.

Memory, the Mirror, and Anticipation

Management Information Systems are usually predominantly feed-back or feed-forward, but can sometimes manifest characteristics of both. Feed-back systems have the following attributes:

  • The data elements are matters of historical fact – they’ve already happened, and can be quantified.
  • Data so collected can be processed into information using known methods (though, as we have seen, these methods are often expanded well beyond their nominal ranges of relevancy or accuracy).
  • Feed-back systems are based on objective data.


Conversely, feed-forward systems have the following characteristics:

  • The data elements are comprised of projections.
  • Data so collected cannot be processed into information using trustworthy methods.
  • Feed-forward systems are based on subjective data.

Another way of defining feed-back and feed-forward systems is that feed-back systems help to construct the narrative that describes why history unfolded the way it did, while feed-forward systems attempt to provide the narrative of how things can be expected to unfold in the future. Asset management systems are based on feed-back, meaning that they are singularly ill-suited to predicting the future. It stands to reason, then, that any attempt by asset management information streams, based on GAAP, to predict the future is probably going to fail the accuracy test.

Consider how feed-forward and feed-back systems mirror the Berne archetypes. Feed-back data points are used as fodder for piecing together how history unfolded, and are assembled into narratives that tell the macro organization who it is. The attempt to flip that narrative forward – to describe the most likely future scenarios, supported by the subjective, feed-forward information streams – then becomes the foundation for decisions that determine the organization’s future direction. Feed-back systems create the narrative; feed-forward systems anticipate when similar circumstances and scenarios appear, and how the ensuing events will unfold.

However, only in the realm of Project Management does past performance provide a strong and reliable indicator of future performance. In all other areas the ability to flip the narrative from accurately describing why past events occurred the way they did over to providing a model for anticipating how the future will come at us is highly suspect, and defies a usable and consistent quantitative structure. Caleb, from East of Eden, may have recognized that investing in beans as the First World War broke out was a good (if fictional) investment, and I have a friend who benefited greatly by adjusting his investment portfolio towards Northrop-Grumman and Halliburton at the onset of the Gulf War. But, as they say, you can get away with these things now and then; you can’t make a living off of them. Such insights do not lend themselves to all-encompassing economic truths: speculating in commodities is one of the riskiest acts in all of free-market participation. And, most importantly, these insights do not and cannot come from analyses completely rooted in Generally Accepted Accounting Principles. They must be fed from other types of management.

Which brings us to: what happens when we look at ourselves in the mirror? Do the biases behind our internal narratives keep us from seeing the evidence and facts that might lead us to seriously examine those parts of the narrative that may be invalid, or obsolete? And, if that is happening on a personal basis – as it almost certainly is – what does that mean for the macro organization, with all of those individual biases coming together in large teams?

Generally speaking, GAAP provides relevant and accurate information when its feed-back type information streams are used to tell the organization about its internal characteristics. As soon as the asset management analyst attempts to use those techniques (like Return on Investment or Opportunity Costs) that rely on a feed-forward element, the results suddenly swerve into irrelevancy or inaccuracy. This inaccuracy and irrelevancy becomes all the more acute when feed-forward-based GAAP techniques are used to provide information streams on subject areas external to the organization, such as Product/Project Management or Strategic Management (Table 11.1).

Table 11.1 Asset Management Information System efficacy

Asset Management MISs  | Internal             | External – Customer    | External – Competition
-----------------------|----------------------|------------------------|-----------------------
Berne Archetype        |                      |                        |
Corner Cube Type       |                      |                        |
GAAP – feed-back       | Relevant, Accurate   | Irrelevant, Inaccurate | Irrelevant, Inaccurate
GAAP – feed-forward    | Relevant, Inaccurate | Irrelevant, Inaccurate | Irrelevant, Inaccurate

Referring to Table 11.1, note that the sole area where GAAP provides both relevant and accurate information is when the techniques used are based on feed-back type systems with the purpose of providing information internal to the organization. When attempting to provide information internal to the organization based on feed-forward type systems, the information is still relevant, but the difficulties involved in predicting the future (or “flipping the narrative”) make such information more suspect. Past these areas, though, Generally Accepted Accounting Principles techniques take on enough irrelevancy and inaccuracy as to render their output unusable.

Again and again, it comes down to information. Relevant, timely, accurate information. All things being equal on the information front, the organization with the most bias-free narrative will win most of the time. But, if things are not equal on the narrative front, the organization with the most timely, accurate, and relevant information will win every time.

Limitations to Project Management Information Streams

Project Management is not without its limitations. Compared to the asset managers, though, project managers have had approximately 500 fewer years to assert their supremacy in the arena of management science ideas. The Project Management Institute® was founded in 1969, and its first certification, the Project Management Professional®, was initially issued in 1984.9 [107] It now boasts over 350,000 PMP®s.10 [108] These PMP®s must pass an examination that covers the nine (formerly eight) sections of the Guide to the Project Management Body of Knowledge, or PMBOK® Guide. These areas include:

  • Scope.
  • Cost.
  • Schedule.
  • Risk.
  • Communications.
  • Human Resources.
  • Procurement.
  • Quality.
  • Implementation/Integration.

I do not know who came up with these areas of management as the building blocks of Project Management. Knowing PMI® as I do, my bet would be that it was done by committee. I do not agree that all of these topics should be considered the essentials of Project Management. I want to examine each of these nine areas, to evaluate if and where PMI®’s technical approach has been extended past its boundaries of relevance. And, in the arena of Project Management, it all begins with scope.

Scope

Scope Management is entirely predicated on the ability to fully and clearly define the product, project, or service being undertaken by the organization. Without a clear definition of the organization’s intended output, it’s impossible to assess how much the output will cost, or when it can be delivered. All project management-based information streams depend on the scope management piece being performed adequately.

Work that should be managed as a project has the following characteristics:

  • It has distinct scope, i.e., an independent observer can ascertain if the intended work has been completed (as distinct from a function, or an asset).
  • It has ascertainable beginning and ending dates.
  • Resources are dedicated to the work.
  • A single person (or organizational entity) is responsible for the work.

Often work that does not meet all four of these characteristics should still be managed as a project, but work that does meet all four is best managed as such.

The main technique involved in scope management is the decomposition of the overall project into smaller pieces in a hierarchical structure known as a Work Breakdown Structure, or WBS. The WBS is usually documented in the following formats:

  • A graphic chart, resembling an organization’s line management relationships, except instead of indicating which departments report to which executives, the graphics indicate the parent–children relationship of the decomposed project work.
  • A WBS Index, which lists the number and title of each WBS element.
  • A WBS Dictionary, which is a WBS Index with a relatively short description of each element in the WBS Index.
  • Work Packages and Planning Packages, documents that describe in detail the work associated with the most detailed (or lowest) level of the WBS. Work Packages and Planning Packages belonging to the same parent are often combined to form Control Accounts.
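To make the decomposition concrete, here is a minimal sketch of a WBS held as a nested structure, from which a WBS Index can be generated. All element names here are hypothetical, invented for illustration:

```python
# A hypothetical WBS as a nested structure. The lowest-level (childless)
# elements stand in for Work Packages.
wbs = {
    "1 House Project": {
        "1.1 Site Work": {
            "1.1.1 Excavation": {},
            "1.1.2 Grading": {},
        },
        "1.2 Structure": {
            "1.2.1 Foundation": {},
            "1.2.2 Framing": {},
        },
    },
}

def wbs_index(tree, out=None):
    """Flatten the hierarchy into a WBS Index: the number and title of each element."""
    if out is None:
        out = []
    for element, children in tree.items():
        out.append(element)
        wbs_index(children, out)
    return out

for element in wbs_index(wbs):
    print(element)
```

A WBS Dictionary, in this sketch, would simply attach a short description to each element of the index.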


Based on the detailed description of the scope, cost and schedule estimators can create their respective baselines that allow for control of the project.

Outside of project scope management, the technique of decomposing work into smaller, more easily evaluated pieces is common among quality analysts and industrial engineers, most often with the goal of making a given function or organization safer, more efficient, or both. However, once a technique’s given goal is no longer oriented towards improving performance from the customer’s point of view, it’s a sure sign that we’re no longer talking about project management. In this case, since the decomposition technique is being used on functions or organizations in order to improve aspects that are internal to the organization, we have stepped back into the realm of the Asset Managers. That’s okay, just as long as everybody’s clear about which Corner Cube area is being addressed or evaluated.

Schedule

Much information system chicanery occurs in attempting to manage a project’s schedule. The only valid way to manage all but the very simplest project’s schedule is through the Critical Path Method, and there are no exceptions. Critical Path involves estimating the duration of accomplishing the scope detailed in the Work Packages, and then establishing which pieces of work must be accomplished before other pieces can begin. When these pieces of work have been linked in sequential order, they are said to have schedule logic, and a critical path network can be established to calculate total planned project duration. There are many off-the-shelf software packages that can accommodate and calculate these networks, varying in ease of use and robustness.
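As a rough illustration of how such a network yields total planned duration, here is a minimal forward-pass sketch; the activities, durations, and logic links are invented for the example:

```python
# Minimal critical-path forward pass over a hypothetical network.
# Each activity maps to (duration_in_days, list_of_predecessors).
activities = {
    "excavate":   (10, []),
    "foundation": (30, ["excavate"]),
    "framing":    (20, ["foundation"]),
    "electrical": (15, ["framing"]),
    "roofing":    (12, ["framing"]),
    "finish":     (5,  ["electrical", "roofing"]),
}

def forward_pass(acts):
    """Compute each activity's earliest finish; the largest is the project duration."""
    finish = {}
    def ef(name):
        if name not in finish:
            dur, preds = acts[name]
            start = max((ef(p) for p in preds), default=0)
            finish[name] = start + dur
        return finish[name]
    return {n: ef(n) for n in acts}

finishes = forward_pass(activities)
print(max(finishes.values()))  # total planned project duration in days
```

Each activity's earliest finish is the latest finish among its predecessors plus its own duration; the largest earliest finish across the network is the total planned project duration.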

The Critical Path Methodology is particularly powerful in its ability to tell the future. There’s no narrative flipping, either – CPM does it with (mostly) objective data. For example, suppose an activity for pouring a building’s foundation was originally estimated to take 30 days. On day 15, the scheduler contacts that activity’s manager and asks for an estimate of how much progress has been made. The manager measures how much concrete has actually been poured, and informs the scheduler that the activity is 30% complete. The scheduler dutifully enters this piece of data into the CPM software, recalculates the schedule, and, voila!, the building’s foundation will not be completed by day 31. Rather, at the current rate of performance, the foundation will not be done until day 50, so all of the managers who had planned on starting their successor activities on day 31, on the assumption that the foundation would be finished, need to be contacted and told to wait an additional 20 days.
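The recalculation behind the scheduler's voila! moment is simple arithmetic – at the current rate of performance, total duration is elapsed time divided by fraction complete. A sketch, using the chapter's numbers:

```python
# The scheduler's recalculation, using the foundation-pour figures above.
planned_duration = 30     # days originally estimated for the foundation pour
days_elapsed = 15         # the status date
fraction_complete = 0.30  # measured from the concrete actually poured

# At the current rate of performance, forecast total duration is
# elapsed time divided by fraction complete.
forecast_duration = days_elapsed / fraction_complete
slip = forecast_duration - planned_duration
print(forecast_duration, slip)  # roughly 50 days total, a 20-day slip
```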

Of course, there are limits to this predictive power. Rates of performance can and do change (though, as we saw earlier, no more than 10% in either direction once the activity is 20% complete), and Critical Path can only provide this information within the project management realm. The acid test here is the ability to assess an activity’s percent complete. Obviously one can’t glean percent complete on the engineers’ department, or from an asset. But as long as we recognize where CPM is relevant and accurate, the fact that it uses mostly objective data in a feed-back-type system means that its ability to accurately depict the timing of future completion dates is matched by no other information stream or technique.

But that certainly doesn’t mean that simpler, but invalid, techniques won’t give it a try. Recall that all legitimate information streams’ architecture can be reduced to the following model:

Figure 11.1 Valid MIS architecture



Probably the most common invalid MIS structure is a poll. Mapped out, it looks like this (Figure 11.2):

Figure 11.2 Invalid MIS Structure



This poll-type structure is the basis for uncounted milestone lists and action item trackers. There are several problems with this structure, including:

  • Note that no actual processing of the data into information is taking place. It’s simply a raw data repository.
  • There’s rarely any discipline involved in the data being entered. In the case of milestone lists, “updates” are invariably a given manager’s opinion about whether or not she will meet her deadline. Even if said manager is aware of a problem, it’s a rarity for her to self-identify early, preferring to believe that the problem(s) can be addressed and corrected before a deadline is missed.
  • It’s very difficult to evaluate the data with respect to its timeliness. One of the problems with the so-called information coming out of a poll is that there’s always someone with better, more recent data, and they may or may not have entered (much less superseded) the obsolete data in such a system.


The use of poll-type structures in an attempt to manage a project’s schedule has a particular consistent, almost comical, manifestation. The milestones are set up at the beginning of the macro reporting cycle, usually an organization’s fiscal year. Since these milestones are for the whole year, most of them come due towards the end of that time. At the end of each micro/reporting cycle (usually monthly), the owning managers are contacted to gauge their opinion if they will meet their deadlines or not. Almost always they will answer that the milestones will be met on time under the following circumstances:

  • They honestly believe that the milestone will be met on time.
  • They recognize problems exist, but believe these problems are too minor to impact completion dates.
  • They recognize deadline-threatening problems exist, but believe they can be remedied in time.


In fact, the usual case is that the owning manager will not readily self-reveal a schedule problem unless and until a delay cannot be avoided or covered up. This leads to the output of the poll-structure system indicating that all is well up until the closing reporting cycles of the milestones’ baseline, when early trouble signs begin showing up in the reports. Finally, in the closing cycle of the reporting baseline, all of the problems surface simultaneously, essentially negating the organization’s ability to deal with them effectively until after they have completely manifested. The system that pretended to anticipate problems can only point to where the problems were after the fact.

Now, if these systems function as a sort of broad-based to-do list, I have absolutely no problem with that. It’s only when this type of structure moves to displace a Critical Path-style system in order to “manage” project or program milestones or activities that they should be considered irrelevant and inaccurate. CPM systems are populated with mostly objective data elements – not so polls, which are often populated with subjective data elements. And when I say “displace,” I do not mean that a CPM system has to already be in place on a given project. The previously-discussed ruse of “There was nothing there before, and this system is better than nothing” should never be allowed to serve as the basis for installing a poll on a project when a CPM is the appropriate system.

Cost

The next in our cavalcade of counterintuitive but inescapably true challenges to current management science that will draw the ire of business professors everywhere is this gem: the techniques contained in the GAAP codex do not provide the information needed to manage costs. The closest they come is in managing spending, which is something different altogether. In addition to the amount spent by members of the organization and the amounts planned to be spent, one more critical piece of information is needed: Earned Value. As discussed in Chapter 9, there is no cost management without Earned Value. Chapter 9 also discussed the calculated Estimate at Completion, or EAC – in other words, Earned Value can accurately predict future project costs in the same fashion that Critical Path can accurately predict the timing of future project completion dates.

But, like the polls that attempt to elbow aside legitimate CPM systems, GAAP techniques are often (mistakenly) employed in place of Earned Value systems when trying to predict future costs from project work. Ironically, Earned Value was originally a GAAP concept, which was adopted by the Project Management realm much as English readily steals words from other languages. It’s easy to see why – Earned Value has limited value when removed from the project realm, particularly when it is tried in the Asset Management world. But, in the project management arena, it is clearly indispensable. Similarly, when GAAP tools make an appearance in project management, they simply don’t work as advertised. This is because (a) the narrative is no longer about the characteristics of the internal organization – it’s now about why things unfolded the way they did, and how they are likely to occur in the future; and (b) the attempt to flip the narrative is based on the “expected value” parameter which, as we have seen, is highly suspect.

One of the most powerful characteristics of Earned Value Methodology (EVM) is its self-correcting capability. If a GAAP analysis contains a profoundly flawed expected value figure, the whole analysis is rendered inaccurate and misleading, and will remain so unless and until the offending parameter is discovered and corrected. However, a poor initial assumption, such as planned budget, is almost automatically exposed and corrected in an EVM system. Say a project is originally estimated to cost $100,000, but a perfectly prescient estimator would have known that the final costs will really turn out to be twice that. The project gets underway. At the end of an early reporting period, the EVM analysts learn from the project manager that the project is 25% complete; however, the accountants have dutifully been collecting costs based on the Work Breakdown Structure, and total project costs are $50,000. The EVM analysts know instantly (and, if they don’t know instantly, any EVM software would tell them) that the project’s EAC is going to be $200,000, even though a completely flawed assumption served as the basis for the original budget creation.
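The analysts' instant conclusion follows directly from the standard Earned Value formulas (EV = percent complete × budget; CPI = EV ÷ actual cost; EAC = budget ÷ CPI). A sketch, using the figures above:

```python
# The chapter's worked example. The budget is the flawed initial assumption;
# percent complete and actual costs are the objective status data.
budget_at_completion = 100_000  # original (flawed) estimate
actual_cost = 50_000            # costs collected against the WBS to date
percent_complete = 0.25         # reported by the project manager

earned_value = percent_complete * budget_at_completion  # budgeted value of work done
cpi = earned_value / actual_cost                        # Cost Performance Index
estimate_at_completion = budget_at_completion / cpi
print(estimate_at_completion)  # 200000.0
```

The flawed budget cancels out of the calculation: a CPI of 0.5 doubles the EAC regardless of how the original figure was arrived at, which is the self-correction at work.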

Risk

I think I did a fairly thorough job of deconstructing the Risk Managers’ narrative in Chapter 8, and do not wish to revisit it here. However, I would like to point out that Risk Management is commonly considered an essential part of project management, both in the public and the private sectors, as evidenced by its being required by many government agencies and mega-corporations, as well as its continued presence in the PMBOK® Guide. There’s an interesting rumor floating around the blogosphere, that there is not a single solitary major project that can point to Risk Management techniques as being responsible for attaining project objectives on time and under-budget, but I don’t know how that assertion could be proven (or disproven) empirically. I do know (from experience) that publicly asserting that Risk Management is little more than institutionalized worrying expressed in mind-numbing statistical jargon is almost sure to draw a plethora of angry responses, many of them unhinged.

Risk Management suffers from both problems of relevancy and accuracy. Chapter 8 mostly focused on accuracy, especially with respect to Black Swan events. The relevancy issue can be settled with one simple question: did the accurate prediction of the contingency event change the response from the project team in any substantial fashion? If the answer is “no,” then there can be no question – the risk analysis was irrelevant. In fact, the only time after the establishment of the project’s cost and schedule baselines that risk analysis techniques can be said to add any value whatsoever is when the analysis is both accurate and substantially changes the project team’s response. Consider a payoff matrix based on the risk analysts’ information stream (Table 11.2).

Note that only in the instances where the information is relevant AND accurate does the outcome have any possibility of justifying the time and expense of pursuing a robust Risk Management system. But it has been my experience that, rather than seek augmentations to the accuracy rate of Risk Management techniques, combined with an identification of where risk analysis is and is not relevant, the Risk Management community has instead sought the expansion of their techniques into the so-called “upside risk,” or opportunities management.

Table 11.2 Risk Management Information Stream Payoff Matrix

           | Irrelevant                                     | Relevant
-----------|------------------------------------------------|---------------------------------------------------
Inaccurate | Wasted effort producing misleading information | Misleading information
Accurate   | Waste of effort                                | Usable information, lending credence to the system
Ironically, the Risk Managers’ greatest contribution comes when they abandon the idea that they can predict the project’s future, and embrace the notion that they can provide valuable insight into the narrative of why the organization’s history unfolded the way it did – but they would have to embrace CPM and EVM to do so. For example, when an organization encounters a project management disaster, the forensic analysis that can provide a clear script as to what went wrong, and why, can be elusive. However, by using a Responsibility/Accountability Matrix, or RAM, which cross-connects the project’s Work Packages with the specific team or organizational units that perform the work, past cost and schedule performance can be gleaned and analyzed. Can you imagine the value of the information that indicated, say, that those Work Packages that performed the poorest in the Big Dig project disaster were those with the highest union participation? (To be clear, I’m not saying such information exists. But can you imagine the value of such information?) Besides the explosive political implications, those contractors most concerned with success in the Project Management realm (unfettered by forced union participation) would work to avoid the characteristic associated with project failure. Because correlation is not causation, Risk Analysts could test for confidence intervals that would indicate which characteristics were most probably the proximate cause of the project encountering the negative event.
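As a sketch of the kind of forensic analysis described above, consider a RAM held as a simple table, with cost performance aggregated by performing unit. Every Work Package, unit name, and dollar figure below is invented for illustration:

```python
# A hypothetical Responsibility/Accountability Matrix: Work Packages
# cross-connected to the units that performed them, with earned value (EV)
# and actual cost (AC) figures invented for the example.
ram = [
    # (work_package, performing_unit, earned_value, actual_cost)
    ("WP-01", "Unit A", 120_000, 100_000),
    ("WP-02", "Unit A",  80_000, 110_000),
    ("WP-03", "Unit B",  95_000,  90_000),
    ("WP-04", "Unit B", 105_000, 100_000),
]

def cpi_by_unit(rows):
    """Aggregate EV and AC per unit, then compute each unit's CPI (EV / AC)."""
    totals = {}
    for _, unit, ev, ac in rows:
        t = totals.setdefault(unit, [0, 0])
        t[0] += ev
        t[1] += ac
    return {unit: ev / ac for unit, (ev, ac) in totals.items()}

print(cpi_by_unit(ram))
```

Units whose aggregate CPI falls well below 1.0 share a characteristic worth testing as a candidate proximate cause of the failure.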

But eliminating cognitive bias errors from the organization’s internal narrative is not what these guys are about. It’s really too bad, too.

Communications

Check the definition of project communications management from the PMBOK® Guide:

Project Communications Management includes the processes required to ensure timely and appropriate generation, collection, dissemination, storage, and ultimate disposition of project information. It provides the critical links among people, ideas, and information that are necessary for success.11 [109]


If one accepts, as I do, that information is the lifeblood of any organization, then this definition is so broad as to not exclude virtually any aspect of management science. As with the overly broad definition of Risk Management, this definition, by failing to establish that which clearly falls outside the realm of Communication Management, also fails to give a usable definition of what it actually is. This section of the PMBOK® Guide actually goes on to define a variance as the difference between the plan and the actual costs, and claims Earned Value analysis as belonging in the Communications Management purview. It’s as if the PMI Standards Committee provided distinct lines mapping out the subject areas they believed comprised project management proper, coloring-book style, and then allowed the various contributing writers and editors to color not only their own areas, but to cross the lines into other contributing writers’ areas, without coming back afterwards to comment on the transgressions. The result is an intellectual blur, a series of overlapping management science theories and hypotheses competing for supremacy within the vaguely-defined realm of “Project Management.” And here, ironically, we have “Communications Management,” as poorly defined as any of them, apparently overlapping virtually all others. I hope that I am up to the task of reining in the communications management aficionados, and placing appropriate limits on where their theories contribute to the narratives and, more importantly, where they do not.

I readily concede that any miscommunication that leads to misunderstandings, conflicts, and poor decisions should be avoided, but I am not ready to concede that they should be avoided at all costs. Communication Management advocates, in my experience, are extremely fond of depicting enhanced communication techniques as the key to complete understanding, harmony, and good decision-making, and I’m just not buying it. Most conflicts – perhaps a strong majority – have nothing to do with a breakdown of communications. The belligerents are perfectly clear where they stand with respect to their opponents, and discussing the nature of the conflict does not help resolve it. It’s true in politics, it’s true in military affairs, and it’s true in management.

Indeed, none of the text in the PMBOK® Guide’s section on Communications Management addresses the concept of deceit – the writers act as if all communications are genuine representations of the communicators’ true intentions and positions. However, as we saw in Part 1, deception is a key component of communications in games – perhaps the key component in games such as Chicken, Poker, or Diplomacy. And, to the extent that these games model aspects of managerial interactions in the free marketplace, it must be conceded that deceit plays a significant role in management science in general, and Communications Management in particular. Information must be timely, relevant, and accurate; but, if your organization can’t get sufficient information that passes all three of these criteria, then injecting inaccuracy into your competitors’ information stream becomes a tempting alternative. In these instances, attempting to ratchet up the communications would not only fail to lead to an acceptable resolution, it would do the exact opposite, by providing greater opportunities for deceitful and inaccurate information to taint the organizations’ MIS streams.

It’s hard to make clear distinctions where the Communications Managers’ techniques begin to lose efficacy, because the assertions of where they do hold sway are so poorly defined in most cases. I did learn of one Communications Management approach from the brilliant Fred Tarantino, currently the President and CEO of the Universities Space Research Association. At the time, Fred was Principal Associate Director for Nuclear Weapons Programs at a national laboratory, and I was his Director of Program Controls. Our customer was the US National Nuclear Security Administration, and (obviously) incomplete, inconsistent, or inappropriate communications could have led to severe organizational setbacks. Dr Tarantino set up something he called a “zipper plan,” where each of his management team members knew their specific customer contacts (Fred did not invent this approach – he merely taught it to me as he implemented it in his organization). These lines of communications were exclusive – like the interlocking teeth on a zipper. In matters concerning cost and schedule performance reporting, I was responsible for managing that relationship on behalf of the Laboratory. I knew my point of contact within the NNSA, and the Program Directors who reported to Dr Tarantino knew these things as well. Similarly, on matters of science or engineering I knew which Program Directors were responsible for managing those areas, and would defer to them in all instances where there was a possibility that I would receive a request for information outside my purview. We sought to avoid the appearance of “stovepiping,” or limiting the functionality of the program team to specific organizational units, while at the same time respecting the other team members’ areas of expertise.
I think the zipper plan worked great, though I have to admit that it’s impossible to differentiate the contribution to the excellent customer relations we enjoyed during that time between the utility of the plan and Dr Tarantino’s (and his team’s) personal charisma.

Much of the current scholarship on Communications Management (including the PMBOK® Guide) is oriented towards delivering information among project team members and others, without much consideration of the accuracy, relevance, or appropriateness of that information. I would argue that, much as information must be timely, accurate, and relevant, so too the communication of that information must ensure that it is done in an appropriate way. Communications designed for consumption within the organization are rarely appropriate for customers, and almost never good for transmission to competitors, and vice versa. The transmission of information must be managed – but primarily to ensure its security, not its broadest and quickest dissemination. The pop management culture idea that any and all who can make even the most tenuous claim to being a project “stakeholder” ought to have ready access to a wide variety of organizational and project information is seriously flawed, and should be rejected.

Human Resources

I have never been associated with, nor become aware of, an organization where the Human Resources department is contained within the Project Management Office, or PM Organization. This managerial area’s name alone, Human Resources, gives a clear indication that it belongs with the management of other resources, and not within Project Management.

What, exactly, are we talking about when we reference “Project Human Resource Management?”12 [110] As could be expected, the cavalcade of overly-broad definitions of management science techniques or theories continues unabated. Again referring to the PMBOK® Guide, it “… includes the processes required to make the most effective use of the people involved with the project.”13 [111] This definition appears to encompass every other section of the Guide, save Procurement – and, if we’re talking about hiring consultants or subcontractors, not even the buyers are safe from being claimed as falling under the purview of the Project Human Resource Management experts.

I would like to offer a narrower definition: Project Human Resource Management is confined to acquiring the personnel that are best able to accomplish the project’s scope, within the project’s constraints of time and budget. Whether or not they are already part of the organization is not a Project Management concern – that variable belongs to the Asset Managers. Similarly, luring the person(s) involved away from a key competitor, and thereby inflicting a setback on that competitor, is not a Project Management concern – it’s a Strategic Management issue.

Granted, it’s far less sexy than making “the most effective use of … people …” but I believe it’s much more defensible in the realm of management science. Many a dramatic project failure made very effective use of its people, and many projects have seen amazing success with arguably poor use of people. And what, exactly, constitutes effective or ineffective use of people? Isn’t it the outcome of the endeavor? What Human Resources expert worthy of the name would have counseled the Biblical King Saul to send the diminutive shepherd boy David – who was too small even to wear any of the available armor – up against Goliath?

Assigning the role of arbitrating what constitutes an “effective use of people” to the Human Resources crowd is similar to giving the Quality Management-types the power to define which business processes are valid, and which are not, and is similarly misguided.

Procurement

What’s being procured? Assets.

No, I’m not being deliberately obtuse. I’m well aware that there’s a lot more to procurement than accounts payable, and a good chunk of that does fall legitimately into the realm of Project Management. But only to the extent of the project manager judging fitness with respect to attaining the project’s (or task’s) scope, within the parameters of cost and schedule. In this area the PMBOK® Guide’s discussion of vendor and bid evaluation is highly valuable.

There are two aspects of procurement management that I would like to examine: how procurement behavior adds to the narrative that explains how history unfolded, and how it may broadcast to customers and competitors alike what is likely to happen in the future.

I was a complete geek in High School (if you couldn’t tell by my Part 1 discourse about being First Board on the Chess Team). I actually spent the night of my Senior Prom studying chess openings for an upcoming USCF tournament, since I couldn’t get a date. Then, in the Summer before I started attending the University of New Mexico, my parents bought me a 1963 Cadillac Sedan de Ville (at the onset of the late 1970’s energy crisis, cars with terrible gas mileage were fairly cheap). It was painted gold, with a cream-colored interior, and I never wanted for dates thereafter. Girls practically jumped into the thing. Now, I hadn’t changed significantly in the intervening weeks between missing prom and bearing a striking resemblance to the subjects of the old Hai Karate commercials – the Cadillac was clearly the independent variable. The things we procure and hold do not necessarily define us, but they do convey information about us, both accurate and relevant, and not.

An organization’s assets – its offices, location(s), even company cars – transmit information to customers, competitors, and itself. And what this information tells the organization about itself is highly vulnerable to the cognitive and confirmation biases of those who assemble the interior narrative. I have often wondered why such a large percentage of high-value lottery jackpot winners return to their previous economic conditions within five years, and I think it has to do with cognitive or confirmation biases in their internal narrative. The cold, hard truth that winning a large amount of money in a lottery (a) is incredibly rare, and (b) has absolutely nothing to do with the relative merit of the recipient does not make for comfortable incorporation into the internal narrative, leaving recipients vulnerable to a script that pushes them towards future failure.

In many areas of management, a significant procurement can transmit information to customers, competitors, and the procuring organization on likely future pursuits. In the (American) National Football League, a team that expends significant resources on, say, a wide receiver is telegraphing to the other teams in its Division that it intends to pursue an enhanced capability in its aerial game. Transactions such as corporate takeovers are virtually impossible to keep secret, and can be very telling about the acquiring organization’s future plans.

In both instances, procurement behavior generates an information stream, the output of which can be consumed by employees, customers, and competitors. As we will see in the next chapter, combinations of information streams can be highly enlightening when used by the owning organization, but highly dangerous when competitors are in possession of their output.

And one more thing: I have never encountered an organization where the folks doing procurement report to the Project Management Office.


As with the Risk Management supporters, I’ve already done significant deconstruction of the Quality Management script. To reiterate, the techniques employed by Quality Management experts are highly valuable in the production world. They are also valuable in the service industry, but less so than in production. But once you get a bunch of Six Sigma guys crawling over your business processes, there’s no end to the mayhem they can cause.

I am, nevertheless, sympathetic to their situation. Many (if not most) areas of management can and do hinge on the ability to make the best decisions, which, in turn, is often predicated on having the best information in hand. This is only partially true of the quality guys. Measurements of tolerances and other parameters may indicate which products are likely to fail prior to their expected lifespan, but to significantly improve an organization’s willingness to improve the quality of a given product, you need to be able to change people’s narratives, which can be difficult in the extreme – just ask any member of the clergy. While in graduate school, one of my professors passed along a story about a quality consultant who had been hired by a US-based automotive company. This expert came to a board meeting with a forged and polished Honda piston, along with a comparable piston produced by the US carmaker, at a time when that company still tended to cast its pistons. The visual difference in quality between the two pistons was striking, so the US carmaker responded by firing the consultant and forbidding him from ever entering another of its plants. I have no idea whether the story is true or not, but it does help illustrate that, in order for Quality Management specialists to have a positive influence, they must change organizations’ internal narratives.


This part of the PMBOK® Guide ostensibly deals with the coordination of each of the other areas. But there is no narrowing of definitions, no lines of demarcation where each of the other eight areas can claim efficacy, and where they cannot. Imagine the processing of management information as if it happened on a conveyor belt. Along the sides of the conveyor belt are taped-off areas indicating where each employee works, adding to or processing the information stream as it comes into their taped-off area. The information stream, like a manufactured item, is incomplete as it enters their area, and the expectation is that the worker will change the item to a state that is acceptable to the next person down-line prior to the information item leaving their own area. At some point in the process the next person to receive the information is its ultimate consumer, the manager/decision-maker.

Now imagine that a provocateur sneaks in during the night, and moves the tape that represents the information stream’s integrated components. Some areas now overlap, others have gaps. Obviously, chaos ensues, severely damaging the timeliness and accuracy of the information, even assuming its relevancy is already established.

Any document or analysis that presumes to represent project management integration would, in this analogy, have to serve the function of going into the information-processing assembly line and putting the tape back in its appropriate place, whether or not the previous map of the lines was archived and retrievable. The PMBOK® Guide section on integration does not attempt this, and, in my opinion, that is unfortunate. Such an effort would send a clear signal that at least some thought had gone into analyzing the limits of project management science’s efficacy, the limits of PM epistemology.

Don’t misunderstand – I’m not assigning blame. As has been previously discussed, the Asset Management crowd has so oversold their techniques that any competitor to them almost has to do some overselling themselves. To self-limit or self-contain is tantamount to ceding the argument at the outset. But if we are to evaluate the three types of management and their information streams, the only even-handed approach is to identify their logical limits, apart from the political or pop management implications, leading us to the following table (Table 11.3).

I’m maintaining that, via the Responsibility/Accountability Matrix, or RAM, cost and schedule information can be isolated to the specific teams or groups that contributed to the project. That being the case, valuable information can be obtained concerning the performance of the various parts of the internal organization. Of course, performing an Earned Value analysis on non-project work is futile, and yields no usable information. However, knowing which sub-organizations are the optimal performers, and which are not, must be considered key internal management feedback. Cost and schedule performance information assigned to specific sub-organizations also provides a powerful basis for flipping the narrative forward, and using the ensuing projection for anticipating future events. NFL teams have command of the statistics concerning their place kickers. If a given place kicker is inaccurate outside the 40-yard line, that team is more likely to go for a first down in fourth-and-short situations outside the kicker’s limits of accuracy. Similarly, organizations that know which of their sub-organizations are likely to perform poorly will approach real-time situations forewarned with a usable knowledge of their vulnerabilities, and can make informed decisions based on that knowledge. Hence, I’m giving the Internal Management/Feed Forward block a green light.
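The sub-organization scoring described above can be sketched in a few lines. The following Python snippet is a minimal illustration, not the author’s method: the team names, figures, and the 1.0 “watch” threshold are all invented for the example, while the CPI and SPI formulas are the standard Earned Value definitions (earned value over actual cost, and earned value over planned value, respectively).

```python
# Hypothetical sketch: rolling up Earned Value metrics by RAM-assigned
# sub-organization to identify strong and weak performers.
# All team names and figures are invented for illustration.

# (team, earned value EV, actual cost AC, planned value PV)
ram_data = [
    ("Design",      120.0, 100.0, 110.0),
    ("Fabrication",  80.0, 105.0,  95.0),
    ("Test",         60.0,  60.0,  70.0),
]

def performance_indices(ev, ac, pv):
    """Return (CPI, SPI): the standard cost and schedule performance indices."""
    return ev / ac, ev / pv

for team, ev, ac, pv in ram_data:
    cpi, spi = performance_indices(ev, ac, pv)
    # An index below 1.0 means the team is over cost or behind schedule.
    flag = "watch" if min(cpi, spi) < 1.0 else "ok"
    print(f"{team}: CPI={cpi:.2f} SPI={spi:.2f} ({flag})")
```

A table like this, maintained over time, is exactly the “feed forward” input described above: it tells the organization which sub-organizations are likely to underperform before the next project begins.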

Table 11.3 Logical Limits of Management and Information

Project Management MISs | Internal           | External – Customer | External – Competition
Berne archetype         |                    |                     |
Corner Cube Type        |                    |                     |
EVM/CPM – feedback      | Relevant, Accurate | Relevant, Accurate  | Irrelevant, Accurate
EVM/CPM – feedforward   | Relevant, Accurate | Relevant, Accurate  | Somewhat relevant, Accurate

Of course, Earned Value and Critical Path Methodologies are the only sources of relevant and accurate cost and schedule performance information in the project management realm. That’s why they get the green light in the PM blocks.

In the strategic management realm, EVM has powerful predictive abilities, but its ability to add to the narrative of why an organization lost ground to a competitor is limited. While customer satisfaction is a usable variable in market share analysis, there are too many other variables to declare PM techniques viable. I would like to remind the reader of the discussion of calculating a project’s Estimate at Completion from Chapter 9. Both Critical Path and Earned Value Methodologies have powerful predictive capabilities that are not possessed by any of the techniques from GAAP that depend upon the variable “expected value.” The ability to accurately quantify performance germane to Earned Value and Critical Path is key to this ability to flip the narrative forward and (somewhat) accurately predict the future. Nothing in GAAP has this ability, which is why Asset Management techniques are pretty much helpless when pressed into this sort of duty. It is this ability to project likely future performance outcomes that leads me to assert that project management information systems can provide accurate information in the strategic realm. Those capabilities are limited, however, because project management information streams are centered on the cost and schedule performance of products and projects, which are oriented towards customers, not competitors, bringing about questions of relevancy.
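The Estimate at Completion calculation mentioned above (the Chapter 9 discussion) can be illustrated with the two most common EVM forecasting formulas. This is a generic sketch with invented figures, not a reproduction of the author’s worked example:

```python
# Two standard Earned Value Estimate at Completion (EAC) formulas.
# BAC = Budget at Completion, EV = Earned Value, AC = Actual Cost.
# The figures below are invented for illustration.

def cpi(ev, ac):
    """Cost Performance Index: earned value over actual cost."""
    return ev / ac

def eac_cpi(bac, ev, ac):
    """EAC assuming current cost efficiency persists: BAC / CPI."""
    return bac / cpi(ev, ac)

def eac_remaining(bac, ev, ac):
    """EAC assuming remaining work proceeds at budgeted rates: AC + (BAC - EV)."""
    return ac + (bac - ev)

bac, ev, ac = 1_000.0, 400.0, 500.0
print(eac_cpi(bac, ev, ac))        # 1250.0 – overrun projected to continue
print(eac_remaining(bac, ev, ac))  # 1100.0 – overrun treated as one-time
```

This is the “flip the narrative forward” mechanism in miniature: a quantified measure of past performance (CPI) becomes the basis for a projection of the future, something GAAP-style expected-value accounting cannot replicate.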


There is absolutely no doubt in my mind that the Project Management Institute® has had an extremely powerful, positive impact on management science as currently practiced. That having been said, its problem, like that of myriad other sciences and scientists, is that it has not displayed the ability to get comfortable asserting what it does not know – the point at which its ability to provide a meaningful contribution to certain parts of the management science arena loses efficacy.

But, if that is true of the Project Management supporters, what is to be said of the Asset Management aficionados? How far outside their legitimate realm must they be when PMI® commissions a book that attempts to justify the PM function in GAAP terms, i.e., return on investment? All of the attempts to quantify the value of Project Management – and there are many – mis-state the problem from the beginning. Project Management as a type of management (like Asset or Strategic Management) can’t be quantified as a constant. What’s the value of completing a project successfully versus failing to do so? It depends on the project, the customer, the size of the project, its scope, and thousands of other parameters that can’t be accurately captured, or even conflated and then glossed over by substituting the expected value data point. Even if we reduce the question to evaluating the value of the Project Management information stream, that can only be known in retrospect, if at all. The intelligence on the Imperial Japanese Navy’s order of battle at Midway – could that be valued at four aircraft carriers, one heavy cruiser, 248 aircraft, and 3,057 lives? Or was that the value of Spruance’s decisions, while the intelligence simply supported those decisions? It’s impossible to say one way or the other. Similarly, it’s impossible to quantify the value of the three types of management, or their information streams. The only conclusion that can be drawn with any certitude is that the lack of these information streams represents a profound vulnerability within the organization. If these vulnerabilities, to both competitors and Black Swan events, are not exploited, then the information is useless. If the vulnerabilities are exploited, then the information streams that could have provided sufficient warning are as valuable as the damage done, up to and including the total value of the organization and its assets.
