Chapter 4 of Second Order Project Management (978-1-4094-1094-2) by Michael Cavanagh


CHAPTER 4

Managing for Outcome

 

A means can be justified only by its end. But the end in its turn needs to be justified.

Leon Trotsky

In the beginning of the malady it is easy to cure but difficult to detect, but in the course of time, not having been either detected or treated, it becomes easy to detect but difficult to cure.

Niccolo Machiavelli

 

March 2, 1969. I can clearly remember watching the first Concorde flight. There had never been such a breathtakingly beautiful aircraft! It looked as though it didn’t need engines at all – just a slight push at the end of the runway and it would soar into the sky like a paper dart. What a contrast to the great lumbering jumbo jets! I had seen the future of air travel, and it was a triumph of Anglo-French design and engineering excellence.

Except it wasn’t. Heavy fuel consumption, small fuel tanks, noise and exhaust pollution (and possibly some transatlantic political jealousy) rendered it a white elephant which, apart from its use as a prestige luxury transport towards the end of its life, was economically unviable. The designers produced an aircraft to a specification. One suspects that no one told them to accommodate the likely impact of external events, in particular the possibility of an oil crisis coupled with a growing awareness of environmental issues, and the risk inherent in the assumption that the main route would be transatlantic. Possibly, they could have addressed these issues in the design, at least in part – but if not, the unfeasibility of the project would have been apparent from the beginning.

Actually, what makes it worse is that this unfeasibility was predicted – and received wisdom has it that if it were not for a no-cancellation clause in the original joint contract, imposed because of a deep-seated lack of mutual trust between the two governments, Concorde would never have gone into production at all. We may have missed seeing a thing of beauty – but beauty is more than skin deep.

The history of major projects is littered with similar stories: ships that were obsolete before their maiden voyage, new towns that no one wants to live in, bridges to nowhere. The difficulties in cancelling such projects once they are in progress are often insuperable; with hindsight, it is often clear that they should never have been started in the first place, and that this was known at the time. Human nature is enthusiastically optimistic, though – we long to get started, and we want no Cassandra warning us of the consequences. Quite the contrary: when proposed projects have been cancelled after a feasibility study proved them unviable, the instigators of those studies have been vilified for wasting money and for lacking the courage or the management skill to proceed.

It has long been known that the cheapest point to fix in-service errors is at the design phase, where the majority of them have their origin, as Figure 4.1 shows.

The biggest in-service error of all, of course, is to build entirely the wrong thing. First order techniques will deliver against a firm, documented, functional requirement. But it’s mostly the non-functional (usually unspecified) requirements that bite, the most significant of these being fitness for purpose in the target operational environment and the interaction with other, uncontrollable components within it. But this is not the only outcome to consider.

 
Figure 4.1 In-service errors

graphics/fig4_1.jpg

Source: Adapted from Barry Boehm, Software Engineering Economics (1981).
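
To make the escalation in Figure 4.1 concrete, here is a minimal sketch in Python. The multipliers and defect counts are assumptions for illustration only, loosely in the spirit of the relative cost-to-fix ratios commonly attributed to Boehm (1981); real ratios vary widely by project and domain.

```python
# Illustrative relative cost-to-fix multipliers by lifecycle phase.
# These values are assumptions for illustration, loosely in the spirit of
# figures commonly attributed to Boehm (1981); real ratios vary by project.
RELATIVE_COST_TO_FIX = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "in_service": 100,
}

def rework_cost(defects_found: dict[str, int], unit_cost: float) -> float:
    """Total rework cost, given how many defects are caught in each phase."""
    return sum(
        count * RELATIVE_COST_TO_FIX[phase] * unit_cost
        for phase, count in defects_found.items()
    )

# The same 50 defects, caught early versus late (hypothetical numbers).
early = {"requirements": 30, "design": 15, "testing": 5}
late = {"testing": 20, "in_service": 30}

print(rework_cost(early, unit_cost=1_000))  # 205000
print(rework_cost(late, unit_cost=1_000))   # 3400000
```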

 

The typical project feasibility study considers the likely organisational rewards against the organisational cost. What it often fails to do is consider the wider scope, the consequence that project implementation will have on external, apparently unrelated, systems. Neither does it consider that desirable outcomes change over time, and that immediate benefit has to be weighed against a final outcome which may only be realised some considerable time later.
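
The timing point can be illustrated with a simple net-present-value comparison. Everything below – the cashflows, the discount rate, the option names – is a hypothetical sketch, not data from any real feasibility study; the point is only that an option whose outcome is realised years later looks very different once the time dimension is modelled at all.

```python
def npv(cashflows: list[float], discount_rate: float) -> float:
    """Net present value of a yearly cashflow stream (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cashflows))

# Hypothetical options, in arbitrary currency units.
quick_win = [500, 100, 100, 0, 0, 0, 0, 0]         # benefit up front, then fades
slow_payoff = [-200, 0, 0, 0, 300, 300, 300, 300]  # outcome realised years later

for name, flows in [("quick win", quick_win), ("slow payoff", slow_payoff)]:
    print(name, round(npv(flows, discount_rate=0.08), 1))
```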

Requirements Management is concerned with what a product or service should do, and how it performs against the documented specification; whereas the purpose of Outcome Management is to deliver the effect. There are many excellent first order tools to address the former; but Outcome Management is very definitely a second order issue. Put another way, First Order PM delivers the change, but not necessarily the desired result. The two things are not the same; neither is ‘want’ necessarily the same as ‘need’.

What people ‘want’ is limited by their prejudice and knowledge of what is available and possible – encapsulated in the German word ‘Weltanschauung’, which roughly translates as ‘Worldview’ but really means more than that – it is the (often unexamined and/or non-explicit) outlook or assumption which makes a change desirable – ‘The way the world looks from where I’m standing at this precise moment, viewed through my past experience, prejudice, hopes and dreams – and conditioned by the burdens I currently carry.’

Unfortunately, since each person’s Weltanschauung is unique to them, their view of what is wanted (let alone what is needed) will also differ, sometimes profoundly – a four-year-old’s idea of an ideal mid-morning snack may be a bar of chocolate-covered toffee as opposed to the raw carrot proposed by its mother. Given that, by definition, complex projects will have many stakeholders, this is potentially a major problem, at its worst when the end user of the product or service is not involved in the specification. A recent example is soldiers using their own personal mobile phones in preference to the hugely expensive battlefield radio systems provided through the official procurement channels – systems that are not only heavy and cumbersome, but also complex to use and lacking equivalent capacity. Involving the users (in this case, the soldiers themselves) at the requirement stage might have been a good idea…

Production of a Concept of Operation, or ‘ConOps’, addresses this issue. There are a number of techniques for the development and production of a ConOps – Use Cases, Scenario Planning, Business Process Modelling, the Soft Systems Methodology (SSM) and several others, most of which have commercial tool support. The System Anatomy process, described later and more fully in the Appendix, is an attempt to synthesise these into a process that is simple and straightforward enough to be used at the very highest conceptual level, its output then forming the foundation for more detailed, lower-level analysis and conceptual planning. A ConOps should allow all (ALL!) stakeholders to document their own, subjective, description of what they would consider to be a success – what good looks like for them, in other words. The fact that some of these views may be invalid or unachievable is not a problem; in fact it is invaluable, since it is far better to discover unrealistic views at the earliest possible stage, in order to manage expectations. Unless an accommodation is reached between these views, it is almost impossible for the delivered product to be universally considered successful.
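
As a minimal sketch of what recording those subjective success statements might look like (the structure and field names below are my own illustration, not part of SSM, the System Anatomy process or any commercial tool), each stakeholder states what ‘good’ looks like for them, and any view that is unassessed, or judged unachievable without an agreed accommodation, is flagged rather than silently dropped:

```python
from dataclasses import dataclass, field

@dataclass
class SuccessCriterion:
    stakeholder: str                 # who holds this view, e.g. "end user"
    description: str                 # what 'good' looks like, in their own words
    achievable: bool | None = None   # None = not yet assessed
    accommodation: str = ""          # how an unrealistic view was resolved

@dataclass
class ConOps:
    title: str
    criteria: list[SuccessCriterion] = field(default_factory=list)

    def unresolved(self) -> list[SuccessCriterion]:
        """Views not yet assessed, or judged unachievable with no accommodation."""
        return [
            c for c in self.criteria
            if c.achievable is None or (c.achievable is False and not c.accommodation)
        ]

conops = ConOps("Battlefield communications")
conops.criteria.append(SuccessCriterion("soldier", "Light enough to carry all day, one-handed use"))
conops.criteria.append(SuccessCriterion("procurement", "Interoperable with the legacy radio fleet"))
print(len(conops.unresolved()))  # 2: both views still need assessment
```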

Experience with past projects makes it imperative to ensure that stakeholder inputs are given equal weight. Too often, a small supplier’s opinion has been disregarded by a much larger customer, only for the importance of their specific knowledge to be recognised much later on when the product is found to be at best suboptimal, at worst completely unfit for purpose.

A ConOps must also address through-life issues – what is needed now may not be the same as what is needed over time; this is especially important when the product development cycle is likely to be of long duration. A well-known example is the deployment of military equipment designed to meet a once-perceived threat from the Soviet bloc, but which is of little use in the asymmetric warfare of today. It is essential to perform continuous analysis of the external environment; the original PEST (Political, Economic, Social and Technological) approach has been extended through experience into a number of variations – my own preference being STEEPLED (Social, Technical, Economic, Environmental, Political, Legal, Educational and Demographic).
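
Continuous analysis of the external environment can be supported by something as lightweight as a recurring checklist. The sketch below is my own illustration of the idea, not a prescribed tool: it records when each STEEPLED category was last reviewed and flags any category that has gone unexamined for longer than an agreed interval.

```python
from datetime import date, timedelta

STEEPLED = [
    "Social", "Technical", "Economic", "Environmental",
    "Political", "Legal", "Educational", "Demographic",
]

def overdue_categories(last_reviewed: dict[str, date],
                       max_age: timedelta = timedelta(days=90),
                       today: date | None = None) -> list[str]:
    """Categories never reviewed, or not reviewed within max_age."""
    today = today or date.today()
    return [
        category for category in STEEPLED
        if category not in last_reviewed or today - last_reviewed[category] > max_age
    ]

# Hypothetical review log: only three categories have been looked at recently.
log = {"Political": date(2011, 1, 10), "Economic": date(2011, 2, 1), "Legal": date(2010, 6, 5)}
print(overdue_categories(log, today=date(2011, 3, 1)))
# ['Social', 'Technical', 'Environmental', 'Legal', 'Educational', 'Demographic']
```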

It is also necessary to consider other, ostensibly unrelated, systems that the product or service may affect or be affected by. The problem of emergence is very real – and almost inevitable in complex systems. Individual products that work perfectly well in isolation exhibit totally different characteristics when used together – the whole becomes greater than the sum of its parts. This may be a good thing – but it may not. The battlefield radio mentioned above initially used the same frequencies as the equipment used to detect Improvised Explosive Devices (IEDs), meaning that soldiers couldn’t use their radio at the same time as their IED detector.
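
Interactions of that kind can sometimes be caught early by even a crude cross-check of each component’s declared operating characteristics. The frequency bands below are entirely hypothetical, not the specifications of any real equipment; the sketch only illustrates the idea of pairwise checking components that are individually fine but clash when fielded together.

```python
from itertools import combinations

# Hypothetical operating bands in MHz: illustrative values only,
# not the specifications of any real fielded equipment.
operating_bands = {
    "battlefield_radio": (30.0, 88.0),
    "ied_detector": (70.0, 120.0),
    "gps_receiver": (1575.0, 1576.0),
}

def band_conflicts(bands: dict[str, tuple[float, float]]) -> list[tuple[str, str]]:
    """Return pairs of systems whose declared frequency bands overlap."""
    conflicts = []
    for (name_a, (lo_a, hi_a)), (name_b, (lo_b, hi_b)) in combinations(bands.items(), 2):
        if lo_a <= hi_b and lo_b <= hi_a:  # standard interval-overlap test
            conflicts.append((name_a, name_b))
    return conflicts

print(band_conflicts(operating_bands))  # [('battlefield_radio', 'ied_detector')]
```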

It is the result that matters. Unless ‘outcome’ is the project driver, constantly revalidated against the current business situation and future strategy, the project will always be at risk. Even if delivered to time, cost and quality, the wrong thing will always be the wrong thing. The ConOps, mutually agreed across all stakeholders, is the only sure way of establishing agreement on the right thing; and the only way of guaranteeing the validity of the ConOps – its suitability not just for the organisation and its customers, but also for the outside world – is to adopt a Systems Approach to deliverable production. Indeed, it may in the future become mandatory for complex projects to prove that they have used best efforts to analyse outcomes; for there is a wider dimension to all this.

Outcomes and Ethics

 

The release of atom power has changed everything except our way of thinking. If I had known to what my research would lead, I would have become a watchmaker.

Albert Einstein

 

Consider the following scenario. You are a conscientious, morally responsible research scientist, working for a global organisation that rewards you well and provides strong support for your work without interfering. You discover a method to provide clean, ridiculously cheap energy that more than satisfies the whole world’s needs, and will not only bring huge profits to your company, but will also relieve world poverty and hunger virtually at a stroke. There is a problem, however, that only you are aware of. Once the power is switched on, it cannot ever be turned off – and there is a 99 per cent chance that in 150 years’ time, it will explode and blow the Earth apart. What do you do?

Your decision is a classic ethical dilemma. The choice is between doing what you are paid to do and publishing your findings, or burning your research notes because the consequence of publication is unthinkable. We term the former an ethic of duty, the latter an ethic of consequence.

Duty ethics run the risk of people performing unspeakable acts (‘I was only following orders’). Consequence ethics are only as good as your powers of prediction. How do you know it’s 99 per cent? Are you positive it’s not 98 per cent? Or 90 per cent? Or 9 per cent? And how can you be so sure that we won’t discover a way to stop it within the next 150 years, anyway?

This is not a science fiction scenario – unforeseen (possibly!) consequences of technological progress have happened throughout history. Tobacco was seen as a wonder drug, curing all known maladies (and if you don’t believe me, read Spenser’s Faerie Queene). Chlorofluorocarbons (CFCs) were a singularly Good Thing; their inertness made sure that the pollution risk was minimised. Nuclear fission has a million and one uses for the benefit of humanity, and only two to its detriment. Applied to bring pain relief to those in agony, diamorphine is a great blessing; we call it smack, though, when used for other purposes. Most recently, on the very day that BP was to receive an award for ‘outstanding safety and pollution prevention performance in offshore operations’, its ‘failsafe’ systems failed. Clearly, they didn’t do what it said on the packet.

Ethical consideration, and consideration of the wider consequences of product implementation as part of the PM process, was for a long time regarded at best as a sideline issue, sometimes as worse, in a business environment where a duty ethic prevailed. Milton Friedman, at the beginning of the decade of greed, was unequivocal: ‘Few trends could so thoroughly undermine the very foundations of our free society as the acceptance by corporate officials of a social responsibility other than to make as much money for their stockholders as possible. This is a fundamentally subversive doctrine’ (Friedman 1970).

Only a few decades later, it is unusual to find a major organisation that does not consider ethical governance and consequence analysis as a major component of the senior management portfolio. The change was not due to a Damascene conversion in the boardroom; the driver was much more pragmatic than that. Immanuel Kant offered the rule, ‘Act only upon that maxim which you can at the same time will that it should become a universal law.’ Writing today, he might have modified it for our litigious society to read: ‘Act only on that maxim which you can defend in a court of law.’

Outcome Management is not just about realising the project’s business objectives – it must encompass the effects of product implementation on a much wider scale, and again, I suggest that the only way that such effects can be identified and addressed is the adoption of a Systems methodology.
