Update: $6.2T Mistake Revisited

Thursday, October 22, 2009

A spirited conversation on Twitter with @PeterKretzman and Roger Sessions (@RSessions) led to a couple of updates:


Factor = 5.4 * .24 * .0275 = .03564


Total Yearly Loss: .03564 * $69.8T = $2.487T (rounded up to $2.5T)
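For those who want to check the arithmetic, here is a minimal Python sketch of the updated calculation (the variable names are mine; the 5.4 multiplier and 24% failure rate are the assumptions discussed below, and the $69.8T world GDP figure is carried over from Roger's original analysis):

    # Back-of-the-envelope check of the updated estimate.
    multiplier = 5.4          # assumed working average between the 2.2 floor and 9.6 ceiling
    failure_rate = 0.24       # Standish Group 2009: 24% of projects failed outright
    it_share_of_gdp = 0.0275  # 6.4% of GDP on ICT x 43% on hardware, software, and services
    world_gdp_trillions = 69.8

    factor = multiplier * failure_rate * it_share_of_gdp
    yearly_loss = factor * world_gdp_trillions

    print(f"Factor: {factor:.5f}")              # 0.03564
    print(f"Yearly loss: ${yearly_loss:.3f}T")  # ~$2.49T, or roughly $2.5T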


Editorial Comment: While $6.2T seemed too high, and $414B seemed a bit low, could this be the real number? Details below:


1. Updates to my math:


Roger's definition of indirect costs adds the expended cost of the failed project to both its tangible and intangible benefits. So, using my example of a project with 50% ROI and 60% of its benefits intangible, the multiplier would be 100% + 150% = 250%, or 2.5.


That thought experiment assumes a very strong business case. Available research further confirms that experience: "Nucleus case studies reveal that, on average, indirect benefits account for half of technology ROI." (Nucleus Research, 2004). In my experience, most organizations have hurdle rates around 20%, not 50%, so very few approved projects will have a benefit case with 50% ROI. So, let's take 20% as our floor (a 2.2 multiplier). The example cited in Roger's white paper seems highly unusual, but it does provide a ceiling (a 9.6 multiplier). A question without a good answer is how project spend is distributed between 2.2 and 9.6. My hunch is that even if it is a bell curve (normal distribution), it's far more weighted toward the floor in terms of number of projects, and far more weighted toward the ceiling in terms of spend. If we assume those two effects cancel each other out, the average multiplier becomes 5.4. I'd be interested if anyone has done a study to support or update this number, rather than relying on an assumption.
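To make the multiplier construction concrete, here is a minimal sketch (the helper name is mine): under the definition above, the total loss on a failed project is the expended cost (100%) plus the expected benefits (100% + ROI), so the multiplier follows directly from the ROI in the business case; the 9.6 ceiling is taken as-is from Roger's white paper rather than derived.

    # Loss multiplier per the construction above:
    # expended cost of the failed project (1.0) plus its expected benefits (1.0 + ROI).
    def loss_multiplier(roi):
        return 1.0 + (1.0 + roi)

    print(loss_multiplier(0.50))  # 2.5 -- the 50% ROI example above
    print(loss_multiplier(0.20))  # 2.2 -- the 20% hurdle-rate floor
    # The 9.6 ceiling comes from the example in Roger's white paper.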

2. Updates to Roger's math:


I still find it curious that Roger and I are using the exact same research note from the Standish Group to get the ratio of failed projects - but using different metrics from that note. I take the definitions from Standish more literally - it's hard for me to argue that their definition of "failed" is not sufficient, especially since they clearly delineate between "failed" and "challenged". So I'll stick with the 24% failed metric from the 2009 report. Whether "challenged" projects incur some indirect costs is an open question - it makes sense that they would, but I'd love to see a study rather than make that assumption.



3. Open questions


First, it would be nice to have a metric supported by research on what the multiplier should be. That is the biggest weight in the equation, and it needs to be more than assumptions on all sides.


Second, there are obviously indirect costs to challenged projects. I'd be interested to see whether anyone (ahem, PMI?) has done a study that documents the costs of missing deadlines. That will also have an impact on the final answer, though probably not as much.

Why Reference Architecture(s)? (Part 1)

Monday, October 12, 2009

A few weeks back, I posted a sample EA Capability Map. It generated a lot of discussion here, on Twitter, and in LinkedIn groups. Within these discussions, one of the insights from the map - that Reference Architectures, specifically Business and Technology, are the business of Enterprise Architecture - was routinely challenged. These challenges came from many perspectives - the uber-pragmatist EA movement objected to investing in shelf-ware, the semantic pedants wondered what my definition of Reference Architecture was, the decision makers had never seen this expenditure justified by ROI, and so on. This was contentious enough to warrant a detailed treatment.


As I got into writing about the "why", I found a strange thing - the semantic pedants were more accurate than I gave them credit for at the time - there is little to no agreement as to "what" a reference architecture is. While there are many perspectives on what a Reference Architecture provides, there is no "one" definition. So I'm scoping this post to just the "what", and leaving the "why" to Part 2. The intent is to get enough reactions and opinions to crowd-source a definition of Reference Architecture that can replace what is currently in Wikipedia (see below).

1. What is a Reference Architecture?

There are many definitions, which is part of the problem. The other part is that few actually define Reference Architecture. Here's a representative definition from Wikipedia:

A reference architecture provides a proven template solution for an architecture for a particular domain. It also provides a common vocabulary with which to discuss implementations, often with the aim to stress commonality.

A reference architecture often consists of a list of functions and some indication of their interfaces (or APIs) and interactions with each other and with functions located outside of the scope of the reference architecture.

Reference architectures can be defined at different levels of abstraction. A highly abstract one might show different pieces of equipment on a communications network, each providing different functions. A lower level one might demonstrate the interactions of procedures (or methods) within a computer program defined to perform a very specific task.

A reference architecture provides a template, based on the generalization of a set of successful solutions. These solutions have been generalized and structured for the depiction of both a logical and physical architecture based on the harvesting of a set of patterns that describe observations in a number of successful implements. Further it shows how to compose these parts together into a solution. Reference Architectures will be instantiated for a particular domain or for specific projects

Note that nowhere does the definition say what a Reference Architecture is! There is plenty on what it provides, what its building blocks are, etc. No wonder Enterprise Architects can't agree on whether it is even needed.

So here is my definition:

A Reference Architecture is a set of predefined problem solutions for a given perspective.

To me, the implication is that while there may be a Business Reference Architecture, a Technology Reference Architecture, a Risk Reference Architecture, an Organizational Reference Architecture, an ... Reference Architecture, there isn't "one" Reference Architecture.

Your contributions are encouraged and welcomed!

Aleks


The Holy Grail of Enterprise Architecture?

Wednesday, October 7, 2009

There have been several conversations in LinkedIn groups on the existence of a list of Enterprise Architecture Guidelines. These questions have been asked from a Governance point of view (how does my organization assure that projects comply?), from a theoretical point of view (is it possible?), and even from a sharing-the-pain point of view (why is there such resistance to following our guidelines?!) Of course, if such a list existed, it could be tremendously useful for quickly creating Enterprise Architecture programs. So it's an intriguing thought, but is it just that - an interesting thought exercise?

Theory first. My apologies in advance for the mathematical hue of the response, but this question can best be addressed by a branch of mathematics known as set theory.

Let's say such a list exists; what, then, would be the attributes of these guidelines?

  • They would need to be almost infinitely customizable to account for every possible combination of organizational growth by every organization.

  • They would have to address all business/public verticals, multiple business operating models, and all business outcomes (good, bad, and ugly).

  • They would have to somehow tie all organizational processes and services to the business outcomes - arguably one of the more important roles of EA.

  • And that's just a start.

I did, however, say 'almost infinite' - which is, by definition, discrete. While each organization is unique, there is a discrete number of organizations. Their growth patterns and responses to challenges are even more standardized thanks to normalization. So mathematically, a set of guidelines that accounts for all these variations must be discrete.

That's the theory, now reality: while discrete in nature, a list of possible guidelines would still have large enough cardinality to be unusable. And that's before we start considering unique organizational circumstances. Anyone can quote from the gospel of best practices on how to build pure software. Unfortunately, purity is not equivalent to good business - "perfect is the enemy of good", as the quote goes. Yes, it can be easy to slip from "good enough" down the slippery slope to bad outcomes, but managing that risk is the job of any oversight organization such as Enterprise Architecture. Here are a couple of practical examples of where best intentions can lead:

  • Reduce the number of non-core business processes that perform the same or similar functionality as a method for reducing cost.
    • Hard to argue with this as a guideline, especially in current economic times, right? However, process optimization has been a fertile field for both tremendous success and catastrophic failure. If the organization is comfortable with the price point of existing operations, why would it want to disturb what already works for a replacement that has a 31% chance of success? I understand that these operations may be messy. They may not be elegant at all. They might even involve Excel spreadsheets, Access databases, and email-based workflow. They do have a tremendous advantage over any project proposal: they work today.
  • Reduce the number of unique data structures passed between systems.
    • Believe it or not, sometimes this is not desired. Anyone who's been in the Data Management space for any period of time knows that it would be nice if everyone needed the same data all the time. Trouble is, it never happens. Different perspectives require different information at different times. There is nothing neat and predictable about how the business uses data. Hardly anyone is willing to wait an additional week for their information to be presented in a standardized format when decisions are needed right now.
  • Loose coupling of systems and processes.
    • This is one of the pillars of the Service Oriented Approach. But much to the dismay of the adherents of an SOA Grand Unified Theory, the value of SOA and even its applicability are heavily dependent on business operating models. So the real question is: how much loose coupling is enough? Are there, in fact, organizations where loose coupling is a bad idea from a business perspective? There are plenty of examples where this guideline could introduce too much organizational complexity, or too much risk against the asset portfolio.
I'm not just trying to be a contrarian here. What these examples should highlight, however, is that Enterprise Architecture Guidelines are entirely beholden to many things - business goals, business models, business operating models, and stakeholder perspectives, among others. Trying to assemble a list of such guidelines is difficult at best, because many of them will be contradictory. What it resembles most is a decision tree with each organization's 'as is' and 'to be' states as inputs. Without that, at worst, this exercise can become a hunt for the "Holy Grail" of Enterprise Architecture. So rather than concentrating on the compilation of such a list, I would advise focusing more on the process by which such a list could be compiled for a given organization. We often hear that the right level of "granularity" is crucial when it comes to good systems design. Since an organization is just a system of systems, shouldn't that principle be applied in this case?

Aleks

A $6.2T Mistake

Thursday, October 1, 2009

Perhaps I have an overly developed sense of smell. Or maybe it's my background in statistics that makes me a skeptic. Or having to sit through hours of vendor presentations about what features will be released in the NEXT version of their product. But when I read Michael Krigsman's latest post on Roger Sessions' calculations of how much IT failures cost on an annual basis, it tripped several "are you kidding me?" alarms. So after a brief exchange with Michael on Twitter (@mkrigsman), he suggested that I look at the calculations and see where I disagree with Roger's results.


Fair enough. I spent a few minutes analyzing Roger's argument with simple math and readily available research. Here's the treatment, with Roger's argument quoted and my comments following each step:
"According to the World Technology and Services Alliance, countries spend, on average, 6.4% of the Gross Domestic Product (GDP) on Information Communications Technology, with 43% of this spent on hardware, software, and services. This means that, on average, 6.4 x .43 = 2.75% of GDP is spent on hardware, software, and services. I will lump hardware, software, and services together under the banner of IT."
So far, so good.
"According to the 2009 U.S. Budget, 66% of all Federal IT dollars are invested in projects that are “at risk”. I assume this number is representative of the rest of the world."
66% is a fair number. As I blogged earlier this year, the project success rate has stagnated at 31% +/- 3% over the past 8 years, according to the Standish Group. The assumption in the second sentence, however, starts leading Roger's analysis off the rails.
"A large number of these will eventually fail. I assume the failure rate of an “at risk” project is between 50% and 80%. For this analysis, I'll take the average: 65%."
That's an interesting assumption, and it leads to a ~43% failure rate for all projects. That's questionable at best, since the actual failure rate of projects was 24% for 2008 (Standish, 2009), and has been fairly constant over the last decade (31% +/- 3% successful, 45% +/- 3% challenged, 22% +/- 3% failed). That alone eliminates roughly 45% of the $6.2T figure, leaving us with $3.46T. However, now that Roger's analysis is fully off the rails, it goes off the beaten path as well:
"Every project failure incurs both direct costs (the cost of the IT investment itself) and indirect costs (the lost “opportunity” costs). I assume that the ratio of indirect to direct costs is between 5:1 and 10:1. For this analysis, I'll take the average: 7.5:1."
Not sure what this assumption is based on. Most organizations will gladly take 30% ROI when embarking on a project. In all of the cost-benefit analyses (CBAs) that I've had the opportunity to be a part of, intangible benefits were quantified at 30-60% of the overall benefits. So let's take a generous outlier: 60% intangible benefits, 50% ROI. That still works out to... 9:10 opportunity cost to program cost (invest $100, get $150 back, 60% of $150 = $90). To get to 7.5:1, both the ROI and the intangible benefit numbers would have to be of a magnitude rarely seen in business (working backwards: for every $100 spent, get $750 in intangible benefits plus some tangible benefits - where do I sign up?!)
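A minimal sketch of the ratio arithmetic above (the helper name is mine): the opportunity cost, as a fraction of direct cost, is simply the intangible share of the total expected benefits.

    # Ratio of indirect (opportunity) cost to direct cost for a given ROI
    # and share of benefits that are intangible.
    def opportunity_to_direct(roi, intangible_share):
        return intangible_share * (1.0 + roi)

    print(opportunity_to_direct(0.50, 0.60))  # 0.9, i.e. 9:10 -- the generous outlier above
    # Reaching Roger's assumed 7.5:1 requires intangible_share * (1 + roi) = 7.5,
    # e.g. every $100 spent returning $750 in intangible benefits alone.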

Based on the sober analysis above, the expected cost of IT failures is:

.9 * .24 * .0275 = .00594

of an "average" country's GDP.

So even ignoring the fact that not all countries can be treated the same for purposes of normalization (which would tremendously impact what the "average" and "mean" numbers really are), the total cost of IT failures comes out to $414.6B per year (if we use the GDP numbers from Roger's analysis). That may sound low in comparison to $6.2T, but I would be happy to solve even 24% of that number. What it underscores is that IT project failures are a serious problem that deserves serious treatment.
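For completeness, a quick sketch of that bottom line (the variable names are mine; the $69.8T world GDP figure is the one used in Roger's analysis):

    # Expected yearly cost of IT failures under the corrected assumptions above.
    opportunity_ratio = 0.9    # 9:10 opportunity cost to program cost
    failure_rate = 0.24        # Standish Group 2009 "failed" metric
    it_share_of_gdp = 0.0275   # 6.4% of GDP on ICT x 43% on hardware, software, and services
    world_gdp_trillions = 69.8 # GDP figure used in Roger's analysis

    share_of_gdp = opportunity_ratio * failure_rate * it_share_of_gdp
    print(f"Share of GDP: {share_of_gdp:.5f}")                               # 0.00594
    print(f"Yearly cost: ${share_of_gdp * world_gdp_trillions * 1000:.1f}B") # ~$414.6B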

Aleks

edit: based on @PeterKretzman's suggestion, added links to the Standish Group's 2009 Chaos Report