CIO Magazine: Forrester's Take on Project Management Metrics - Cont'd

Friday, July 31, 2009 |

As one would expect, there was a lot of angst from the PM community about this article. Here is my response to the tunnel vision that seems to plague the majority of those posts:

"I find myself in the minority in agreeing with much of what Meredith wrote. Perhaps the article took for granted a point those of us in IT management are painfully aware of, leading most of the conversation so far to focus on the weeds.

According to the Standish Group's CHAOS Report, 71% (+/-3%) of all IT projects are considered failed or challenged - and that metric has been stable for the past 8 years. Yet stuff gets done. Things go into production, imperfect as they may be, and business moves right along. So there is a huge disconnect between project success metrics and reality. And that disconnect is likely caused by a perception created by insufficient project success metrics.

For all the folks who keep harping on the "cop out" point, perhaps you'd like to offer a suggestion for dealing with the ever-changing-requirements problem that doesn't involve telling the business stakeholders exactly where to take their ever-changing requirements, a.k.a. "project change management" or "the art of saying no". Regardless of how you communicate "no", it will still be received as "I care about your P&L less than I care about the integrity of my process", even if your stakeholders agree with the logic of that "no". One of the more common complaints from the business side is that "IT always says no". Business is business - needs change due to market and regulatory pressures, so requirements have to adjust on the fly as well. If we are to believe that the velocity of information is increasing, then the pace of change is not likely to slow down today or tomorrow. Politically and culturally, I think we should avoid a scenario where project managers have to say "no" more often than they do today.

Trouble is, current metrics derived from project definitions simply don't take that kind of disruptive change into account. That's likely because neither construction nor aerospace, where project management grew up, had to deal with this challenge. Historically, if a project was challenged in those spaces, there was usually a set of tangible deliverables worked on by people with a standard education using a standard set of methods; stopping, realigning, and restarting a project could succeed. If an IT project is challenged, none of those conditions usually hold - assessing code completion and/or correctness is always a challenge (if you can find the code at all), getting the necessary resources is a common complaint even for skill sets that sound like commodities (such as "developer"), and there are few professional certifications that accurately predict someone's real expertise. Add the ever-changing-requirements challenge, and it is quite surprising that even 29% (+/-3%) of IT projects can be measured as successful. And the same set of issues is now plaguing the aerospace industry - witness the Presidential Helicopters and F-22 programs, to name just two that have been in the news.

So the question is not whether change is used as a "cop out" or "cover for incompetence", as suggested several times above. That kind of thinking, taken too far, leads to a rigidity of thought that cannot consistently succeed in a change-driven world. The real question should be: how should the "project" itself evolve to withstand the impact of the real world, and what metrics are then relevant for measuring success or failure once that evolution occurs?"

CIO Magazine: Forrester's Take on Project Management Metrics


One of the more controversial arguments from a prior post was that the success metrics used by a vast majority of organizations are insufficient for deciding whether a project was successful. Well, apparently I'm not the lone voice in the wilderness on this! Great article by Meredith Levinson on the topic, citing Forrester research:

Common Project Management Metrics Doom IT Departments to Failure -

I can now see why the Forrester team, headed by Jeff Scott, took a very strong stance on a capability-based approach to Business and IT Alignment a few weeks back:

"Business and IT alignment remains a top priority for CIOs and other business leaders. While many CIOs have made significant progress in gaining a seat at the strategy table, a gap remains in organizing and illuminating business executives' thinking in a way that drives consistent understanding throughout the organization. Business capability models provide a new approach to deepen the strategic dialogue between business and IT leaders and increase strategic coherence. These models can act as a "Rosetta Stone" that provides the translation between business concerns and IT concerns. Tying IT strategies, projects, and costs to business capabilities offers a view of IT that resonates with business executives. Enterprise architects should construct a capability map that CIOs can use to encourage a more meaningful dialogue with their business peers to guide IT investments."

It's great to see the Capability Revolution gaining steam!

Who should you trust to make software investment decisions?

Tuesday, July 28, 2009 |

I've been following a very interesting poll on LinkedIn over the past week. SAP asked, "Who do you trust to make software purchase decisions?"

Results speak for themselves. More on that later.


How much should Your Organization Invest in a PMO?

Monday, July 20, 2009 |

The Standish Group's CHAOS Report is one that I follow very closely. Since 1994, when the first CHAOS Report came out, Standish has been tracking the successes and challenges of projects. Some of the numbers are very familiar to anyone in the software industry - an 84% rate of failure/challenges, 45% of delivered features never utilized, and so on. Here's a link to a great interview InfoQ did with Jim Johnson of Standish Group in 2006, with numbers through 2004. Some of the graphs are quite interesting too - for those of you who see patterns everywhere, yes, they follow the Gartner Hype Cycle curve too closely to be a coincidence. That is a discussion for another day.

So, some history: the success rates improved from 1994 to 2002. That coincided nicely with a tremendous amount of emphasis on project management in the modern enterprise. Improvement is nice, but bad is bad - at the best measurement point, in 2002, only 34% of projects actually succeeded. And then success metrics started dropping again. By CHAOS 2004, 71% of projects were challenged or failed. By 2005, failure rates quoted by PMI were at 72%, and didn't account for projects that were not "too challenged" - close enough is apparently not just for horseshoes in project management. The last two measurements from Standish confirmed that even if the trend wasn't reversed, success rates have settled near 31% (+/-3% depending on the year); in 2009 it was 32%. So what gives?
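To make the arithmetic behind these rates concrete, here is a minimal sketch of CHAOS-style triage in Python. The `Project` fields and the three-way classification are a simplified illustration of the published Standish definitions (on time, on budget, with required features; cancelled projects count as failed), not the actual survey instrument:

```python
from dataclasses import dataclass


@dataclass
class Project:
    """Illustrative stand-in for one surveyed project."""
    on_time: bool
    on_budget: bool
    all_features: bool
    cancelled: bool = False


def classify(p: Project) -> str:
    """Triage a project into the three CHAOS-style buckets (simplified)."""
    if p.cancelled:
        return "failed"
    if p.on_time and p.on_budget and p.all_features:
        return "successful"
    # Delivered, but late, over budget, or with fewer features than required.
    return "challenged"


def success_rate(projects: list) -> float:
    """Fraction of projects that land in the 'successful' bucket."""
    if not projects:
        return 0.0
    wins = sum(1 for p in projects if classify(p) == "successful")
    return wins / len(projects)
```

With a portfolio of three projects where only one meets all three criteria, `success_rate` returns 1/3 - the same neighborhood the CHAOS numbers have settled in. The point of the sketch is how unforgiving the "successful" bucket is: a single missed criterion demotes an otherwise delivered project to "challenged".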

I would argue that there are two separate challenges at work here: that IT has been focusing on solving the wrong problem and that measures of success as defined by PMI are insufficient.

First, as an industry, we've invested a lot of money in project and portfolio management. Organizations have spent on methodology development and rollouts, on project managers, on project management software, on portfolio management software, on expense management software, on vendor/procurement management, on... well, if it was in the PMBOK, it was justified. And if there is anything to be gleaned from the CHAOS Report numbers, it is that all this spending was actually solving the 20% problem - and maybe not even 20%. For all the investment, project success rates improved from 16% to 31% over the first 8 years, and they've stayed there for the last 8. It seems project management has reached the plateau of productivity, to use the Hype Cycle metaphor, and further investment in the space is running up against the law of diminishing returns. That is of little consolation to a CIO who goes into annual planning armed with the knowledge that two out of every three projects approved will not deliver on their clients' expectations.

Second, there may be another explanation. Somehow, some way, modern enterprises keep on working. Sure, some of that is due to the spiderwebs of manual processes created to pick up the slack for failing projects and missed requirements. But things do work, if inelegantly. Based on that, some have questioned the Standish Group's method of classifying projects as successful, failed, or challenged. What if the aim is off - perhaps the real issue is in the success criteria of the projects themselves? Consider these two points:

  1. Research suggests that 84% of organizations either do not do business cases for IT projects or do them for a select few key projects. (Gartner) Of the 16% that are disciplined in their technology investment decision practices, very few have organizational mechanisms in place to measure and manage to the business case once approved.
  2. Even against the bleak landscape painted by the analysts, we have come across case studies where an initiative should be judged failed or challenged by conventional project management metrics, yet it delivered tangible value in areas that were not identified, or could not have been identified, when the initiative was approved.
The first point highlights that very few organizations definitively know the actual disposition - be it success, failure, or something in between - of all their projects. The second highlights that while the concept of a "project" is fairly well defined, the success criteria are not, and that a more appropriate way to judge whether a project succeeded is to look at the short-, medium-, and long-term impact it has on existing and new assets. Of course, since 89% of organizations have virtually no metrics in place to measure IT effectiveness beyond finance's account of expenses (Gartner), measuring that impact can be even more challenging than successfully delivering a project.
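If success were instead judged by realized impact over time, the bookkeeping might look something like the following sketch. Everything here - the horizon buckets, the `judge` thresholds, and the idea of comparing realized value against the approved business case - is my own illustrative assumption, not an established metric:

```python
from dataclasses import dataclass, field


@dataclass
class ImpactRecord:
    """Value realized from an initiative, bucketed by time horizon (illustrative)."""
    horizons: dict = field(
        default_factory=lambda: {"short": 0.0, "medium": 0.0, "long": 0.0}
    )

    def record(self, horizon: str, value: float) -> None:
        """Accumulate realized value - including value nobody forecast - per horizon."""
        self.horizons[horizon] = self.horizons.get(horizon, 0.0) + value


def judge(business_case_value: float, impact: ImpactRecord) -> str:
    """Compare total realized value against the approved business case."""
    realized = sum(impact.horizons.values())
    if realized >= business_case_value:
        return "delivered value"
    if realized > 0:
        return "partial value"
    return "no measurable value"
```

Under a scheme like this, a project that blew its schedule but later enabled an unplanned revenue stream could still be judged as having delivered value - exactly the disposition that conventional on-time/on-budget metrics miss.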