Commentary: State's Lessons from Past Should Guide Future

Due diligence, proper planning, stable leadership and attention to "soft skills" can help hard projects come to fruition, says IT veteran Shell Culp. The public and private sectors should both learn from the past.

The end of the year is always a convenient time to look backward at successes to understand what we should repeat — and to look at failures to understand what we should avoid in the future. This is most often a very useful exercise, but there are some caveats. 

As we look backward, it is important to be honest and authentic to achieve clarity that enables improvement. Sometimes problems are complex, and it can seem that we fail to make improvements, even as we work feverishly on solutions. In these situations, we may be working on symptoms rather than on the actual problems. We all know that treating symptoms of problems does not actually fix them. 

When problems are complex, as we often find in large systems integration projects, we can expect that the solutions may be complex as well. With this in mind, I’d like to take you back about six years for a look at a project that is widely considered to be a failure, and then look a bit more closely at how to avoid repeating those mistakes.

In August 2013, the California Senate Office of Oversight and Outcomes (Senate Oversight Office) released a report (“the report”) on the 21st Century Project — later renamed “MyCalPAYS” (“the project”), the State Controller’s Office’s (SCO) troubled payroll system project that had been launched 10 years earlier, in 2003. The report observed that by 2013, the project, which had been planned since 1999 and launched four years later using a commercial off-the-shelf (COTS) solution, had devoured $373 million and was years behind schedule. The report chronicled the project’s path and provided observations to help organizers and managers of future projects avoid some of the pitfalls that MyCalPAYS had suffered:

  • Due diligence did not occur in key areas of project planning such as business process analysis, data conversion and procurement.
  • Leadership turnover was a problem from the outset.
  • Institutional resistance was very much alive and well.
  • There was a general lack of transparency and honesty.
In addition to these highlights, data conversion and testing were both inadequately addressed. As bad as these observations may seem, it is important to note that we see all of these same mistakes still being made today in both the public and private sectors.

Every single bullet above is related to people and processes. What, then, have we learned from the mistakes of past projects? The answer may be “very little.”

The Senate Oversight Office also noted that while SCO project leaders were embarking on the MyCalPAYS odyssey, the Los Angeles Unified School District (LAUSD) was encountering many of the same issues that SCO would later face. In response, SCO officials “addressed” the risks they expected to encounter through project organization — an additional steering committee for oversight was created, and deployment of the system was organized into “phases” rather than the “big bang” approach used by LAUSD.

Those risk management strategies did not work. They did not work because the project’s problems were not related to organization — they were related to people and processes. This is where we need to be honest and authentic with evidence if we ever want to fix project problems.  Let’s take a closer look at the problems MyCalPAYS faced and what we can do to avoid those issues.

Due diligence, project planning and analysis. The value of planning and assessment of readiness cannot be overstated. And assessment of readiness is not limited to a project per se. In this age of "data-driven" and "evidence-based," we should be using every way possible to understand the risks and impacts of decisions. That’s the whole idea behind data! And the analysis is not limited to an information technology system as a whole — we can and should assess readiness to use a new method of development. If there is no evidence on which to base a decision, then at the very least a risk that reflects that reality must be registered and managed. If a system is complex, it is already a high-risk project by definition. Adding risk with decisions that are not grounded in evidence seems counterintuitive. Insist on data and evidence that provide clarity.
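
To make "registered and managed" concrete, here is a minimal, hypothetical sketch of a risk-register entry in Python. The field names, scoring scales and the sample entry are illustrative assumptions for this example, not artifacts of the MyCalPAYS project or of any particular methodology.

    # A minimal, hypothetical risk-register sketch. Field names, scoring
    # scales and the sample entry are illustrative assumptions only.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Risk:
        description: str
        probability: float   # estimated likelihood, 0.0 to 1.0
        impact: int          # 1 (minor) through 5 (severe)
        owner: str
        mitigation: str
        logged: date = field(default_factory=date.today)

        @property
        def exposure(self) -> float:
            """Simple probability-times-impact score used to rank risks."""
            return self.probability * self.impact

    register = [
        Risk(
            description="Development method is unproven in this organization",
            probability=0.6,
            impact=4,
            owner="Project director",
            mitigation="Pilot the method on a bounded workstream first",
        ),
    ]

    # Review the register regularly, highest exposure first.
    for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
        print(f"[{risk.exposure:.1f}] {risk.description} -> {risk.mitigation}")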

A lot has been written on project success; a consistent determinant is planning. One observer[i] notes that, “Most projects are under-planned. They are already late before they start. Project teams that claim not to have time for detailed planning, typically end up working all hours to meet deadlines. Insufficient detail in the plan means time and effort requirements will be underestimated. Only when we get to the detail is the full extent of work revealed.”

The project mission must be clear as well. Don’t let ambiguity fog the team’s purpose — a solution for a business problem is being developed/delivered. If the team is also piloting a delivery method, or building an open-source public-sector platform, we have not clarified the mission, and we can expect some level of chaos and distraction.

If our project processes are not defined and transparent, we can expect that the teams may fail to use the processes appropriately and completely. If the processes do not include building and maintaining high-functioning teams that communicate well, we can expect a lot of finger-pointing when things go wrong.

It is also important to note that small, relatively simple projects are not the same as large and complex projects.  This seems rather obvious, but bears repeating — you cannot expect that what worked for a small project will be applicable to large, complex projects.

When people are valued and trusted, they want to do a good job, and there is a level of trust in the planning effort. If we skip this step, we don’t know where we’re going, and we can’t help others get there, either. People know when a good plan exists, and they also know when there is no plan.

Leadership turnover. Perhaps the MyCalPAYS project was doomed from the start because the COTS solution was ill-fitted to the state’s complex payroll requirements. But with leadership turning over so often, who could be expected to recognize, and to say plainly, that the solution might require the state to compromise on how it does its business? Project teams don’t function well with turnover at the top, and the very existence of turnover points to difficulty. Some classic drivers of turnover are the project director’s level of expertise and experience, compensation, and trust issues.

If you are a project sponsor, how will you maintain project leadership? It’s worth some time to plan how you’ll nurture those who will maintain project operations. The alternative is expensive recruitment efforts and steep learning curves. The reality in the public sector is that the environment of the project will be a key factor in attracting talent — at the same classification/pay level, who would want to work in a toxic environment when they can make the same wage in a more appealing and supportive workplace?

Institutional resistance. This is not an insignificant obstacle. Organizational change is hard and must be intentionally managed to get the expected results. Methods for organizational change management are tested and they work, but they are not free or easy. It is astonishing how few executives appear willing to pay for that work, at the peril of project failure.

We all know that success with a new thing is not expected the first time out of the gate. Some would say that 10,000 hours of practice is required to “master” a new skill. Why, then, would we expect that trying a new way to organize and/or manage complex information technology projects would work well the first time?

Of course, the alternative to institutional resistance is ownership. Smart public-sector jurisdictions perform “organizational readiness” assessments that measure how well the organization is able to deliver technology projects and whether resistance issues may exist, among other things. Such assessments often use “dashboard”-type methods (green, yellow, red) to produce a “heat map” that characterizes an organization’s readiness for success and indicates where improvement must occur before success can be achieved.

Good examples would be understanding an organization’s capacity for supporting an application built with a new platform technology or understanding how quickly or slowly an organization might adapt to a project management model or new development method (agile vs. waterfall). If the heat map shows yellow and red, some preparation work will be needed to achieve success.
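
As an illustration only, here is a minimal Python sketch of the “dashboard”-type scoring such an assessment might use. The dimensions, the 0-to-10 scale and the color thresholds are assumptions made for the example, not a prescribed method.

    # A hypothetical readiness heat map: map 0-10 scores for each assessment
    # dimension to a red/yellow/green status. Dimensions, scale and thresholds
    # are illustrative assumptions.
    def rag_status(score: float) -> str:
        """Translate a 0-10 readiness score into red/yellow/green."""
        if score >= 7:
            return "green"
        if score >= 4:
            return "yellow"
        return "red"

    def heat_map(scores: dict) -> dict:
        """Characterize readiness per dimension; yellow and red areas need work."""
        return {dimension: rag_status(score) for dimension, score in scores.items()}

    # Example dimensions an organizational readiness assessment might cover.
    assessment = {
        "executive sponsorship": 8.0,
        "change management capacity": 3.5,
        "new platform/technology skills": 5.0,
        "agile delivery experience": 2.0,
    }
    for dimension, status in heat_map(assessment).items():
        print(f"{dimension}: {status}")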

Lack of transparency and honesty. Lack of honesty will quickly erode trust, all the way up and down the chain. If you do not trust your team to get the job done, why are you continuing to spend resources? If you know the truth, and the truth is not good, recognize it and be honest about challenges ahead.

It is important to note that project plans are inherently uncertain and deviating from them is expected. But that uncertainty is not a license to ignore risks. If you start from the position that there is always risk, you’ll find more, for sure, but you’ll also be more aware of what you’re facing.

People and processes are often not comfortable topics for technologists. And to be fair, the softer skills are often not core competencies in technology shops. But with large and complex technology solutions fairly well dependent upon them, it would stand to reason that we should get better at the soft skills that enable delivery of hard projects.

[i] Howard Vaughan, https://www.projectsmart.co.uk/effective-project-management-five-laws-that-determine-success.php, April 23, 2009

Shell Culp is a senior fellow at the Center for Digital Government, senior adviser for Public Consulting Group and principal with Almirante Partners. She formerly worked as an agency information officer for the state of California.