But with that momentum comes a familiar question that never quite goes away: How do we manage risk when things don’t go as planned?
Today, California relies on two primary tools. The first is contractual liability. The state's standard IT service agreements, governed by Department of General Services (DGS) terms and conditions, generally cap vendor liability at twice the contract value, with exceptions for areas such as data breaches, fraud and intellectual property. These provisions were designed to protect taxpayer dollars and reinforce accountability, and they serve an important purpose. But they also shape behavior in unintended ways. Vendors respond by hardening positions early, pricing risk into proposals and negotiating for carveouts. Meanwhile, departments respond by tightening liability terms and documentation requirements. Before the first line of code is written, the relationship can feel adversarial.
The second tool is the performance bond. These are common in construction, where they guarantee completion if a contractor defaults. In technology, however, they are far less effective. IT failures rarely look like abandonment. They usually emerge as scope drift, integration challenges, evolving user needs or agency readiness gaps. Default is hard to define, and payout triggers don’t accurately reflect true cause and effect.
As a result, we are left with two blunt instruments. One feels adversarial. The other does not fit the work.
But California already has a third model; it just hasn’t been adapted for IT yet.
THE CASE FOR OCIPS
Owner Controlled Insurance Programs, or OCIPs, are already used by the state for major construction projects. Under this model, the state, through DGS, can manage a centralized insurance pool that covers liability, workers’ compensation and builder’s risk across all contractors. The result is greater efficiency, stronger oversight and more predictable protection.
This works in construction because risks are physical and measurable. Injuries, property damage and defects can be modeled with relative confidence. The insurance pool spreads risk across many participants, lowering costs and reducing uncertainty.
Technology risk is different, but it is no less real. Large IT projects face recurring challenges: delayed milestones, incomplete modules, unstable integration, or systems that technically launch but fail to perform. These failures are costly not only in dollars but in public trust and service continuity. Yet we do not have a shared framework designed for mutual responsibility and risk.
Here's where a performance-focused OCIP for major IT projects can help us. In this scenario, vendors would contribute premiums up front, just as construction contractors do today. In exchange, their contractual liability exposure could be moderated, and a pooled insurance mechanism would provide a backstop when defined performance thresholds are not met.
This could be structured such that OCIPs would apply only to large or high-risk IT procurements above a defined threshold, such as major system modernization efforts that require PAL or PDL oversight. Premiums could be calibrated based on project complexity, delivery approach, vendor track record and the maturity of the sponsoring department's governance and project management capacity. Over time, the state would build a database of real delivery data (schedule adherence, change orders, system defects and so on) to refine pricing based on evidence rather than assumptions.
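To make the calibration idea concrete, here is a minimal sketch of how risk factors could translate into a premium rate. Every factor name, weight and rate below is a hypothetical placeholder, not an actual DGS or CDT pricing rule; a real program would derive these from the delivery database described above.

```python
# Hypothetical premium-calibration sketch. All factor names, weights and
# rates are illustrative assumptions, not actual state pricing rules.

def premium_rate(complexity, vendor_track_record, governance_maturity,
                 base_rate=0.02):
    """Return a premium as a fraction of contract value.

    Each input is scored from 0.0 (lowest risk) to 1.0 (highest risk).
    The weights are placeholders a real program would fit to claims data.
    """
    risk_score = (0.40 * complexity
                  + 0.35 * vendor_track_record
                  + 0.25 * governance_maturity)
    # Scale the base rate up or down by as much as 50% around the midpoint.
    return base_rate * (1 + (risk_score - 0.5))

# A well-governed project with a proven vendor pays a lower rate than a
# complex project in a higher-risk environment.
low_risk = premium_rate(complexity=0.2, vendor_track_record=0.1,
                        governance_maturity=0.2)
high_risk = premium_rate(complexity=0.9, vendor_track_record=0.8,
                         governance_maturity=0.7)
```

The point is not the specific formula but the pricing signal: once risk factors feed a transparent rate, both vendors and departments can see exactly which behaviors lower their costs.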
While OCIPs can help with project accountability, their real advantage is risk diversification. Today, vendors price risk at the project level. Each proposal must effectively self-insure through pricing assumptions, contingencies or delivery safeguards. A pooled model spreads that risk across many projects and many participants. Over time, this could lower overall costs while creating more stability in project delivery.
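The diversification claim rests on a standard statistical fact: for independent projects with similar loss profiles, the relative volatility of the pooled loss shrinks in proportion to the square root of the pool size. The sketch below illustrates this with made-up figures; the dollar amounts are hypothetical, and real IT project losses would not be perfectly independent.

```python
# Illustration of risk pooling: relative volatility of total losses falls
# as the pool grows. Dollar figures are hypothetical, and the calculation
# assumes independent, identically distributed project losses.
import math

EXPECTED_LOSS = 1_000_000  # expected shortfall cost per project (hypothetical)
STD_DEV = 800_000          # per-project volatility (hypothetical)

def pooled_cv(n_projects):
    """Coefficient of variation (std / mean) of total losses in the pool."""
    total_mean = n_projects * EXPECTED_LOSS
    # Variances add for independent projects, so std grows only as sqrt(n).
    total_std = math.sqrt(n_projects) * STD_DEV
    return total_std / total_mean

single_project = pooled_cv(1)   # 0.80 — one project alone is highly volatile
pool_of_25 = pooled_cv(25)      # 0.16 — a 25-project pool is far steadier
```

This is why a pool can hold less capital per dollar of coverage than each vendor pricing its own risk in isolation, which is where the projected savings would come from.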
Because the state would manage the program, it could also calibrate premiums based on real experience. Projects led by strong delivery teams and supported by departments with mature governance could see lower costs. Higher-risk environments would pay more. That pricing signal matters. Moreover, it aligns incentives for both vendors and the state. A vendor that repeatedly delivers successful projects would see lower participation costs. A department that improves its governance, reduces change orders and strengthens decision-making would also benefit from lower program costs. Instead of arguing about liability after an IT project stalls, both sides have skin in the game from the outset. Risk management becomes a shared responsibility rather than a contractual standoff.
STARTING WITH A PILOT
This would not need to be mandatory on day one. A pilot approach would allow the state to test the model on a handful of large, high-impact projects. Vendors could opt in. Departments could learn. Actuarial assumptions could evolve based on real outcomes.
DGS could administer the program, drawing on its experience with construction OCIPs, while the California Department of Technology could help evaluate project readiness, delivery risk and historical performance data. Over time, the state could determine whether OCIPs should be required for certain categories of IT projects.
Yes, this would be complex. It would require collaboration among DGS, CDT and external experts to model IT performance risk. But complexity should not deter innovation. Besides, the state already manages OCIPs. California has already shown a willingness to rethink procurement through agile methods and innovation pilots. Applying that same mindset to risk management is a natural next step.
A performance OCIP would give agencies a structured path to recover from setbacks while encouraging better project discipline. Vendors would gain a more rational framework for managing exposure. And both sides could spend less time negotiating hypothetical risks and more time delivering results.
Public-sector technology now carries infrastructure-level consequences. Our risk tools should reflect that reality. Borrowing a proven model from construction and adapting it to digital infrastructure is a practical, not radical, idea.
Let’s pilot this, measure the results and adjust. If it works, scale it; if it doesn’t, refine it or drop it. This is the kind of iterative approach we should take for California’s next generation of IT projects.