The Peltzman Effect
Risk compensation!
There’s a well-documented phenomenon called the Peltzman effect: when systems are made safer, people often take more risks, cancelling out some of the intended benefit. In the original paper, Peltzman argued that mandated safety devices changed behaviour, shifting harms rather than cleanly reducing them.
In cars, the risk knob is obvious: speed! Feeling safer lets you satisfy the same risk appetite at higher speeds.
In software, the risk knob usually isn’t speed. It’s how much proof you demand before you ship. Feeling safer shows up as shipping with less verification: fewer local checks, more “YOLO merge”, more reliance on rollback or peer review to catch what you didn’t. Software “going faster” often just means accruing technical debt.
Here are a few common software guardrails that can accidentally invite risk compensation:
Heavy CI: “Why run it locally? CI will catch it.” (Verification moves later; ownership diffuses.)
Feature flags: “Ship the rough shape now; we’ll harden it later.” (Incompleteness becomes normal.)
Code review: “I’ve done enough; reviewers will spot issues.” (Diffusion of responsibility.)
Microservices: “It’s only a small service.” (Blast radius feels small; aggregate risk rises.)
The Peltzman mechanism works through a specific causal chain:
A safety mechanism creates a feeling of safety
The feeling of safety reduces vigilance
Reduced vigilance enables riskier behaviour
Riskier behaviour consumes the safety margin

In the original Peltzman paper, each driver has a fixed risk appetite, and feeling “safer” lets them satisfy that appetite at higher (more dangerous) speeds. Software teams aren’t quite as simple. They aren’t individuals with fixed appetites; they are whole systems: multiple people with different risk tolerances, incentive structures that often reward shipping over everything else, and feedback loops of varying speed and fidelity (from compilation to customer feedback).
The Peltzman effect in software shows up as people abdicating their responsibility for the safety of the system. So how can you design systems that resist it?
Safety systems should reveal risk
Safety systems should reveal risk, not quietly absorb it. A CI failure that merely turns a pipeline red is a weak signal (especially when flakiness makes “red” feel meaningless). Over time, the team learns which failures to ignore, and the safety net stops teaching anything.
The stronger pattern is to treat near-misses as data. In high-reliability settings, near-misses are valuable because they expose hazards before they become incidents.
So, when CI catches something, don’t let it be anonymous cushioning. Make the consequences visible: “this would have corrupted customer data” or “this would have caused an outage.” The goal is to keep the developer’s internal risk model calibrated.
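As a sketch of what that might look like: a small script at the end of a CI job that maps each caught failure to a human-readable consequence and surfaces it in the run summary. The failures.json file and the impact tags are hypothetical conventions a team would maintain; GITHUB_STEP_SUMMARY is the real GitHub Actions hook for run summaries.

```python
# Sketch: turn a CI catch into a visible consequence rather than anonymous cushioning.
# Assumes a hypothetical failures.json produced by the test step, plus an "impact"
# tag per test that the team maintains. GITHUB_STEP_SUMMARY is a real GitHub
# Actions environment variable: anything appended to it appears on the run page.
import json
import os
from pathlib import Path

# Hypothetical mapping from impact tags to the consequence each test guards against.
IMPACT_MESSAGES = {
    "data-integrity": "this would have corrupted customer data",
    "availability": "this would have caused an outage",
    "billing": "this would have produced incorrect invoices",
}


def summarise(failures_path: str = "failures.json") -> str:
    failures = json.loads(Path(failures_path).read_text())
    lines = ["### Saved by CI", ""]
    for failure in failures:
        impact = IMPACT_MESSAGES.get(failure.get("impact"), "impact unknown - worth classifying")
        lines.append(f"- `{failure['test']}`: {impact}")
    return "\n".join(lines)


if __name__ == "__main__":
    summary = summarise()
    step_summary = os.environ.get("GITHUB_STEP_SUMMARY")
    if step_summary:
        with open(step_summary, "a") as f:
            f.write(summary + "\n")
    print(summary)
```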
Budgets, not walls
Error budgets, popularized by Google’s SRE book, flip the framing of reliability. Instead of saying “no failures allowed”, you define an acceptable failure rate and treat the gap between that rate and perfection as a budget the team can spend: on risky launches, experiments, and the occasional mistake.
Why does this resist the Peltzman effect? Because the risk is always visible. With a budget, if you ship shit, you eat into a shared resource and burn minutes the whole team was counting on.
Different people might have different risk tolerances, but the risk is shared. The person who wants to ship without manually running their code creates team-wide consequences.
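As a rough sketch of the arithmetic (the SLO, window, and incident list below are made up):

```python
# Sketch: make the shared error budget visible. The SLO, window, and incidents
# are illustrative numbers, not from any real system.
SLO = 0.999                      # 99.9% availability target
WINDOW_MINUTES = 30 * 24 * 60    # 30-day rolling window

budget_minutes = (1 - SLO) * WINDOW_MINUTES   # ~43.2 minutes of tolerable downtime

# Hypothetical downtime already "spent" this window, in minutes.
incidents = [("bad deploy, rolled back", 12.0), ("flag flipped too early", 7.5)]
spent = sum(minutes for _, minutes in incidents)

remaining = budget_minutes - spent
print(f"budget {budget_minutes:.1f} min | spent {spent:.1f} min | remaining {remaining:.1f} min")
if remaining <= 0:
    print("Budget exhausted: feature work pauses, reliability work takes priority.")
```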
Progressive Trust
Progressive trust is about calibrating constraints to demonstrated behaviour.
A new team with a new service might start with aggressive alerting thresholds, mandatory reviews, and other deliberate friction. As the team demonstrates reliability (fewer incidents, good recovery, excellent observability), the constraints relax.
Trust can contract as well as expand. A production incident doesn’t just trigger a post-mortem; it triggers a temporary tightening of constraints. A mistake demonstrates your current trust level exceeds your current capability, so the system recalibrates.
This resists the Peltzman effect because the safety net is responsive. You can’t simply consume the slack created by the guardrails, because consuming that slack (taking more risk, having more incidents) causes the guardrails to tighten.
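Here’s a minimal sketch of that recalibration in code; the tiers, reviewer counts, and 90-day window are illustrative assumptions, not a recommendation:

```python
# Sketch: progressive trust as code. Recent incidents demote a service to a
# tighter tier; a quiet quarter earns it slack. All numbers are made up.
from datetime import datetime, timedelta

# Tighter tiers first: required reviewers plus how forgiving the alert thresholds are.
TIERS = [
    {"name": "new", "reviewers": 2, "alert_multiplier": 0.5},
    {"name": "established", "reviewers": 1, "alert_multiplier": 1.0},
    {"name": "trusted", "reviewers": 1, "alert_multiplier": 2.0},
]


def current_tier(incident_dates: list[datetime], now: datetime) -> dict:
    """Map the recent incident count to a constraint tier: more incidents, more friction."""
    recent = [d for d in incident_dates if now - d < timedelta(days=90)]
    if len(recent) >= 2:
        return TIERS[0]   # repeated recent incidents: back to full friction
    if len(recent) == 1:
        return TIERS[1]
    return TIERS[2]       # a quiet quarter earns the most slack


if __name__ == "__main__":
    now = datetime(2024, 6, 1)
    print(current_tier([datetime(2024, 5, 20)], now))   # one recent incident -> "established"
```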
Fast feedback over prevention
If you can detect and recover from problems in minutes, you need less prevention. But the Peltzman effect still applies: people will ship more carelessly because recovery is easy.
This parallels the CI point above: the feedback loop has to carry real signal. A rollback, for example, should still trigger a post-mortem, so the event registers even if the pain was short-lived.
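A sketch of that coupling, assuming GitHub is where post-mortems live: a rollback helper that refuses to be silent, filing a stub issue every time it runs. The repository name, token handling, and deploy script are placeholders.

```python
# Sketch: a rollback that refuses to be silent. The GitHub issues endpoint is real;
# the repository, token handling, and deploy script are placeholders.
import os
import subprocess

import requests

REPO = "example-org/example-service"   # hypothetical repository


def open_postmortem(release: str, reason: str) -> None:
    """File a lightweight post-mortem issue so the rollback still registers."""
    requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={
            "title": f"Post-mortem: rollback of {release}",
            "body": f"Rollback reason: {reason}\n\nEven a fast recovery is a near-miss worth learning from.",
        },
        timeout=10,
    ).raise_for_status()


def rollback(release: str, reason: str) -> None:
    # Placeholder for whatever actually performs the rollback (helm, kubectl, a deploy script...).
    subprocess.run(["./deploy.sh", "--rollback", release], check=True)
    open_postmortem(release, reason)   # the feedback loop fires every time, not just for "big" incidents
```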
Coupling safety to social accountability
Tests are easy to ignore because they’re between you and the machine. But if your test failures are posted to a team channel, or your “saved by CI” rate is tracked, then it starts to matter more.
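For example, a CI step could post a short “saved by CI” note to the team channel. Slack’s incoming webhooks really do accept a simple JSON text payload; the webhook URL, function name, and message format here are placeholders.

```python
# Sketch: post a "saved by CI" note to the team channel. Slack incoming webhooks
# accept a simple JSON payload; the webhook URL is a placeholder set as a CI secret.
import os

import requests


def announce_saved_by_ci(author: str, change: str, failures: list[str]) -> None:
    """Make the near-miss public so it registers with more than the machine."""
    text = (
        f"CI caught {len(failures)} failure(s) on {change} by {author}:\n"
        + "\n".join(f"• {name}" for name in failures)
    )
    requests.post(os.environ["SLACK_WEBHOOK_URL"], json={"text": text}, timeout=10)


# Example call from a CI step after the test job fails:
# announce_saved_by_ci("alex", "the payments change", ["test_invoice_rounding", "test_retry_backoff"])
```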
At a former workplace, we practiced social accountability by placing Kermit the Frog on the desk of the person who broke the build (e.g. committed some syntactically invalid code). No-one wanted to be a muppet!
Establishing mutual accountability for quality on a team is an important part of mitigating the Peltzman risks.
Closing
I’m definitely not arguing that safety systems are bad (some risk compensation is kind of the point!), but to be effective they’ve got to close the feedback loop so that teams can calibrate their risk levels. Error budgets make risk visible. Progressive trust makes consequences responsive. Social accountability makes near-misses public.
Looping back to cars, perhaps Peltzman-resistant devices don’t just save your life; instead, they tell you how close you came to losing it? An airbag that deploys with a disappointed sigh. A crash helmet that plays back your near misses on a little screen while you sit in the lay-by reconsidering your choices?
Having written that, it sounds a bit like workplace bullying. Definitely not recommending this as a modern practice!

