Solving the complexity gap
There are two general classes of problems I don’t understand¹. The first is a knowledge gap. I can fix these problems by asking someone who does know, or by trying to learn the knowledge myself. The second is a complexity gap: a problem that’s too complicated to solve analytically.
This post is about the complexity gap.
Let’s start with a silly example: cruise control on a car. The job of cruise control is to maintain a set speed. Let’s consider the simplest possible case: a completely flat road with no other cars. Solving this is a knowledge gap - how much throttle do we need to apply to keep the car at a set speed against the rolling resistance of the road? We could calculate this, and it’d almost certainly work.
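As a rough sanity check, with made-up but plausible numbers: rolling resistance force is roughly F = Crr × m × g. For a 1,500 kg car with Crr ≈ 0.01, that’s about 0.01 × 1,500 × 9.8 ≈ 147 N, so holding 30 m/s against it takes roughly 147 N × 30 m/s ≈ 4.4 kW at the wheels - a small, constant throttle setting.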
But now, let’s make it more complicated: some undulating terrain. Now it’s hard! Let’s look at our fixed throttle approach first.
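Here’s a minimal sketch of what happens, using a toy physics model (rolling resistance plus a sinusoidal hill; every constant is invented purely for illustration):

```python
import math

# Toy physics model - all constants are made up for illustration.
MASS = 1500.0   # kg
C_RR = 0.01     # rolling resistance coefficient (assumed)
G = 9.8         # m/s^2
TARGET = 30.0   # m/s, the set speed
DT = 0.1        # s per simulation step

def grade(position):
    """Undulating terrain: road slope oscillates between +5% and -5%."""
    return 0.05 * math.cos(position / 200.0)

def simulate(controller, seconds=600):
    """Drive for a while and report the min/max speed seen."""
    v, x = TARGET, 0.0
    lo = hi = v
    for _ in range(int(seconds / DT)):
        force = controller(v)               # throttle (+) or brake (-), in N
        drag = C_RR * MASS * G              # rolling resistance
        slope = MASS * G * grade(x)         # gravity component along the hill
        v += (force - drag - slope) / MASS * DT
        x += v * DT
        lo, hi = min(lo, v), max(hi, v)
    return lo, hi

# Fixed throttle: exactly balances rolling resistance on flat ground.
def fixed_throttle(v):
    return C_RR * MASS * G

lo, hi = simulate(fixed_throttle)
print(f"speed range over the hills: {lo:.1f}..{hi:.1f} m/s (target {TARGET})")
```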
Well, clearly that sucks. As soon as we start going downhill the speed runs away from us, and we risk grinding to a halt going up a hill! How can we fix that? Let’s floor it when we’re going too slow, and slam on the anchors when we’re going too fast.
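Sketched on top of the same toy model (the force limits are, again, made-up numbers):

```python
# Bang-bang control: full throttle below the set speed, hard braking above.
# Reuses simulate() and TARGET from the sketch above.
FULL_THROTTLE = 4000.0   # N, assumed maximum engine force
HARD_BRAKE = -8000.0     # N, assumed maximum braking force

def bang_bang(v):
    return FULL_THROTTLE if v < TARGET else HARD_BRAKE

lo, hi = simulate(bang_bang)
print(f"speed range: {lo:.1f}..{hi:.1f} m/s")
# The speed stays close to the target, but the control input slams between
# +4000 N and -8000 N every few tenths of a second.
```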
Now we’re driving like a lunatic, but at least at a mostly consistent speed! We might have “solved” the problem, but I certainly wouldn’t want to be a passenger in the car as we lurch back and forth between full throttle and hard braking.
There’s actually a deep principle at work here, called Ashby’s Law of Requisite Variety. It states that to control a system, your controller must have at least as much variety (complexity) as the system you’re trying to control.
Our flat-road cruise control worked because we had a simple system (constant conditions) and a simple controller (fixed throttle). But the moment we added hills - more variety in our environment - our simple controller couldn't cope. We needed a controller with more variety: the ability to increase throttle, decrease throttle, apply brakes, and respond to different situations.
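In the toy model, even the simplest step up in controller variety - a force proportional to the speed error - tames the ride. (Real cruise controls use full PID control; this sketch is just the proportional term.)

```python
# Proportional control: the correction force scales with the speed error,
# clamped to the actuator limits. Reuses simulate(), TARGET, FULL_THROTTLE
# and HARD_BRAKE from the sketches above.
GAIN = 800.0  # N per (m/s) of error - tuned by eye, purely illustrative

def proportional(v):
    force = GAIN * (TARGET - v)
    return max(HARD_BRAKE, min(FULL_THROTTLE, force))

lo, hi = simulate(proportional)
print(f"speed range: {lo:.1f}..{hi:.1f} m/s")
# Small errors now get gentle corrections and big errors firmer ones; a full
# PID controller would also remove the small steady-state error on long climbs.
```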
This same principle applies everywhere in software development and in management (though we rarely think of it in these terms).
Software is all about limiting variety. From abstractions to types, to design patterns, to principles such as SOLID - each of these reduces variety by limiting the dimensions of variability. And we provide feedback loops with compilers, tests, observability and (ultimately) customer feedback.
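As a small, hypothetical illustration of types as variety attenuators (the names here are invented):

```python
from enum import Enum

# A free-form string field admits effectively infinite states;
# an Enum admits exactly three.
class OrderStatus(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"
    CANCELLED = "cancelled"

def handle(status: OrderStatus) -> str:
    # The controller (this function) needs only as much variety as the
    # type allows: three branches, not "any string anyone ever typed".
    match status:
        case OrderStatus.PENDING:   return "queue for fulfilment"
        case OrderStatus.SHIPPED:   return "notify the customer"
        case OrderStatus.CANCELLED: return "issue a refund"
```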
Management is the same. We use small teams to limit dysfunction. We apply team boundaries (and Conway’s law) to strategically (or not!) limit communication paths and variety, which hopefully keeps system complexity under control. We set up feedback mechanisms such as KPIs or OKRs to help us understand whether we are meeting our objectives.
But feedback loops are super difficult to design. In the next section, we’ll look at some of the warning signs that a feedback loop might be going awry.
Detecting Controller Failure - Warning Signs
If you are responsible for a system, then you are the controller (human or machine) of that system. You have to accept that you can’t control the system by adjusting its internals; instead, your job is to take the attenuated feedback from the system and make appropriate adjustments. You are steering a complex system.
But how do you know if the feedback you are processing is actually providing value? Here are some patterns:
Oscillation patterns - When variety attenuation is poorly tuned, you see wild swings between extremes rather than stable control. This happens when the controller either over-responds to signals (like our cruise control slamming between throttle and brakes) or under-responds until problems become critical. The variety isn't being smoothed - it's being amplified. Think of a team that swings between micromanagement and complete hands-off, or a budget process rapidly alternating between spend and cut.
Resource exhaustion - Controllers consuming disproportionate resources without benefit suggests they're fighting variety rather than managing it strategically. The variety attenuation mechanism has become more complex than the system it's trying to control.
Signal degradation - Either information overload (too little attenuation, drowning in noise) or information starvation (too much attenuation, missing critical signals). Good controllers maintain the right signal-to-noise ratio; failing ones either flood decision-makers with unprocessed variety or filter out essential information along with the noise. (The smoothing sketch after this list shows the trade-off.)
Boundary collapse - When isolation mechanisms fail and problems spread beyond their intended domains. This indicates the controller isn't properly containing variety - instead of managing complexity within bounded regions, it's allowing variety to propagate uncontrolled throughout the system.
Latency inflation - Response times getting longer as the system struggles to process variety. The controller is being overwhelmed by the complexity it's supposed to manage, creating delays between sensing problems and responding to them. This often precedes complete controller failure.
Bypass emergence - When people start working around the official control mechanisms. This signals that the controller isn't providing adequate variety to handle real-world conditions, forcing humans to become ad-hoc variety attenuators. The formal system is being abandoned because it can't cope with actual complexity.
Threshold drift - Control parameters gradually shifting away from their intended values without any conscious decision. This suggests the controller is slowly losing its ability to maintain proper variety attenuation, often in response to accumulated complexity that was never properly managed. Think of the OKR that keeps missing its targets while no one cares, or the CI build that was supposed to take ten minutes ever so slowly creeping upward with no one noticing.
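To make the signal degradation trade-off concrete, here’s a tiny sketch of one common attenuator, an exponentially weighted moving average; the smoothing factor decides how much variety gets through (the numbers are invented):

```python
def ewma(signal, alpha):
    """Exponentially weighted moving average: a simple variety attenuator."""
    smoothed, out = signal[0], []
    for x in signal:
        smoothed = alpha * x + (1 - alpha) * smoothed
        out.append(smoothed)
    return out

# A noisy metric, then a genuine incident starting at t=5.
readings = [10, 12, 9, 11, 10, 30, 30, 30, 30, 30]
print(ewma(readings, 0.9)[-1])   # ~30: the step shows, but so does every blip
print(ewma(readings, 0.05)[-1])  # ~14: noise gone, and the incident nearly is too
```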
The meta-pattern is that failing controllers either become variety amplifiers (creating more chaos than they prevent) or variety eliminators (becoming so rigid they can't respond to legitimate change). Successful controllers maintain dynamic equilibrium - just enough variety to handle real conditions, not so much that they become unmanageable themselves.
Attenuation is the art of strategic simplification
I find the Law of Requisite Variety gives me a new lens on complexity problems. Now, instead of trying to solve the problem directly by filling a knowledge gap, I can approach it as a systems problem. This gives me a new palette of questions, such as:
What are the signals coming from the system?
Which of those signals should I focus on?
What are the inputs I can provide to the system?
How will I tell whether things are working?
What do I want to amplify?
What do I want to dampen?
What could I constrain?
And sometimes, a new frame of reference with a new question is the inspiration you need to solve it!
There’s plenty more for me to learn here. I’m currently consuming Stafford Beer resources and working through a cybernetics book, and hopefully I’ll eventually understand the Viable System Model.
Any other recommended resources?
¹ There’s almost certainly an infinite number of problems that I don’t know I don’t understand!



