Social and Organizational Heuristics
Mental Models that might be useful.
I’ve been reading Poor Charlie’s Almanack (highly recommended). One of the points Charlie makes in an early essay is that a set of mental models goes a long way toward helping you understand the world.
These mental models aren’t laws; they are heuristics. Think of them as pattern recognition tools to guide your intuition rather than a prescription.
I’ve tried to collect some organizational heuristics; rules of thumb for getting a bunch of people together to solve a problem. My experience is in software teams, so there’s a definite bias towards that, but many of the mental models described below come from other disciplines. For each heuristic, I’ll try to give you an idea where it came from (with a reference to the canonical source so you can find out more), and some ways you could apply it to your software team.
Understanding these patterns (and I’m sure many others) can help you diagnose team dysfunction and avoid common pitfalls. I believe these are all the more important in this world of AI, where we’re going to be in a loop of trying to go faster and faster.
Team Size
Brooks’ Law
Coordination and onboarding costs grow faster than team size.
Fred Brooks coined the idea in The Mythical Man-Month (1975), reflecting on IBM’s OS/360: adding people to a late project made it later, because newcomers needed onboarding and coordination paths exploded; hence the quip, “nine women can’t produce a baby in a month.”
To apply this to software, you should consider:
Reducing scope before adding people
If you must scale, split work into independent streams
Budget for the cost of coordination and ramp-up
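To see why coordination costs explode, note that a fully connected team of n people has n(n−1)/2 pairwise communication channels, so channels grow quadratically while headcount grows linearly. A quick sketch (team sizes are illustrative):

```python
def communication_channels(team_size: int) -> int:
    """Pairwise communication paths in a fully connected team: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

# Adding people grows channels quadratically, not linearly:
for n in (3, 5, 9, 15):
    print(f"{n:2d} people -> {communication_channels(n):3d} channels")
```

Going from 5 to 15 people triples the headcount but multiplies the coordination paths by more than ten.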
Dunbar’s Number
Big groups lose cohesion.
In 1992, anthropologist Robin Dunbar correlated primate neocortex size with typical group size and extrapolated that humans comfortably maintain ~150 stable relationships; beyond that, trust, shared context, and coordination decay (see interview with New Scientist).
This lesson seems to have already been learned by software teams.
Keep units small and meaningful. Aim for two-pizza teams (5–9) and modular groups
When a unit nears ~150, split it up, clarify the practices, and try to push decisions down to preserve group cohesion
Decision-Making
Parkinson’s Law
Work expands to fill the time available.
C. Northcote Parkinson’s 1955 Economist essay (alternative link) satirised British naval bureaucracy that kept growing despite a shrinking fleet, capturing how tasks inflate to match the time allotted.
The challenge here is finding the right balance between no deadline and an absurd deadline!
Use credible, short deadlines and tight timeboxes.
Define “done”!
Cut scope to fit the box
Avoid sandbag windows that invite procrastination.
Parkinson’s Law of Triviality (Bike shedding)
Groups over-focus on the trivial.
In his 1957 book, Parkinson lampooned a committee that spends hours choosing a bike-shed color while waving through a nuclear reactor. Simple topics feel safer to argue about.
We do this all the time in software teams. Tabs vs. Spaces. Emacs vs. vi. To avoid this:
Decide the hard/irreversible decisions first.
Send pre-reads and set the decision criteria.
Default aggressively on trivia (lint rules, templates) so more energy can be devoted to the important things
Hick’s Law
Decision time grows with the number of choices.
William Edmund Hick and Ray Hyman’s 1952 experiments showed reaction/decision time increases roughly logarithmically with the number of alternatives.
Don’t accidentally create a wide menu of options and make the decision more complicated! Instead:
Curate choices. Bring 2–3 viable options with a recommendation and trade-offs
Use progressive disclosure (narrow, then decide) rather than dumping the full menu.
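The Hick–Hyman result is often written as T = b · log2(n + 1), where b is an empirical constant. A minimal sketch of the relationship (b is a placeholder, not a measured value):

```python
import math

def hick_decision_time(n_choices: int, b: float = 1.0) -> float:
    """Hick's Law: mean decision time grows as b * log2(n + 1)."""
    return b * math.log2(n_choices + 1)

# Each doubling of the choice set adds a roughly constant increment,
# but the absolute decision time still climbs with every extra option:
for n in (1, 3, 7, 15):
    print(f"{n:2d} choices -> relative time {hick_decision_time(n):.1f}")
```

This is why curating down to 2–3 options helps: you pay the logarithm on a much smaller n.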
Lindy Effect
The longer something has lasted, the longer it’s likely to last.
Born as Broadway lore about show lifespans, later generalised by Benoit Mandelbrot and popularised by Nassim Taleb to non-perishables like ideas, protocols, and books.
If a book has been in print for forty years, I can expect it to be in print for another forty years. But, and that is the main difference, if it survives another decade, then it will be expected to be in print another fifty years. This, simply, as a rule, tells you why things that have been around for a long time are not "aging" like persons, but "aging" in reverse. Every year that passes without extinction doubles the additional life expectancy. This is an indicator of some robustness. The robustness of an item is proportional to its life! (Nassim Taleb, Antifragile)
I’ll sum this up with Choose Boring Technology!
Murphy’s Law
Anything that can go wrong, will.
Attributed to Capt. Edward A. Murphy Jr. after a 1949 US Air Force test failed due to mis-wired sensors; the aphorism spread via engineering culture and safety communities.
Assume failure as part of the process, rather than assuming everything will work perfectly!
Plan for failure - what are the recovery processes?
Understand the risks of failure with a risk log
Incentives and Behaviour
Goodhart’s Law
When a measure becomes a target, it stops being a good measure.
Economist Charles Goodhart formulated this in a 1975 paper on monetary policy, observing that once a metric is used for control, people optimise for the number rather than the underlying reality it was meant to proxy (paper).
Metrics are deceptively difficult to get right (see Misunderstanding measurement motivations)
Use balanced metric portfolios, for example pair speed with quality (e.g., lead time with change-failure rate)
Rotate or time-box targets, and make the motivation explicit.
Prefer flow and outcome measures tied to customer impact.
Peltzman Effect
Safety nets can invite risk.
Sam Peltzman’s 1975 analysis of US auto safety laws found that seatbelts reduced harm but also encouraged compensating risk-taking that eroded some benefits (study).
Treat every safety feature as a guardrail, not a guarantee!
Tests don’t replace running the app; canaries don’t replace monitoring.
Combine automation with human checks
Watch for the moral hazard (“the tests will catch it”) in team habits.
Peter Principle
People rise to their level of incompetence.
Laurence J. Peter’s 1969 bestseller generalised a workplace pattern: promotions often reward past performance in a different job, until the person lands in a role where they struggle.
I’ve felt like a Peter many times! Sometimes it’s a matter of growing into a role, sometimes it’s a matter of changing role. If you’re in a position to help people get promoted, make it safe to fail.
Audit for role fit before promotion.
Use trial periods and scope-limited “acting” roles
Keep dual tracks (IC and management), and build pathways between the two.
Make it safe to step back without stigma when the fit isn’t there.
Matthew Effect
Advantage compounds.
Sociologist Robert K. Merton named the phenomenon in 1968: those with early recognition attract more credit and resources, amplifying their lead (wikipedia).
This can lead to dysfunction such as the sales folks getting all the glory, or the tech debt work being under appreciated. To fix this:
Spread visibility (rotating presenters, shared authorship)
Provide under-recognised teams with senior sponsorship
Deliberately rotate the high-profile projects.
Estimation and Planning
Hofstadter’s Law
It always takes longer than you expect—even when you account for Hofstadter’s Law.
Douglas Hofstadter coined it in Gödel, Escher, Bach (1979) while reflecting on recursive underestimation in AI and problem solving (book).
Estimating big things is hard
Slice until lead time matches your feedback horizon
The best way to estimate is to see how long similar work took!
Plan by capacity and flow limits rather than wishful point totals.
Pareto Principle
A vital few drive most of the outcomes.
Vilfredo Pareto observed skewed distributions in 1890s Italy - roughly 80% of land owned by 20% of people. This same pattern recurs in many domains (summary of works).
This is really all about leverage!
Identify the top 20% of features, customers, failure modes, or code hotspots that drive 80% of value or pain, and concentrate effort there.
Be explicit about the long tail: defer or delete!
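One mechanical way to find the vital few is to sort contributions descending and take items until their cumulative share crosses a threshold. A sketch, using hypothetical incident counts per service:

```python
def vital_few(contributions: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Smallest set of items (largest first) whose combined share reaches `threshold`."""
    total = sum(contributions.values())
    selected, running = [], 0.0
    for name, value in sorted(contributions.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        running += value
        if running / total >= threshold:
            break
    return selected

# Hypothetical incidents per service over a quarter:
incidents = {"auth": 50, "billing": 30, "search": 10, "email": 5, "admin": 5}
print(vital_few(incidents))  # the few services causing ~80% of the pain
```

The same loop works for features by revenue, files by churn, or customers by support load.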
Law of Diminishing Returns
Each extra unit of effort yields less benefit.
David Ricardo’s 1817 treatment of agriculture showed marginal output falls as more labour is applied to fixed land (text).
You need to know when you’re done, and to do that:
Measure! Is the juice worth the squeeze?
Stop optimizing once latency or cost hits the agreed goal
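One way to make “is the juice worth the squeeze?” operational is to stop when another round of effort improves the measure by less than some minimum relative gain. A sketch, assuming a measure where lower is better (such as latency), with illustrative numbers:

```python
def optimize_until_flat(measure, step, min_gain: float = 0.01, max_rounds: int = 20) -> float:
    """Apply `step` while each round improves `measure` by at least `min_gain` (relative)."""
    best = measure()
    for _ in range(max_rounds):
        step()
        current = measure()
        if (best - current) / best < min_gain:  # marginal gain too small: stop
            return current
        best = current
    return best

# Hypothetical tuning run: each pass improves p99 latency (ms) less and less.
readings = [100.0, 50.0, 45.0, 44.8]
state = {"round": 0}
print(optimize_until_flat(lambda: readings[state["round"]],
                          lambda: state.update(round=state["round"] + 1)))
```

The first pass halves latency; the third barely moves it, so the loop stops there rather than squeezing further.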
Collaboration and Culture
Conway’s Law
Systems mirror the organisation’s communication structure.
Melvin Conway’s 1968 Datamation article argued designs inevitably reflect how people talk and align across boundaries, drawn from his software practice (article).
Applying this is about more than accepting that systems mirror the org’s communication structure; you can use the law deliberately:
Design org and architecture together.
Use the “Inverse Conway Manoeuvre”: shape team boundaries around desired service seams and make communication paths match the architecture
If you can’t change the org, adapt the architecture to the teams you actually have.
Chesterton’s Fence
Don’t remove a rule until you know why it exists.
In The Thing (1929), G. K. Chesterton warned reformers not to tear down a fence without first understanding the reason it was built (text).
If you’re new to an area, chances are you’ll instantly be able to spot the weird thing. There’s always a temptation to fix it without understanding the reasons behind it.
Record decisions with ADRs so the rationale behind the weirdness is captured when you create it
If no decision records exist, research carefully before changing anything
If you make changes to simplify, replace or remove, then (for God’s sake!) record the new rationale!
Social Proof
People copy perceived norms.
Solomon Asch’s 1950s experiments showed participants conforming to wrong group answers; Robert Cialdini popularised the mechanism in Influence. You can see this principle in action on social media, where the group can flock to the wrong decision. Frightening!
In a leadership role, your job is to make those perceived norms the right norms. It’s the essence of culture and one of the most powerful ways of changing it!
Make the right behaviour obvious and easy (for example, PR templates)
Publicise great examples (high quality PRs, incident write-ups)
Leaders visibly use the practices they ask for (much better than a memo!)
Risk and Reliability
Normal Accidents Theory
In complex, tightly coupled systems, some accidents are inevitable.
Charles Perrow’s Normal Accidents argued that complexity and tight coupling create unexpected interactions that defeat prediction and control, making certain failures “normal” in high-risk systems. This is a scary read as we start to attach AI to all the things!
In software, we can budget for failure rather than denying it.
When building software we should reduce coupling (queues, bulkheads), add slack and circuit breakers
As team practices we should budget for chaos experiments and fuzz testing
If the blast radius is unacceptable, simplify the system.
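As one concrete guardrail from the list above, a circuit breaker fails fast once a dependency keeps erroring, containing the blast radius instead of piling retries onto a struggling system. A minimal sketch (thresholds and timings are illustrative, not a production implementation):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures`, retry after `reset_after` seconds."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Real implementations add per-dependency state, metrics, and fallbacks, but the shape is the same: stop hammering a failing component and give it room to recover.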
Hanlon’s/Heinlein’s Razor
Don’t attribute to malice what error or constraints explain.
Robert A. Heinlein phrased the idea in 1941; later collected as “Hanlon’s Razor,” the aphorism cautions against over-ascribing intent when simpler explanations suffice.
Default to systems being the problem, not humans!
In incidents and review, look for missing affordances, ambiguous runbooks, unclear ownership, or perverse incentives.
Fix the environment so the “wrong” action becomes harder and the “right” action is the path of least resistance. (a bad system will beat a good person every time)
Conclusion
None of these models are magic that will change your life, but I find them useful as lenses. They help me frame trade-offs and bring a different perspective.
To put them into practice just reach for an appropriate lens!