Technical Debt as Organizational Memory
Or: it was probably there for a reason…
Three engineers stand at a whiteboard, markers in hand, circling the parts of the codebase they’ll rewrite as part of Operation Fixup. “This whole module is technical debt,” one says confidently. “Nobody knows why it works this way.” They all nod.
Six months later, the same three engineers are trying to understand why 3% of payments are failing, but only on Tuesdays, and only after 4 PM. After two all-nighters, they find it: a ten-year-old commit message reading “Workaround for race condition in payment provider’s batch processing - DO NOT REMOVE.” The code they’d called “technical debt” was institutional memory. The fence they’d removed had been holding back a flood.
As engineers, we hate seeing “technical debt” and strive to eliminate it. That’s definitely the right approach for some types of debt, but in this post, I want to focus on the kind of debt that comes from modelling the domain we’re working in. Not poorly structured if statements, or overly long methods. Instead, the focus is on essential complexity debt.
Accidental complexity debt: “This function has 8 nested if statements.” Just refactor it.
Essential complexity debt: “We use event sourcing for orders but CRUD for inventory because in 2019 we needed audit trails for financial compliance, but inventory was moving too fast for the team to learn ES.” This encodes a decision.
The Chesterton’s Fence Principle for Code
This is a parable told by G.K. Chesterton in his 1929 book, The Thing.
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
We can simplify this a bit - don’t remove something before you understand it!
Before you refactor away something that looks “obviously wrong,” understand what problem it was solving. The fence exists for a reason, even if that reason is no longer valid. In other words, before being a software renovator, become an archaeologist and discover the artifacts that led to the creation of the fence in the first place.
Process Echoes in Code
Version control isn’t just about being able to roll back; it’s an archaeological record of your organization.
Obviously, the commits themselves give actionable information: you know who was involved, and you know their intent from the commit message (or their frustration!). But you can dig deeper!
What can we learn?
Temporal correlation of changes - files that change together reveal hidden coupling. For example, if PaymentValidator.cs and FraudCheck.js always change together, it might turn out they’re tightly coupled due to some long-forgotten regulatory requirement.
When did things stop changing? - what hasn’t changed tells us what’s stable, what’s “done,” and what constraints have remained constant.
Commit patterns - do certain types of changes cluster? That reveals organizational priorities and perhaps organizational boundaries.
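To make the first idea concrete, here is a minimal Python sketch of counting which files change together. The log format and file names are invented for illustration; in practice you would feed it real output from something like `git log --pretty=format:--commit-- --name-only`:

```python
from collections import Counter
from itertools import combinations

def co_change_counts(log_text):
    """Count how often each pair of files appears in the same commit.

    Expects commits separated by a '--commit--' marker line, each
    followed by the file paths touched in that commit.
    """
    pairs = Counter()
    for commit in log_text.split("--commit--"):
        files = sorted({line.strip() for line in commit.splitlines() if line.strip()})
        for a, b in combinations(files, 2):
            pairs[(a, b)] += 1
    return pairs

# Invented sample: two commits touch both files, one also touches the README
sample = """--commit--
PaymentValidator.cs
FraudCheck.js
--commit--
PaymentValidator.cs
FraudCheck.js
README.md
"""
print(co_change_counts(sample).most_common(1))  # the most frequently coupled pair
```

Pairs that keep topping this list are your candidates for hidden coupling worth investigating.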
Adam Tornhill’s book, Your Code as a Crime Scene, digs into this in more detail.
Organizational Memories in code
Dig deep enough and you can find different types of memories stored in your codebase.
Constraints encode what was scarce when decisions were made. That manual process? No budget for automation. Microsoft stack everywhere despite the team preferring other tools? A partner agreement made it cheap rather than right.
Knowledge gaps are your unknown-unknowns made visible. Authentication code looks like Java? It was acquired, and nobody on the team knew C# well enough to rewrite it properly.
Priority decisions show where “needs of now” trumped “needs of right.” Line up your commits with the org calendar and you’ll find the stories: the looming deadline, the fire that needed fighting, the trade-off that seemed reasonable at 11 PM on a Thursday.
Assumptions change over time. That module optimized for 100 users when you thought you’d stay niche? Now you’ve got 10,000. Design docs or ADRs might spell this out, but often it’s just there in a commit message if you look hard enough.
Process memories reflect collaboration patterns (Conway’s law and all that jazz). Seven compiler phases because you had seven teams? An awkward seam that exists only because of a team boundary? That weird shared library is just the mono-repo’s way of saying “we had nowhere else to put this.”
Reading your tech debt
Before removing the fence, do the archaeology!
As an example, one of the first places I worked had a “wonderful” C library for dealing with files. It was (mostly) a wrapper around fopen but contained the curiously named fileCopyWithRetry7. This function did what it said on the tin: it copied a file and retried seven times. Both oddly specific and just plain odd! Discussions with the old-timers (who were thankfully still there) revealed the why: a demo had failed because the anti-virus software caused some weirdness. With this context, we were able to make the decision not to delete, but to put a better fence in place.
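The original was C; purely as an illustration, a “better fence” might look like the Python sketch below. The function name, defaults, and the anti-virus rationale in the docstring mirror the story above rather than any real codebase:

```python
import shutil
import time

def copy_with_retry(src, dst, attempts=7, delay=0.5):
    """Copy src to dst, retrying on failure.

    Why retry at all? On some machines the anti-virus scanner briefly
    locks freshly written files, so the first attempt can fail.
    The magic number 7 came from the original fileCopyWithRetry7;
    making it a parameter turns folklore into a documented, tunable
    decision (and a good ADR would link here).
    """
    for attempt in range(1, attempts + 1):
        try:
            return shutil.copy2(src, dst)
        except OSError:
            if attempt == attempts:
                raise  # out of retries: surface the real error
            time.sleep(delay)
```

The behaviour is unchanged; the difference is that the next reader gets the why alongside the what.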
Finding the data
You can start with your version control tools, such as blame and log. If the opportunity exists, perhaps the original authors of the code are available. What do they remember? Perhaps the organizational calendar highlights “crunch week” or a nearby incredibly important deadline?
Nowadays, LLMs make it dead simple to conduct ad-hoc analysis. Perhaps you want to see a heatmap of the activity in that area of the code over time? If I want to find what code changes together, I’ll just get Claude to do it, check it (don’t forget this bit!), run it, and summarize the results.
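A first pass at that activity heatmap can be as simple as bucketing commit dates by month. This sketch assumes you have captured one YYYY-MM-DD date per commit (e.g. via `git log --pretty=format:%as`); the sample dates are invented:

```python
from collections import Counter

def commits_per_month(dates):
    """Bucket commit dates (YYYY-MM-DD strings) into per-month counts."""
    return Counter(d[:7] for d in dates)

# Invented dates: notice the cluster in March -- crunch week, perhaps?
dates = ["2019-03-02", "2019-03-28", "2019-03-30", "2019-04-01"]
for month, n in sorted(commits_per_month(dates).items()):
    print(month, "#" * n)  # a poor man's text heatmap
```

Spikes in this output are exactly the places to line up against the organizational calendar.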
Understand the context
Once you’ve got the raw data, distil it into a line of reasoning. Assume good intent on the part of the folks who wrote the code in the first place, and that the decisions they made were a result of the constraints that existed. After all, you’re making decisions today based on the context you’ve got available too!
Try to reverse-engineer that data into a consistent story.
What constraints existed? (budget, time, knowledge, tools available)
What was the state of the art? (2015 was a different world than 2025)
What assumptions were valid then?
Now make a decision - Is that context still true?
Once you’ve reconstructed the context of the original decision, it’s now time to see whether it’s still relevant.
Are the constraints and assumptions still true?
Have our tools / knowledge evolved in this area?
Decide: Keep and Document, or Change
Now that you’ve got a relatively good idea of why the design is the way it is, it’s decision time.
If the context is unchanged, or the cost of change is greater than the benefit, then keep the fence! But this time, write the ADR that you should have written back then!
If we need to change the fence, then let’s document that! The context has changed, but the insight you’ve gained from your archaeological dig is still valuable. Encode the lesson in an ADR.
And perhaps the fence can be removed entirely? The same applies - you’ve found a false assumption or an obsolete constraint, document it and then delete it.
ADRs: From Implicit to Explicit Memory
I keep mentioning architecture decision records (ADRs); these are the field notes of your archaeological digs. Most technical debt is a pile of implicit ADRs: decisions that were made but never documented. That leaves the code as the only documentation, and unfortunately it’s written in a language that’s hard to read.
Each time you pay down technical debt, make the knowledge you gain explicit.
Do the archaeological dig
Write the ADR you should have written back then
Write the ADR explaining why you’re changing it now
THEN make the change
And as an added bonus, this makes thinking visible, giving everyone a learning opportunity.
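If you’ve never written one, an ADR doesn’t need to be heavyweight. Here’s a minimal sketch in the common Nygard-style format, using the fileCopyWithRetry7 story from earlier (the number and details are invented):

```markdown
# ADR-017: Keep the file-copy retry behaviour

## Status
Accepted

## Context
Years ago a customer demo failed because the anti-virus scanner briefly
locked freshly written files. Retrying the copy (seven times) worked
around it. The constraint may no longer hold on modern scanners.

## Decision
Keep the retry behaviour, but make the attempt count and delay explicit
parameters, and link to this record from the code.

## Consequences
The workaround is now documented. If the anti-virus constraint ever
disappears, this record tells the next archaeologist it is safe to remove.
```

Context, decision, consequences: that’s the whole dig, written down for the next team.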
When the fence is just a mistake…
Before you think every change requires a three-day archaeological expedition, let’s be pragmatic. Some fences really are just mistakes, and some code should be deleted with extreme prejudice.
Skip the archaeology when:
The code is provably unused. If your tooling shows zero callers (beware reflection!), zero imports, and the tests still pass when you delete it, you’re not demolishing a fence, you’re clearing debris. Dead code isn’t institutional memory; it’s clutter that makes the real memory harder to find.
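As a rough first pass at “provably unused,” you can even script the search, though (as noted) reflection and external callers can hide real usage. A sketch using Python’s standard ast module, run against an invented sample module:

```python
import ast

def unreferenced_functions(source):
    """Return names of top-level functions never referenced elsewhere
    in the module. A rough signal only: reflection, getattr, and
    external callers can all hide real usage -- beware!"""
    tree = ast.parse(source)
    defined = {n.name for n in tree.body if isinstance(n, ast.FunctionDef)}
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    used |= {n.attr for n in ast.walk(tree) if isinstance(n, ast.Attribute)}
    return defined - used

# Invented module: one function is called, one is candidate debris
module = """
def helper():
    return 1

def old_workaround():
    return 2

print(helper())
"""
print(unreferenced_functions(module))  # -> {'old_workaround'}
```

Treat the result as a shortlist for investigation, not a deletion list.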
It’s a clear security vulnerability or compliance violation. That SQL injection vulnerability doesn’t encode organizational wisdom, it encodes organizational risk. Patch it now, understand the context later. The same applies to GDPR violations, hardcoded credentials, or anything that could get you breached or sued. Fix first, document after.
The cost of archaeology exceeds the cost of failure. Spending three days investigating a 10-line utility function that formats dates? Just rewrite it and monitor for breakage. If it breaks, you’ll learn fast. Not everything deserves the Chesterton’s Fence treatment!
It’s trivially testable. If you can write a comprehensive test suite that captures the current behaviour in an hour, just do that first. The tests become your safety net. If something breaks, the tests tell you what the fence was protecting. This is archaeology through empiricism rather than history.
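A pinning test can be tiny. This sketch invents a legacy date formatter; the point is that the assertions record what the code does today, surprises included, not what we think it should do:

```python
# A hypothetical legacy formatter we don't fully understand yet.
def legacy_format(day, month, year):
    # Why is the year truncated to two characters? We don't know --
    # that's exactly why we pin the behaviour before touching it.
    return "%02d/%02d/%s" % (day, month, str(year)[2:])

# Pinning test: capture current behaviour, warts and all. If a
# refactor changes any of these, the test tells us which behaviour
# the fence was protecting.
def test_pins_current_behaviour():
    assert legacy_format(1, 2, 2019) == "01/02/19"
    assert legacy_format(31, 12, 5) == "31/12/"  # surprising? pin it anyway

test_pins_current_behaviour()
```

When a pinned assertion later fails, that failure is the archaeology: it names the behaviour someone once depended on.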
The obvious explanation is correct. Sometimes a function really is just poorly written. If the commit message says “first draft, TODO: clean this up” from 2019, and it’s a straightforward algorithm with no weird edge cases, trust your judgment. Not every decision was profound.
The key question: What’s the blast radius if I’m wrong? Deleting a payment processing module? Do the archaeology. Renaming a private helper function? Just do it.
The Obligatory Provocative Conclusion
The problem isn’t that we have tech debt. It’s that we have amnesia about why it was there in the first place!
We violate Chesterton’s fence daily, removing fences without understanding why they exist. We ignore the process echoes in our version control. We treat our worst code as embarrassing rather than as honest artifacts of who we were and what we knew.
Some of our “worst” design decisions might be our most valuable archaeological site. It shows what really happened, not what we wish had happened. Before you excavate it away, make sure you’ve documented what you found.
Next time your team wants to tackle tech debt, start with archaeology, not judgment!
Remember, what we call technical debt might be the only record of why things work at all! Your worst code may remember what everyone else forgot.




I really, really like this post and think it should be required reading for any team looking to do a refactor or, even scarier, a rewrite:
https://fffej.substack.com/p/technical-debt-as-organizational
The opening paragraph reminds me of Joel Spolsky's article here: https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
I'm also reminded of the book Working Effectively with Legacy Code, by Michael Feathers. In particular I think about the utility of pinning tests (tests that you write to reflect the current behavior of a system that you don't understand) and no-change refactors (extracting helper functions, etc.).
My one battle scar worth sharing here is to beware of the latter - no-change refactors - which might not be so no-change after all. I saw a really bad thing happen when an engineer pulled out a function by hand and ignored a parameter that should have been passed by reference instead of by value. The moral of that story was to 1) understand what the code is doing and why (like Jeff argues), 2) implement a pinning test if you don't, and 3) use the IDE's built-in refactoring functionality every time.
Really good article. I thought I was good at figuring out old code but I never thought of data mining the git history!
The article reminded me of an Amazon engineering tenet that I like to quote sometimes: "respect what came before"
It comes with this justification "Principal engineers are grateful to our predecessors. We appreciate the value of working systems and the lessons they embody. We understand that many problems are not essentially new."