Software Development

How to Evaluate Your Legacy Codebase Before a Rewrite

Stefan Milanovic

Apr 1, 2026 · 10 min read

The conversation usually starts the same way. Someone on the leadership team says, "We need to rewrite this thing from scratch." Maybe the system has been around for eight years. Maybe deployments take a full day. Maybe the last developer who truly understood the architecture left two years ago and nobody has been brave enough to touch certain parts of the code since.

I get it. The impulse to burn it down and start fresh is strong. When you are staring at a codebase that feels like it is held together with duct tape and good intentions, a clean rewrite sounds like freedom. But in my experience, that instinct is wrong more often than it is right. The most expensive mistakes I have seen in software have not been bad code. They have been rewrites that were started without a clear understanding of what the old system actually did, and why.

Before you commit to a rewrite, you need to evaluate what you actually have. Not with gut feelings or frustration, but with a structured assessment that tells you what is genuinely broken, what is just ugly, and what the real options are. That is what this article is about.

Why Most Rewrites Fail

There is a famous essay by Joel Spolsky called "Things You Should Never Do, Part I," written back in 2000. He argued that rewriting code from scratch is the single worst strategic mistake a software company can make. Twenty-six years later, the pattern still holds. I have watched it play out multiple times with clients.

The core problem is what Fred Brooks called the "second-system effect." When you rewrite, you are not just rebuilding what exists. You are also trying to fix every annoyance, add every feature that was deferred, and redesign every architectural decision you now disagree with. The scope balloons. The timeline doubles, then triples. Meanwhile, your existing system still needs maintenance, bug fixes, and the occasional new feature. Now you are running two systems instead of one.

The other problem is subtler and more dangerous. Legacy codebases contain enormous amounts of institutional knowledge, encoded as code. That weird conditional on line 347 that makes no sense? It is probably handling an edge case that took someone three days to debug in production five years ago. When you rewrite from scratch, you lose all of that. You will rediscover those edge cases the hard way, one production incident at a time.

This does not mean rewrites are never the right answer. Sometimes they are. But the decision needs to be based on evidence, not frustration.

Signs Your Codebase Actually Needs Attention

Not every legacy system is a problem. Some codebases are old, ugly, and perfectly functional. The question is not "is this code pretty?" but "is this code preventing us from doing what we need to do?" Here are the signs that point to a real problem.

Deployments Are Painful

If deploying a small change takes hours instead of minutes, or requires manual steps that only one person knows how to do, you have a deployment problem. This is not always a code problem. Sometimes it is an infrastructure or process problem. But if the code itself is so tangled that you cannot deploy one component without deploying everything, that is a signal the architecture needs work.

Onboarding Takes Months

New developers should be able to make meaningful contributions within a few weeks, not a few months. If every new hire spends their first quarter just trying to understand how the system works, the codebase has a comprehensibility problem. This could be a lack of documentation, inconsistent patterns, or an architecture that is genuinely too complex to reason about.

Every Bug Fix Creates Two More

This is the classic sign of high coupling and low cohesion. The components of the system are so intertwined that changing one thing breaks something unrelated. When your team is afraid to touch certain parts of the code, that fear is telling you something important about the structural integrity of the system.

The Framework Is End-of-Life

Running on a framework or runtime that no longer receives security patches is a legitimate business risk. If you are on .NET Framework 4.5, or Python 2, or an old version of Angular that is no longer supported, you need a migration plan. This does not necessarily mean a full rewrite, but it does mean the clock is ticking.

The Original Developers Are Gone

This one is tricky because it is common and it is not always a problem. Code does not need its original authors to be maintainable if it is well-structured and reasonably documented. But if the system was built by one person who left no documentation, used no standard patterns, and built custom frameworks for everything, you have a knowledge silo problem that will only get worse over time.

The Evaluation Framework

When I evaluate a legacy codebase for a client, I look at six dimensions. None of them alone tells the full story, but together they give you a clear picture of where you stand.

Code Quality Metrics

Run static analysis. Cyclomatic complexity, code duplication, dependency depth, test coverage. These numbers are not gospel, but they are useful baselines. A codebase with an average cyclomatic complexity over 20 per method has functions that are doing too many things. Test coverage below 10% means you are flying blind on regressions. High dependency depth means changes ripple unpredictably.

The important thing is to look at trends, not just absolute numbers. A codebase with 30% test coverage that was at 10% a year ago is moving in the right direction. A codebase with 60% coverage that has been declining for six months has a cultural problem, not just a code problem.
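You do not need a commercial tool to get a first baseline. As an illustration only, here is a rough sketch of a per-function cyclomatic complexity score using Python's standard `ast` module (1 plus the number of branch points; real analyzers like radon count more node types and handle nesting more carefully):

```python
import ast

# Node types that add a decision point. This is a rough approximation;
# dedicated tools count boolean operands and comprehensions more precisely.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict[str, int]:
    """Return an approximate complexity score per function."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(child, BRANCH_NODES)
                           for child in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

# A hypothetical function to score: one if, one loop, one nested if.
sample = """
def ship(order, user):
    if not user.verified:
        return None
    for item in order.items:
        if item.backordered:
            notify(user)
    return dispatch(order)
"""
print(cyclomatic_complexity(sample))  # {'ship': 4}
```

Run something like this across the repository, chart the distribution, and rerun it quarterly so you are tracking the trend rather than a single snapshot.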

Team Velocity

How long does it take to ship a typical feature? How has that changed over the past year? If features that used to take a week now take three weeks, the codebase is creating drag. Look at your project management data. Pull request cycle time, time from ticket creation to deployment, and how many tickets get reopened or sent back. These are proxy measures for how well the codebase supports productive development.
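Most Git hosting APIs and trackers can export the raw timestamps, and the math is simple. A minimal sketch, with hypothetical PR open/merge times standing in for a real export:

```python
from datetime import datetime
from statistics import median

# Hypothetical export: (opened_at, merged_at) pairs pulled from
# your Git hosting API or project tracker.
pull_requests = [
    ("2026-03-02T09:15", "2026-03-03T16:40"),
    ("2026-03-04T11:00", "2026-03-10T08:30"),
    ("2026-03-05T14:20", "2026-03-06T10:05"),
]

def cycle_times_hours(prs):
    """Hours from opening each PR to merging it."""
    fmt = "%Y-%m-%dT%H:%M"
    return [(datetime.strptime(done, fmt) - datetime.strptime(start, fmt))
            .total_seconds() / 3600
            for start, done in prs]

hours = cycle_times_hours(pull_requests)
print(f"median PR cycle time: {median(hours):.1f}h")
```

The median matters more than the mean here, because one stuck PR should not mask an otherwise healthy flow.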

Deployment Frequency and Reliability

How often can you deploy? How often do deployments fail or require rollbacks? A healthy system can be deployed daily or more frequently, with automated pipelines and low failure rates. If you are deploying once a month with a manual checklist and crossing your fingers, the system's architecture or infrastructure is working against you.
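Two numbers capture most of this: deploys per week and the failure (rollback) rate. A sketch over a hypothetical one-month deploy log:

```python
# Hypothetical deploy log for one month: (date, succeeded) entries.
deploys = [
    ("2026-03-02", True), ("2026-03-09", False),
    ("2026-03-09", True), ("2026-03-23", True),
]

total = len(deploys)
failures = sum(1 for _, ok in deploys if not ok)
weeks = 4  # length of the observation window

print(f"{total / weeks:.1f} deploys/week, "
      f"{failures / total:.0%} failure rate")
```

One deploy a week with a 25% failure rate, as in this toy data, is exactly the profile that makes teams batch changes and dread release day.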

Incident Rate

How often does the system break in production? What is your mean time to resolution? Are the same components causing repeated incidents? Incident data tells you which parts of the system are actually fragile versus which parts just look messy. Sometimes the ugliest code is the most reliable because it has been battle-tested for years.
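Grouping incidents by component makes the fragile spots obvious. A minimal sketch over a hypothetical incident log:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical incident log: (component, minutes_to_resolve).
incidents = [
    ("billing", 45), ("billing", 120), ("auth", 30),
    ("billing", 95), ("reports", 15),
]

by_component = defaultdict(list)
for component, minutes in incidents:
    by_component[component].append(minutes)

# Most incident-prone components first.
for component, times in sorted(by_component.items(),
                               key=lambda kv: -len(kv[1])):
    print(f"{component}: {len(times)} incidents, "
          f"MTTR {mean(times):.0f} min")
```

In this toy data, billing accounts for three of five incidents with the longest resolution times, which is where an evaluation would dig in first regardless of how the rest of the code looks.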

Dependency Health

How many of your dependencies are outdated? How many have known security vulnerabilities? Can you update them without breaking things? A codebase that is three major versions behind on its web framework and has dozens of unmaintained packages is accumulating risk that compounds over time.
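A useful summary statistic is how many major versions behind each dependency sits. The snapshot below is hypothetical (you would assemble it from something like `pip list --outdated` or your lockfile), but the counting logic is the point:

```python
# Hypothetical snapshot: dependency -> (installed, latest) versions.
deps = {
    "django": ("2.2.28", "5.0.3"),
    "requests": ("2.31.0", "2.31.0"),
    "celery": ("4.4.7", "5.3.6"),
}

def major(version: str) -> int:
    """First component of a dotted version string."""
    return int(version.split(".")[0])

# Dependencies that are at least one major version behind.
behind = {name: major(latest) - major(current)
          for name, (current, latest) in deps.items()
          if major(latest) > major(current)}
print(behind)  # {'django': 3, 'celery': 1}
```

A dependency three major versions behind usually cannot be updated in one jump; each gap like that is a migration project hiding in your lockfile.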

Developer Satisfaction

This one is qualitative, but it matters. Talk to the people who work in the codebase every day. What do they dread? What wastes their time? Where do they feel productive? Developers are usually honest about what is broken when you ask them directly. The patterns in their answers will tell you where the real pain is, not just where the theoretical problems are.

Refactor, Rewrite, or Replace: A Decision Matrix

Once you have the data, you need to make a decision. There are really four options, not just two.

Refactor

Refactoring is the right choice when the architecture is fundamentally sound but the code quality has degraded. The bones of the system are good. The structure makes sense. But years of quick fixes, workarounds, and "we will clean this up later" have accumulated into a mess. In this case, you do not need to start over. You need to invest in systematic cleanup: improving test coverage, extracting duplicated logic, simplifying overly complex functions, and updating dependencies incrementally.

Refactoring preserves the institutional knowledge embedded in the code. It is lower risk, can be done incrementally alongside feature work, and delivers value continuously rather than all at once at the end of a multi-month project.

Strangler Fig Migration

This is the middle path between refactoring and a full rewrite, and it is the option I recommend most often. Named after the strangler fig tree that grows around its host tree and eventually replaces it, this pattern involves building new functionality in a modern system while gradually migrating existing functionality out of the old one. Both systems run in parallel, with a routing layer directing traffic to the appropriate one.

The beauty of this approach is that it eliminates the big-bang risk. You never have a single cutover moment where everything could go wrong. Each piece of functionality is migrated independently, tested independently, and can be rolled back independently. It takes longer than a clean rewrite on paper, but in practice it is faster because you avoid the rework cycles and scope creep that plague rewrites.
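The routing layer at the heart of the pattern can be very small. A minimal sketch, with hypothetical path prefixes and handler names (in production this logic usually lives in a reverse proxy or API gateway rather than application code):

```python
# Strangler-fig routing sketch: paths whose prefix has been migrated
# go to the new service; everything else falls through to legacy.
MIGRATED_PREFIXES = ["/invoices", "/users/profile"]

def handle_legacy(path: str) -> str:
    # Stand-in for forwarding the request to the legacy system.
    return f"legacy handled {path}"

def handle_new(path: str) -> str:
    # Stand-in for forwarding the request to the new service.
    return f"new service handled {path}"

def route(path: str) -> str:
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return handle_new(path)
    return handle_legacy(path)

print(route("/invoices/42"))  # goes to the new service
print(route("/orders/7"))     # still handled by legacy
```

Migrating a slice of functionality is then just adding a prefix to the list, and rolling it back is removing it, which is what makes each step independently reversible.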

Full Rewrite

A full rewrite is appropriate when the architecture itself is the bottleneck and cannot be incrementally improved. If the system was built as a monolithic desktop application and the business now needs a distributed cloud-native architecture, no amount of refactoring will get you there. If the system is built on a platform that is being decommissioned, you may have no choice.

If you go this route, scope it ruthlessly. Do not try to replicate every feature of the old system. Identify the core functionality that actually matters and build that first. Let the old system handle everything else until the new system is ready to take over piece by piece. This is essentially the strangler fig pattern applied to a full rewrite, and it is the only way I have seen rewrites succeed consistently.

Replace with Off-the-Shelf

Sometimes the right answer is to stop maintaining custom software entirely. If your custom CRM was built ten years ago and modern platforms like Salesforce or HubSpot now handle your use cases better, the ROI of continuing to maintain custom code is negative. This is a hard conversation for engineering teams because it can feel like admitting the original work was wasted. It was not. It served the business when no better option existed. But the landscape changes, and an honest assessment should include the possibility that building is no longer the right approach.

What the Assessment Actually Looks Like

When a client asks us to evaluate their legacy system, the engagement typically takes one to two weeks. Here is how it works.

Week one is data gathering. We get access to the codebase, the CI/CD pipeline, the project management tools, and the monitoring systems. We run static analysis, review the dependency tree, and look at deployment and incident history. We also schedule interviews with three to five people: the tech lead, a couple of senior developers, the product owner, and ideally someone from operations or support who deals with the system's failures.

Week two is analysis and recommendations. We synthesize everything into a report that covers the six dimensions I described above, scores the system's health in each area, and provides a clear recommendation: refactor, strangler fig, rewrite, or replace. The recommendation includes a rough scope estimate, the key risks, and what we would do first.

The goal is to give the decision-makers enough information to make a confident choice. Not a 200-page document that nobody reads, but a focused assessment that answers the question: what should we actually do with this system, and why?

The Conversation Nobody Wants to Have

Sometimes the honest answer is "leave it alone."

I have evaluated systems that were ugly, old, and running on outdated technology, and my recommendation was to do nothing. The system worked. It was stable. The business was not being constrained by it. Yes, the code was hard to read. Yes, the framework was old. But the cost of modernization would have been significant, and the return on that investment would have been minimal.

This is a hard recommendation to make because it is unsatisfying. The team wants to work on something modern and exciting. Leadership wants to feel like they are investing in the future. But pouring resources into a system that is not actually holding you back is a misallocation of engineering capacity. That time and money are better spent on the things that will actually move the business forward.

The point of an evaluation is to make the right decision, even when the right decision is boring. If the codebase is genuinely holding your team back, modernize it. If it is not, focus your energy where it matters and revisit the question in a year.

If you are facing this decision with your own system, we can help you evaluate it objectively. A structured assessment takes the emotion out of the decision and gives you a clear path forward, whether that means modernizing, migrating, or simply maintaining what you have. Reach out and let's talk through your situation.

Stefan Milanovic

Founder & CEO

Senior Software Developer & Data Scientist with 6+ years of experience across the full stack. Leads Stefan Dev LLC, building custom software solutions for businesses in healthcare, engineering, and beyond.
