
Four reasons to stop testing: Finding the right balance in software quality

Decide when to stop testing by weighing up the costs, benefits, risks, and impact on quality.


There are many valid reasons to reduce your testing efforts or stop testing an area altogether. 

Testing is essential in software development. After some initial hands-on checks during development, a structured, often automated process takes over. Regularly running tests against a set of requirements helps maintain functionality over time. 

But no matter how thorough, testing never covers everything—some parts of a product remain untested. So, what gets left out? When is it reasonable to stop testing? And how do you decide which components aren’t worth the extra effort?

1. When the code has not changed

If something works and no one is touching it, why test it? Research shows that any change is likely to introduce bugs, and code maturity "can be considered an indicator of quality." Security research found that "the vast majority of vulnerabilities reside in new or recently modified code." Bugs stem mainly from unintended side effects of planned updates, making each change a risk to reliability. The more people have reviewed the code and left it untouched, the more likely it is to be correct.

If your software has quality issues, pausing feature development might be more effective than adding tests. In every project, I see plenty of untested code. In fact, I’ve never come across tests for Excel functions, even though they play a major role in business intelligence. At testup.io, we’ve been running multiple services for years without regular tests. One service collects usage statistics in SQL, another translates security tokens between systems. What they all have in common is that they’re tiny, have a clear purpose, and—most importantly—rarely change.

Maybe the code SHOULD change?

The assumption that you can build an entire system out of components that never change is obviously unrealistic. Some aspects have to evolve: hardware needs replacement, security needs constant upgrades, the internet evolves, and legal requirements mandate how you handle your data.

A lack of change is often a sign that a system is not safely changeable. For example, there are times when the system should change, but necessary refactorings are put off indefinitely for fear of regressions.

A well-tested system is much easier to migrate and update than a system that seems to work fine in production but lacks test coverage.

Does the product really work as designed?

When something has worked unchanged for long enough, chances are that people forget how it actually works inside. 

  • White-box testing not only checks that a system functions but also ensures that its internal architecture aligns with its intended design. 
  • Tests verify that internal APIs and modular structures work as expected while also documenting architecture and design.
  • Good tests list use cases along with expected responses. Ideally, they also explain why a behavior is expected and provide technical assurance, as in the sketch below. Often, this is more valuable than a written design document.
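
To make this concrete, here is a minimal sketch in Python with pytest. The `parse_iso_date` function and the date-import scenario are invented for illustration; the point is that each test names a use case and its comment explains why the behavior is expected.

```python
import pytest
from datetime import date


def parse_iso_date(text: str) -> date:
    """Parse a strict ISO-8601 calendar date (YYYY-MM-DD)."""
    return date.fromisoformat(text)


def test_parses_valid_iso_date():
    # Use case: the import pipeline receives dates as YYYY-MM-DD strings.
    assert parse_iso_date("2024-02-29") == date(2024, 2, 29)


def test_rejects_ambiguous_formats():
    # Why: US-style MM/DD/YYYY input would silently swap day and month,
    # so the design decision is to fail loudly rather than guess.
    with pytest.raises(ValueError):
        parse_iso_date("02/29/2024")
```

A future maintainer reading these two tests learns both the accepted format and the reasoning behind the rejection, without opening a design document.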

2. When the code changes rapidly

In early design stages, components undergo rapid changes. Interfaces, behaviors, and architecture are constantly shifting. Writing tests for each version of evolving software can slow things down without adding value—a test might become outdated before it even proves useful.

Many features are built under intense market pressure. Marketing campaigns are often short-lived, requiring temporary features like discounts or special offers: functionality that exists briefly before being discarded. In early product phases or lean startups, you might even build lookalike features that were never intended to work, simply tracking clicks or user interactions to gauge interest. In these cases, speed matters more than quality. 

Is the design weak?

There are times when code changes too rapidly to justify thorough testing. But if this happens too broadly or too often, it raises concerns about your software architecture. Are you sure there aren’t stable foundations you can build on and test? Or are you just accumulating technical debt? 

Even the fastest-moving projects should establish some fixed points over time. Too much change can make it hard to trust your collected data, as it may reflect artifacts from various iterations. 

Once data storage is solidified and tested, the focus should shift to business logic. Front-end and graphical elements are usually tested last: styles and trends evolve quickly, and the harm a broken UI can do only grows with the number of active users.
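
As a sketch of what "solidified and tested" data storage can look like, here is a minimal round-trip test against an in-memory SQLite database. The `events` table and the helper functions are hypothetical, made up for this example.

```python
import sqlite3


def save_event(conn, name: str, count: int) -> None:
    conn.execute("INSERT INTO events (name, count) VALUES (?, ?)", (name, count))


def load_count(conn, name: str) -> int:
    row = conn.execute("SELECT count FROM events WHERE name = ?", (name,)).fetchone()
    return row[0]


def test_storage_round_trip():
    # An in-memory database keeps the test fast and fully isolated.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (name TEXT, count INTEGER)")
    save_event(conn, "signup", 3)
    # Whatever the UI ends up looking like, stored data must survive
    # a write/read cycle unchanged.
    assert load_count(conn, "signup") == 3
```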

Are you really measuring progress?

If you’re not testing, how do you know you’re not regressing? Even during rapid iterations, there must be some way to measure progress. 

The urge to keep up a rapid momentum can spark creativity, but it rarely results in truly optimised software. Legend has it that Google doesn’t change even a single shade of blue without quantitative impact testing. While the value of such an extreme approach is debated, one thing is clear: you need some measure to track success. Other methods include customer reviews, sales figures, or plain revenue. I remember a trader who ran a big data analysis framework on his desktop. It was always in production. He could see the impact of his algorithm within minutes and, surprise, it always worked.

3. When a feature doesn't matter much to anyone

Let’s be honest: sometimes we just don’t care about a particular feature. But ask yourself: is it you who doesn’t care, or is it your company? If it’s you, maybe it’s time to adjust your attitude. If it’s your company, consider cutting the feature and embracing simplicity.

Some users might have started relying on a low-value feature, not because it’s truly useful, but simply because it exists. Its failure could disrupt a larger process, even if the feature itself was minor. If testing costs more than the feature is worth, it’s better to drop the feature sooner rather than later.

4. When testing is too expensive

Sometimes, the best option is to make a change and hope for the best. Realistically, every organization occasionally requires bold decisions: changes made responsibly and thoughtfully, yet with some risk. The value of testing depends on the business you’re in. A medical device demands far greater accuracy than a weather app. Still, in both cases, testing should stop when the expected cost of failure is lower than the cost of testing.
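
That rule of thumb can be written down as a back-of-the-envelope comparison. The numbers below are invented; the point is that testing pays off only while the failure probability times the cost per failure exceeds the cost of the tests.

```python
def testing_pays_off(failure_probability: float,
                     cost_per_failure: float,
                     cost_of_testing: float) -> bool:
    """True while the expected cost of failure exceeds the cost of testing."""
    expected_failure_cost = failure_probability * cost_per_failure
    return expected_failure_cost > cost_of_testing


# Illustrative numbers only: a 5% chance of a bug that costs 20,000
# to fix in production, weighed against two different testing budgets.
assert testing_pays_off(0.05, 20_000, 500)        # 1,000 > 500: keep testing
assert not testing_pays_off(0.05, 20_000, 2_000)  # 1,000 < 2,000: stop
```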

Can the costs of testing be lowered?

When testing is difficult, the system setup might be too complex. If a feature is hard to test, how can it be developed effectively? I’ve seen projects where developers had to spend minutes manually setting up the system just to see the impact of a small change—wasted time and a bad habit. A testable system should allow any relevant state to be reached quickly. Only a testable system is an extensible system. If testing is too expensive, don’t expect any planned extensions to be cheaper.
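
Fixtures are one cheap way to get there. In this pytest sketch the shop-cart names and prices are hypothetical; the fixture builds the relevant state in milliseconds instead of minutes of manual setup.

```python
import pytest

PRICES = {"sku-1": 10.0, "sku-2": 5.5}  # illustrative catalogue


def checkout_total(cart: dict) -> float:
    return sum(PRICES[item] for item in cart["items"])


@pytest.fixture
def filled_cart():
    # Reach the relevant state directly in code, not by clicking
    # through the system: a signed-in user with two items in the cart.
    return {"user_id": 42, "items": ["sku-1", "sku-2"]}


def test_checkout_total(filled_cart):
    # The test starts from the interesting state immediately.
    assert checkout_total(filled_cart) == 15.5
```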

Do you track the costs of regressions?

Without testing, there will be a big gap between how you think the system performs and how it actually does. If testing seems too expensive, you’ll likely skip it or run it less frequently. 

Ideally, everything is tested at the unit level, where isolated tests run quickly after each change. When coverage is low, bugs slip through to later testing stages or even to production, where fixing them is far more costly and time-consuming. Performance and security tests can be expensive, but skipping them allows slow, creeping regressions to take hold—making it harder to trace issues back to a single change and requiring even greater effort to fix.
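
Even a crude timing check is cheaper than letting such regressions creep in unnoticed. In this sketch, `process_batch` is an invented stand-in for the real workload, and the budget is deliberately generous so the test fails on order-of-magnitude slowdowns rather than normal machine-to-machine noise.

```python
import time


def process_batch(records):
    # Stand-in for the real workload; illustrative only.
    return [r * 2 for r in records]


def test_batch_stays_within_time_budget():
    records = list(range(100_000))
    start = time.perf_counter()
    process_batch(records)
    elapsed = time.perf_counter() - start
    # A generous budget catches creeping, order-of-magnitude regressions
    # without flaking on ordinary hardware variation.
    assert elapsed < 1.0
```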

To wrap up

There’s always a point where testing must stop—otherwise, you’d be testing forever. It’s a trade-off between quality and the cost of ensuring it. 

Unfortunately, the cost of failure is much harder to track than the cost of QA. This creates short-term gains from skipping tests, but the long-term consequences can be serious. On the other hand, testing without delivering equivalent business value can be wasteful and might even weaken your competitive edge.

To decide when to stop testing, consider the risks associated with different testing types and what happens when testing efforts fall short:

  • Black-box testing: Your component becomes difficult to update, and necessary refactorings get postponed.
  • White-box testing: The internal structure drifts from its original design without tested documentation.
  • Unit and component testing: Regression bugs appear late in testing, making manual oversight costly.
  • Front-end testing: Your UI frequently breaks, leading to lost customers due to inaccessible features.
  • Performance testing: Slow regressions accumulate, causing unpredictable slowdowns while speculation spreads without concrete data.
  • Security testing: You get hacked.

If you've mitigated these risks, then you can safely stop testing. There are plenty of other valuable tasks that need doing —new features, documentation, recovery planning, performance optimizations. And, of course, all of those will eventually need testing too.

Stefan Dirnstorfer is the CTO at testup.io, where he focuses on creating an entirely visual workflow for test automation. With many years of experience as a software developer, he has now dedicated himself to a new mission in software quality assurance.