[Software] Testing is Unnecessary

Antonio Alexander
7 min read · Jun 1, 2023

I was six years into my programming career before I wrote a unit test (scout’s honor). Whether I was lucky or just that good, I’m living proof that testing your code isn’t necessary. I didn’t write run-of-the-mill code; I wrote code for test systems (ask me how I tested the test system, haha). But unlike some of you reading this, I was the sole author of my code; I wasn’t writing code for another developer or having to architect it a specific way so it could be supported and maintained by someone else. It was just me. Since then, I’ve had to develop code with the idea that other people would maintain it; I’ve had to sacrifice elegance to feel more confident that a new developer was less likely to introduce regressions, or that if a bug was found, we could reproduce it, create a test to confirm the fix, and use that test for future regression testing. But even with my more recent experience, I stand by the idea that testing isn’t necessary; if you do decide that testing is what you want to do, though, it deserves gravitas.

In contrast to LabVIEW, testing in Go is so much easier: it takes very little effort to add a test to code, especially if you’ve already written tests for it before. Tests exist on a spectrum: at one end you have unit tests, at the other end black-box tests, and in the middle a mix of integration/function tests. I think of this spectrum as being closer to or further from production.
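To put “very little effort” in perspective, here’s a minimal sketch of a Go test; the `Add` function is a hypothetical stand-in for real code under test:

```go
package example

import "testing"

// Add is a hypothetical function standing in for real code under test.
func Add(a, b int) int { return a + b }

// TestAdd is picked up automatically by `go test`; no framework needed.
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d; want 5", got)
	}
}
```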

The darker your test, the closer it is to production

The…darker your test, the closer it is to how you’d interact with your application’s functionality in production. In _production_ you’re generally limited such that:

  • you can only communicate via the exposed technologies (e.g., HTTP, gRPC, Kafka, etc.)
  • if your software is layered, you can only communicate with the outermost layer(s)
  • you’re at the mercy of any strange interactions with infrastructure
  • you’ve got limited access to administer infrastructure (e.g., no one will give you database admin privileges)
  • you have to FULLY implement protocols (e.g., Modbus, OPC UA, etc.)
  • you cannot easily see intermediary values (you can’t debug)

Yes, I know that in a lot of cases it’s totally possible to debug in production, but it’s impractical and doesn’t scale well. You shouldn’t do it, and it’s a bad counter-argument. Delve (a Go debugger) is FUCKING amazing, and it’s totally possible to compile Delve support into your container and do magic. **Don’t**.

Unit tests are the least complex: you provide a specific set of inputs and you get a specific set of outputs. Unit tests are <u>static</u>, meaning you will always get the same output for the same input (otherwise you’ll have a flaky test). Unit tests are less complex because they’re semi-finite and you’re generally testing pure(er) functions: functions that call no other functions, or only call functions specific to their purpose (e.g., you’re not testing an HTTP endpoint that has to do a database call). As a result, unit tests are often the most comprehensive and the easiest to write.
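In Go, this usually takes the shape of a table-driven test: a fixed table of inputs and expected outputs against a pure function. A sketch, with a hypothetical `Clamp` function:

```go
package example

import "testing"

// Clamp is a hypothetical pure function: same input, same output, no I/O.
func Clamp(v, lo, hi int) int {
	switch {
	case v < lo:
		return lo
	case v > hi:
		return hi
	default:
		return v
	}
}

// TestClamp feeds a fixed table of inputs and asserts fixed outputs;
// if this test ever flakes, Clamp is no longer pure.
func TestClamp(t *testing.T) {
	cases := []struct {
		name            string
		v, lo, hi, want int
	}{
		{"below range", -1, 0, 10, 0},
		{"in range", 5, 0, 10, 5},
		{"above range", 11, 0, 10, 10},
	}
	for _, c := range cases {
		if got := Clamp(c.v, c.lo, c.hi); got != c.want {
			t.Errorf("%s: Clamp(%d, %d, %d) = %d; want %d",
				c.name, c.v, c.lo, c.hi, got, c.want)
		}
	}
}
```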

In contrast, black-box tests are the tests closest to production: your application is in its _production_ form, and as a result you’re limited in your ability to test it. These tests are generally less exact and significantly less comprehensive due to the inability to control or affect intermediary processes or infrastructure. A black-box test can be considered unique because you could hypothetically run it against production, while unit tests are difficult to impossible to run that way.
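A black-box test sketch, under the assumption that the application is already running and exposes an HTTP health endpoint (the address and endpoint here are hypothetical):

```go
package example

import (
	"net/http"
	"testing"
	"time"
)

// TestHealthEndpoint talks to the deployed binary the same way a
// consumer would: over HTTP, with no access to internals.
func TestHealthEndpoint(t *testing.T) {
	client := &http.Client{Timeout: 5 * time.Second}

	// The address is an assumption; in practice it would come from
	// configuration or an environment variable.
	resp, err := client.Get("http://localhost:8080/health")
	if err != nil {
		t.Fatalf("could not reach application: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Errorf("GET /health = %d; want %d", resp.StatusCode, http.StatusOK)
	}
}
```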

Yes, I know it’s not _impossible_ to run unit tests against production; the caveat is that you have to take into account all the implications of doing so, because the test may not have been written to run safely against production. You may just be “that good”, but… try not to. It’s easier that way.

Function (or integration) tests are in the middle of the spectrum and generally involve anything that’s NOT a pure function: whenever you’re talking “through” layers or you’re testing a function that does two VERY distinct things, you’re no longer writing a unit test. There are some ways around this that I’ll describe below, but in general, if it’s not a unit test, it’s some kind of function/integration test.
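One way this looks in Go, using the standard library’s `net/http/httptest`: an HTTP handler that sits on top of another layer, so the test crosses two layers at once (the handler and store below are hypothetical):

```go
package example

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"testing"
)

// store is a hypothetical inner layer the handler depends on.
type store struct{ count int }

// countHandler crosses two layers (HTTP + store), so testing it is an
// integration/function test rather than a unit test.
func countHandler(s *store) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		s.count++
		fmt.Fprintf(w, "%d", s.count)
	}
}

func TestCountHandler(t *testing.T) {
	srv := httptest.NewServer(countHandler(&store{}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Errorf("status = %d; want %d", resp.StatusCode, http.StatusOK)
	}
}
```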

Regression testing isn’t a kind of test, but… **existing** tests run under new circumstances. Regression testing re-runs your existing tests to confirm that a change you’ve made to something else (often unrelated) hasn’t adversely affected existing functionality. Regression testing is a bit unique because it requires that you can run “old” tests against “new” code. This kind of testing is ideally comprehensive in terms of representing a functional cross-section of what your application is supposed to do from the perspective of your consumers.

Regression testing, especially automated regression testing, is easier said than done. You can do regression testing manually given something like Swagger, an application, or an alternative interface, but automated regression testing is the no-nonsense, regression-test-as-often-as-possible solution.

One of my main constraints for writing this article was to try not to get into the weeds, since tests can be very language-specific; what I can leave you with is a set of maxims or core ideas I’ve learned from testing that I think are universally applicable:

Developers shouldn’t write their own tests

DON’T DO IT. This is one of those things that’s very obvious if you take the time to think about it. One of the easiest ways to introduce bias/happy-path syndrome into your code is by allowing the developer who’s writing the code to ALSO write the tests for it. It’s a subconscious thing; they’re not doing it because they want to, it just happens that way. Better to have one developer write the tests and another write the functional code.

Integration tests using real infrastructure should be prioritized over abstractions where reasonable

In a perfect world, you can abstract infrastructure like Kafka to something simple like an in-memory queue and be relatively safe, but in practice, infrastructure can’t be compared to something as “perfect” as an in-memory message queue. With tools like Docker, there’s no reason to go through the trouble of abstracting Kafka into an in-memory queue; just use Kafka. Better to write your tests against infrastructure and figure out the quirks early on than in production.
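A hedged sketch of this idea: rather than abstracting the broker away, the test checks that a real Kafka (assumed here to be on `localhost:9092`, started out-of-band via Docker) is reachable and skips otherwise. The producing/consuming itself would go through whatever Kafka client the project already uses:

```go
package example

import (
	"net"
	"testing"
	"time"
)

// TestAgainstRealKafka assumes a broker was started out-of-band
// (e.g., via Docker Compose); the address is an assumption.
func TestAgainstRealKafka(t *testing.T) {
	conn, err := net.DialTimeout("tcp", "localhost:9092", 2*time.Second)
	if err != nil {
		t.Skipf("kafka not reachable, skipping integration test: %v", err)
	}
	conn.Close()

	// From here the test would produce/consume through the real broker
	// using the project's Kafka client, surfacing real-world quirks
	// (partitioning, rebalances, delivery timing) that an in-memory
	// queue would never reproduce.
}
```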

Try not to test things that are unlikely to occur; test them once they’ve occurred

Don’t waste your time coming up with test cases that are unlikely to occur; it’s an invitation for scope creep. You SHOULD have a way to add tests when weird situations DO happen: you should be able to re-create the situation and add it to your tests, because you now have proof that it’s a valid test case.
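In Go this usually looks like a test named after the incident, added only once the failure has actually been observed; the issue number and `ParsePort` function below are hypothetical:

```go
package example

import (
	"errors"
	"strconv"
	"testing"
)

// ParsePort is a hypothetical function that once crashed on empty input.
func ParsePort(s string) (int, error) {
	if s == "" {
		return 0, errors.New("empty port")
	}
	return strconv.Atoi(s)
}

// TestIssue42EmptyInput was added AFTER the failure was observed in the
// wild; the bug report is the proof that this is a valid test case.
func TestIssue42EmptyInput(t *testing.T) {
	if _, err := ParsePort(""); err == nil {
		t.Error(`ParsePort("") should return an error`)
	}
}
```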

Test-driven development can give you false confidence

Although I think it’s a really neat crutch for new developers, test-driven development can give false confidence: the tests become a safety net that can allow you to avoid understanding what the code you’re developing is supposed to do. You can fit a square block into a circular hole if the block is small enough.

Your tests can only EVER be as good as your architecture

I can’t stress enough how much your architecture plays into your ability to test. Sometimes code that works incredibly well is effectively untestable: to test something, you have to be able to isolate functionality by providing static input; depending on what’s exposed, that may not be possible, and you may only be able to do high-level tests that can fail to validate internal edge cases. Maybe it works, but you can’t tell if it’s working because it’s supposed to or because you got lucky.
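A small sketch of the architectural point: a function that reads the wall clock directly can’t be given static input, while the same logic with the clock injected can (the names here are illustrative):

```go
package example

import (
	"testing"
	"time"
)

// Untestable: the output depends on when the test happens to run.
func isExpiredUntestable(deadline time.Time) bool {
	return time.Now().After(deadline)
}

// Testable: the caller injects "now", so a test can pin it.
func isExpired(now, deadline time.Time) bool {
	return now.After(deadline)
}

func TestIsExpired(t *testing.T) {
	now := time.Date(2023, 6, 1, 0, 0, 0, 0, time.UTC)
	if isExpired(now, now.Add(time.Hour)) {
		t.Error("deadline in the future should not be expired")
	}
	if !isExpired(now, now.Add(-time.Hour)) {
		t.Error("deadline in the past should be expired")
	}
}
```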

Code coverage (especially visual coverage) is the BEST indicator of whether your test/code is effective

Code coverage in terms of quantity can give you some confidence that you’ve put the code through its paces; being able to see what code is (or can be) tested can give you an idea of the efficacy of your tests and code because… (see below)
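In Go, that visual view is built into the toolchain:

```
# run the tests and record a coverage profile
go test -coverprofile=coverage.out ./...

# render the profile as a line-by-line, color-annotated HTML view
go tool cover -html=coverage.out
```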

Untestable code shouldn’t exist

If you can’t test the code, you shouldn’t keep it. If a piece of code is unreachable, or you’ve tested “everything” and you’ve still got no code coverage for certain portions of code, you should ask yourself why it’s even there. You can exempt code that falls under “I don’t trust this API to do what it’s supposed to do safely”, because sometimes it’s more efficient to use a bad API.

Code coverage is a qualitative metric, never quantitative

Like all metrics, it has to be understood in context: although 80% code coverage is a good starting point, it can provide a false sense of security. Heat maps are a bit better, but the holy grail is to have a plan for testing and to verify that the plan was executed (that you hit all your test plans) by using code coverage.

Testing is unnecessary… it’s honestly a luxury, a nice-to-have. It’s something you do when you’ve got plenty of budget, time, and bandwidth.

Anchorman: testing is kind of a big deal

This is NOT me advocating for NOT testing… rigorously. Testing is kind of a big deal; it’s something that should touch every effort within your project, from planning to architecture to maintenance and support. Sometimes, if you’re not going to go all-in, it’s better to omit testing altogether than to do it halfway.
