Saturday, September 10, 2011

5 Reasons to Write Tests For Your Code

I hate to admit it, but I wasn't always an advocate of writing tests.  My past self understood tests to be good practice, but typically saw them as a luxury.  Like fine china, they were shiny, but not something I could afford.

That being said, if I could go back I would certainly have some brutal lessons to share.

So as my first post, here are five arguments that I believe would convince my past self to write programmatic tests.  I hope that these can convince others who are debating whether tests are worth the time.

1. Tests use programming skills to check programming skills.

I hate doing manual testing.  One of the reasons I became a developer is that I find repetitive tasks draining.  In his article Top Five (Wrong) Reasons You Don't Have Testers, Joel Spolsky explains why programmers make terrible testers, and for all the reasons he states, I'm certainly a good example of his hypothesis.

In contrast to repetitive manual testing, programmatic tests are still software, and writing them is a task that feels very natural.  Rather than forcing myself to do something outside my core competency, I can treat testing as a programming task focused on a different aspect of the problem.

I think of a test as a technique that uses a programmer's coding skills as an opposing, self-checking force.  In a way, using tests to check code is like designing two trucks and having them pull against each other; if nothing breaks, you know they both work.


If you can code, you can write tests, and there is nothing stopping you from using them to dramatically increase the quality of your software.  Rather than relying on manual testing, apply your programming skills in an opposing direction, and increase your code quality tenfold.
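
As a minimal sketch of this idea (the function and the expected strings are invented for illustration), a manual check that would normally mean running the app and eyeballing the output can be encoded as a few lines of test code instead:

```python
def format_price(cents):
    """Format an integer number of cents as a dollar string (hypothetical helper)."""
    return "${:,.2f}".format(cents / 100)

def test_format_price():
    # Each assertion replaces one round of manual eyeballing,
    # and the whole suite runs in milliseconds, every time.
    assert format_price(0) == "$0.00"
    assert format_price(199) == "$1.99"
    assert format_price(123456789) == "$1,234,567.89"

test_format_price()
```

The test is the same programming skill pointed in the opposite direction: the implementation pulls one way, the assertions pull the other.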

2. Tests force you to have a conversation with your code … and with yourself.

Writing tests for your code is a lot like going back in time and having a conversation with your past self.
As such, the key to reaching code quality is to ask tough questions and demand answers, as if it were an interview:

  • Is this complex exception-handling recovery logic really necessary?
  • Is this level of encapsulation tight and well defined?
  • Is this extra null check necessary?
  • Does this function really need four levels of nesting?
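
To make the interview concrete, here is a hedged sketch (the `mean` function and its behavior are invented for the example) of a test that asks one of the questions above: is the extra emptiness check necessary, and what should it do?

```python
def mean(values):
    """Average a non-empty sequence of numbers (hypothetical example)."""
    if not values:
        # Writing the test forced this decision: fail loudly on empty
        # input rather than silently returning a default value.
        raise ValueError("mean() of empty sequence")
    return sum(values) / len(values)

def test_mean():
    # The happy path answers: does the basic contract hold?
    assert mean([1, 2, 3]) == 2
    # The edge case answers: is the emptiness check really necessary?
    # Yes -- without it, mean([]) would raise a confusing ZeroDivisionError.
    try:
        mean([])
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")

test_mean()
```

Either answer to the interview question could be right; the point is that the test pins the decision down in code rather than leaving it implicit.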

The process of writing tests also reminds you to make your code consistent with the broader principles of the system.  Remember, specs are like water: easy to traverse only if frozen. Tests present an excellent opportunity to fix code written under an old spec that might have changed halfway through development.

This task of facing yourself is one of the most brutal, but also most powerful, aspects of software development.  Interview your code, and if you wouldn't hire it after the interview, change it.

3. Tests remind you that code is measured in lines spent, not lines written.

Edsger Dijkstra once suggested that lines of code should be regarded as lines spent rather than lines produced.  It's important to remember that repetitive or unnecessary code leads to cruft, maintenance burden, and more potential for bugs.

Writing programmatic tests not only helps eliminate unnecessary code, but rewards the effort twofold: less code written means less code to test.  Here are a few tips to save you time and complexity in your code:

  • Check whether common libraries, such as Apache Commons, Guava, or Boost, already have an implementation you can leverage.
  • Remember to store the minimum amount of state necessary to solve the problem.
  • Do a quick search of stackoverflow.com or GitHub to check whether someone else has already worked on the problem.
  • Beware of unnecessary logic, such as deep hierarchies of abstract classes, functions that don't pull their own weight, or checks for edge cases that are already handled or can never occur.
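
As a sketch of the first tip in action (the word-count helpers are invented for the example), a short test can justify deleting hand-rolled code in favor of a standard-library equivalent:

```python
from collections import Counter

def word_counts_manual(text):
    """A hand-rolled word-count loop -- code we wrote, and now must maintain."""
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def word_counts(text):
    """The same behavior, leaning on the standard library instead."""
    return Counter(text.split())

# A small equivalence test proves the replacement is safe, so the
# hand-rolled version (and the tests it would need) can be deleted.
sample = "to be or not to be"
assert word_counts(sample) == word_counts_manual(sample)
assert word_counts(sample)["to"] == 2
```

Once the test passes, the manual version is pure cost: every line it contains is a line spent, not a line gained.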

After writing tests for your code, you will find that your implementation complexity can often be decreased significantly; in many cases I’ve eliminated half the original logic.  Use tests to incentivize boiling down code to the core problem, thereby keeping maintenance and bugs to a minimum.  

4. Well-designed tests document the tested code.

Code comments are good practice for good reason.  When written correctly, comments couple documentation with the living code.  However, the weight that comments carry is only implied; for comments to work, developers must be trusted to keep them up to date.

This social contract doesn’t always work.  Documentation can get out of date, a broad refactoring may change an assumption, or a programmer may write obscure comments then leave the group, making the comments useless without context.

By contrast, tests are functional, self-checking documentation.  A test represents an assumption, and if the test fails, the assumption has changed.   A broken test presents objective evidence that either the code or the assumption is broken, and adds a weight that simple comments lack.

Well-written tests can be better indicators of code behavior than the code itself.  I’ve found that good test suites check the basic case, the edge cases, and a few pathological cases.  Together, these define the behavior of the code as it appears from the outside -- the “what” rather than the “how”.  
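
Here is a hedged sketch of what such a suite might look like (the `slugify` helper and its contract are invented for the example); read top to bottom, the assertions describe the behavior from the outside:

```python
import re

def slugify(title):
    """Turn a title into a URL slug (hypothetical helper)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    # The basic case: the "what" in one line.
    assert slugify("Hello World") == "hello-world"
    # Edge cases: whitespace runs collapse, empty input stays empty.
    assert slugify("  spaces  everywhere ") == "spaces-everywhere"
    assert slugify("") == ""
    # A pathological case: input with no usable characters at all.
    assert slugify("!!!") == ""

test_slugify()
```

A reader who knows nothing about the regex inside `slugify` can still learn its contract from the test alone, which is exactly the documentation role described above.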

Reading tests can be especially useful when trying to ramp up on existing code.  Next time you're poking around unfamiliar code, try looking at the tests before the code.  Armed with this extra information about expected behavior, you'll find your understanding of the implementation is much more thorough when you're done.

5. Tests give you transparency into code you didn’t write.

Whenever a group of developers programs together, the inherent complexity of managing the code is a tough task in itself.  If not handled properly, the possibility that any one of hundreds of submissions will break the system makes that task significantly harder.

When writing projects in a group, it's essential that everyone in the group be able to objectively evaluate the code contributed by others.  Without such a system, you will find that errors from unfamiliar code cause your team to lose confidence in the code as a whole.

Tests give you transparency into this unfamiliar territory.  A test failure in unfamiliar code points exactly to what's broken and which change caused the breakage.  I actually recommend that every check-in trigger the full test suite, and if something breaks, the change should be rolled back.  This gives everyone confidence in the canonical copy, saving the group from the worry that one developer's change will inadvertently break the entire system.

When convincing others in your team to write tests, remember: most of the little mistakes that we all make can be caught by the smallest of tests. A simple integration test can check your assumptions about an API.  One unit test ensures you don’t use the wrong overloaded function.  Even if you're not testing every case, just writing a few tests for common cases will give your team a healthy sense of the overall code quality.
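
One way to encode such an assumption is a tiny test against the API itself.  As a sketch, this pins down a real detail of Python's `str.split` that is easy to misremember, exactly the kind of thing a one-line test catches before it becomes a bug:

```python
def test_split_assumption():
    # Assumption under test: split() with no arguments collapses runs
    # of whitespace, while split(" ") preserves them as empty strings.
    assert "a  b".split() == ["a", "b"]
    assert "a  b".split(" ") == ["a", "", "b"]

test_split_assumption()
```

If a future library or language upgrade ever changed this behavior, the assumption test would fail immediately and point straight at the cause.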

For groups writing complex systems, I consider tests just as important as having good source code management.  Without good code management, nobody knows which copy of the code is canonical.  Without tests, no one knows if the canonical copy is reliable.

Where Do I Start?

For those of you who found this article helpful but don’t know where to start, here are a few resources:


I would also recommend looking into Test Driven Development, which is the philosophy of writing tests prior to writing implementation.  There’s a great Software Engineering Radio podcast about Testing, TDD that discusses TDD usage in the wild.
