Testing is a cross to bear. It’s tedious, time-consuming and not half as much fun as the creative coding itself. But developers can’t get around it. We’ve compiled the four most common mistakes in testing and how to avoid them.

Code testing is not one of developers' favorite tasks and is often avoided or outsourced. But there are exceptions, like Matt Lacey, who spends more time testing than coding. Lacey believes that continuous code testing increases his productivity, but only if it is done properly. In his article “11 Things You’re Doing Wrong When Testing Code,” he has compiled the most common mistakes developers stumble over. We take a look at the four most serious ones.

There is no way around testing

The biggest mistake most developers make is not testing at all. There is no excuse for this: neither a lack of motivation nor the fact that others don’t test their code either is a good reason.

Likewise, pointing out that a project is already underway does not justify skipping tests. Rather, it is precisely in such situations that planning for tests should begin, even if integrating them subsequently requires changes to the architecture. In general, building testing into a project from the beginning can save a lot of money and effort down the road.

It is not about failing tests

Because test-driven development enjoys great popularity, the red-green-refactor cycle is used more and more often in software development: red signals a failing test, green a passing run. However, many developers misinterpret the idea behind test-driven development. It is not primarily about continuously writing failing tests, making them pass, and starting the cycle over again. Rather, creating tests before the actual programming work is meant to determine in advance how the system should behave.

Failing tests are thus not the point of the exercise, but a by-product of test-driven development. In this context, it is important to give tests names that allow unambiguous conclusions about the behavior expected of the system. Which naming conventions are used is up to each team; what is crucial is that the names are consistent and point clearly to the behavior under test. In this way, tests can provide important information.
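As a minimal sketch of such descriptive naming, here are two tests in Python's unittest style; the ShoppingCart class is a hypothetical example invented for this illustration, not code from Lacey's article.

```python
import unittest

# Hypothetical class under test, invented for this example.
class ShoppingCart:
    def __init__(self):
        self.prices = []

    def add(self, price):
        self.prices.append(price)

    def total(self):
        return sum(self.prices)

class ShoppingCartTest(unittest.TestCase):
    # The names state the expected behavior, so a red test already
    # tells the reader what broke without opening the code under test.
    def test_total_of_empty_cart_is_zero(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_prices_of_added_items(self):
        cart = ShoppingCart()
        cart.add(2)
        cart.add(3)
        self.assertEqual(cart.total(), 5)
```

Run with `python -m unittest`; a failure report then names the expected behavior directly.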

Too much, too one-sided, too short

Naming also exposes the next mistake: one of the most common causes of bad tests is that too much is tested at once, and long, complicated names are a good indicator of this. A good test checks only one thing at a time. If a test fails, its name should make the cause obvious; if you have to dig through the code to locate the error, something is going wrong. This does not mean that several things may never be checked in the same test, but if they are, care should be taken that they are closely correlated.
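A rough sketch of the difference, using a hypothetical parse_price function invented for this example: the first test bundles several behaviors under one long name, while the focused tests below each name exactly one expectation.

```python
# Hypothetical function under test, invented for this example.
def parse_price(text):
    currency, amount = text.split()
    return currency, float(amount)

# Too much at once: a long name and several loosely related assertions.
# If this fails, the name alone does not say which behavior broke.
def test_parse_price_handles_currency_and_amount_and_whitespace():
    assert parse_price("EUR 9.99") == ("EUR", 9.99)
    assert parse_price("  USD 5 ".strip()) == ("USD", 5.0)

# Focused: each test checks one thing, and its name points straight
# at the expectation, so no digging through code is needed.
def test_parse_price_returns_currency_code():
    assert parse_price("EUR 9.99")[0] == "EUR"

def test_parse_price_returns_amount_as_float():
    assert parse_price("EUR 9.99")[1] == 9.99
```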

Often, however, testing is not only too broad but also too one-sided. If you commit too strongly to one test procedure, other options fall by the wayside, and no single procedure can cover all parts of a system. Unit tests, for example, are needed to ensure that individual components work properly, integration tests confirm that the various components work together, and automated UI tests verify that the software behaves as intended.
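As a small illustration of the first two levels (the format_money and build_report functions are invented for this sketch), a unit test and an integration test of the same tiny system might look like this:

```python
# Two hypothetical components, invented for this sketch.
def format_money(cents):
    return f"{cents / 100:.2f} EUR"

def build_report(prices_in_cents):
    return ", ".join(format_money(p) for p in prices_in_cents)

# Unit test: checks one component in isolation.
def test_format_money_renders_two_decimal_places():
    assert format_money(150) == "1.50 EUR"

# Integration test: checks that the components work together.
def test_build_report_joins_formatted_prices():
    assert build_report([150, 99]) == "1.50 EUR, 0.99 EUR"
```

If only the integration test existed, a rounding bug in format_money would surface as a vague report failure; the unit test pins it to the right component.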

In addition, tests only develop their full potential over time. They are not just meant to confirm that something was programmed correctly once. Rather, they must be able to guarantee proper functioning even after further changes to the code base. To detect and fix problems as early as possible, tests must therefore be run again and again; therein lies the advantage of fast, automated tests.

Especially in the context of a continuous integration system, automating tests is a good start. However, it should not be overlooked that everyone involved in a project must be able to run the tests. Testing is often needlessly complicated by special setups, releases or configurations, which separates it from coding. Developers must be able to run tests on their own before, during and after their programming work; otherwise, the quality of the software cannot be continuously assured.

Code coverage is not decisive

While code coverage is a good thing, it is of only limited value in code testing: how much code is executed during a test allows few conclusions about the quality of the test suite. At most, coverage provides helpful hints. If coverage is high, this may mean that there are already enough tests and that another one will not provide additional insight. If, on the other hand, coverage is low, this can be an indication that there are not yet enough tests.

But even here, caution is advised, as high code coverage can also be achieved with tests that do not check anything. Tests that only aim to increase code coverage are useless in most cases. But then, how can it be determined whether a code block needs to be tested at all?
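One way to see why coverage alone proves little: both tests below execute every line of a hypothetical apply_discount function (invented for this example), so a coverage tool reports 100 % either way, yet only the second test would catch a bug.

```python
# Hypothetical function under test, invented for this example.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# This "test" executes every line, so coverage reports 100 % --
# but it asserts nothing and would pass even if the logic were wrong.
def test_apply_discount_runs():
    apply_discount(100.0, 25)

# This test covers the same lines but actually checks behavior.
def test_apply_discount_reduces_price_by_percentage():
    assert apply_discount(100.0, 25) == 75.0
```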

The question can be answered as follows: if the code exhibits non-trivial complexity, testing is required; if it does not, no testing is required. Code sections have non-trivial complexity if their function cannot be grasped at first glance during programming or in the review process. Thus, if errors are found in existing code sections, this is usually because their complexity was not sufficiently checked.
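A small, hypothetical illustration of this rule: the data-holding class below is trivial and hardly worth a test, while the leap-year logic is easy to get wrong at a glance and therefore deserves one.

```python
# Trivial: the behavior is obvious at first glance, a test adds little.
class Invoice:
    def __init__(self, number):
        self.number = number

# Non-trivial: the century rules are easy to get subtly wrong in review,
# so this logic is worth testing.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_century_years_are_only_leap_if_divisible_by_400():
    assert not is_leap_year(1900)
    assert is_leap_year(2000)
```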
