To test or not to test

I've been coding in C/C++ by myself for years, but now that I'm at uni (and therefore have to use *shudder* java), I'm forced to do something I've never done before - unit testing. While I can definitely see the benefits of spending time on unit tests if you work with a lot of people on a large project, I can't help but feel like it throws me off. Having to go back and forth between test code and actual code, testing every bit of added functionality as I go, really disrupts my "mental flow" when coding. I don't know how to explain it, but when I'm really getting into a problem, following long trails of events in the code, having to snap out of that and write test code is a real annoyance. Anyone else feel this way, and if so, got any tips to get past that? No flamewar intended here; I don't even really know whether unit testing the Java way is common in the C++ world or not.

EDIT: Of course code needs to be tested; what I'm talking about is specifically the TDD style of development.
It's fundamental to making sure that code stays consistent. It allows changes to happen to the code while making damn sure that the results stay the same.

The correct answer to whether or not you should do unit testing is yes, you should test. Realistically, though, it's not common...

In large projects, unit testing is a requirement, almost a project of its own. If you look at Mesa, you'll notice a large number of Piglit tests: http://people.freedesktop.org/~nh/piglit/

It's impossible to test every feature of a large project by yourself in a timely manner, and it's cumbersome to do it asynchronously among a large group.
I expect (and usually find) any serious large C++ project to require unit tests to accompany every checkin. The programmer writes them; QA verifies and signs off.
"Testability" is actually a major design requirement where I work - when designing classes/functions/components, we're supposed to think about how they can be properly unit-tested.

(this has nothing to do with "TDD-style of development", though)
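
To illustrate what "designing for testability" tends to mean in practice - this is a made-up sketch, not code from our codebase - a component takes its dependencies through an interface, so a unit test can substitute a fake and get deterministic outcomes:

#include <cassert>

struct Clock {                        // the seam a test can replace
    virtual ~Clock() = default;
    virtual long now() const = 0;     // e.g. seconds since epoch
};

class SessionManager {
public:
    explicit SessionManager(const Clock& clock) : clock_(clock) {}
    bool isExpired(long startedAt, long ttlSeconds) const {
        return clock_.now() - startedAt > ttlSeconds;
    }
private:
    const Clock& clock_;
};

// In a unit test, a fixed fake makes the result deterministic:
struct FakeClock : Clock {
    long t;
    explicit FakeClock(long time) : t(time) {}
    long now() const override { return t; }
};

int main() {
    FakeClock clock(1000);
    SessionManager sessions(clock);
    assert(sessions.isExpired(0, 500));     // started at 0, TTL 500s -> expired
    assert(!sessions.isExpired(900, 500));  // started at 900 -> still valid
    return 0;
}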
But would you rather write tests first and then code, like TDD "says you should", or do you make an effort to write testable code so you can write the tests at another time? Another thing: if the specification changes, wouldn't you have to rewrite both the code and the tests, giving you a lot more work?

EDIT: I dunno, maybe I'm just put off by the idea because we have a very small codebase but are forced to write a crazy amount of tests for it. Right now it only seems like a hassle, but maybe (probably) I'll learn to appreciate it if I work on a large-scale project.
My preferred sequence is preliminary design - working prototype - preliminary performance metrics - detailed design (this is where testability is one of the considerations) - implementation - unit tests (whitebox) - performance metrics - code review - checkin. There may be overlap and backtracking as needed.

Our main codebase is 175 MLoC at a recent count (there are separate repositories too, but I don't have stats on those). Each individual component is small, and its tests only deal with that individual component's code, assuming everything lower in the hierarchy of library components has already been tested.
Unit testing is valuable, but not for every kind of code. Heading towards 100% code coverage is a stupid waste of resources that could be spent on better ways of achieving high code quality.

1. Adding unit tests increases the chance of finding bugs, but 100% test coverage won't find 100% of the bugs. Tests don't give any guarantees; that's the problem. There is a point of diminishing returns. Many bugs stem from interactions between components (you may have perfectly tested components that work fine alone, but not together), from external systems (e.g. a bug in the browser), or from logic errors (not understanding how the system should work). These cannot be caught by unit tests.

2. Adding unit tests adds maintenance cost. It is not only the cost of initially writing the tests but also the cost of updating them whenever the requirements change.

3. Unit tests written by the same person who writes the code tend to have exactly the same bugs that the code has. If you forgot to handle some edge case in your code, you'll forget it when writing the tests too.

4. TDD is bullshit and doesn't really work. If you do not know how to properly structure your design, no amount of TDD wizardry is going to help you. And if you already know how to design your software right from the start, you don't need TDD either.

What to do instead:
1. Write unit tests only for complex code that has a simple API, doesn't have many functional dependencies on other components, and has well-defined, easy-to-test outcomes. E.g. sorting is a good candidate. Finding a shortest path is a good candidate. Code proxying requests from one system to other systems based on URL patterns is a *bad* candidate. Generally, any test that requires writing more code than the implementation under test, or test code that is too similar to the implementation, is a sign of a useless test. (There's a small sketch of what I mean right after this list.)

2. Have a QA team do automated functional testing of the whole product or its big components.

3. Do code reviews.

4. Learn to leverage the static type system to catch bugs for you (essentially making buggy programs not compile). (See the second sketch after this list.)

5. Use static code analysis tools (best integrated with IDEs).
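
To make point 1 concrete, here is a minimal assert-based sketch of the kind of test I consider worthwhile: a small pure function with a well-defined outcome. No test framework is assumed here; a real project would use whatever harness it already has.

#include <algorithm>
#include <cassert>
#include <vector>

// Small pure function with an obvious, well-defined result.
std::vector<int> sortedCopy(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return v;
}

int main() {
    assert(sortedCopy({3, 1, 2}) == (std::vector<int>{1, 2, 3}));
    assert(sortedCopy({}) == std::vector<int>{});                   // edge case: empty input
    assert(sortedCopy({5}) == std::vector<int>{5});                 // edge case: single element
    assert(sortedCopy({2, 2, 1}) == (std::vector<int>{1, 2, 2}));   // duplicates preserved
    return 0;
}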
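
And for point 4, a rough illustration (the Meters/Seconds names are invented for the example): distinct wrapper types turn a whole class of argument-swapping bugs into compile errors instead of runtime surprises.

#include <cassert>

struct Meters  { double value; };
struct Seconds { double value; };

// A plain (double, double) signature would happily accept swapped arguments;
// the wrapper types make the compiler enforce the meaning of each parameter.
double speed(Meters d, Seconds t) { return d.value / t.value; }

int main() {
    double v = speed(Meters{100.0}, Seconds{9.58});   // ok, ~10.44 m/s
    assert(v > 10.0 && v < 11.0);
    // speed(Seconds{9.58}, Meters{100.0});            // would not compile: types swapped
    return 0;
}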
Cubbi, that approach makes a lot more sense to me :)

rapidcoder: Thanks for some interesting thoughts! Our assignment right now seems way beyond that point of diminishing returns :P Your point 3 is exactly what happened to me last assignment, haha :) Interesting idea about leveraging the static type system; that seems much neater.
I disagree with #3 the most, but we've been over this. Our reliability requirements are different.