How many C++ programmers TDD?

I'm just curious. I have a feeling it isn't as popular in this community as in others, but I have nothing to base that on; just a gut feeling, I guess.
I avoid it in personal projects because writing the tests is just as much work as (or even more than) writing the actual code. It effectively doubles your workload for very little benefit.

In large group projects it makes sense, but when doing my own personal stuff it's a complete waste of time. Manual testing works just fine.
We write a lot of tests, but never use TDD. Design is driven by the business requirements (but it certainly takes testability into account, along with many other software design principles).
Design and methodology fads come and go. Anyone remember "agents" and "middleware"? Most have a small good idea that is then wrapped up in a hundred layers of fluff.

TDD is no different. It's good to think about testing and debugging when you design the software. Where I work, we always create a small command line utility that provides direct access to whatever API is being created. This proves invaluable in unit testing and even in support.
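Such a wrapper is usually only a few lines per entry point. A rough sketch of the idea, where widget.h and widget::resize() are just made-up stand-ins for whatever API is being exposed:

// Tiny command-line front end that forwards arguments straight to the API,
// so testers and support staff can poke at it without writing any code.
#include <cstdlib>
#include <iostream>
#include <string>

#include "widget.h"   // hypothetical API under development

int main(int argc, char* argv[])
{
    if (argc < 2)
    {
        std::cerr << "usage: widget_tool <command> [args]\n";
        return EXIT_FAILURE;
    }

    const std::string command = argv[1];

    if (command == "resize" && argc == 4)
    {
        bool ok = widget::resize(std::atoi(argv[2]), std::atoi(argv[3]));
        return ok ? EXIT_SUCCESS : EXIT_FAILURE;
    }

    std::cerr << "unknown command: " << command << '\n';
    return EXIT_FAILURE;
}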
FDD, TDD, BDD, DCI, etc., etc. I am starting to wonder if they are simply tickets to high-paying speaking engagements, books, and popularity. Not that there isn't value in them of course.
closed account (z05DSL3A)
So what does TDD mean to you all? It sounds like it may be different to what I think it is.
What TDD means to me (rough code sketch after the list):

1) Design/write the interface for your class
2) Write tests that exercise that interface... giving fixed input and expecting fixed output.
3) All tests will initially fail because the class hasn't been written yet... you only have the interface.
4) Begin writing class implementation.
5) More and more tests gradually pass as the functionality gets added
6) Once all tests pass, you know the class is complete and functional (assuming you have tests which exercise all functionality)
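Something like this, with a made-up Stack class and plain asserts standing in for a real test framework:

#include <cassert>

// 1) The interface is written first; there is no implementation yet.
class Stack
{
public:
    void push(int value);
    int  pop();
    bool empty() const;
};

// 2) Tests exercise that interface with fixed input and expected output.
void test_new_stack_is_empty()
{
    Stack s;
    assert(s.empty());
}

void test_push_then_pop_returns_same_value()
{
    Stack s;
    s.push(42);
    assert(s.pop() == 42);
}

int main()
{
    // 3) With only the declarations this won't even link; once empty stubs
    //    exist, the asserts fail until the real implementation arrives.
    // 4)-6) As functionality gets added, the tests pass one by one.
    test_new_stack_is_empty();
    test_push_then_pop_returns_same_value();
}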
The reality of it is much simpler (sketched in code after the list):

1. write a failing test
2. make it pass by writing the simplest code possible, e.g. just returning the exact value the test checks for.
3. add another test that breaks that implementation, because the first one isn't really testing anything yet (it's cheating).
4. get the code working right, and get back to green as soon as possible.
5. refactor

6. cook at 450 degrees for 1 hour, and repeat.
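The cheat-then-break part looks something like this, with a made-up add() function and plain asserts rather than any particular framework:

#include <cassert>

// Step 2: the "simplest code possible" -- a hard-coded cheat that
// satisfies the first test only.
int add(int a, int b)
{
    return 5;   // cheating: just return the value the first test expects
}

int main()
{
    // Step 1: the first (initially failing) test.
    assert(add(2, 3) == 5);

    // Step 3: a second test that exposes the cheat; this assert fires.
    assert(add(10, 20) == 30);

    // Step 4: only now is add() rewritten as `return a + b;` so both
    //         tests go green. Step 5: refactor, then repeat the cycle.
}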
Step 2 doesn't make any sense. Why would you waste time and effort writing a fake fix just to get a test to pass in a meaningless way?

In all the automated testing I did in my last job, failures were kept as failures, and implementing "fake fixes" just to get the tests to pass was a huge no-no. Like it was one of the worst possible things you can do.

Instead, failures that were known to fail (due to test instability, or due to functionality that was incomplete or absent) were set to "ignore" -- so the tests would still fail, but the failures would not prevent code submissions.
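That isn't exactly the mechanism we had, but for a concrete example of the same idea in C++: Google Test lets you prefix a test name with DISABLED_, so a known failure still compiles but is skipped by default instead of blocking check-ins. The parse_version() function here is made up for illustration:

#include <gtest/gtest.h>

// Hypothetical function under test, stubbed so the example is self-contained.
int parse_version(const char* s) { return (s && s[0]) ? 1 : 0; }

TEST(Parser, RejectsEmptyInput)
{
    EXPECT_EQ(parse_version(""), 0);   // passes against the stub
}

// Known failure: the real parsing isn't written yet. The DISABLED_ prefix
// keeps it compiling but skips it by default, so it can't block a check-in;
// run it on demand with --gtest_also_run_disabled_tests.
TEST(Parser, DISABLED_ParsesDottedVersion)
{
    EXPECT_EQ(parse_version("1.2.3"), 10203);
}

// Link against gtest_main; no main() needed.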
closed account (z05DSL3A)
I was waiting to see if any of the others replied. I agree with Disch, it looks like we have the same understanding of TDD.

My problem is not with TDD itself, but with not being able to fit it into my workflow for my 'home projects' (different systems and tool chains, and no standardised test framework).