Test-driven development questions and which book to read?

Hello!

I want to learn test-driven development, but I can't find any resources online that are worth reading. Is there a good book that can teach me?

Basically, all I want to know is:

1) How to properly create unit tests, even for a complicated project like a game engine or an operating system, which runs in a never-ending loop.

2) How to detect memory leaks by overloading new and delete.

3) How to print a warning every time I try to access or delete an already-freed object.

Now let me tell you my solutions without having ever studied these topics.

As for detecting memory leaks, I'm not sure, but I've heard that you just need to overload new and delete so that each one increments a counter every time it is called; if the two counters balance, you are OK.

My concerns are mostly about question 3. What I thought of doing is this: every time I call delete on an object, I append to a list the memory address of that object, plus the __LINE__ and __FILE__ where that delete occurred. Every time I call new, if the returned address already exists in the list (meaning the address is being reused), I remove it from the list. Finally, every time I try to access the members of an object, if its address is in the list, then it's an invalid access of freed memory, and I print the file and line of the access plus the file and line where the delete occurred (which I stored in the list). So basically I can tell when I'm accessing a freed object, and also where that object was freed.

Is that good?

Thanks.
A lot depends on your OS/compiler/IDE. Valgrind is popular on *NIX. I've used Visual Leak Detector with Visual Studio on Windows. There is also _CrtDumpMemoryLeaks, built into the Visual C++ runtime (look it up and research the related details).

Those handle 2 & 3. You get reports of leaks (on some tools, with references to the source lines they came from). The better tools will also warn you about referencing freed objects, or about using memory that was never allocated.

That said, if you seriously enforce a coding standard using std::unique_ptr and/or std::shared_ptr (with the related std::make_unique and std::make_shared), you eliminate the primary source of leaks.

Put another way: stop using new and delete. They're not exactly modern C++. They are there to be used when required, but one must seriously define "required" in the context of dynamic allocation of objects.

If you can't avoid them (sometimes one creates specialized containers, or must process large buffers for images, video and audio), use the memory tools mentioned above.

As to 1 - first, you must design and implement code that can be tested in isolation. That's one of the key points. You do not test code at large, with complex interrelationships. You test small units. For example, if you were to make your own version of std::vector, you'd test that vector class with all permutations of usage. You would not test it in situ (inside working code). You'd first verify that the vector class (and possibly its related family of classes/functions, if any) is solid. Then you'd know it's ready.

There are formalized test systems, some built into IDEs. However, testing can be as simple as writing a "console" application (unless, of course, you're testing a GUI or some 3D graphics software) which presents example data and runs through all of the interface features of the code being tested, under all worst-case conditions. For example, make sure it is overloaded (runs out of RAM) so you know how it behaves.

You may find example material to study by trying out C++ libraries that include testing applications. One of the most common uses of tests is in compilers. If you download and build the LLVM tools, there are compiler tests which exercise the compiler. That is, admittedly, a kind of violation of what I just wrote: it tests a complete compiler. It is a special-case situation. Any sub-systems (their own containers, parsers, etc.) have been individually tested during development; the LLVM tests are intended to demonstrate that the overall build was correct. They exercise virtually every keyword and every logical formation required for language compliance.

What does not constitute genuine tests are the example programs that come with some libraries. They may demonstrate how to use the library, and can prove the library built to a functional state, but they may not be designed to exercise every feature to its worst-case scenario.

Counter to what I've underscored, there are times when code uses a number of components built in house (in-house libraries, or families of classes/functions) which must be tested in combination. This looks a lot like testing the entire application, which is generally a bad idea (relative to your question), but the central focus is that worst-case operation is tested and verified: that it doesn't leak, doesn't crash, performs the tasks expected, and handles out-of-memory conditions, lost files, missing drives, etc. In such situations the components should each be tested in isolation first, but then the code which combines them deserves to be tested as it consumes those components.

The key is to combine these components in as isolated a way as possible, focused on proving that the code which consumes the in-house components does what it is expected to do (having already proven that the components it consumes survive their tests individually).

Subsequently, the habit should be to keep these tests in a known location, for easy access. If you ever find a bug, you may need to re-use the tests (and expand them to expose the bug).

Think of code as a machine built of logic, math and language. If you built a car, you would not test compression by driving. You'd use a compression meter to test each cylinder with the car in a garage (and not actually running, just turning the engine without fuel or spark). If you're testing valves, they're tested with the cylinder head removed, sitting on a work bench. They're measured, installed and tested for an air-tight seal. After the head is proven to be up to spec, it is installed back onto the engine block, and compression is again tested there. It is assumed the head works at that point, but the compression test (after re-installation of the head) is to verify that the seal between the head and the engine block is correct (and that the piston rings are sealing, too).

When that proves correct, spark plugs and fuel injectors are installed/connected and the engine can be started to verify all is working - before the car is again taken to the road.

This is the kind of "testing in isolation" one applies to software, to make it function more like a machine instead of a jumble of instructions.


First of all, your answer was very informative and I thank you for that! But I'm still a little confused about unit tests. I understand why they should test small pieces of my code (that's why they are called unit tests), but I still don't get how to design them when most of my classes have complex interrelationships, as you said. When I try to test a class, its functionality depends on another class, which might depend on something else, and so on.

Let me give you an example. I have a game engine that works somehow like this.

creation of application ->
the application creates the window ->
the application creates the scene ->
the application loads the game objects into the scene ->
the application goes through each object in the scene, draws its graphics, and calls a method OnUpdate() on each game object, which runs code (a script) specific to that game object.

How do you write unit tests for that? Should I make a test for scene creation, for example, which instantiates a new scene object without doing anything else (like creating the application and the window)?

I mean, it's wrong, in order to test the scene, to write a test that goes through all of that, right? (Creating the application, then the window, and then the scene.) That would not be a unit test.

When it comes to functions, it's easy: functions take some parameters and return a result. If the result is not what's expected, the test fails.

One reason I want to test classes is to make sure that member variables have the correct values. I make a lot of mistakes where I'm initializing things, something goes wrong, and the members end up with wrong values. Testing methods is important too.
By the way, this is the best solution I could think of for question 3. Basically, I keep track of who is using my object, and after deletion I set all of those pointers to null.

#include <iostream>
#include <string>
#include <vector>

class Person
{
public:
    std::string m_name;

    Person(std::string name, Person **src) : m_name(std::move(name))
    {
        m_references.push_back(src);
    }

    // Null out every registered pointer so stale references are obvious.
    ~Person()
    {
        for (Person **p : m_references)
            *p = nullptr;
    }

    void print() const
    {
        std::cout << m_name << '\n';
    }

    // Keep track of who is using this object.
    Person *get(Person **src)
    {
        m_references.push_back(src);
        return this;
    }

private:
    std::vector<Person **> m_references;
};

int main()
{
    Person *a = new Person("Nick", &a);
    Person *b, *c;

    b = a->get(&b);
    c = a->get(&c);

    a->print();
    b->print();
    c->print();

    delete a;

    // All these references should be null after deletion of a.
    std::cout << "a: " << a << '\n';
    std::cout << "b: " << b << '\n';
    std::cout << "c: " << c << '\n';
}
I still don't get how to design this if most of the classes have complex interrelationships as you said.

Unit tests don't test the interrelationships of anything. They're specifically intended to test only that a specific unit of code behaves in the way it's supposed to behave.

If you're thinking that this means they're of limited value, you're right. You can't just rely on unit tests to ensure your application works - you'll also need to devise bigger tests to test those interactions.

But having comprehensive unit tests enables you to have confidence that individual classes and functions do what they're supposed to, and to catch some of the bugs that might lead to observable problems in the wider system.