Coding for "debuggability"

Do you code for debuggability? (Edit: eliminated a little bit of jargon.)
When you code, do you try to make it easy/easier to debug?

Along with readability and testability, I place "debuggability" quite highly.

Going by some of the code fragments I see in these forums, I am wondering if this is something that should be instilled in fledgling programmers right from the outset, by example and explanation (the basics of how debuggers work).

Do you think about debugging, along with everything else, when you code?

If so, are there things you do or do not do to make code more debug friendly?

If not, is this because other things matter that much more to you, or because you don't believe in debuggers? (I recently encountered my first commercial developer who has never used a debugger in 10+ years!)

I think debuggability obviously depends on readability. If you're stepping through hieroglyphic code, it's a lot less pleasant than it ought to be. But there are a number of things beyond readability that have caused me grief.


PS I have found Thumper's thread "Debugging Your Program" from Sept 2010
but I see it didn't make it to an article. Do you know of any more recent articles?
Define "debuggability".
Well, I meant it in the usual meaning?


debuggability (countable and uncountable; plural debuggabilities)

(uncountable) The quality of being debuggable; capability of being (easily) debugged.
(countable) The extent to which something can be debugged

That doesn't answer the question, it just changes it to "What makes something easy to debug?"
@ andywestken: I'm with chrisname on this. I know that the scope of programs I've written is extremely limited compared to yours, but I've only ever found debuggers useful for examining other people's binary files (I have a recent story about that, but I won't derail your thread). I have my code return or log specific error codes on failure, but that's about it.
Well, on reflection, I should just have asked "Do you code to make it easy/easier to debug?", as that is exactly what I meant. I guess I've become more habituated to jargon than I should be.

Do you never step through your code to see what it's up to?

For example:
* Prefer standard containers over library-specific containers (e.g. QString) if there's no difference, since they're typically easier to watch.
* Use _DEBUG to trim the data set, as the debugger overhead can get quite large.
* I've sometimes used this:
if (some_condition)
    break_here = true;   // plain bool flag; set an ordinary breakpoint on this line
because it's faster (that is, it makes the program run faster) than setting a conditional breakpoint.
When I code, I put in messages that display to a separate window with information relevant to the current stage of the program. I also put in a universal boolean variable to turn these messages on and off.

I think this helps me debug because it lets me know exactly where a program crashes or messes up.
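That pattern might look something like this (a minimal C++ sketch; the names `g_debug_messages` and `DebugMsg` are mine, not the poster's):

```cpp
#include <iostream>
#include <string>

// Universal toggle for diagnostic messages (hypothetical name).
bool g_debug_messages = true;

// Prints a stage message when the toggle is on; returns whether it
// printed, which also makes the behaviour easy to test.
bool DebugMsg(const std::string& stage)
{
    if (!g_debug_messages)
        return false;
    std::cerr << "[debug] " << stage << '\n';
    return true;
}
```

A real program might route this to a separate window or a log file instead of std::cerr, but the on/off switch works the same way.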
There are a number of "defensive programming" things that I have done in my code. Most of them force the programmer to do the right thing in terms of how the objects work together.

For classes that should not be instantiated into an object, I make their constructors protected so that any such attempt results in a compilation error.
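For example (a sketch; the class names are mine, loosely following the surveying classes discussed below):

```cpp
// A base class that should never be instantiated on its own:
// the protected constructor means only derived classes can call it,
// so constructing a CoordBase in outside code is a compile-time error.
class CoordBase
{
protected:
    CoordBase(double e, double n, double h)
        : east(e), north(n), height(h) {}
public:
    double east, north, height;
};

class PlaneCoord : public CoordBase
{
public:
    PlaneCoord(double e, double n, double h) : CoordBase(e, n, h) {}
};

// CoordBase b(0, 0, 0);   // error: constructor is protected
// PlaneCoord p(1, 2, 3);  // fine
```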

Another thing is to disallow certain operations that don't make sense for the objects.

For example, in my Land Surveying project, I have several types of coordinates - Plane, Assumed & Geodetic. There are several types of each of those as well.

Firstly, it doesn't make sense for them to be added to each other, because they have different meanings (even though they look very similar: they all have East, North & Height ordinates), so I don't provide operators to do this. I need to figure out how to prevent assignment between them as well, preferably producing a compile error, not some kind of runtime error.

Secondly, when dealing with absolute (world) coordinates of the same type, it still doesn't make sense to add them - a typical coordinate might be 300000.000, 7000000.000, 200.000. So I have another class called Delta, and I provide operators to add Deltas to each of the other Coord types.
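A sketch of how that can look (the names are my own reconstruction, not the poster's actual code). Providing only the Coord-plus-Delta operator means accidental Coord-plus-Coord arithmetic simply fails to compile:

```cpp
struct Delta { double dE, dN, dH; };

class PlaneCoord
{
public:
    PlaneCoord(double e, double n, double h)
        : east_(e), north_(n), height_(h) {}

    // Coordinate + Delta makes sense, so it is provided...
    PlaneCoord operator+(const Delta& d) const
    {
        return PlaneCoord(east_ + d.dE, north_ + d.dN, height_ + d.dH);
    }
    // ...but there is deliberately no operator+(const PlaneCoord&),
    // so adding two absolute coordinates is a compile error.

    double East() const   { return east_; }
    double North() const  { return north_; }
    double Height() const { return height_; }

private:
    double east_, north_, height_;
};
```

As for preventing cross-type assignment: if the Plane, Assumed and Geodetic classes have no converting constructors from each other, such assignment is already a compile error; an explicitly deleted operator (e.g. `PlaneCoord& operator=(const GeodeticCoord&) = delete;`) can document the intent.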

In surveying there are a lot of things that look very similar but are actually rather different. A good example is distances. I have several types of distances - Horizontal, Vertical and Slope. Again, it doesn't make sense to do operations between these, and I need to be able to use them in overloaded functions. So I create a class for each one, which is a bit of a pain because each class has only one member variable, and I have to create an object for each one rather than just using a double. I use them in function calls like this:

CreateNewPt(ExistingPt, Bearing, ZenithAng, SlopeDist);
CreateNewPt(ExistingPt, Bearing, HorizDist, VertDist);


Bearing and ZenithAng are examples of different types of angle - horizontal & vertical planes respectively.
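A sketch of the wrapper idea (all names and formulas here are my reconstruction, using the usual surveying conventions: bearings measured clockwise from north, zenith angles from the vertical):

```cpp
#include <cmath>

// One-member wrapper types: boilerplate, but the two overloads below
// can no longer be called with their arguments accidentally swapped.
struct HorizDist { double m; };    // metres
struct VertDist  { double m; };
struct SlopeDist { double m; };
struct Bearing   { double rad; };  // horizontal angle, radians
struct ZenithAng { double rad; };  // vertical angle, radians

struct Point { double east, north, height; };

// Overload 1: from a slope distance and zenith angle.
Point CreateNewPt(const Point& from, Bearing b, ZenithAng z, SlopeDist s)
{
    const double horiz = s.m * std::sin(z.rad);   // horizontal component
    return { from.east   + horiz * std::sin(b.rad),
             from.north  + horiz * std::cos(b.rad),
             from.height + s.m * std::cos(z.rad) };
}

// Overload 2: from horizontal and vertical distances.
Point CreateNewPt(const Point& from, Bearing b, HorizDist h, VertDist v)
{
    return { from.east   + h.m * std::sin(b.rad),
             from.north  + h.m * std::cos(b.rad),
             from.height + v.m };
}
```

With plain doubles these two overloads would have identical signatures; the wrappers make both the overloading and every call site unambiguous.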

Further complicating things, I have a whole inheritance tree which does unit conversion for distances, angles, areas etc. Angles are converted to radians and stored and used as such, while distances are stored in metres. I structure the code so that the programmer is forced to use the units, very few of the functions allow doubles as arguments.

The Geodetic Coordinates provide the biggest nightmare, because the formulae are long and fairly complex - there are lots of variables. This is because Geodetic Coordinates are calculated on an ellipsoidal surface. If you have lots of spare time you can read about it here:

The other thing I do is to make the documentation in the code crystal clear, so that when I go back to it 6 months later, I don't confuse myself. Obviously this is great if someone else has to read your code.

Some other more trivial and fairly obvious things I do are to have a variable for each term in a series, to avoid complicating the code for complex formulae, and to put braces around single-line statements in ifs, loops, switches etc., in case I add more code later.

As well as all that, there is the ever present mentality of, "What are all the possible ways this can go wrong?", and writing code to defend against all these possibilities.

Finally, there is testing. I write code to test my code. It can be quite tricky to make sure you have tested all the boundary conditions. Sometimes I have just as much test code as original code. I make heavy use of Git to achieve this; I found that easier than duplicating the whole project, as I used to do in the past.

There you go - all that might produce some debate, do you think?
Coding for testability is one of the primary coding guidelines where I work, and it's great. But as for "debuggability", I don't think that's as useful: C++ is designed to be used with an aggressively optimizing compiler, and optimized code is not debuggable. Sticking to debuggable, unoptimized builds often masks serious problems.
It's kind of ironic, since optimisation sometimes makes hidden bugs more obvious.
A few tricks I've come to use:
(Context: all my programs are generally algorithms where very tiny calculations and operations are executed billions and billions of times. Most of the bugs/problems come from very rare and specific cases that I hadn't thought of that somehow lead to trouble. That means I have to be able to identify the specific iteration that led to the problem, even though the crash/oddities might only take place much later. It also means I generally can't use the Debug settings, because it would take ages to get to the problem.)

a) Design your code top-down and try to make each function small. I try to pinpoint problems by putting a print/write at the start and end of high level functions first and then work my way down to lower level functions to see where the problem is taking place. If your code is one massive function with multiple nested loops, it's often much harder to do this.

b) Make sure that you can access and save some data from previous steps. Making a full copy of the situation can lead to a massive slowdown, so if you can easily identify and save the changes that took place, you can easily reproduce the steps leading up to the problem.

c) When you have similarly typed variables with conceptually different meanings, give them a different name (typedef or object, cf. TIM's post). In my case, objects are often overkill, but a typedef can do wonders. If you see you're assigning an ID to a COUNT, you know you messed up.

d) Don't overload functions unless you really need to. I used to simply overload all low-level access functions to have all possibilities covered, but that's the dumbest thing you can do. If several overloads have the same number of arguments, make sure the arguments are different object types that can't be converted implicitly. Typedefs will lead to issues here, when your Remove(COUNT) also accepts ID type variables as they're both unsigned ints.
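To illustrate that last pitfall (a sketch; `Remove`, `Id` and `Count` are invented names): a typedef is only an alias, so the compiler cannot even tell such overloads apart, while one-member structs give real type safety.

```cpp
#include <string>

using ID    = unsigned int;   // aliases: ID and COUNT are the *same* type,
using COUNT = unsigned int;   // so Remove(ID) and Remove(COUNT) would be
                              // one function declared twice.

// Wrapping each in a struct makes them genuinely distinct types,
// so the overloads coexist and cannot be mixed up:
struct Id    { unsigned int v; };
struct Count { unsigned int v; };

std::string Remove(Id)    { return "removed by id"; }
std::string Remove(Count) { return "removed first n"; }

// Remove(7u);      // error: no matching function - the caller must say which
// Remove(Id{7});   // fine, and self-documenting at the call site
```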

Other than that... don't underestimate visual inspection. All my programs go through at least one phase where some data of each iteration is being printed. It'll be slow, but once that error message pops up you'll be happy you can just scroll up and see if anything is out of the ordinary. Maybe it's the same operation that always takes place a few iterations before the problem. Maybe it's the same element being handled.
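The instrumentation described in (a) and above can be as simple as this (a sketch; the function names are invented, and a vector stands in for the print/log destination):

```cpp
#include <string>
#include <vector>

// Stand-in for printing/writing to a log window or file.
std::vector<std::string> g_trace;
void Trace(const std::string& msg) { g_trace.push_back(msg); }

// High-level stages instrumented on entry and exit: the last "enter"
// without a matching "leave" points straight at the failing stage.
void LoadData()  { Trace("enter LoadData");  /* ...work... */ Trace("leave LoadData"); }
void Transform() { Trace("enter Transform"); /* ...work... */ Trace("leave Transform"); }

void Run()
{
    LoadData();
    Transform();
}
```

Once a stage is identified, the same enter/leave pairs can be pushed down into its lower-level functions to narrow things further.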
Return error codes from your functions and check their values. Make your program modular. Avoid code duplication. If you're making a cross-platform application, use code that's as portable as possible. Make searchable comments where you're uncertain about portability. Use compiler-neutral code. Use obvious naming conventions. Try to make it obvious what your code does even in the absence of comments. Don't use unnamed numerical constants.
Error return codes from functions? Can you give an example of how that would be useful?
I admit I have not worked on systems complex enough that being very stringent with the functional requirements and carefully testing each module wasn't sufficient.
Topic archived. No new replies allowed.