Strangest Programming Glitches

I would like to share my experience and would like to hear if anyone else has had funny/annoying glitches with their code.

A friend and I were testing networking using SFML. I sent a float to him, he incremented it by 1, sent it back, and we repeated. However, for some reason the variable kept resetting to its initial value. After about 15-20 minutes of trying to fix it, we recompiled and it worked perfectly. We hadn't changed the code at all; it just worked. We still don't know why this happened.
We still don't know why this happened.
That's because floating-point values aren't portable across different CPUs, or even on the same CPU with a different compiler! When doing networking you should use fixed-point math!
+1
...or strings.
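For illustration, here's a minimal sketch of the fixed-point suggestion using SFML packets; the 1/1000 resolution and the helper names are my own assumptions, not anything from this thread:

#include <cstdint>
#include <SFML/Network.hpp>

// Resolution of one thousandth; pick whatever granularity the game needs.
constexpr std::int32_t SCALE = 1000;

void writeFixed(sf::Packet& packet, float value) {
    // Ship an integer count of thousandths; sf::Packet serializes integers
    // in a well-defined byte order, so both ends agree on the value.
    packet << static_cast<std::int32_t>(value * SCALE + (value >= 0 ? 0.5f : -0.5f));
}

float readFixed(sf::Packet& packet) {
    std::int32_t raw = 0;
    packet >> raw;
    return static_cast<float>(raw) / SCALE;
}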
Now hold on a minute: the fact that the issue wasn't persistent, and that it was resolved by recompiling the code, suggests to me that something else is in play here. I know that networking messages are best done as text (IMHO Unicode vs. ASCII is ultimately dealer's choice), and that floating points suck and should be avoided like the plague.

@ OP: Academically, you should preserve the current binary and recompile your code until you can recreate the issue. This may take a few dozen tries but once you have it, you should run it through a debugger. I'd be interested in seeing the underlying cause. If Avilius is right, and at the moment there is no reason to think that he isn't, then you'll see the odd behavior right there in the floating point registers. Are you both using the same version of the same compiler? Do you explicitly set your FP precision? What are your respective optimization levels set for?
My hobby: injecting code into other processes and changing the floating-point rounding mode on some threads

https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/
Nothing is portable unless there is an agreed-upon ABI.

I actually had a glitch with rounding mode: there was a grid with some objects on it. All object position calculations were done in floating point and then rounded to fit them on the grid. I wondered why objects that were supposed to be standing "still" kept converging toward the center.
It turned out that one library changed the rounding mode from nearest to truncate, so any movement toward the center would instantly move an object one unit toward the center, while movement away would just keep its position.
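Here's a tiny sketch of that failure mode (the numbers are made up): positions are computed in floating point and then snapped to a one-unit grid with lrint(), which honors the current rounding mode.

#include <cfenv>
#include <cmath>
#include <cstdio>

int main() {
    double toward_center   = 10.0 - 0.4;  // small step toward the origin
    double away_from_center = 10.0 + 0.4; // small step away from it

    std::fesetround(FE_TONEAREST);        // the mode the game code expected
    std::printf("nearest : %ld %ld\n",
                std::lrint(toward_center), std::lrint(away_from_center)); // 10 10

    std::fesetround(FE_TOWARDZERO);       // the mode the library left behind
    std::printf("truncate: %ld %ld\n",
                std::lrint(toward_center), std::lrint(away_from_center)); // 9 10
    // Under truncation an object jumps one unit toward the origin on any
    // inward step, but never gains that unit back on an outward step.
}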
I was working on a project where I was doing the front end and another guy was doing the back end. I had an if statement that would eventually call something he was going to implement, so I put a cout inside it to make sure the branch worked correctly, and for some reason that cout broke the program. I forget the exact errors I got, but they didn't point to that cout statement, and it wasn't because I was using it wrong or had forgotten an include, because my other cout usages worked fine.

It was weird. If I commented out the cout, everything was fine; put it back in and the program wouldn't compile. And the if statement was working correctly too.
While writing an Office add-in, I figured out through trial-and-error that I had to write this code:
//Note: pane is like a subinstance of TaskPanes[0]
pane.Visible = true;
if (!pane.Visible){
    TaskPanes[0].Visible = true;
    pane.Visible = true;
    TaskPanes[0].Visible = false;
    pane.Visible = true;
}
Removing any part would cause task panes to behave inconsistently (e.g. disappearing when they should be there, or appearing empty or with stale data) when switching from one window to another.




Hint: It's not C++.
floating points suck and should be avoided like the plague.

This doesn't make any sense to me; unless your game is totally grid-based, you have to use floating-point arithmetic in plenty of places. How would you possibly avoid it?
How would you possibly avoid it?
Many games use fixed-point notation (Warcraft 3, for example, quantized most data to 1/16, and the game field to 1/256, AFAIR).
Every game is based on a grid. Even if you are using floating point, you still have a very fine grid (and a non-uniform one: the distance between two nearest representable positions constantly changes). It is still possible to use it (Minecraft does, for example), but with utmost care; in MC you previously could use only about 1/8 of the advertised world size, as precision problems would make the game almost unplayable if you got too far from the origin.
So do games like that basically have to re-invent <cmath> to avoid floating point calculations?
Yes.
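As a rough sketch of what that looks like in practice (the type, its names, and the 1/16 resolution are purely illustrative, echoing the Warcraft 3 example above, not code from any real engine):

#include <cstdint>

// Toy 1/16-resolution fixed-point number: stores value * 16 in an int32.
struct Fixed {
    static constexpr int FRAC = 4;            // 4 fractional bits -> 1/16 steps
    std::int32_t raw;                         // value * 16

    static Fixed fromInt(int v)    { return { v << FRAC }; }
    int toInt() const              { return raw >> FRAC; }

    Fixed operator+(Fixed o) const { return { raw + o.raw }; }
    Fixed operator-(Fixed o) const { return { raw - o.raw }; }
    Fixed operator*(Fixed o) const {
        // widen to 64 bits so the intermediate product can't overflow
        return { static_cast<std::int32_t>(
                   (static_cast<std::int64_t>(raw) * o.raw) >> FRAC) };
    }
};

// The kind of routine you write instead of calling std::sqrt:
// integer binary search for the square root, non-negative inputs only.
Fixed fixedSqrt(Fixed v) {
    std::int64_t target = static_cast<std::int64_t>(v.raw) << Fixed::FRAC;
    std::int64_t lo = 0, hi = std::int64_t{1} << 20;  // result fits well below this
    while (lo + 1 < hi) {
        std::int64_t mid = (lo + hi) / 2;
        if (mid * mid <= target) lo = mid; else hi = mid;
    }
    return { static_cast<std::int32_t>(lo) };
}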
Developing a program.
All calculations were wrong; I was like, WTF?
Neither Windows nor VS was acting oddly, but very simple calculations (even with integers) failed.
Tried restarting VS at one point; it wouldn't start up anymore.
Tried rebooting the PC; it didn't start up either (you know, beeping and stuff).

One of my two 2 GB RAM sticks had burned out right in the middle of me developing a program.
I would like to share my experience and would like to hear if anyone else has had funny/annoying glitches with their code.

I was working on a daemon. Several dozen instances run 24/7 on a bunch of hosts in a distributed system. There was a bug that caused it to crash when processing one specific job, on one specific host.

Fortunately, I was able to convince my boss that this needed to be fixed. It was probably heap corruption or something similar and an unrelated change in the code could make it occur much more frequently, in which case we'd be in serious trouble. The bug wiped out the call stack so debugging was very difficult.

It took me and another guy a MONTH of full-time work to track it down. Eventually, I had to use the debugger to painstakingly walk up on the bug: put in a breakpoint, if it gets to that point, then put in another breakpoint further along. Oops, it crashes now. Put in the breakpoint somewhere in between. Uh oh, the crash is in a library function, now the breakpoints need to go in assembly code.

Of course the bug turned out to be my own damn fault. I was setting the TZ (timezone) environment variable from a string on the heap. When the string got deallocated and changed, the program went boom.
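For anyone curious, this is the classic pitfall; a minimal sketch, assuming the daemon used putenv (the post above doesn't name the exact call):

#include <cstdlib>
#include <string>

void set_timezone(const std::string& tz) {
    std::string assignment = "TZ=" + tz;
    // BUG: putenv() stores the pointer it is given, it does not copy the
    // string. Once `assignment` is destroyed at the end of this function,
    // the environment points at freed memory.
    putenv(const_cast<char*>(assignment.c_str()));
}
// Safer: setenv("TZ", tz.c_str(), 1), which copies the value (POSIX), or
// keep the putenv'd string alive for the lifetime of the process.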
I was working on the IOCP layer of my game server, and after some changes, every time I sent an arbitrary number of packets my server would crash. After a long, unfruitful debugging session, I decided that this must be a hardware problem.

Just before I was about to crack open my computer case to see if the problem was faulty RAM, I decided to take one quick, reassuring glance at the code to make sure I hadn't overlooked anything.

Turns out every time I posted a WSARecv I also passed in an uninitialized buffer structure when I handled the completion message in my worker threads. Took me an entire day of hardcore debugging and refactoring to realize it was something trivial.
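In case it helps anyone hitting the same thing, here's a rough sketch of what "initialize everything before posting the receive" looks like; the context struct and all names are my own invention, not the poster's code:

#include <winsock2.h>
#include <cstring>

// Hypothetical per-operation context, kept alive until the completion arrives.
struct IoContext {
    WSAOVERLAPPED overlapped;
    WSABUF        wsaBuf;
    char          data[4096];
};

bool postRecv(SOCKET sock, IoContext* ctx) {
    // Zero the OVERLAPPED and point the WSABUF at real storage *before*
    // posting; otherwise the worker threads handling the completion read
    // whatever garbage happened to be in those fields.
    std::memset(&ctx->overlapped, 0, sizeof(ctx->overlapped));
    ctx->wsaBuf.buf = ctx->data;
    ctx->wsaBuf.len = sizeof(ctx->data);

    DWORD flags = 0;
    int rc = WSARecv(sock, &ctx->wsaBuf, 1, nullptr, &flags,
                     &ctx->overlapped, nullptr);
    return rc == 0 || WSAGetLastError() == WSA_IO_PENDING;
}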