Should complicated code stay complicated because it works?

I'm working on a project (which I don't want to draw attention to over this specific matter), but I would like to get a sense of how others think about this.

The project uses OpenGL (and DX, but that's irrelevant to this topic) for rendering. The context creation code is split into backends for each specific platform. As some of you may or may not know, context creation code can get complicated and verbose. There are 3 backends in total: one for GLX (X11/Linux), one for WGL (Windows), and one for Cocoa (Mac OS X). In total, there's about 1000 lines of code, each backend slightly different in intent from the others, and I only understand two of them, since those are the only two platforms I have access to (X11/Linux and Windows). The Cocoa backend isn't maintainable by me at all.

To remove the need for the various backends and the complicated code, I instead wrote an SDL backend. This replaces all of the platform backends and keeps platform-specific code to a minimum (some is still required for integrating correctly with Qt, unfortunately). There is no loss in flexibility, since the design doesn't require anything that SDL doesn't provide; there is an increase in consistency across platforms; and it's an API people can work on without needing specific hardware/software to understand. Really, anything could be used in place of SDL, such as SFML, GLFW, etc. Also, the backend is 300 lines, smaller than any one of the current backends, and it supports platforms that we currently don't support (such as Wayland, and even Android).
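For context, here's a minimal sketch of what SDL2-based context creation looks like (version numbers and window flags are illustrative, not the project's actual settings; error handling is abbreviated, and the real backend would also hand the window off to Qt):

```cpp
// Minimal sketch of SDL2 OpenGL context creation.
// Requires SDL2 development headers and a display; illustrative only.
#include <SDL.h>
#include <cstdio>

int main() {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    // These attributes replace the platform-specific visual/pixel
    // format negotiation that GLX and WGL each do differently.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
                        SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    SDL_Window *win = SDL_CreateWindow("demo",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        640, 480, SDL_WINDOW_OPENGL | SDL_WINDOW_HIDDEN);
    if (!win) { SDL_Quit(); return 1; }

    SDL_GLContext ctx = SDL_GL_CreateContext(win);
    if (!ctx) { SDL_DestroyWindow(win); SDL_Quit(); return 1; }

    // ... render, load function pointers via SDL_GL_GetProcAddress ...

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```

The same handful of calls works unchanged on X11, Windows, and Cocoa, which is where the line-count savings come from.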

People are against the use of SDL because the current backends work. And apparently nobody wants to go through the hassle of compiling SDL. So, I'm honestly curious as to what others think about this situation.
I support the SDL alternative not because it's simpler (it isn't; SDL is larger than all three backends combined), but because it's already been debugged. Having someone else do your work for you is always better than doing it yourself. On the other hand, is SDL going to be included solely to initialize the context? That might be a bit excessive. I'd see if I couldn't do it with some other library that's already included.

which we still require for integrating correctly with Qt unfortunately
IIRC, Qt is perfectly capable of creating OpenGL contexts. Why not just use that?
Well, we use a QWidget instead of a QWindow, for one. Also, Qt doesn't support DirectX at all. Currently, on Linux, we find it easier to create a new window with GLX/Xlib and then reparent it to the QWidget underneath than to render into the QWidget directly. This also lets us choose the visual/pixel format to our liking, which would otherwise be left to QWidget, which we don't control at all.

TL;DR: Qt seems to be a poor fit for our rendering setup.
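The reparenting trick above boils down to a single Xlib call; a rough sketch (all the names here are hypothetical, and real code would also sync geometry, event masks, and handle the visual selection that motivated the trick in the first place):

```cpp
// Sketch: adopt a GLX-created X11 window into a QWidget (X11 only).
// Assumes `dpy` is the X Display, `glxWin` is the window we created
// via GLX/Xlib with our chosen visual, and `host` is the QWidget.
#include <QWidget>
#include <X11/Xlib.h>

void adoptIntoWidget(Display *dpy, Window glxWin, QWidget *host) {
    // Make the GLX window a child of the widget's native X window,
    // so Qt handles layout while we keep full control of the visual.
    XReparentWindow(dpy, glxWin, static_cast<Window>(host->winId()), 0, 0);
    XMapWindow(dpy, glxWin);
    XFlush(dpy);
}
```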

SDL was going to be used for context creation, but also for loading the OpenGL function pointers, removing the need for something like glLoadGen or GLAD. We have to check for extension support anyway, so it doesn't make sense to check for support only after we've tried to load a function pointer.
helios has a strong point about debugging, and I agree.

However, dependence on another project is yet another thing to maintain. This may cause unforeseen problems:
1. Do all your teammates know SDL? (Not all programmers do; I don't, for example.)
2. When SDL changes version, who is responsible for updating?
3. How much larger does your executable get? How much of the actual SDL functionality are you using?
4. 3 x 1000 lines of code does not seem so horrible. One should be able to read and understand that much code in under a week. How long does it take to download, install and learn SDL?

In other words, I don't see immediately which is better: a tested external solution or an in-house "quick-and-dirty" solution to a relatively small and specific problem. I think it's a close call, and it may actually depend on what your colleagues think.

And apparently nobody wants to go through the hassle of compiling SDL


My personal experience with compiling large "established" projects has been extremely painful: I tried wxWidgets and FOX GUI on separate occasions. The amount of time I spent reading their documentation, going through forums, setting up configuration files and fighting with updates of their system breaking my code was, in my opinion, larger than the amount of time I needed to do everything myself.

I ended up scrapping my GUI and redoing everything within the web browser for 1000 times better result with much less code and hassle.
I think that it might be worth it to change over if it makes the project easier to maintain in the long run.

I doubt that in your case it is something to worry about that much, but in general, I am against the strict mindset of, "if it works don't fix it".

I think the project I am working on now is a product of this mindset. It's a big mess of multithreaded spaghetti code. Calls to new and delete are scattered everywhere. Pointers to dynamically allocated memory are shared in not-so-obvious ways by multiple threads. Mutexes are locked and unlocked all over the place, even where it makes no sense. You can tell they just started getting desperate and said, "more mutex locks"; well, unfortunately that didn't work, but now nobody has a clue what is what, so we'd better not mess with it anymore.

There's excessive use of QVariantMap and QVariant to store whatever and pass whatever; you cannot look at a header file to find out what something is, you have to grep ".insert(". Things are hardcoded left and right, magic numbers all over the place. There are commented-out lines where delete is called in multiple places. There is a long list of header files, and structures, classes, and functions are used from them with no obvious way to know where they are defined. No namespaces. Comments are sparse, misleading, and confusing. A quarter to a third of each file is commented out, with no indication of what the commented-out code was for or why it was disabled. There are copy constructors defined, for classes holding dynamically allocated memory behind raw pointers, which perform shallow copies. There are unused variables and parameters all over the place. No documentation, even about the general structure.

Sorry for the rant, I just had to get that out. I don't think most people I am around in the real world would know what I am talking about.

From this I have learned the importance of not settling for horrible code. It is worth it to refactor. It is worth it to go out of your way to leave your final product in a form that another person can easily understand and work on. Using something like SDL, rather than something programmed in house by who knows who, which you can't trust and which nobody reliably maintains, is a great idea.

@tition: I agree with you when it comes to GUI libraries. Most don't give you the level of control you want, or need, and if they do, then you have to spend a lot of time figuring out how. If you want a really slick and custom UI, sometimes it is much better to use a lower level graphics library than a real GUI library IMO.
There is nothing wrong with complex code as long as the complexity is isolated.

In this case it seems the architecture is already very modular and the backends are isolated, so there's no point in switching to another backend just for the sake of its internal simplicity. There would be a reason if the new backend offered better performance, better L&F, or were easier to maintain in the long run, but other than that, I don't see any point. Switching to another backend / introducing a new dependency is always a risk and may introduce bugs.

Also, adding a new library to a program increases complexity rather than decreasing it. Very often, more complex code which you can fully control is easier to maintain than simple code gluing together 10 different libraries. Sure, your code may in fact be simpler, but the whole project would not be.

... were easier to maintain in the long run, but other than that, I don't see any point. Switching to another backend / introducing a new dependency is always a risk and may introduce bugs.


Not switching is also a risk. It depends how robust the existing solution is and how robust the new one is.

Also, adding a new library to a program increases complexity rather than decreasing it.


It's not complexity that matters, it's your ability to manage complexity. There is already a massive amount of complexity under the hood of almost any desktop application, but it is hidden from us.

Very often, more complex code which you can fully control is easier to maintain than simple code gluing together 10 different libraries. Sure, your code may in fact be simpler, but the whole project would not be.


Like I said, the whole project, if you consider all of the hidden complexity, is likely far outside of your ability to understand anyway. It's only because we can depend on many black boxes that we are able to make desktop applications in the first place.

There is the issue that sometimes you need more than what the black box offers. You really have to think it through.
