Advice for cross-platform code?

Hi, I'm working on an SDL/OpenGL game on Linux. I want the game to run on Windows too (Mac eventually, but I don't have one, so that's harder), but I don't have any idea about cross-platform code. I picked SDL because it's portable, but I don't know how my own code should be written to be cross-platform too (i.e. compile on various platforms without a lot of changes).

As I said, I haven't ported anything before, so I don't know what code is platform dependent or what can cause me trouble when porting a game. It's kind of a broad question because I'm not asking about anything specific, but if you could recommend books or articles (or maybe just advice) about how to write cross-platform code, I would be very grateful!

Thanks in advance
Writing cross-platform code is actually pretty easy. There are just a few things you have to watch for:

1)
Don't use any platform-specific API or platform SDK. In your case, since you are using SDL+OpenGL, you are fine. Just make sure that any function you call is either:
- an SDL function
- an OpenGL function
- a standard C or C++ library function (ie: from <iostream> or whatever)
- a function you wrote yourself.

As long as everything you call is one of those -- you're fine.
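
For example (a quick sketch; the platform-specific names in the comments are just illustrations of what you'd be avoiding):

#include <SDL.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_TIMER);

    Uint32 start = SDL_GetTicks(); // portable timing (instead of, say, Win32 QueryPerformanceCounter)
    SDL_Delay(16);                 // portable sleep  (instead of Win32 Sleep() or POSIX usleep())
    std::printf("slept for ~%u ms\n", SDL_GetTicks() - start);

    SDL_Quit();
    return 0;
}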


2)
Don't make assumptions about system endianness or size of specific variable types:

This is not portable code:
int someint;
file.write( (char*)(&someint), sizeof(int) );


The problem with this code is twofold:

- It assumes sizeof(int) is the same across all platforms
- It assumes endianness is the same across all platforms.

And while this code will work on any system you compile it on, it will generate a different file depending on the platform it runs on. So a file created this way on a Windows machine may not be readable when your program runs on a Mac.
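
One common fix (just a sketch of the idea, not the only way) is to pick a fixed size and byte order for your file format and write the bytes yourself:

#include <cstdint>
#include <fstream>

// Writes a 32-bit value as little-endian bytes, regardless of the
// host's endianness or its sizeof(int).
void write_u32_le(std::ofstream& file, std::uint32_t v)
{
    char bytes[4] = {
        static_cast<char>( v        & 0xFF),
        static_cast<char>((v >>  8) & 0xFF),
        static_cast<char>((v >> 16) & 0xFF),
        static_cast<char>((v >> 24) & 0xFF)
    };
    file.write(bytes, 4);
}

A matching read function reassembles the value with shifts, so the file means the same thing on every machine.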


3)
Don't make assumptions about undocumented behavior in functions.

For example, on Linux you can typically open files with a UTF-8 filename through the standard library; this is generally not true on Windows. If the documentation does not specifically say a function supports a certain behavior, you must not assume it does, even if it appears to when you try it.
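
To illustrate with that filename example (a sketch; error handling omitted, and this is one of the rare spots where a small #ifdef earns its keep):

#ifdef _WIN32
#include <windows.h>
#endif
#include <cstdio>

// Opens a file whose name is UTF-8 encoded. On POSIX systems fopen()
// passes the bytes straight through, so UTF-8 just works. On Windows the
// narrow fopen() uses the ANSI code page, so the documented approach is
// to convert to UTF-16 and call _wfopen().
std::FILE* open_utf8(const char* utf8_name, const char* mode)
{
#ifdef _WIN32
    wchar_t wname[MAX_PATH];
    wchar_t wmode[16];
    MultiByteToWideChar(CP_UTF8, 0, utf8_name, -1, wname, MAX_PATH);
    MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 16);
    return _wfopen(wname, wmode);
#else
    return std::fopen(utf8_name, mode);
#endif
}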





That's pretty much it. I can't think of anything else. If you follow those 3 rules your code will be extremely easy to port.
Thanks, that's a hell of a relief, I thought my game would be full of ifdefs. I'm not particularly familiar with writing files and filesystem stuff, but I get what you meant with examples 2 and 3: I shouldn't rely on behavior that isn't documented or that can vary depending on the platform. Appreciate your response.

PS: if instead of sizeof(int) I use sizeof(GLint), it should be the same across platforms, shouldn't it?

EDIT: also, is it "cross-platform-safe" to first check whether the system is big- or little-endian and then act accordingly?
PS: if instead of sizeof(int) I use sizeof(GLint), it should be the same across platforms, shouldn't it?


Don't count on it.

If you need the size to be consistent across platforms, use types that are specifically defined to be a certain size... like std::int32_t from the <cstdint> header.

Though generally you should only need this when doing I/O to binary files. Other times it doesn't really matter. If an int is 4 bytes on one machine and 8 bytes on another, it will still operate [mostly] the same.
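
So the earlier snippet becomes something like this (a sketch):

#include <cstdint>
#include <fstream>

int main()
{
    std::int32_t someint = 42;  // exactly 4 bytes on every platform
    std::ofstream file("save.bin", std::ios::binary);

    // The on-disk size no longer depends on the platform's sizeof(int).
    // (Endianness is still your problem if the file travels between
    // machines -- see rule 2 above.)
    file.write(reinterpret_cast<const char*>(&someint), sizeof someint);
}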
You should use std::size_t for unsigned types and std::int_fast64_t for signed types. As Disch said, the only time to use any of the types with an exact number of bits is if you are working with binary formats. As for the regular primitive types, in my opinion you should never use them directly - always use an alias that has a known behavior:

http://en.cppreference.com/w/cpp/header/cstdint

Note that storage of characters is an entirely different (and much more complex) topic.
As for the regular primitive types, in my opinion you should never use them directly - always use an alias that has a known behavior:


I dunno. IMO that's overkill.

I still use 'int' all over the place for when I just need a generic integer and the exact size is unimportant (which is frequently)
I can't think of a time when the exact size would be unimportant - I would always either want to match std::size_t or use the largest available signed/unsigned integer type with std::intmax_t and std::uintmax_t. Could you give an example of when the exact size is unimportant?
You'll need platform-specific code to handle saved data.
(In order to locate and open/rewrite the data file(s).)
Yes, we both already covered that example of when the exact size is important in our previous posts. Using raw primitives when working with binary files would make it impossible for users to share files.
Duoas wrote:
(In order to locate and open/rewrite the data file(s).)
Every platform has its own conventions and API for appdata.
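
Since you're on SDL anyway: if you can use SDL2, it wraps this up for you (the org/app names below are just placeholders):

#include <SDL.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    SDL_Init(0);

    // SDL_GetPrefPath (SDL2) returns a writable per-user directory that
    // follows each platform's convention: AppData on Windows,
    // ~/.local/share on Linux, Application Support on OS X.
    char* path = SDL_GetPrefPath("MyCompany", "MyGame");
    if (path) {
        std::printf("save files go in: %s\n", path);
        SDL_free(path); // caller owns the returned string
    }

    SDL_Quit();
    return 0;
}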
Boost and Qt are platform independent as well, so you can use those if you need some of their functionality (boost::filesystem for file operations, Qt for graphical user interfaces).

Note: before you use them you should take a look at the specifications to check that they support all the platforms you need.


(In order to locate and open/rewrite the data file(s).)
Every platform has its own conventions and API for appdata.

Boost has a boost::filesystem library (not header-only, so it must be linked) which provides cross-platform file operations. I think somewhere in there is a wrapper for DOS/Unix/etc. file operations.
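
Something like this (a sketch; remember to link the Boost.Filesystem library):

#include <boost/filesystem.hpp>
#include <iostream>

namespace fs = boost::filesystem;

int main()
{
    // operator/ inserts the right separator for the platform, so there
    // is no hand-written "\\" vs "/" logic anywhere.
    fs::path save_dir = fs::path("saves") / "slot1";

    if (!fs::exists(save_dir))
        fs::create_directories(save_dir);

    std::cout << save_dir.string() << '\n';
    return 0;
}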
I have yet to see a platform API where there were not type aliases to keep you from writing raw primitives.
And what do I do when I need to allocate memory for a struct (this is actually about C)? I can't think of a way of allocating memory for a struct other than malloc(sizeof(struct mystruct)).
C has typedefs too (it's where C++ got them from), but I don't really know C at all. Anyway, it doesn't make sense to use a completely different language in a discussion about one language.
I can't think of a time when the exact size would be unimportant - I would always either want to match std::size_t or use the largest available signed/unsigned integer type with std::intmax_t and std::uintmax_t. Could you give an example of when the exact size is unimportant?


Maybe I'm not following, but I've written a program almost entirely using raw primitives in Qt that I initially wrote to run on a Linux system, and then ported over to Windows without having to change any of the variables to Qt type aliases. At first I thought porting would be a huge headache for that reason, but it actually ended up being really simple. The program isn't exactly trivial, either. I know the result of sizeof isn't really portable, since type sizes can change from system to system, but you make it sound like porting is impossible without the use of type aliases.
Yeah, that happens most of the time.
The problem does not occur with the "standard" operating systems; it occurs when you want to port your programs to an operating system that does not support those primitive types.
In any case, there is no real downside in not using your own aliases, but that's up to you to decide :)
LB wrote:
I can't think of a time when the exact size would be unimportant - [snip] Could you give an example of when the exact size is unimportant?


99.9% of the time? I find it hard to believe you think it's important all of the time.

Maybe you could give me an example of a time when you think it is important? Other than when doing binary I/O, it really isn't.

I would always either want to match std::size_t [snip]


There you go. size_t has an indeterminate size. You can use it because the actual size of it doesn't matter. It's big enough to hold a size, and that's all you need to care about.

And you absolutely should not read/write size_t's to a file because of their variable size.
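
E.g. when saving a container, narrow the size to a fixed-width type first (a sketch):

#include <cstdint>
#include <fstream>
#include <vector>

// Stores the element count as exactly 4 bytes on every platform,
// instead of writing the variable-sized size_t directly.
// (Byte order still matters if files move between machines.)
void write_count(std::ofstream& file, const std::vector<float>& v)
{
    std::uint32_t count = static_cast<std::uint32_t>(v.size());
    file.write(reinterpret_cast<const char*>(&count), sizeof count);
}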
I have never stated or implied that it is impossible to do cross-platform development without type aliases. I do not believe in such nonsense.

@Disch
- std::size_t is important when iterating over an array or standard library container
- std::intmax_t and std::uintmax_t are important when reading numbers from text (they can be bounds-checked and narrowed later)
- std::uint8_t is important when holding arbitrary raw data
- std::uint8_t is important when working with RGB colors (unless you can use doubles from 0.0 to 1.0)
- std::intXX_t and std::uintXX_t are important when working with binary files, binary network protocols, etc.
- std::int_leastXX_t and std::uint_leastXX_t are important for when you are concerned about memory but otherwise only care about a minimum upper bound
- std::int_fastXX_t and std::uint_fastXX_t are important for when you only care about a minimum upper bound and couldn't care less about memory consumption
- std::intptr_t and std::uintptr_t are important when you want to count memory usage (std::size_t can be too small on systems with a lot of RAM)
- std::ptrdiff_t is important when taking the difference of pointers (duh)
- Some systems I work with have 16-bit int (think about robotics microcomputers)
- I want my code to behave the same on all platforms so I don't have to worry about subtle bugs
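
To make a couple of those concrete (a small sketch):

#include <cstddef>
#include <cstdint>
#include <vector>

int main()
{
    // std::uint8_t holds raw bytes / RGB channels (from the list above)
    std::vector<std::uint8_t> pixel = { 255, 128, 0 };

    // std::size_t indexes a standard container; it matches the
    // container's size on the host platform, whatever that size is
    unsigned sum = 0;
    for (std::size_t i = 0; i < pixel.size(); ++i)
        sum += pixel[i];

    return static_cast<int>(sum % 256);
}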
So, I would use std::intptr_t casts to handle binary I/O with objects and size_t for unsigned types (more specifically, sizes in bytes). And other than type sizes I won't have any problems. I'll try to keep in mind that sizes can vary across platforms while coding and use intptr_t casts when writing to binary files.

Thanks!
I think you misread my post, I never mentioned "std::intptr_t" and "binary files" in the same sentence.

The XX are because I didn't care to write out all the type aliases:
http://en.cppreference.com/w/cpp/header/cstdint