Sharing Data Between Apps

I'm trying to decide on a best practice for sharing data between many applications in a many-to-one, client-server style relationship. Speed is a big consideration. At this point I have several apps that each want to share their array of doubles with a master/server, which will compile them into a master array, do some work on it... and then give each application an instruction.

I'm thinking about using sockets... but there appears to be a lot of conversion to string and back along the way. While this is not a big deal, it does take some time, and every millisecond counts.

Another idea is to use shared memory or memory mapping: have each app put its array in place in the master array in shared memory, then ping the server to tell it there was an update and that it should do its work.

Some problems I'm having with each:

Sockets: I have a simple client/server socket app that works for communicating char[] back and forth, but I'm not sure how to make it handle several clients at once. Some feedback on that would be great.

Shared Memory: I have been researching this topic and what I've learned is that it seems way more complex than it needs to be. I'd like to know how to allocate the shared memory and pass the pointer around to the client apps so that they can all read and write to it. Whatever you've got here would also be appreciated.

This is a multi-part topic, and for that I'm sorry...but the issue is presented... thanks as always.

The socket API is designed to send bytes, rather than strings.

That is, the char* parameter of send() and recv() doesn't only take C-style strings. char* just happens to be a convenient type for byte-wise data.

So instead of converting your data to C-strings, you can pack your binary data into a buffer and send that. But note that in the general case, you should convert all binary data to network byte order (big-endian) before sending it, and then convert it back at the other end.

Network byte order is big-endian, while x86 machines (whether running Windows or Linux) are little-endian. So you use the (Berkeley) socket API's conversion routines: hton* and ntoh* (h for host, n for network), e.g. htons and ntohl (s for 16-bit short, l for 32-bit long, ...)

(Of course, if you know you're constrained to only ever work on the local system, or only a single OS, you can omit the byte-order conversion. But it's not good practice from a cross-platform programming perspective)

Thanks for the input Andy,

I realize that sockets' send()/recv() only take char[], and I guess that's what I was referring to when I said string. I'd convert the char[] to a std::string after receiving and use string methods to manipulate the data... or something like that.

The processes are all running locally, so I've decided that shared memory using Boost will probably be the fastest and most convenient, as I can cast the pointer to shared memory to char* and then re-cast it on the other side so the processes know where to do their work...

If this seems like a poor solution please advise...any and all.


Sorry, I got confused about which type of string you were referring to. But I thought you were talking about sharing arrays of doubles?

Wouldn't the char*-to-std::string conversion cost still be there with the shared memory solution? Or did you plan to work with the char*s directly?

Yeah, I'm ultimately using the data as doubles... I was figuring that converting the char* to std::string would give me some more power when sorting the data contained in the buffer.

With the shared memory approach, I believe each process can simply modify the appropriate region of memory directly (assuming the shared memory is an array of doubles), and then the server/master process can access the data directly... I think this will provide much faster access and far fewer CPU cycles.

So far it seems to be problem free...

Thanks Again...