I'm pretty deep into my first UNIX networking project, but I am lacking some fundamentals still...
I send a command and read the response. What would cause me to get different read() responses when I run the same code? Like I run the program, exit, run the program, exit, run the program, exit. Sometimes I'll get the response I expect, but other times I get Unicode gibberish... I feel like I'm rolling the dice...
In the example below, sometimes the response is the expected one, "+++", and other times it is something like "k��־�<}�<*n�R". The gibberish changes every time. I've been tinkering with sleep() to see if that increases the probability of success.
I'm not asking anyone to correct my code for me... Can anybody point me in the direction of what could cause this? Do bytes just get "lost" frequently? The hardware folks said I just have to tinker with delays, but I'm not sure that's really helping. I'm tempted to just stay in a loop, retrying the command until I read the response I want, and only then move on in the program.
I suppose I'm just looking for keywords or anything that might point me in a possible direction. I have a copy of Volume 1 of Stevens' UNIX networking book, but I feel like I'm shooting in the dark!
send() and read() don't guarantee that the entire message is sent or received in a single call; either one may transfer only part of it. That's why every protocol includes some form of message framing, a marker for where a message ends, inside the message itself. For instance, IRC messages (and quite a few other protocols) end with the two bytes "\r\n" (or in practice, some combination of those two). I don't know what you're sending to and reading from, so I can't give details.
EDIT: For clarification on how read() compares to recv() (which someone just questioned me on), please read this page: http://pubs.opengroup.org/onlinepubs/009695399/functions/read.html
It states that read(), when used with a socket descriptor, behaves the same way as recv() with no flags.