Application getting delayed reads/writes from the lower (TCP) layer

Hi all,

I have server modules that keep permanent connections with other parties, and I maintain an echo message on each communication line to keep the connection alive. Messages are sent to and received from the other party depending on what each server has pending.

But I notice that sometimes my reading of a request from the other party is delayed according to my log timestamps, even though the Wireshark capture shows the machine received the message just after the other party wrote it. (For example: the server log records the message about 6 minutes after Wireshark captured it.)

I have also noticed that some messages going to the other party return successfully from write(), yet no corresponding TCP packets appear in the TCP dump (Wireshark capture).

I am using the standard read() and write() calls to read and write the messages.


Since I was getting this behaviour on Linux (RH), I changed the OS to AIX, but the problem is still the same.

I am using server ports 2030, 2031, 2032 and 2033 for my four identical server modules that connect to other parties. I also googled these ports and found that they are registered for other services.


Any idea what could cause this?

Thank you all.
Hi Ishara54075,

It could be in how you are reading your messages. Are you reading a fixed number of bytes in a blocking read? Are you using select()/poll() without blocking?

If you post some code, that would be helpful. Mind-reading costs extra! ;)
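For example, something along these lines (just a sketch; waitReadable is an illustrative name, not an API) waits with a timeout instead of blocking indefinitely on read():

#include <sys/select.h>

// Returns true if 'sockFD' has data to read within 'seconds' seconds,
// so the caller can decide when it is time to call read().
bool waitReadable(int sockFD, int seconds)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(sockFD, &readSet);

    struct timeval tv;
    tv.tv_sec  = seconds;
    tv.tv_usec = 0;

    int ready = select(sockFD + 1, &readSet, NULL, NULL, &tv);
    return ready > 0 && FD_ISSET(sockFD, &readSet);
}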
bool SocketServices::stdWrite(const unsigned char *buffer, int size, string sTermId)
{
    int iSentSize = 0;

    // Single write() call: there is no loop to retry if the socket
    // accepts fewer than 'size' bytes.
    iSentSize = write(iSockFD, buffer, size);

    if (iSentSize == -1)
    {
        perror("socket write error: ");
    }

    if (iSentSize != size)
    {
        PRINT_MESSAGE_NOTIME("ERROR", "Total data not properly written");
        return false;
    }
    else
    {
        return true;
    }
}// End function


Here is the function which is used for writing.

Hope this will help.
You should be using send/recv in a while loop until all data has been sent/received. It is not an error for TCP not to send or receive all the data at once.
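Something like this is the usual pattern (a sketch only; sendAll is just an illustrative name):

#include <sys/socket.h>
#include <stdio.h>

// Keep calling send() until every byte has been handed to the TCP stack;
// a single send()/write() may accept only part of the buffer.
bool sendAll(int fd, const unsigned char *buffer, int size)
{
    int total = 0;
    while (total < size)
    {
        ssize_t n = send(fd, buffer + total, size - total, 0);
        if (n == -1)
        {
            perror("send");
            return false;
        }
        total += (int)n;
    }
    return true;
}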

As for your delay, when is stdWrite called?

Clearly the problem is in your program. I would not expect moving from Linux to AIX to change anything.
Thank you for the update.

In my program, one thread always reads the messages from the other party (using read() in a while loop), and any thread can write to the same socket.

So any thread can write whenever it wants to send something.

But I have not used a mutex lock when writing to the socket.

Also, I thought read() and write() were more efficient than send() and recv().

Yes, but as kooth asked, how do you know when it's time to read?

read/write work on file descriptors, send/recv work on sockets. On Unix, sockets are file descriptors, so they're interchangeable there. But it's not portable.
All the messages from the other party have a format (message length + actual message content), and the full message is variable in length.

The reading thread reads the first two bytes (the message length is represented by the first two bytes) and then reads the remaining number of bytes indicated by that length from the socket.
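In other words, the idea is roughly this (just a sketch of the pattern, not my actual code; readExact is a hypothetical helper name):

#include <unistd.h>

// Hypothetical helper: keep calling read() until exactly 'len' bytes have
// arrived, since a single read() on a TCP socket may return fewer bytes
// than requested.
static bool readExact(int fd, unsigned char *buf, size_t len)
{
    size_t got = 0;
    while (got < len)
    {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)                  // 0 = peer closed, -1 = error
            return false;
        got += n;
    }
    return true;
}

// Usage for the framing described above (assuming a 2-byte length field):
//   1. readExact(fd, lenBuf, 2);        read the length prefix
//   2. decode msgLen from lenBuf;
//   3. readExact(fd, msgBuf, msgLen);   read the message body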

You're not really answering the question. Clearly you have a problem reading the socket in a timely manner, and you're avoiding disclosing how you do the reads.
Hi All,

This is how I am reading from the other party in the while loop:



while(true)
{
    buffer = new unsigned char[MAXLINE];

    // First read: the fixed-size length field at the start of the message.
    if((iReadSize = gatewaySock->stdRead(buffer, INT_DEFAULT_LEN_OF_MSG_LEN)) >= INT_DEFAULT_LEN_OF_MSG_LEN)
    {
        PRINT_MESSAGE_TIME("INFO ", "-->External Gateway Message received");

        iMessageLength = trx.getMessageLength(buffer + INT_DEFAULT_HEADER_LEN_BEFORE_MSG_LEN);

        if(iMessageLength < MAXLINE && iMessageLength > 0)
        {
            iReadSize = 0;
            sprintf(bufferTemp2, "Message Length without the header: %d", iMessageLength);
            sBuffTemp = (string)(bufferTemp2);
            PRINT_MESSAGE("INFO ", sBuffTemp);

            // Second read: the message body (a single stdRead call).
            iReadSize = gatewaySock->stdRead(buffer, iMessageLength);
            buffer[iReadSize] = '\0';
            trx.printHex(buffer, iMessageLength + 2);

            memcpy(bufferTemp, buffer, iReadSize);

            if(iReadSize <= iMessageLength)
            {
                // Collect any remaining bytes of the body, with a timeout.
                while(gatewaySock->timeRead(buffer, (iMessageLength - iReadSize), TIMEOUT_MICROSECS, iReadSizeSecond, false))
                {
                    memcpy(&bufferTemp[iReadSize], buffer, iReadSizeSecond);
                    iReadSize = iReadSize + iReadSizeSecond;
                    iReadSizeSecond = 0;
                }

                trx.setBuffer(buffer, iMessageLength + INT_DEFAULT_LEN_OF_MSG_LEN);
                memcpy(buffer, bufferTemp, iMessageLength);

                sHexStr = trx.getHexToString(bufferTemp, iMessageLength);
                PRINT_MESSAGE("INFO ", sHexStr);
            }
            else
            {
                PRINT_MESSAGE_TIME("ERROR", "Invalid Message Content - Message Length Not Available ...");
                delete [] buffer;
                continue;
            }

            buffer[iMessageLength] = '\0';

            if (bSignedOn)
            {
                // Complete message is handed to the application queue;
                // the consumer takes ownership of 'buffer'.
                PRINT_MESSAGE_TIME("INFO ", "Added to FIFO Queue for an externally originated message");
                myqueue.push(buffer);
            }
            else
            {
                PRINT_MESSAGE("ERROR", "SIGNON failed");
                delete [] buffer;
                buffer = NULL;
            }
        }
        else
        {
            PRINT_MESSAGE_TIME("ERROR", "Invalid Message Content - Invalid Message Length ...");
            delete [] buffer;
            buffer = NULL;
        }
    }
    else
    {
        PRINT_MESSAGE_TIME("ERROR", "Server termination due to Gateway disconnection");
        delete [] buffer;
        gatewaySock->CloseClient();
        break;
    }
}// End while


Hope this gives you an idea of how I read from the socket.

Thank you
Ok, thanks.

Does stdRead just call:
 
read(iSockFD, buffer, size);
without a loop and without delay?

I'm really trying to work out if:
 
if((iReadSize = gatewaySock->stdRead(buffer, INT_DEFAULT_LEN_OF_MSG_LEN )) >= INT_DEFAULT_LEN_OF_MSG_LEN)
just blocks until read completes.

Also, what do your logs show the time difference to be between these messages:
INFO -->External Gateway Message received
INFO <hex byte sequence>
Thanks KBW,


Does stdRead just call: Yes, it is equivalent to read(iSockFD, buffer, size);

gatewaySock is the socket on which both the reading and the writing are done.

INFO -->External Gateway Message received
INFO <hex byte sequence>

There is no time difference between the above two debug lines.

Thank you
I can't really see anything wrong.

How does your class SocketServices initialise the socket? Does it just bind(), listen() and then accept() new connections?

Have you set any socket options (with setsockopt)?

You say you write onto the socket from multiple threads. Are these synchronised in any way? Does your Wireshark trace show outgoing packets when you experience these delays between the packet arriving and the read() completing?
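(Setting an option would look something like this; TCP_NODELAY is only an illustration of a delay-related option, not a claim that it explains a 6-minute gap:)

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <stdio.h>

// Illustration only: how a socket option is set with setsockopt().
// TCP_NODELAY (disable Nagle's algorithm) is just an example option here.
void setNoDelay(int sockFD)
{
    int flag = 1;
    if (setsockopt(sockFD, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) == -1)
        perror("setsockopt(TCP_NODELAY)");
}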
Thank you for the update,

The SocketService class uses the normal socket(), bind(), listen() and accept() system calls for the sockets it creates.
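Roughly, the setup follows the standard sequence (simplified sketch with error handling omitted, not the exact SocketService code):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <cstring>

// Accept a single connection on the given port and return its descriptor.
int acceptOneClient(int port)
{
    int listenFD = socket(AF_INET, SOCK_STREAM, 0);   // TCP socket

    sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);               // e.g. 2030..2033

    bind(listenFD, (sockaddr *)&addr, sizeof(addr));
    listen(listenFD, 5);                              // backlog of pending connections

    return accept(listenFD, NULL, NULL);              // blocks until a party connects
}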

And I have not set any socket options.

One more thing I want to clarify: in my application, read() and write() can happen at the same time on the same socket connected to the other party, and I have not used any synchronization method between the read and the write.

And I want to know one more thing: the threads writing to the other party are not synchronized when writing to the socket, so do I need an explicit mutex lock for the writing half of the socket?


Thank you.
Concurrent reading and writing on the same socket should be fine; the TCP stack deals with that. But if you have multiple threads writing to the same socket without synchronisation, you run the risk of your messages, timestamps or sequence numbers ending up interleaved or out of order on the stream, so it is safer to serialise the writers.
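If you do decide to serialise them, a single mutex around the write path is enough, something like this (a sketch; the names are illustrative and it assumes your existing SocketServices::stdWrite):

#include <pthread.h>

// One mutex shared by every thread that writes to this socket, so each
// message goes out as an uninterrupted unit.
static pthread_mutex_t writeLock = PTHREAD_MUTEX_INITIALIZER;

bool lockedWrite(SocketServices *sock, const unsigned char *buffer, int size, string sTermId)
{
    pthread_mutex_lock(&writeLock);
    bool ok = sock->stdWrite(buffer, size, sTermId);   // your existing routine
    pthread_mutex_unlock(&writeLock);
    return ok;
}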

What about the network traces and logs? What's happening between the network trace showing the packet arriving and the "INFO -->External Gateway Message received" log message?
Topic archived. No new replies allowed.