Unicomm library

I would like to ask the community for a small review of a library we have released. It would be really appreciated.

http://libunicomm.org

Thanks in advance.
Hm, yes, the documentation looks pretty good and I'm sure the library does its job.

But the handling seems rather complex. Consider the 'echo' example: there are loads of functions and classes involved, which is rather overwhelming.

I guess there is potential to make it simpler.
Agreed; it looks like a good implementation but a poor design. The design could be much better.

In the future, if I have time, I would like to make it (a 3rd revision) more templatized, with easy tuning and support for the most popular protocols such as HTTP, WebSockets and so on.

Really useful observation, thank you.
On your site you're referring to Boost. So why don't you use some of the Boost facilities?

For instance, 'Property Tree' for reading/writing XML (and other formats).

I'd suggest taking a look at 'The Boost Iostreams Library'. You can implement sockets as a data source/sink; this way you could use streams to transmit the data.
'echo' could look like this:
while (is >> ch)
{
  os << ch;
}


For connection handling you could use signals/slots. When a connection is established, the participant could be signalled with the 'data source'.

Just some suggestions...
Good suggestions.

The lib was originally made as an in-house tool, so it has artefacts.
The XML implementation is a historical artefact: Boost didn't have satisfactory XML processing support at the time the XML messaging was implemented.
In the next revision I'm going to address those shortcomings.

Yes, Boost.Iostreams is really cool; I thought about this.
But I think the syntactic sugar provided by the chevron (>>, <<) operators is not really significant.
In any case, the 'chevron' interface can be implemented as a wrapper. Also, the asynchronous nature of I/O operations
complicates both the implementation and the usage. With synchronous streams the operations are trivial, but async streams introduce extra complexity to the interface and the implementation.
You really want to know when an async operation has completed, so you need to be able to bind a completion handler to each operation as an option.

What would be a way of implementing those? The write side is almost clear, but what about the read? Should we specify that we want an exact message to be read, or does it not matter? There are plenty of open questions here. I'm going to work through them in the future if I have time; it will take many hours of thinking to arrive at a rational design.

And the last reason for the current design is that I didn't have enough experience when the design was made.

async_tcp_message_stream message_ios;
// or it could be just a byte stream, with interpretation performed at another level

message_ios << foo_message(&handler) << bar_message(&handler) >> foo_message(&handler) >> any_message(&handler);


Unfortunately, the company the lib was made for is now defunct, so nobody is willing to help me. I'm now involved in another project's development and can't devote much time to this one.

Regards,
Dmitry.
But I think the syntactic sugar provided by the chevron (>>, <<) operators is not really significant.
The stream operators might not be, but I would think that the option to use streams is a big plus.

Also, the asynchronous nature of I/O operations
Asynchronous I/O is problematic due to the unpredictable occurrence of events. If it is necessary, it should be hidden by the lib.

Using a socket in blocking mode is fine (put it in its own thread); you just have to ensure the socket is closed (from outside the thread) when it is no longer needed.

So I would suggest making your lib as synchronous as possible: don't use async, but provide better support for threads (states like finished, started, ...).
I thought the world trend is the asynchronous, event-driven model; it is really popular, e.g. Microsoft added native support for async sockets to their OS. How would you suggest handling 1000 connections in blocking mode? You would have to spawn 1000 threads, which is real overhead. Threads block each other on every heap operation (new, delete). A huge number of threads will bring the whole system down. There is no need to have more threads than CPU cores in the system; I think this is obvious.
Microsoft is trying to veil the fact that async is thread-based (everything that runs in parallel is based on threads).

Read this:

http://msdn.microsoft.com/en-us/magazine/ff959203.aspx

All of a sudden they call it a 'task' instead of a thread. It sounds extremely time-consuming and complicated.
It could mean that if you query a device fast enough, 1000 times, you may still end up with your 1000 threads...
I can't tell, because it's really unclear what exactly happens.


The synchronous way would be as follows:

You have a thread for the device with a command queue. Whenever you want something from that device, you add a command. That's it.
The commands have at least one boolean variable: finished. You can check (but not modify!) this variable from any other thread, and you need no synchronizing.
I mean the WSA* family of functions.

OK, so if you have 1000 devices, you have 1000 threads, right? Having 1000 threads in the system is not a big deal as long as they are sleeping, even if a queue is used and commands are added and fetched by the main thread like this:

device.push_command(command);
...
if (device.is_ready())
{
  command = device.pop_command();
}


It's not guaranteed that the underlying entities used for the implementation (e.g. boost::asio) do not allocate/deallocate memory.


In any case, writing code in which threads never contend on the heap is not trivial. When you do new or delete, the thread takes exclusive ownership of the heap, so other threads will be blocked if they want to allocate/deallocate heap memory at the same time. That is all.

And lastly, even if you have only an integral-type variable, you should use atomic accessors (on multiprocessor machines), e.g. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html, plus read and write barriers (because of compiler code optimization), and so on.