How can I avoid synchronization when threading?

Hello people. I'm writing a multithreaded server application and I'm getting sick of using locks (CRITICAL_SECTION in my case) for data synchronization.

I thought of using multiple job queues, each queue having a single consumer and executing specific tasks that wouldn't need to interact with data from other queues. However, I've never used such a technique, and I'm not sure whether I can implement it correctly or whether it's a viable method for a large-scale application.

Hoping to get insight on how you people deal with this kind of situation, or how to avoid being in this situation in the first place.
Why do you hate using locks? Is it the actual synchronization that's bothering you or the fact that you are worried you will miss a place where you need to lock or unlock (or you're tired of typing it)?

If it's the second, write a class to contain the data you need to synchronize around. Let the class contain a CRITICAL_SECTION and do the locking and unlocking for you. The constructor would initialize the lock, the destructor would destroy the lock and get... and set... functions would lock and unlock. You can even make this a template class to be able to reuse it wherever you need it.
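A minimal sketch of that idea, using std::mutex instead of a raw CRITICAL_SECTION so the example is portable and self-contained (all names here are illustrative, not a specific library API):

```cpp
#include <mutex>

// Template wrapper that owns both the data and its lock. The mutex's
// constructor/destructor play the init/destroy roles that a raw
// CRITICAL_SECTION would need explicitly, and get()/set() do the
// locking and unlocking for you.
template <typename T>
class Synchronized {
public:
    T get() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return value_;
    }
    void set(const T& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        value_ = value;
    }
private:
    mutable std::mutex mutex_;  // mutable so get() can stay const
    T value_{};
};
```

With CRITICAL_SECTION you would call InitializeCriticalSection in the constructor, DeleteCriticalSection in the destructor, and Enter/LeaveCriticalSection around each accessor; the shape is the same.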
@doug4
Thanks for the suggestion. I'm already using RAII for locks whenever I can. It's mainly the synchronization itself that's bothering me. The nagging doubt that there's a deadlock hiding somewhere really gets to me, even though I'm trying to be as careful as I can.

I feel like using locks for all 'shared' data (and I have a lot of shared data) is also leading me to bad design decisions. That's why I thought having dedicated task queues would force me to organize things better. As I've said, though, I don't have enough experience to make a final decision to switch everything to that approach. Which is why I'm asking for guidance.

I've read that this technique is used in some multithreaded games, with each system (physics, rendering, etc.) running in a separate thread to avoid context switches. But I'm not sure if it would be a viable solution for a server application.
Queues are great channels for communication between threads. It's possible to implement linked-list-based queues that are lock-free, using only atomic operations. Lock-free algorithms have their pros and cons compared to their lock-based counterparts.

Recently I've been working on an application that applies a series of transformations to a file. Using threads linked by queues, the code looks like this:
while (true){
    auto data = this->read();   // blocks until a buffer arrives on the input queue
    if (data.eof()){
        this->write(data);      // pass the EOF marker downstream
        break;
    }
    do_something(data);         // this thread's transformation
    this->write(data);          // blocks if the output queue is full
}
read() and write() handle all aspects of communication. Data is passed around through fixed-size circular queues. If the pipeline involves a transformation that takes significantly longer, transformations that occur before it tend to pile up buffers in their output queues and have to wait to continue writing, while transformations that occur after it empty their input queues more quickly and have to wait to continue reading.
Like this, threads limit synchronization to well-defined points. The total amount of synchronization necessary can also be tweaked by changing the maximum size of the buffers that are passed through the queues.
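A fixed-size blocking queue of the kind described above might be sketched like this, assuming a lock-based implementation with condition variables (names and the container choice are illustrative, not the actual code from that application):

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Bounded blocking queue: push() blocks while the queue is full
// (upstream stages pile up and wait), pop() blocks while it's empty
// (downstream stages wait). Synchronization is confined to these two
// well-defined points.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    void push(T item) {
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return queue_.size() < capacity_; });
        queue_.push_back(std::move(item));
        not_empty_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front());
        queue_.pop_front();
        not_full_.notify_one();
        return item;
    }

private:
    std::mutex mutex_;
    std::condition_variable not_full_, not_empty_;
    std::deque<T> queue_;
    const std::size_t capacity_;
};
```

The capacity parameter is the tuning knob mentioned above: larger buffers mean threads hit the blocking points less often.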

This workflow is not as easily achievable with lock-free algorithms because they're non-blocking; they either fail or succeed immediately. This may or may not be a problem in your particular application. I would say it's preferable to start from a lock-based algorithm and then attempt to replace it with a lock-free version if you detect lock contention.
@helios
Thanks for the post. Though I'm not sure what the 'data' is here. In my case, there's almost nothing going on (besides threads which do specific things, which I'm hoping to get rid of) unless there's client action. All threads (IOCP threads) are waiting for network messages. To represent it simply:

 
IOCP -> protocol -> message_handler_function -> processing_message -> Continue listening.


Shared data is scattered all over the place, so I'm uncertain how I can implement your solution. Please correct me if I'm wrong, but from what I understand your threads do very specific jobs, whereas my threads could be doing anything at any given time, which I believe is not a very good solution due to potential context switches and the amount of data I need to synchronize. Which brings me back to my original question: is it better to have threads doing specific jobs, with very little synchronization needed between them?
What kind of data are you synchronizing on?
my threads could be doing anything at any given time which I believe is not a very good solution due to potential context switches
I don't know what you mean here.

Your threads are doing specific things: processing messages.
Whereas my solution looks like a pipeline, where every thread further processes the output of the last, your solution will probably look like a hub and spokes, where you have a central thread that schedules jobs in a queue and various worker threads that pop from that queue in a loop.
while (!this->exit_requested()){
    auto message = this->read();
    this->process(message);
}
You may or may not be able to avoid synchronization inside process(), depending on what it does.
@NoXzema
All sorts of 'player' data. It's a world server. Positions, stats, effects, whatever comes to mind.

@helios
Sorry for being unclear. What I wanted to say is that they could be processing any type of message at any time. Which forces me to synchronize all shared data.

Wouldn't using a single queue for all types of messages still cause the same problem, though? I mean, it's basically a queue version of what I currently have. What I thought could be a good idea is to limit worker threads to parsing certain types of messages, which could reduce the number of locks needed a fair bit. Though I've never built such an application and don't know if it works in practice.
You could give each thread its own queue and have the scheduler round-robin the queues to send the next message, but you lose the automatic load balancing. It's possible for a thread to end up getting all the expensive jobs, which could bottleneck the throughput.

What I wanted to say is that they could be processing any type of message at any time. Which forces me to synchronize all shared data.

Wouldn't using a single queue for all types of messages still cause the same problem though? I mean, it's basically a queue version of what I currently have.
Well, I don't know the specifics of your design, but by limiting your inter-thread communication to narrowly-defined channels you're limiting the places that require synchronization. You just push all the information pertinent to the task into the queue and avoid sharing data during the main processing step. If the processing itself makes use of shared data (e.g. to increment counters or perform I/O), then one way or another you're going to have to lock. Eliminating such locks will involve more careful redesign.
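For example, a message type that carries everything its handler needs might look like this sketch (the struct and field names are made up for illustration, not taken from any real protocol):

```cpp
#include <cstdint>
#include <string>

// Self-contained message: the worker gets its own copy of all the data
// pertinent to the task, so processing it touches no shared state.
struct MoveMessage {
    std::uint64_t player_id;
    float x, y, z;
};

// Processing uses only the message itself; any shared structure would be
// touched later, when the result is published through another queue.
std::string process(const MoveMessage& m) {
    return "player " + std::to_string(m.player_id) + " moved";
}
```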

It's possible for a thread to end up getting all the expensive jobs, which could bottleneck the throughput.

Eliminating such locks will involve more careful redesign.


That makes a lot of sense. So what if I use a single queue with a single consumer, and use the IOCP threads only to push jobs into this queue? After that I could slowly create new single-consumer queues for tasks that can be done independently.
I don't see any difference with your previous idea.
Ideally that's what I hope to accomplish. But as you said, eliminating the locks that are required during processing needs a well-thought-out design. This way I can slowly move things to new threads once I'm absolutely sure they're independent tasks.

Though, I feel like I'm kind of tunnel visioning on this idea. If there's a better method I would very much like to use that instead.
It's possible to implement linked-list-based queues that are lock-free, using only atomic operations.
I've heard of these before but could not find a fully implemented queue that I could test with. All I found on Google were code snippets that gesture toward such concepts, but when you try to implement the enqueue method based on what's shown, you can't seem to do the same with the dequeue without violating thread safety.

I once saw something on a Java forum where they implemented a queue that could be concurrently processed by one thread that enqueues and one that dequeues. This, however, isn't the same as the lock-free queue implementations I've heard of, which don't suffer that constraint, i.e. they allow multiple threads to enqueue and dequeue at the same time.
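The single-producer/single-consumer variant described above can be sketched with a fixed-size ring buffer and two atomic indices (illustrative only; multi-producer/multi-consumer lock-free queues, such as the Michael-Scott algorithm, are considerably more involved):

```cpp
#include <atomic>
#include <cstddef>

// SPSC lock-free ring buffer: exactly one thread may call push() and one
// other thread may call pop() concurrently. Each side only ever writes its
// own index, so a single acquire/release pair per operation is enough.
// One slot is kept empty to distinguish "full" from "empty".
template <typename T, std::size_t Capacity>
class SpscQueue {
public:
    bool push(const T& item) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % (Capacity + 1);
        if (next == tail_.load(std::memory_order_acquire))
            return false;  // full -- non-blocking, caller decides what to do
        buffer_[head] = item;
        head_.store(next, std::memory_order_release);  // publish the slot
        return true;
    }

    bool pop(T& item) {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;  // empty
        item = buffer_[tail];
        tail_.store((tail + 1) % (Capacity + 1), std::memory_order_release);
        return true;
    }

private:
    T buffer_[Capacity + 1];
    std::atomic<std::size_t> head_{0}, tail_{0};
};
```

Note that, as discussed earlier in the thread, both operations fail immediately instead of blocking, so the caller has to retry or fall back to something else when the queue is full or empty.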

Do you have a link where I can find a fully testable solution for such a lock-free queue?
Topic archived. No new replies allowed.