std::condition_variable

I have an issue with std::condition_variable that I can't seem to get through.

So, the typical usage pattern for condition variables is:
// Thread A: wait until state becomes true
std::unique_lock<std::mutex> lock(mutex);
while (!state)
    cv.wait(lock);

//...

// Thread B: set the state and wake the waiter
state = true;
cv.notify_one();
My issue with this is that none of those operations is atomic. There's nothing preventing execution from being scheduled like so:
Thread A: while (!state)
Thread B: state = true;
Thread B: cv.notify_one();
Thread A: cv.wait(lock);
If this happens, thread A will wait forever.

Compare to Win32 events, where waiting on an event checks the state, waits, then possibly resets the event state, all atomically; and signalling the event sets the state and wakes up a thread, also atomically.

What's the solution to this problem?

EDIT: In the past I've used pthreads semaphores to reproduce the behavior of Win32 events. I've looked at Boost semaphores, but they appear to be only of the inter-process kind, not the intra-process kind.
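For reference, an intra-process semaphore can be sketched from std::mutex and std::condition_variable in a few lines. This is only a sketch (the class name and interface below are illustrative, not from any library), but it shows the usual mutex + predicate-loop pattern:

#include <condition_variable>
#include <mutex>

// Illustrative intra-process counting semaphore built on mutex + condition_variable.
class semaphore {
    std::mutex m;
    std::condition_variable cv;
    unsigned count;
public:
    explicit semaphore(unsigned initial = 0) : count(initial) {}

    void post() {
        {
            std::lock_guard<std::mutex> lock(m);
            ++count;                       // modification happens under the mutex
        }
        cv.notify_one();
    }

    void wait() {
        std::unique_lock<std::mutex> lock(m);
        while (count == 0)                 // loop guards against spurious wakes
            cv.wait(lock);
        --count;
    }
};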
> If this happens, thread A will wait forever

It should get a spurious wake.

... condition variables have one "feature" which is a common source of bugs: a wait on a condition variable may return even if the condition variable has not been notified. This is called a spurious wake.

Spurious wakes cannot be predicted: they are essentially random from the user's point of view. However, they commonly occur when the thread library cannot reliably ensure that a waiting thread will not miss a notification. Since a missed notification would render the condition variable useless, the thread library wakes the thread from its wait rather than take the risk.

...

Spurious wakes can cause some unfortunate bugs, which are hard to track down due to the unpredictability of spurious wakes. These problems can be avoided by ensuring that plain wait() calls are made in a loop, and the timeout is correctly calculated for timed_wait() calls. If the predicate can be packaged as a function or function object, using the predicated overloads of wait() and timed_wait() avoids all the problems. - Anthony Williams
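The predicated overload Williams mentions folds that loop into the call itself. A minimal sketch, assuming the same mutex, cv, and state variables as in the original snippet:

std::unique_lock<std::mutex> lock(mutex);
cv.wait(lock, [&]{ return state; });  // equivalent to: while (!state) cv.wait(lock);

wait_for and wait_until have analogous predicated overloads, which also take care of keeping the timeout correct across spurious wakes.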


void wait(unique_lock<mutex>& lock);
...
The function will unblock when signaled by a call to notify_one() or a call to notify_all(),
or spuriously. - IS

Sure, but from what I've seen, spurious wakes are infrequent enough that if they're the only thing preventing a full-on deadlock, the result is practically indistinguishable from a deadlock.

Maybe this would work as an auto-reset event?
// Waiting side
while (true){
    std::unique_lock<std::mutex> lock(mutex);
    if (state){
        state = false;  // consume the event (auto-reset)
        break;
    }
    cv.wait(lock);
}

//...

// Signalling side
std::scoped_lock<std::mutex> lock(mutex);
state = true;
cv.notify_one();
I'm wondering if it would be possible to avoid the lock in the notification using an std::atomic<bool>.
The thread that intends to modify the variable has to

1. acquire a std::mutex (typically via std::lock_guard)
2. perform the modification while the lock is held
3. execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)

Even if the shared variable is atomic, it must be modified under the mutex in order to correctly publish the modification to the waiting thread.

Any thread that intends to wait on std::condition_variable has to

1. acquire a std::unique_lock<std::mutex>, on the same mutex as used to protect the shared variable
2. execute wait, wait_for, or wait_until. The wait operations atomically release the mutex and suspend the execution of the thread.
3. When the condition variable is notified, a timeout expires, or a spurious wakeup occurs, the thread is awakened, and the mutex is atomically reacquired. The thread should then check the condition and resume waiting if the wake up was spurious.

http://en.cppreference.com/w/cpp/thread/condition_variable
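Putting the two halves together, a minimal sketch of the corrected pattern (same mutex, cv, and state as above; the modification must happen under the mutex, while the notification itself may be done without it):

// Waiting thread
std::unique_lock<std::mutex> lock(mutex);
cv.wait(lock, [&]{ return state; });   // loops internally; rechecks after spurious wakes
// state is true here and the mutex is held

// Notifying thread
{
    std::lock_guard<std::mutex> lock(mutex);
    state = true;                      // modification happens under the mutex
}
cv.notify_one();                       // holding the lock is not required for the notify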
Oooh. I had seen several examples where the mutex was not locked at all during notification. I didn't know I needed to lock it while I modified the variable.
That does make more sense. Thanks.