Modern CPUs are pipelined in a way that allows them to run much faster -- they predict and read memory much earlier than needed, and they postpone writes until it is optimal to actually perform them. As a side-effect... variable writes do not necessarily physically happen when you think they do.
In your code,
even if one thread sets b_locked to true, that does not mean other threads will see it as true. They might still see it as false -- as the write may not have physically happened yet.
Worse... if two threads attempt to modify m_count at the same time, the results can be disastrous.
So yeah... this code is not thread safe. At all.
To make things thread safe, you need to use what is commonly referred to as a 'memory barrier'. IE: something which guarantees that all earlier writes have physically been written... and no future reads have started yet. This barrier will ensure that everything is in a fixed state at a certain time, which allows you to synchronize two or more threads -- however it comes with a significant performance hit (the pipeline essentially has to stop dead in its tracks). So the less often you can do it the better.
Memory barriers are usually accomplished with one of two common techniques (note: these are not the only techniques, they are just the most common that I've seen):
1:
A Mutex
On top of forming a memory barrier, mutexes are essentially a locking mechanism where only one thread can have the mutex locked at a time. IE: if thread A is modifying a shared object, it would lock the mutex first... then do its operation with it locked. If thread B tries to lock the mutex, it will have to wait until thread A finishes and unlocks.
In this code example, I use std::unique_lock to lock/unlock the mutex. It basically is an RAII container which ensures the mutex will always be unlocked when the unique_lock goes out of scope.
#include <mutex>

class MyOwner
{
public:  // <- members must be accessible to MyObject
    int m_count = 0;
    std::mutex m_countmutex;
};

class MyObject
{
public:
    // ...
    MyOwner* m_owner;

    void UpdateCount()
    {
        std::unique_lock<std::mutex> lock( m_owner->m_countmutex ); // <- lock the mutex
        // the mutex will be unlocked automatically when 'lock' goes out of scope
        ++m_owner->m_count;
    } // <- mutex unlocked here
};
Note that you will need to put ALL ACCESSES of m_count -- and anything else that is shared -- behind a mutex lock. If any thread attempts to access that variable while another thread is writing it... disaster ensues. Therefore, it must always be guarded.
The only exception is when no thread will ever be writing -- if all threads are only reading, you do not need to lock. But if any thread is writing, then all threads must be guarded.
2:
Atomic
Atomics, on top of forming a memory barrier, ensure that the variable in question can be read/written "immediately" so that other threads will see the result, without having to actually lock a mutex. This still disrupts the pipeline, but it might not be as severe.
#include <atomic>

class MyOwner
{
public:
    // ...
    std::atomic<int> m_count{0}; // <- note: you MUST initialize atomics!
    // (braces, not parentheses, for an in-class initializer. Before C++20 a
    //  default-constructed std::atomic has an indeterminate value -- if you
    //  can't initialize it inline, call std::atomic_init(&m_count, 0);
    //  before any thread uses it)
};
//...
++(m_owner->m_count); // now this is thread-safe. m_count is atomic
EDIT:
ninja'd by JLBorges. Though my post went into a bit more detail. =P