Unknown reason: program stops at while{} in a thread

The variable stop is set to 1 by the main thread.
I want the thread to start its work when stop==1.
The weird thing is that it works fine when I add a "cout..." line inside the while block;
however, the thread gets stuck if I do the same thing without that line.
Output:
Enter thread stop = 0
before

Thread code:

// static thread func. Parameter is "this"
void MySocket::recvProc(MySocket* p_Socket)
{
    cout << "Enter thread " << " stop = " << p_Socket->stop << endl;
    cout << "before" << endl;
    while (p_Socket->stop != 1)
    {
        // cout << "" << endl;
    }
    cout << "after" << endl;
    cout << "at thread " << " stop = " << p_Socket->stop << endl;

    ...
}

It is very strange, isn't it? Hope someone knows the reason.
You really should use OS objects (conditions or events) to sync threads in this way.

If you are going to use plain variables, be sure to declare them volatile; but polling a variable like this is quite expensive.
Thanks for your suggestion of volatile.

I use windows function to create the thread.
CreateThread(NULL, 0,
             (LPTHREAD_START_ROUTINE)recvProc, this,
             0, NULL);


Besides this thread, there is only one other place that changes "stop"'s value, and it does so exactly once. So I don't see why the absence of a condition or lock would cause this error.

I still don't get the reason for this problem.

If you're using Windows, use an Event to start the loop. Wait in the thread with WaitForSingleObject, and signal the event from some other thread with SetEvent. I vaguely remember some problem with PulseEvent, so I avoid it; use an auto-reset event instead.

Don't use CreateThread. Instead, use _beginthreadex as it initialises the statics in the C runtime library for that thread (things like errno).
You're right, the key point is volatile!
By experiment, I found that volatile matters in release mode.
If I don't use it, debug mode works but release mode fails.
If I add volatile, both modes work.

I appreciate your help a lot!
Noooooooo

Volatile does not do what you think. All it means is that the compiler will not optimize out memory accesses to that variable. However, due to the complexity of the memory pipeline, volatile alone does not guarantee that all previous writes have taken effect before this one. To guarantee that, you need to use a memory barrier.

This is actually quite a complex topic. But very simply.... any variable used to sync two or more threads must be behind a memory barrier. Ways to put things behind a memory barrier are:

1) make them atomic with std::atomic<> (though it's a new addition to the language, so your compiler might not support it. MSVS doesn't; I don't know about gcc).

2) put all accesses to them behind a mutex or other thread locking mechanism.

3) some versions of VS add memory blocking functionality to the volatile keyword, but you shouldn't rely on it, as other compilers do not (and not even all versions of VS do it!)




This will bite you in the ass in a big way if you don't get on top of it right away. So do it right!


EDIT: I just did a bunch of reading and research on this topic... and there are a LOT of bad/incorrect tutorials floating around, even from seemingly credible sources. If you are learning multithreading from a tutorial that is saying you don't need this... it's wrong.



EDIT 2: If your compiler does not support std::atomic, here is a miniature version of it I made that has a similar effect. It will properly ensure all accesses to this variable are threadsafe:

// note... use:
//      namespace threadlib = std;
//  if your compiler supports C++11 threads.
// otherwise use:
//      namespace threadlib = boost;
//  if it doesn't (but then you need to install boost)
//  or you can change the below mutexes to match whatever thread lib you're using.

template <typename T>
class Atomic
{
public:
                Atomic(T x = T())           : v(x)  { }
                Atomic(const Atomic<T>& x)  : v(x)  { }

    operator T () const
    {
        T copy;
        {
            threadlib::lock_guard<threadlib::mutex> lk(m);
            copy = v;
        }
        return copy;
    }

    void operator = (const T& x)
    {
        threadlib::lock_guard<threadlib::mutex> lk(m);
        v = x;
    }

    void operator = (const Atomic<T>& x)
    {
        *this = static_cast<T>(x);
    }

private:
    volatile T                  v;
    mutable threadlib::mutex    m;
};


// typical usage:
Atomic<bool> stop;

// then just use it as if it were a normal bool:
stop = true;
if(stop == false)
{
  //...
}



EDIT 3:

And of course... for performance reasons... access to these variables should be minimized, because every access takes the lock, which is expensive.
I want the thread to start its work when stop==1

In other words, you want the thread to wait until the condition "stop==1" becomes true.

This is done with a condition variable and a mutex. (in general, this is actually done with a semaphore, but core C++ doesn't have them).

And yes, forget you heard the keyword "volatile" unless you want to stay Windows-only (in which case you might as well use Windows API for waiting as well).

@Disch: as I tried to explain to you earlier, atomics would be so much less useful if they always came coupled with memory barriers... and none of that is particularly useful for user-space thread synchronization.
Cubbi: Atomics would be completely useless without memory barriers.

The classic example:

int somedata;  // data we want to fill
atomic<bool> dataready(false); // flag to indicate data has been written to 'somedata'

void provider_thread()
{
  somedata = 4;
  dataready = true;
}

void consumer_thread()
{
  // wait for the data to be ready
  while(!dataready);

  // once it's ready, print it
  cout << somedata;
}


For this to work, two things must happen:

1) All writes in the provider thread prior to dataready=true must be completed before the flag becomes visible.
2) No reads in the consumer thread that follow the dataready poll may be started (reordered to) before it.

These two things describe exactly what a memory barrier does. Which is why you need it. If an atomic does not do this, then what is the point of it?



EDIT:

References:

http://en.cppreference.com/w/cpp/atomic/atomic
"In addition, accesses to atomic objects may establish inter-thread synchronization and order non-atomic memory accesses as specified by std::memory_order. "

The 'may' in the wording here is because behavior can be adjusted by passing a more lax memory_order constant to load()/store() functions. However by default, the value used is the "safest" memory_order_seq_cst.
http://en.cppreference.com/w/cpp/atomic/memory_order


This is also outlined in detail with many examples on page 1013 of The C++ Standard Library Second Edition:

http://www.amazon.com/The-Standard-Library-Tutorial-Reference/dp/0321623215/ref=sr_1_1?ie=UTF8&qid=1342191071&sr=8-1&keywords=The+C%2B%2B+standard+library

(Amazon lets you preview the book, so you can take a look for yourself)
Atomics would be completely useless without memory barriers.

I hold the opposite view. I also think you are mistaking atomics for some sort of monitors. Synchronization is not their defining characteristic.

The classic example:

That's an example of a spinlock that's being used for inter-thread communication. As written, it indeed requires both atomicity and release/acquire barriers. It is also almost never justified in user code - mutexes/condition variables/etc are vastly superior in most cases.

PS: I wrote those cppreference.com pages
I hold the opposite view. I also think you are mistaking atomics for some sort of monitors. Synchronization is not their defining characteristic.


I guess I'm just not understanding what you're saying.

Their defining characteristic is that they are guaranteed not to have race conditions when accessed simultaneously from multiple threads. The memory barrier is a bonus to make that more useful.

If that isn't there for interthread communication/synchronization, then what's it there for?

It is also almost never justified in user code - mutexes/condition variables/etc are vastly superior in most cases.


I agree.

But my point was not to show that it was an ideal solution. My point was to show that atomics form a memory barrier.

PS: I wrote those cppreference.com pages


That's rad. They're great reference pages. Kudos.

So then why are you saying the opposite of what those pages appear to say?
They're great reference pages. Kudos.

They are still work-in-progress, like much of that site.

My point was to show that atomics form a memory barrier.

And my point was to remind you that while atomicity may be bundled with synchronization (and often is, as in the case of std::atomic's defaults), they are independent concepts; they do not require each other at all.

why are you saying the opposite of what those pages

I don't: the top-level page gives the definition:
Each atomic operation is indivisible with regards to any other atomic operation that involves the same object.

and the main property with regards to C++
Atomic objects are the only C++ objects free of data races

The page about std::atomic repeats the main property and introduces their optional property
In addition, accesses to atomic objects may...

which is indeed turned on by default. The memory_order page gives an example where it's best turned off, but more interesting cases of non-synchronizing atomics show up in actual lockfree algorithms.
Topic archived. No new replies allowed.