Taking a baby step in mutex programming, the result is inconclusive

Here is the code I "designed":
1. The mutex is a global variable.
2. CONFIG_THREAD_COUNT threads are launched, each running the hello() function.
3. In the hello() function, the mutex is used to protect the global int sum, which is initialized to zero.
4. When each thread runs, it reads the current value of sum, prints it, adds a random number, and prints sum again after the addition, so the values before and after the addition should appear consistently (no race condition), e.g. "40 + 3 = 43". I did it this way because printing only the final sum would make a race condition hard to detect when compiling without the mutex (by commenting out line 17 below): the final total comes out the same either way, since the order of a series of additions does not change the result. Basically, I have deliberately tried to set up a race condition so that an inconsistent printout would look like: 40 + 3 = 47.

With the mutex used, the printouts are always consistent. But that alone is inconclusive, because I cannot tell whether the consistency has really been protected by the mutex. To check, I compiled a version of this code without the mutex (line 17 commented out).
With that, the printout was always consistent too, so I think the result might be inconclusive.

My suspicion is that there are too few threads and the processor executes so fast that, whether the mutex is used or not, the race condition simply never occurred.

I am thinking of increasing CONFIG_THREAD_COUNT to something large enough to see if I can create the race condition. It is currently set to 20, and the highest I have tried is 100.

Do you see any problem with my thinking, or any issue with the code below? I doubt it, because the example is simple.

Thanks

[root@httpd-server cpp.concurrency]# cat CH3-2-1-mutex.cpp -b
     1  #include <list>
     2  #include <mutex>
     3  #include <algorithm>
     4  #include <thread>
     5  #include <iostream>

     6  using namespace std;

     7  std::mutex mutex_sum;
     8  int sum = 0;

     9  void hello(int pId, int pStat[])
    10  {
    11      int pNum = rand() % 4 + 2;
    12      int sleep_duration = rand() % 2 + 0;
    13      sleep_duration = 0;
    14      std::cout << pId << ": Hello CONCURRENT WORLD, sleeping for " << sleep_duration << endl;
    15      sleep(sleep_duration);

    16      // Create lock guard.

    17      // std::lock_guard<std::mutex> sum_guard(mutex_sum);
    18      cout << "thread: " << pId << ": R0: " << sum;
    19      sum += pNum;
    20      cout << " + " << pNum << " = " << sum << endl;

    21      pStat[pId]  = 1;
    22      std::cout << pId << ": Done sleeping exiting now..." << endl;
    23  }

    24  class data_wrapper
    25  {
    26  private:
    27      int sum;
    28      std::mutex m;

    29  public:
    30      data_wrapper()
    31      {
    32          cout << "data_wrapper constructor." << endl;
    33          sum = 0;
    34      }

    35      void add_using_mutex(int pThreadId, int pNum)
    36      {
    37          std::lock_guard<std::mutex> guard(m);

    38          // read current value.
    39          // add random value.
    40          // read back sum.
    41          // (if no race condition, R + ADD + R should be consistent.

    42          cout << "thread: " << pThreadId << ": R0: " << sum;
    43          sum += pNum;
    44          cout << " + " << pNum << " = " << sum << endl;
    45      }
    46  };

    47  int main()
    48  {
    49      // Declare, initialize variables.

    50      int i;
    51      const int CONFIG_THREAD_COUNT = 20;
    52      int stat[CONFIG_THREAD_COUNT];
    53      int sum = 0;

    54      // launch threads.

    55      for ( i = 0; i < CONFIG_THREAD_COUNT; i ++ ) {
    56          stat[i] = 0;
    57          std::thread t(hello, i, stat);
    58          t.detach();
    59      }

    60      cout << "Checking thread status-s..." << endl;

    61       while (sum != CONFIG_THREAD_COUNT)  {
    62          sum = 0;

    63          for (i = 0; i < CONFIG_THREAD_COUNT; i++) {
    64              cout << stat[i] << ", ";
    65              sum += stat[i];
    66          }

    67          cout << "main(): sum: " << sum << ". waiting for all threads to finish..." << endl;
    68          sleep(5);
    69      }

    70      return 0;
    71  }
1. You shouldn't detach the thread unless you really mean it.

2. You should join each of those threads before you end. Your termination condition shouldn't be to wait long enough; it should be to stop when all threads are done (see the sketch after point 4).

3. You don't need data_wrapper.

4. Uncomment line 17.
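
For points 2 and 4 together, here is a minimal sketch of the shape I mean (same hello() signature as yours; the std::vector of threads and the join loop are my additions, not your code):

#include <cstdlib>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex mutex_sum;
int sum = 0;

void hello(int pId, int pStat[])
{
    int pNum = rand() % 4 + 2;          // rand() kept from your version; note it isn't guaranteed thread-safe

    std::lock_guard<std::mutex> sum_guard(mutex_sum);   // your line 17, uncommented
    std::cout << "thread: " << pId << ": R0: " << sum;
    sum += pNum;
    std::cout << " + " << pNum << " = " << sum << std::endl;
    pStat[pId] = 1;
}

int main()
{
    const int CONFIG_THREAD_COUNT = 20;
    int stat[CONFIG_THREAD_COUNT] = {0};

    std::vector<std::thread> threads;
    for (int i = 0; i < CONFIG_THREAD_COUNT; i++)
        threads.emplace_back(hello, i, stat);

    for (auto& t : threads)
        t.join();                       // no polling loop or sleep needed: join() waits for each thread

    std::cout << "final sum: " << sum << std::endl;
    return 0;
}

With join() the stat[] array isn't really needed any more; I've kept it only so the hello() signature matches yours.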
I left data_wrapper there for later, more advanced work: the idea was to put the mutex and the sum variable together in one class, OOP-style. But yes, you are right, it serves no purpose for now.

Each thread sets its corresponding flag in the stat[] array and main() waits in a loop. If I remember correctly, waiting on (joining) each thread caused them to launch sequentially, but I might be wrong about that. I just preferred detach().

The main objective here is to keep improving this and see if I can deliberately create a race condition.

Line 17 is uncommented when I need to use the mutex.
You should only lock the resource you need, and for the minimal amount of time. Your lock applies from line 17 to the end of the function. You could use a scope to control this.
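Something like this, based on your hello(); the extra braces introducing an inner scope are the only structural change (a sketch, not a drop-in):

void hello(int pId, int pStat[])
{
    int pNum = rand() % 4 + 2;

    {   // inner scope: the guard lives only inside these braces
        std::lock_guard<std::mutex> sum_guard(mutex_sum);
        std::cout << "thread: " << pId << ": R0: " << sum;
        sum += pNum;
        std::cout << " + " << pNum << " = " << sum << std::endl;
    }   // mutex released here, not at the end of the function

    pStat[pId] = 1;                                   // runs without holding the lock
    std::cout << pId << ": Done, exiting now..." << std::endl;
}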
Yes, I understand, but please note that I am emphasizing deliberately creating a race condition. If you are an expert on this, can you give your opinion on whether that is even remotely possible?

--- Slapping a mutex in front of a few lines of code is easy; that is why I want to take the challenge to the next level, in hopes of reinforcing my knowledge with practice ---

I could certainly stop there and trust that it is working well, or...
I am changing the code as follows:

Instead of cout, I write the "X + Y = Z" output to a file.
Each thread, for example, writes to a filename uniquely identified by its thread ID.

When done, each file contains one "X + Y = Z" line.

Now, I increased the thread count to 20,000 to increase the chance that one of the threads will mess up the SUM variable. This means 20,000 files are created, which is too many to inspect one by one looking for an incorrect addition due to a race condition, i.e.:
2 + 3 = 10.
Meaning it could happen like this: thread X+1 reads sum as 2 and is adding 3, but just before it prints the result 5, thread X+2 jumps in (since I commented out line 17, there is no mutex) and adds 5. Then X+1 resumes, re-reads sum as (2+5)+3 = 10 instead of 5, and writes 2 + 3 = 10 to its output file.

To make it easier to spot the race condition, I was thinking of merging all the output into one file and importing it into Excel: Col1 = sum, Col2 = num, Col3 = sum (after addition). An Excel formula then computes Col1 + Col2 into Col4, and I compare Col3 to Col4.

This is the guts of your 'race condition' at the assembler level.
        movl    sum(%rip), %edx
        movl    -4(%rbp), %eax
        addl    %edx, %eax
        movl    %eax, sum(%rip)

You're only going to see a problem if a thread gets suspended during the middle two instructions. Two instructions, out of the millions the rest of your code executes, is not much of a window, but it's still a window.

You could run this for months with 100's of threads, and still not see a problem.
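
If you want to watch that window, here is a small self-contained sketch of mine (add_with_wide_window is not your code): it spells out sum += pNum as the same read / modify / write steps as the assembler above, and widens the gap with a yield.

#include <iostream>
#include <thread>

int sum = 0;

void add_with_wide_window(int pNum)
{
    int tmp = sum;              // movl  sum(%rip), %edx   : read
    tmp += pNum;                // addl  %edx, %eax        : modify
    std::this_thread::yield();  // invite the scheduler in, widening the window
    sum = tmp;                  // movl  %eax, sum(%rip)   : write back
}

int main()
{
    std::thread a(add_with_wide_window, 2);
    std::thread b(add_with_wide_window, 3);
    a.join();
    b.join();
    std::cout << "sum = " << sum << std::endl;   // 5 if lucky; 2 or 3 if the window was hit
    return 0;
}

Even with the yield there's no guarantee of hitting it on any given run, but it's far more likely than with two bare instructions.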

Here is your stripped down code.
#include <list>
#include <mutex>
#include <algorithm>
#include <thread>
#include <iostream>

using namespace std;

std::mutex mutex_sum;
int sum = 0;

void hello(int pId, int pStat[])
{
    int pNum = rand() % 4 + 2;
    // Create lock guard.
    //std::lock_guard<std::mutex> sum_guard(mutex_sum);
    sum += pNum;
    pStat[pId]  = 1;
}

int main()
{
    // Declare, initialize variables.
    int i;
    const int CONFIG_THREAD_COUNT = 2;
    int stat[CONFIG_THREAD_COUNT];
    int sum = 0;    //!! main.cpp:27:9: warning: declaration of ‘sum’ shadows a global declaration [-Wshadow]

    // launch threads.
    for ( i = 0; i < CONFIG_THREAD_COUNT; i ++ ) {
        stat[i] = 0;
        std::thread t(hello, i, stat);
        t.detach();
    }

     while (sum != CONFIG_THREAD_COUNT)  {
        sum = 0;
        for (i = 0; i < CONFIG_THREAD_COUNT; i++) {
            sum += stat[i];
        }
    }

    return 0;
}


Now I'd introduce you to helgrind, a tool within valgrind.

Without the mutex call in hello(), we get this.

$ valgrind --tool=helgrind --ignore-thread-creation=yes ./a.out 
==6316== Helgrind, a thread error detector
==6316== Copyright (C) 2007-2015, and GNU GPL'd, by OpenWorks LLP et al.
==6316== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==6316== Command: ./a.out
==6316== 
<<snipped noise>>
==6316== Possible data race during read of size 4 at 0x6051A8 by thread #3
==6316== Locks held: none
==6316==    at 0x400E45: hello(int, int*) (in /home/sc/Documents/a.out)
==6316==    by 0x4028E3: void std::_Bind_simple<void (*(int, int*))(int, int*)>::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) (in /home/sc/Documents/a.out)
==6316==    by 0x40279F: std::_Bind_simple<void (*(int, int*))(int, int*)>::operator()() (in /home/sc/Documents/a.out)
==6316==    by 0x40272F: std::thread::_Impl<std::_Bind_simple<void (*(int, int*))(int, int*)> >::_M_run() (in /home/sc/Documents/a.out)
==6316==    by 0x4EF8C7F: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
==6316==    by 0x4C34DB6: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==6316==    by 0x53DF6B9: start_thread (pthread_create.c:333)
==6316==    by 0x56FC41C: clone (clone.S:109)
==6316== 
==6316== This conflicts with a previous write of size 4 by thread #2
==6316== Locks held: none
==6316==    at 0x400E50: hello(int, int*) (in /home/sc/Documents/a.out)
==6316==    by 0x4028E3: void std::_Bind_simple<void (*(int, int*))(int, int*)>::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) (in /home/sc/Documents/a.out)
==6316==    by 0x40279F: std::_Bind_simple<void (*(int, int*))(int, int*)>::operator()() (in /home/sc/Documents/a.out)
==6316==    by 0x40272F: std::thread::_Impl<std::_Bind_simple<void (*(int, int*))(int, int*)> >::_M_run() (in /home/sc/Documents/a.out)
==6316==    by 0x4EF8C7F: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
==6316==    by 0x4C34DB6: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==6316==    by 0x53DF6B9: start_thread (pthread_create.c:333)
==6316==    by 0x56FC41C: clone (clone.S:109)
==6316==  Address 0x6051a8 is 0 bytes inside data symbol "sum"
==6316== 
==6316== ----------------------------------------------------------------
==6316== 
==6316== Possible data race during write of size 4 at 0x6051A8 by thread #3
==6316== Locks held: none
==6316==    at 0x400E50: hello(int, int*) (in /home/sc/Documents/a.out)
==6316==    by 0x4028E3: void std::_Bind_simple<void (*(int, int*))(int, int*)>::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) (in /home/sc/Documents/a.out)
==6316==    by 0x40279F: std::_Bind_simple<void (*(int, int*))(int, int*)>::operator()() (in /home/sc/Documents/a.out)
==6316==    by 0x40272F: std::thread::_Impl<std::_Bind_simple<void (*(int, int*))(int, int*)> >::_M_run() (in /home/sc/Documents/a.out)
==6316==    by 0x4EF8C7F: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
==6316==    by 0x4C34DB6: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==6316==    by 0x53DF6B9: start_thread (pthread_create.c:333)
==6316==    by 0x56FC41C: clone (clone.S:109)
==6316== 
==6316== This conflicts with a previous write of size 4 by thread #2
==6316== Locks held: none
==6316==    at 0x400E50: hello(int, int*) (in /home/sc/Documents/a.out)
==6316==    by 0x4028E3: void std::_Bind_simple<void (*(int, int*))(int, int*)>::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) (in /home/sc/Documents/a.out)
==6316==    by 0x40279F: std::_Bind_simple<void (*(int, int*))(int, int*)>::operator()() (in /home/sc/Documents/a.out)
==6316==    by 0x40272F: std::thread::_Impl<std::_Bind_simple<void (*(int, int*))(int, int*)> >::_M_run() (in /home/sc/Documents/a.out)
==6316==    by 0x4EF8C7F: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
==6316==    by 0x4C34DB6: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==6316==    by 0x53DF6B9: start_thread (pthread_create.c:333)
==6316==    by 0x56FC41C: clone (clone.S:109)
==6316==  Address 0x6051a8 is 0 bytes inside data symbol "sum"


If you put the guard into hello(), you get fewer errors.

If you want to eliminate all the errors, you also need to put the guard into main() as well.
Or you remove t.detach(); and wait for the threads to exit before trying to access data.
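
For the "guard into main() as well" option, here is a sketch of what I mean, stripped right down. The choice to protect stat[] with the same mutex_sum, and the rename of main's shadowing sum to finished, are mine:

#include <cstdlib>
#include <mutex>
#include <thread>

std::mutex mutex_sum;
int sum = 0;

void hello(int pId, int pStat[])
{
    int pNum = rand() % 4 + 2;
    std::lock_guard<std::mutex> guard(mutex_sum);
    sum += pNum;
    pStat[pId] = 1;                       // the flag write is under the lock too
}

int main()
{
    const int CONFIG_THREAD_COUNT = 2;
    int stat[CONFIG_THREAD_COUNT] = {0};

    for (int i = 0; i < CONFIG_THREAD_COUNT; i++)
        std::thread(hello, i, stat).detach();

    int finished = 0;                     // renamed from the shadowing 'sum'
    while (finished != CONFIG_THREAD_COUNT) {
        std::lock_guard<std::mutex> guard(mutex_sum);
        finished = 0;
        for (int i = 0; i < CONFIG_THREAD_COUNT; i++)
            finished += stat[i];          // flag reads under the same lock
    }
    return 0;
}

Joining is still the simpler answer; this is just the kind of thing helgrind wants to see before it stops reporting the stat[] accesses.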
That looks like a very useful tool, thank you. However, my example is one of the simplest; do you have a sense of how good these tools are at handling much more complicated scenarios? I am currently studying C++ Concurrency in Action by Anthony Williams, and some of the topics are dreadfully complicated and I have trouble keeping up. The book is well reviewed, but it has gotten unwieldy almost from the start.
Thanks.
Concurrency starts with a good design telling you what is shared and needs to be protected, followed by a good implementation.

Tools like valgrind / helgrind are used to tease out those last few corner cases that may have been missed.
Also, the tools would be used on an on-going basis during development, so new mistakes are spotted early and corrected whilst still fresh in the mind of the developer.

But if you have basically no design in a single threaded code and decide "I know, threads will improve performance", that's just a disaster.
Helgrind would produce so many reports as to completely overwhelm the developer.
In an effort to fix all the race conditions, locks will be thrown in at random - to the point of turning the multi threaded code back into single threaded code because the profusion of locks now forces what amounts to sequential execution.

As you said yourself, baby steps.
If you're in unfamiliar territory, you build lots of small examples to test single aspects of understanding. Breaking them in specific ways, and observing how the tools respond with various diagnostics prepares you for when you see the same kinds of diagnostic on larger programs.

> do you have a sense of how good these tools are at handling much more complicated scenarios?
Did you read the manual?

Starting to read it: it says helgrind serialises the threads, so the program may behave very differently because of that serialisation. I guess I will have to dig further.
OK, here is my science experiment:
To see if I can create a race condition, I decided to reduce the thread count to 2; each thread then loops X times (X is currently set to several thousand).
In each iteration it reads sum, adds a random number, re-reads sum, and writes the values to a file:
file-<threadNo>-<loop>.

I took out the mutex for now.
So it creates <No. of threads> * <loop> files, each containing one result in the format:
threadNo.<threadNo>:<sum>:<random>:<sum+random>

Next I ran cat file-* > file.all.log to merge all lines into file.all.log.

Then, instead of scouring through thousands of lines, I wrote a simple Python script that reads file.all.log line by line, parses each line, and redoes the addition. Any discrepancy is written to race.found.log. And yes, there were 4 occurrences where sum + random did not add up correctly.
Here is the changed code. I set the loop to 1M iterations, but either the software crashed or something else happened: I only see about 37,000 files generated. I am not sure what caused it to exit prematurely, perhaps memory allocation or something else, but 37,000 iterations was enough to see the discrepancy happen 4 times while the 2 threads were looping.
C++ :

#include <list>
#include <mutex>
#include <algorithm>
#include <thread>
#include <iostream>
#include <fstream>
#include <sstream>
#include <unistd.h>

using namespace std;

std::mutex mutex_sum;
int sum = 0;

void hello(int pId, int pStat[])
{
    int pNum;
    int i;

    std::cout << pId << ": Hello CONCURRENT WORLD " << endl;

    for (int i = 0; i < 1000000; i++ ) {
        pNum  = rand() % 4 + 2;

        cout << pId << ": " << ", loop: " << i << endl;
        // read + sum + read and write to file.

        ofstream myfile;
        ostringstream oss;
        oss << "file-" << pId << "-" << i;
        myfile.open(oss.str());
        myfile << "threadNo. " << pId << ":" << sum;
        sum += pNum;
        myfile << ":" << pNum << ":" << sum << endl;
        pStat[pId]  = 1;
        myfile.close();
    }
    std::cout << pId << ": Done sleeping exiting now..." << endl;
}

int main()
{
    // Declare, initialize variables.

    int i;
    const int CONFIG_THREAD_COUNT = 2;
    int stat[CONFIG_THREAD_COUNT];
    int sum = 0;

    // launch threads.

    for ( i = 0; i < CONFIG_THREAD_COUNT; i ++ ) {
        stat[i] = 0;
        std::thread t(hello, i, stat);
        t.detach();
    }
    cout << "Checking thread status-s..." << endl;

     while (sum != CONFIG_THREAD_COUNT)  {
        sum = 0;

        for (i = 0; i < CONFIG_THREAD_COUNT; i++) {
            cout << stat[i] << ", ";
            sum += stat[i];
        }

        cout << "main(): sum: " << sum << ". waiting for all threads to finish..." << endl;
        usleep(2 * 1000000);
    }

    return 0;
}

Python code: race-check.py:

fp = open("file.all.log")            # merged lines: threadNo.<id>:<sum>:<rand>:<sum+rand>
fpOut = open("race.found.log", 'w')

if not fp:                           # note: open() raises on failure, so this branch never runs
    print "Failed to open."

line = fp.readline().strip()

counter = 0

while line:
    print line
    operands= line.split(":")[1:4]
    print "operands: ", operands

    if int(operands[0]) + int(operands[1]) != int(operands[2]):
        fpOut.write("RACE!: " + str(operands) + "\n")
        counter += 1

    line = fp.readline().strip()

if counter == 0:
    fpOut.write("No race condition found.")
    print "No race condition found."
else:
    print "At least one race condition found: ", counter

fp.close()
fpOut.close()


Result:

[root@dev-learn-rhel7 cpp.concurrency]# cat race.found.log
RACE!: ['75330', '2', '75877']
RACE!: ['103486', '2', '104581']
RACE!: ['34712', '3', '35232']
RACE!: ['105220', '5', '105751']

[root@dev-learn-rhel7 cpp.concurrency]# egrep -ir "75330:2" file.all.log -A 2 -B 2
threadNo. 0:75321:4:75325
threadNo. 0:75325:5:75330
threadNo. 0:75330:2:75877
threadNo. 0:75877:2:75879
threadNo. 0:75879:4:75883
I repeated the test with CONFIG_THREAD_COUNT = 1, which means essentially one thread is launched and manipulating the sum variable, and yes, there was no race condition. I'd say I successfully created a race condition in a controlled environment?
With this, I can now move on to the real use of the mutex.
Thanks.
Whilst your CONFIG_THREAD_COUNT = 2 test gave you a positive answer, had it provided a negative result, that would not have been proof that your solution was free of race conditions.

CONFIG_THREAD_COUNT =1 still has them, but just far more difficult to detect or provoke on demand. As I said, negative results do not prove your success, only your failure to detect an error.

https://en.wikipedia.org/wiki/Heisenbug
Padding your code with I/O is a sure-fire way of transforming random failure into guaranteed success (or vice-versa).
With CONFIG_THREAD_COUNT = 2, the race condition was apparently generated.
With CONFIG_THREAD_COUNT = 1, why do you think it still "has them"? Can you elaborate? Do you mean it still has a race condition? With only one thread launched (aside from main(), which does not even touch the sum variable), I am wondering how a race condition would be possible.

For now I am ignoring potential pitfalls from special CPU features, e.g. out-of-order execution, branch prediction, and other performance tricks that can produce situations resembling a race condition.
You're still writing to pStat in one thread, and reading it in another thread.

Your testing won't show that at the moment, because you're writing a constant every time.

But you're only one small edit away (say, pStat[pId] = pNum;) and you're instantly in a world of pain.

You CANNOT use brute-force repetition, lots of cout statements and log file analysis to find race conditions.
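
For completeness, one way to make that particular flag handoff well-defined is std::atomic. This is a sketch of my own (not from your code or the book): the stat[] flags become atomics, so the workers' writes and main's polling reads are no longer a data race.

#include <atomic>
#include <thread>

const int CONFIG_THREAD_COUNT = 2;
std::atomic<int> stat[CONFIG_THREAD_COUNT];

void hello(int pId)
{
    // ... the real work would go here ...
    stat[pId].store(1, std::memory_order_release);   // publish "done"
}

int main()
{
    for (int i = 0; i < CONFIG_THREAD_COUNT; i++)
        stat[i].store(0);                            // explicit init before any thread starts

    for (int i = 0; i < CONFIG_THREAD_COUNT; i++)
        std::thread(hello, i).detach();

    int finished = 0;
    while (finished != CONFIG_THREAD_COUNT) {        // still a busy-wait; join() is simpler if you can use it
        finished = 0;
        for (int i = 0; i < CONFIG_THREAD_COUNT; i++)
            finished += stat[i].load(std::memory_order_acquire);
    }
    return 0;
}

But the wider point stands: you find races by reasoning about what is shared and by using tools like helgrind, not by brute-force repetition.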
Are you sure you understand what I am talking about? You sound like a confused person.
I understand what you're talking, but I've no idea what you're taking.

Your code, one thread, no brute force loops, no arbitrary sleeps to try and avoid the problem, no performance altering I/O.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
#include <list>
#include <mutex>
#include <algorithm>
#include <thread>
#include <iostream>
#include <fstream>
#include <sstream>
#include <unistd.h>

using namespace std;

std::mutex mutex_sum;
int sum = 0;

void hello(int pId, int pStat[])
{
    int pNum;
    pNum  = rand() % 4 + 2;
    sum += pNum;
    pStat[pId]  = 1;
}

int main()
{
    int i;
    const int CONFIG_THREAD_COUNT = 1;
    int stat[CONFIG_THREAD_COUNT];
    int sum = 0;

    // launch threads.
    for ( i = 0; i < CONFIG_THREAD_COUNT; i ++ ) {
        stat[i] = 0;
        std::thread t(hello, i, stat);
        t.detach();
    }

     while (sum != CONFIG_THREAD_COUNT)  {
        sum = 0;
        for (i = 0; i < CONFIG_THREAD_COUNT; i++) {
            sum += stat[i];
        }
    }

    return 0;
}

Result:
$ g++ -std=c++11 -g foo.cpp -pthread
$ valgrind --tool=helgrind --ignore-thread-creation=yes ./a.out 
==3690== Helgrind, a thread error detector
==3690== Copyright (C) 2007-2015, and GNU GPL'd, by OpenWorks LLP et al.
==3690== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==3690== Command: ./a.out
==3690== Thread-Announcement
==3690== Thread #1 is the program's root thread
==3690== Thread-Announcement
==3690== Thread #2 was created
==3690==    at 0x56FC3DE: clone (clone.S:74)
==3690==    by 0x53DE149: create_thread (createthread.c:102)
==3690==    by 0x53DFE83: pthread_create@@GLIBC_2.2.5 (pthread_create.c:679)
==3690==    by 0x4C34BB7: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==3690==    by 0x4EF8DC2: std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>, void (*)()) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
==3690==    by 0x4012D6: std::thread::thread<void (&)(int, int*), int&, int (&) [1]>(void (&)(int, int*), int&, int (&) [1]) (thread:137)
==3690==    by 0x400ECC: main (foo.cpp:33)
==3690== 
==3690== 
==3690== Possible data race during read of size 4 at 0xFFEFFFD90 by thread #1
==3690== Locks held: none
==3690==    at 0x400F10: main (foo.cpp:40)
==3690== 
==3690== This conflicts with a previous write of size 4 by thread #2
==3690== Locks held: none
==3690==    at 0x400E6A: hello(int, int*) (foo.cpp:20)
==3690==    by 0x4028E1: void std::_Bind_simple<void (*(int, int*))(int, int*)>::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) (functional:1531)
==3690==    by 0x40279D: std::_Bind_simple<void (*(int, int*))(int, int*)>::operator()() (functional:1520)
==3690==    by 0x40272D: std::thread::_Impl<std::_Bind_simple<void (*(int, int*))(int, int*)> >::_M_run() (thread:115)
==3690==    by 0x4EF8C7F: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
==3690==    by 0x4C34DB6: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==3690==    by 0x53DF6B9: start_thread (pthread_create.c:333)
==3690==    by 0x56FC41C: clone (clone.S:109)
==3690==  Address 0xffefffd90 is on thread #1's stack
==3690==  in frame #0, created by main (foo.cpp:24)
==3690== 
==3690== 
==3690== For counts of detected and suppressed errors, rerun with: -v
==3690== Use --history-level=approx or =none to gain increased speed, at
==3690== the cost of reduced accuracy of conflicting-access information
==3690== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 1 from 1)


What part of this is confusing you?

==3690== Possible data race during read of size 4 at 0xFFEFFFD90 by thread #1
==3690== Locks held: none
==3690==    at 0x400F10: main (foo.cpp:40)
==3690== 
==3690== This conflicts with a previous write of size 4 by thread #2
==3690== Locks held: none
