Coin Flip Script

Hello, I'm new to C++.
I was wondering if someone here could do me a favor.
I'm somewhat familiar with Python and Scratch.

Could someone quickly write a program that asks the user how many times they would like to flip a coin? The program then flips a coin that many times and prints out how many times heads came up and how many times tails came up.

I keep getting errors and stuff in Code::Blocks. My purpose for this is to see how much faster C++ is at this than Python and Scratch.
So far, Scratch is 3x slower than Python.

A coin flip is pretty much a random boolean ("true" or "false"). See here for how to generate random booleans:
https://stackoverflow.com/a/43329456

If you want the user to input the number of coin flips (instead of a fixed count), use std::cin from <iostream>:

int number;
std::cin >> number;
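
Putting the two pieces together, a minimal sketch might look something like this (untested here, and not necessarily the same approach as the linked answer; the variable names are just placeholders):

#include <iostream>
#include <random>

int main()
{
    int num_flips = 0;                          // hypothetical name for the user's input
    std::cout << "How many flips? ";
    std::cin >> num_flips;

    std::mt19937 gen(std::random_device{}());   // seeded Mersenne Twister
    std::bernoulli_distribution coin(0.5);      // "true"/"false" with equal probability

    int heads = 0;
    for (int i = 0; i < num_flips; ++i)
        if (coin(gen))
            ++heads;

    std::cout << "heads: " << heads << " tails: " << (num_flips - heads) << '\n';
}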
This is a useless exercise: there's not enough meat to it to get a meaningful time (I had to do 10M flips just to get a nonzero ms value) and not enough code to really exercise one language against another. That is, you can't use this to say C++ is X times faster than Python.
Regardless, here you go:

#include <iostream>
#include <random>
#include <chrono>
#include <ctime>
using namespace std;

int main()
{
    std::default_random_engine generator;
    std::uniform_int_distribution<int> distribution(0, 1);
    generator.seed(time(0));

    int flips{};
    cout << "how many flips\n";
    cin >> flips;

    // you should not time the cin statement, as that introduces user response time into the result
    auto start = chrono::steady_clock::now();
    int heads{};
    for (int i{}; i < flips; i++)
    {
        heads += distribution(generator);
    }
    cout << "heads: " << heads << " tails: " << flips - heads << endl;

    auto end = chrono::steady_clock::now();
    cout << "Elapsed time: "
         << chrono::duration_cast<chrono::milliseconds>(end - start).count()
         << "ms\n";
}



C:\c>a
how many flips
10000000
heads: 5001992 tails: 4998008
Elapsed time: 37ms


Note that by default most C++ compilers generate slow debug code. Be sure to set up a release / optimized build (e.g. -O2 or -O3 with GCC and Clang) for these kinds of tests.
@jonnin I think that's kinda the point. Its being uniform may be the key takeaway, considering that in a perfect world the odds should be an even 50/50. I'm thinking seeding with time(0) and using rand() % 2, etc. would be the way to go to get some more randomness out of it. I'm actually kinda curious what the results would be now.
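
For reference, a quick sketch of the rand()-based variant described here might look like this (just the idea, with an arbitrary fixed flip count; note the caveats about rand() % 2 further down the thread):

#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    // Seed the C library RNG with the current time, then take the low bit of rand().
    std::srand(static_cast<unsigned>(std::time(0)));

    const int flips = 10000000;   // arbitrary count for comparison
    int heads = 0;
    for (int i = 0; i < flips; ++i)
        heads += std::rand() % 2;

    std::cout << "heads: " << heads << " tails: " << (flips - heads) << '\n';
}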
@jonnin It's probably not the most definitive test, but it's something. Thanks for the code. Code::Blocks is still throwing a ton of errors for me... but that's OK. I just wanted a quick test to see exactly how fast C++ is compared to Python. From this I will focus my future efforts on C++.

https://tinypic.host/i/capture.G3y8k

It takes Scratch 30 seconds to flip a coin 10 000 000 times.
It takes Python 10 seconds to flip a coin 10 000 000 times.
It takes C++ 0.047 seconds to flip a coin 10 000 000 times (roughly, since it's a different CPU).
LeoManechest wrote:
It takes Python 10 seconds to flip a coin 10 000 000 times.


What sort of program did you write? The following was effectively instantaneous.

import numpy as np
import time

SIZE = 10000000

start_time = time.time()
a = np.random.randint( 0, 2, SIZE )       # SIZE integers in [0,2);  i.e. 0 or 1
nheads = np.count_nonzero( a )
time_taken = time.time() - start_time

print ( "Number of heads: ", nheads, "\ntails: ", SIZE - nheads, "\nTime: ", time_taken, "s" )


Number of heads:  4997844 
tails:  5002156 
Time:  0.1001119613647461 s



You can try it in an online compiler:
https://onlinegdb.com/oE1W5eqmx

It being uniform maybe the key takeaway considering in a perfect world the odds of 50/50 should be even. I'm thinking using the seed(time(0)) rand()%2 etc would be the way to go to get some more randomness out of it. I'm actually kinda curious what the results would be now.

Actually, the probability of getting "0" (false) or "1" (true) should be exactly 50%. But this does not mean that, in an actual sample, you get exactly 50% 0's and exactly 50% 1's. Even if you generate 1000 values (coin flips) with a "perfect" RNG, then it is absolutely possible to get 1000 times 0 in a row – it's just not very likely ;-)

In fact, if in every sample you generate, there always are exactly 50% 0's and exactly 50% 1's, then this would indicate that your RNG is "broken", because that's not what we'd expect from "true" random coin flips.

Note that, in general, rand() is not a very good RNG. And here "not good" means it can easily be distinguished from "true" random numbers by statistical tests. Even worse, by rand() % 2 you only take the least-significant bit of each number generated by the RNG. For many RNG algorithms, the least-significant bit may not be evenly distributed! That's why you probably should be using std::uniform_int_distribution<>(0,1), together with a "good" RNG algorithm, such as std::mt19937 (Mersenne Twister), and seeded from std::random_device.

#include <iostream>
#include <functional>
#include <random>

int main()
{
    std::random_device seed;
    auto coin_flip = std::bind(std::uniform_int_distribution<int>(0, 1), std::mt19937(seed()));
    for (int i = 0; i < 1000; ++i)
    {
        std::cout << coin_flip() << std::endl;
    }
}
@lastchance

import random

h = 0
t = 0
c = 0
n = 0

n = int(input("Number of Flips?:"))

for num in range(0, n):
    c = random.randint(0, 1)

    if c == 0:
        h = h + 1
    else:
        t = t + 1

print("Heads = ", h)
print("Tails = ", t)
@kigar https://en.cppreference.com/w/cpp/numeric/random/uniform_int_distribution

Maybe I'm reading too much into "uniformly distributes numbers according to a discrete probability function". Not to mention, due to how probability works, the bigger the sample size, the more likely it is to even out: throw 10 and it might be 7 one way and 3 the other; throw 100 and it might only be 45 one way and 55 the other, and so on. That's just picking randomly; a probability function might roll a 0 and then change the odds of rolling a 1 by log(2) until a 0 is pulled, scaling up and down accordingly to ensure the odds resemble real life.
Yeah, the bigger you make your sample (number of coin flips), the closer you'll get to a 50:50 distribution - in the average case. But even if you make your sample very big (like some millions of coin flips), it still is perfectly possible to get only 0's (or only 1's) in a concrete sample – that just becomes extremely unlikely.

If you generate a large number of samples, each with a fixed size of n, then you will see a certain fluctuation – even with "true" random numbers. It is not expected to get an exact 50:50 distribution in every sample. In fact, if you see an exact 50:50 distribution in every sample, that would indicate a "broken" (biased) RNG.
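
A rough sketch of that kind of experiment (many samples of a fixed size n, counting how often the split comes out exactly 50:50) might look like this; the sample sizes here are arbitrary:

#include <iostream>
#include <random>

int main()
{
    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> coin(0, 1);

    const int n = 1000;          // flips per sample (arbitrary)
    const int samples = 10000;   // number of samples (arbitrary)

    // Count how many samples land on an exact 50:50 split.
    int exact_half = 0;
    for (int s = 0; s < samples; ++s)
    {
        int heads = 0;
        for (int i = 0; i < n; ++i)
            heads += coin(gen);
        if (heads == n / 2)
            ++exact_half;
    }

    std::cout << exact_half << " of " << samples
              << " samples came out exactly 50:50\n";
}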
I'm referring to the specific method that jonnin chose to get said "random" numbers, and he explicitly stated it was an unusually even split... consistently, I'm assuming. What I quoted was in the description of that method, and I just offered my best guess as to why, based on the posted description. I would have to assume that a dedicated probability function would, in some regard, lean the probability one way or another for a specific purpose. It says it simulates a dice roll; for all I know, using only 2 numbers might break it. Anyone got a 2 sided die we can try this out on?
Just look at the probabilities, assuming a "fair" (non-biased) coin:

[n=1]

0 -> 0
1 -> 1

0 -> p=0.5
1 -> p=0.5


[n=2]

00 -> 0
01 -> 1
10 -> 1
11 -> 2

0 -> p=0.25
1 -> p=0.50
2 -> p=0.25


[n=3]

000 -> 0
001 -> 1
010 -> 1
011 -> 2
100 -> 1
101 -> 2
110 -> 2
111 -> 3

0 -> p=0.125
1 -> p=0.375
2 -> p=0.375
3 -> p=0.125


[n=4]

0000 -> 0
0001 -> 1
0010 -> 1
0011 -> 2
0100 -> 1
0101 -> 2
0110 -> 2
0111 -> 3
1000 -> 1
1001 -> 2
1010 -> 2
1011 -> 3
1100 -> 2
1101 -> 3
1110 -> 3
1111 -> 4

0 -> p=0.0625
1 -> p=0.2500
2 -> p=0.3750
3 -> p=0.2500
4 -> p=0.0625


This shows all possible outcomes of a sample of size n. For each possible outcome, the number of 1's in the sample is shown. Then, the probability of getting a specific number of 1's in the sample is calculated.

Of course, the expected value for the number of 1's is 0.5 * n. But, as you can see, the probability of getting a sample with exactly 0.5 * n 1's is not 1.0. And the probability of getting a sample with n 1's is not zero.
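
The same probabilities can also be computed directly as C(n, k) / 2^n; here is a small sketch for the sample sizes enumerated above (the helper function name is made up):

#include <cmath>
#include <iostream>

// Binomial coefficient C(n, k), computed iteratively to avoid large factorials.
double binomial(int n, int k)
{
    double c = 1.0;
    for (int i = 1; i <= k; ++i)
        c = c * (n - k + i) / i;
    return c;
}

int main()
{
    // Probability of getting exactly k ones in n fair coin flips: C(n, k) / 2^n.
    for (int n = 1; n <= 4; ++n)
    {
        std::cout << "[n=" << n << "]\n";
        for (int k = 0; k <= n; ++k)
            std::cout << k << " -> p=" << binomial(n, k) / std::pow(2.0, n) << '\n';
        std::cout << '\n';
    }
}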
Anyone got a 2 sided die we can try this out on?

A 2 sided die is a coin! But you can divide a d4 by 2 (and many others as well) to map it to 2 results; see the sketch after this list.
3 sides, typically use d6 & divide by 2.
4 exists (pyramidal die)
5 is d10 divided by 2
6 is of course standard die
7 ... not easy.
8 exists (double pyramid like the 4)
9 ... not easy
10 exists
11 not easy
12 exists
then nothing to 20, then 30, and 100 (golf ballish thing)
There may be some others, not aware of them from anything I have played.
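
As a small illustration of the d4 case, here is a sketch that maps a simulated d4 onto two outcomes by "dividing by 2" (the flip count is arbitrary):

#include <iostream>
#include <random>

int main()
{
    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> d4(1, 4);

    // Map 1-2 -> result 1 and 3-4 -> result 2, which is equivalent to a coin flip.
    int counts[3] = {};
    for (int i = 0; i < 1000000; ++i)
        ++counts[(d4(gen) + 1) / 2];

    std::cout << "result 1: " << counts[1] << "\nresult 2: " << counts[2] << '\n';
}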

Python makes it hard to get speed. Its strengths are in its libraries, which are easier to use than in most languages, so you can do anything with minimal actual work, if you don't mind it being slower than just about anything else. There is Cython, which is supposedly faster, but I have not gone off into the alternatives. However, you can speed Python up if you dig into the nuances of what it does well and what it does poorly.

One of the things you are testing here is the performance of the random engine in play. If you did as suggested and used the Mersenne Twister (std::mt19937) instead of the default engine, the C++ version would take longer, for example.
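
A rough way to check that would be to time the same loop with both engines; a sketch (results depend heavily on the compiler and standard library):

#include <chrono>
#include <iostream>
#include <random>

// Time the coin-flip loop with a given engine type and report milliseconds.
template <typename Engine>
long long time_flips(int flips)
{
    Engine gen(12345);   // fixed seed, we only care about timing here
    std::uniform_int_distribution<int> coin(0, 1);

    auto start = std::chrono::steady_clock::now();
    long long heads = 0;
    for (int i = 0; i < flips; ++i)
        heads += coin(gen);
    auto end = std::chrono::steady_clock::now();

    std::cout << "heads: " << heads << "  ";   // print so the loop isn't optimized away
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}

int main()
{
    const int flips = 10000000;
    std::cout << "default_random_engine: " << time_flips<std::default_random_engine>(flips) << " ms\n";
    std::cout << "mt19937:               " << time_flips<std::mt19937>(flips) << " ms\n";
}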
If you throw a coin 10 million times, the probability of getting exactly 5 million heads is about 0.0002523 (by Stirling's approximation, C(n, n/2) / 2^n ≈ sqrt(2 / (π·n))).

So, if you do the experiment 4000 times, then you might expect perfect balance just once.

Somebody code it!
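
A rough sketch of that experiment might look like this (be warned, it's 40 billion flips in total, so it will take a while):

#include <iostream>
#include <random>

int main()
{
    std::mt19937_64 gen(std::random_device{}());
    std::uniform_int_distribution<int> coin(0, 1);

    const long long flips_per_trial = 10000000;
    const int trials = 4000;

    // Count how many trials land on exactly 5,000,000 heads.
    int perfectly_balanced = 0;
    for (int t = 0; t < trials; ++t)
    {
        long long heads = 0;
        for (long long i = 0; i < flips_per_trial; ++i)
            heads += coin(gen);
        if (heads * 2 == flips_per_trial)
            ++perfectly_balanced;
    }

    std::cout << perfectly_balanced << " of " << trials
              << " trials came out exactly 50:50\n";
}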