Now that I’ve built a basic framework for simulating “hands” of dice, and I’ve run enough unit tests to convince me that the “counting” algorithms work, the next step is to make sure that when we roll the dice many times, the results come close to what probability theory says we should expect.

So, first, let’s figure out the key probabilities for a 3-dice hand:

**P(3 of the same number, or “triple”) = 6 / 216 (2.78%)**
`[(1,1,1), (2,2,2), (3,3,3), (4,4,4), (5,5,5), (6,6,6)]` → 6 outcomes out of 6 × 6 × 6 = 216

**P(2 of the same number, or “double”) = 90 / 216 (41.67%)**
`[(1, 1, not 1), (2, 2, not 2), ... (1, not 1, 1), (2, not 2, 2), ... (not 1, 1, 1), (not 2, 2, 2)]` → 3 positions × 6 pair values × 5 remaining values = 90 outcomes out of 216

**P(Everything else, or “singles”) = 120 / 216 (55.56%)**
`(216 - 6 - 90) / 216` → everything that isn’t a triple or a double
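Since 216 outcomes is a small space, we can sanity-check these counts by brute-force enumeration rather than algebra. A quick sketch:

```python
from itertools import product
from collections import Counter

# Enumerate all 6 * 6 * 6 = 216 ordered outcomes for three six-sided dice
# and classify each by the size of its largest group of equal faces.
counts = Counter()
for roll in product(range(1, 7), repeat=3):
    counts[max(Counter(roll).values())] += 1

print(counts[3])  # triples:  6
print(counts[2])  # doubles:  90
print(counts[1])  # singles:  120
```

The enumeration agrees with the hand counts above: 6 triples, 90 doubles, and 120 all-distinct hands.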

Now, let’s run the following code over 100,000 hands to see how close we come to these probabilities:

```python
def verify_simple_hand(num_rolls: int = 100_000):
    h = SimpleHand(num_sides=6, num_dice=3)
    # Tally hands by the size of their largest duplicate group.
    num_dups = {1: 0, 2: 0, 3: 0}
    for _ in range(num_rolls):
        h.roll_all()
        num_dups[h.max_duplicates()] += 1
    for duplicates, occurrences in num_dups.items():
        print(f"{duplicates} duplicates: {occurrences:8d} / {num_rolls}"
              f" -- {occurrences / num_rolls:>8.2%}")
```
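The verification function assumes a `SimpleHand` class with `roll_all()` and `max_duplicates()` methods; the original isn’t shown here, but a minimal sketch consistent with that interface might look like this (the internals are my assumption, only the method names come from the code above):

```python
import random

class SimpleHand:
    """Hypothetical sketch of a hand of num_dice dice with num_sides faces each."""

    def __init__(self, num_sides: int, num_dice: int):
        self.num_sides = num_sides
        self.num_dice = num_dice
        self.values: list[int] = []

    def roll_all(self) -> None:
        # Re-roll every die in the hand.
        self.values = [random.randint(1, self.num_sides)
                       for _ in range(self.num_dice)]

    def max_duplicates(self) -> int:
        # Size of the largest group of matching faces (1 = all distinct).
        return max(self.values.count(v) for v in self.values)
```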

```
1 duplicates:    55600 / 100000 --   55.60%
2 duplicates:    41628 / 100000 --   41.63%
3 duplicates:     2772 / 100000 --    2.77%
```

This looks pretty good to me! Our simulated triple count (2772) is within 6 of the expected count (2778) over 100k rolls, well within the variability we would expect. If we were being really serious, we could compute a p-value here… but this is just for fun, so repeatable results like this convince me that we are correctly simulating the dice rolls and have a solid foundation for our future exploration!
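If we did want to quantify “well in line with the variability we would expect,” a quick z-score against the binomial standard deviation gets most of the way there without a full hypothesis test. Using the triple count from the run above:

```python
import math

n = 100_000
p = 6 / 216                          # probability of a triple
expected = n * p                     # about 2777.8
sigma = math.sqrt(n * p * (1 - p))   # binomial standard deviation, about 52
observed = 2772                      # triple count from the simulation output
z = (observed - expected) / sigma
print(f"expected {expected:.1f}, sigma {sigma:.1f}, z = {z:+.2f}")
```

An observed count roughly 0.1 standard deviations below the mean is about as unremarkable as a deviation can get, which matches the eyeball judgment.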