Do you ever feel like probability is a load of codswallop? That the odds are just a farce? That the 90% punctuality rate boasted by your train operator in no way reflects their actual performance, or that the 10% chance of rain forecast by meteorologists is never accurate when you leave your umbrella at home? Sometimes it seems as if these percentages are entirely fabricated, more wishful thinking than anything else. Truthfully, though, that’s rarely the case.

In general, we humans are pretty bad at distinguishing random results from ones that obey a pattern. A classic example of our folly relates to the shuffle algorithms used by Apple’s various iDevices. Since the very first iPod, users have complained that the random playlists are anything but. Multiple tracks from the same artist in succession *can’t* be random, so the algorithm must be wrong.

The truth is not so simple. In a music library of 200 distinct artists with roughly the same number of tracks each, the chance that two consecutive tracks share an artist starts at around 1/200 and rises as the pool of remaining tracks shrinks. Factor in skipped tracks, and it’s quite likely that 200 or more songs will be served over the course of two or three sessions. Simple probability then makes at least one recurrence highly likely.
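A quick simulation sketches the idea. The library size below is a hypothetical (200 artists with 10 tracks each — the article only specifies the artist count), and the session is modelled as the first 200 tracks of a full-library shuffle:

```python
import random

def has_back_to_back_repeat(num_artists=200, tracks_per_artist=10, session_length=200):
    """Shuffle the full library, then check whether the first session_length
    tracks contain two consecutive tracks by the same artist."""
    library = [artist for artist in range(num_artists) for _ in range(tracks_per_artist)]
    random.shuffle(library)
    session = library[:session_length]
    return any(a == b for a, b in zip(session, session[1:]))

trials = 10_000
hits = sum(has_back_to_back_repeat() for _ in range(trials))
print(f"P(at least one same-artist repeat in a session) ≈ {hits / trials:.2f}")
```

With these numbers the simulated probability lands near 0.6 — under a perfectly fair shuffle, more than half of all 200-song sessions contain at least one back-to-back repeat.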

Once you extrapolate these odds to the approximately 400 million iPods sold since 2001 – not to mention the additional use of iTunes software on computers and iPhones – the number of complaints no longer seems so damning. Nevertheless, the shuffle algorithm continues to cop the blame, while we cling to our faulty understanding of what randomness really is.

Memory is a fickle thing. Rather than recording everything, our brains typically store only those things that attract our focus: stimuli that evoke strong emotional reactions, or information we suspect will be useful in the future. These fragments serve as memory seeds; each time we reflect upon them, they sprout into slightly different scenes, the gaps filled with details influenced by the present instead of the past.

Using the tactical RPG *XCOM* as an example, we might witness two consecutive misses on a 90% hit chance and baulk at the apparent error in the game’s calculations. The truth, though, is that we have simply lost track of all the times the prediction proved correct. Thanks to the negativity bias, which causes memories associated with negative emotions to burn brighter than their positive counterparts, the numerous 90% hits shrink to a mere handful, while the two misses bloom out of all proportion.

In contrast, we tend to forget the times probability erred in our favour. When we hit a dozen 70% shots without fail, we think little of it, even though it’s just as improbable as a 90% shot missing multiple times in a row. Because we are content, our emotions are not piqued, and the memory swiftly fades. Only when we get the short end of the stick do we question the reliability of the underlying systems.
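The arithmetic behind that comparison is simple: independent events multiply, so a streak’s probability is just the single-shot probability raised to the streak’s length:

```python
# Independent events multiply, so a streak's probability is p ** length.
p_double_miss = 0.1 ** 2    # two consecutive 90% shots both missing
p_hit_streak = 0.7 ** 12    # twelve consecutive 70% shots all hitting

print(f"Two 90% misses in a row:  {p_double_miss:.1%}")   # 1.0%
print(f"Twelve 70% hits in a row: {p_hit_streak:.1%}")    # 1.4%
```

The two streaks are indeed comparably unlikely — roughly one occurrence per hundred attempts — yet only one of them ever gets noticed.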

—

One of the most notorious cognitive biases in the field of probability is the gambler’s fallacy. It works like this: in a series of independent events, we tend to adjust our expectations based on past performance. For example, if we’re flipping coins and we turn up five heads in a row, we typically expect the next five flips to land tails in order to balance the odds. But that’s not how probability works. Every flip has the same chance of coming up heads or tails. The odds do not care about ‘balance’.
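A short simulation (a sketch, using a fair virtual coin) makes the point concrete: flips that immediately follow a run of five heads still land heads about half the time.

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect every flip that immediately follows five consecutive heads
after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]

p_heads = sum(after_streak) / len(after_streak)
print(f"P(heads after five heads in a row) ≈ {p_heads:.3f}")
```

The result hovers around 0.5 no matter how long the preceding streak — the coin has no memory.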

Of course, over a large number of trials, the aggregate results will trend towards the underlying probability. Plot the outcomes of many such trials and they trace the familiar bell-shaped curve of a normal distribution, with the area under the curve representing the range of observed outcomes. The most common outcomes cluster around the peak, also referred to as the expected value – 50/50, in the case of flipping a coin.
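That bell shape is easy to see with exact numbers. For 20 coin flips, the probability of each possible heads count (strictly a binomial distribution, which the bell curve closely approximates) can be printed as a crude text histogram:

```python
from math import comb

n = 20  # flips per trial
probs = {k: comb(n, k) * 0.5**n for k in range(n + 1)}

# Probability peaks at the expected value (10 heads) and falls away symmetrically
for k, p in probs.items():
    print(f"{k:2d} heads: {'#' * round(p * 200):<36} {p:.3f}")
```

The peak sits squarely on 10 heads, the expected value, with extreme splits vanishingly rare.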

But the normal distribution does not dictate the outcomes of individual events. A 70% chance does not mean every 10 trials will deliver 7 successes and 3 failures. And yet, we regularly behave as if it does. Predictions like ‘her luck is about to turn’ and ‘he’s due for a win’ prevail against the reality of independent probability.
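A quick check of the exact odds backs this up: even at a 70% chance, the ‘textbook’ outcome of exactly 7 successes in 10 trials turns up barely a quarter of the time.

```python
from math import comb

# Probability of exactly 7 successes in 10 independent trials at 70%
p_exact = comb(10, 7) * 0.7**7 * 0.3**3
print(f"P(exactly 7 of 10 at 70%): {p_exact:.1%}")  # 26.7%
```

The other three quarters of the time, ten trials deliver some other mix of hits and misses entirely.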

On the more mathematical side of things, there are two laws that play a prominent role in our problems with probability: the law of large numbers, and the law of small numbers.

The law of large numbers is fairly simple. Essentially, it states that the more trials we conduct, the closer the average of those trials will get to the underlying probability. Flip a million coins, and the heads/tails split likely won’t be far from 50/50. Flip a billion, and the split will almost certainly be even closer.
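A simulated coin (a sketch using Python’s `random` module) shows the convergence directly: the heads fraction wanders at small sample sizes and settles towards 0.5 as the trial count grows.

```python
import random

random.seed(1)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9,} flips: heads fraction = {heads / n:.4f}")
```

Each extra order of magnitude typically pulls the observed fraction a step closer to the underlying 0.5.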

At the other end of the spectrum, the law of small numbers stipulates that in a low number of trials, the chances of the results straying from the underlying probability are quite large. Flip just three coins and the most accurate split obtainable is 67/33. Flip ten, and even though it’s possible to observe an even split, it’s more likely that one of the two 40/60 outcomes will come up. Limited data sets often deviate significantly from the expected value, which makes them a poor source to rely upon – and yet, most of us do so on a frighteningly regular basis, even if we don’t realise it.
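The ten-flip claim checks out exactly. Counting the arrangements with `math.comb`:

```python
from math import comb

n = 10
p_even = comb(n, 5) / 2**n                   # exactly 5 heads
p_60_40 = (comb(n, 4) + comb(n, 6)) / 2**n   # 4 or 6 heads

print(f"P(5/5 split):     {p_even:.1%}")     # 24.6%
print(f"P(a 60/40 split): {p_60_40:.1%}")    # 41.0%
```

The even split is the single most likely outcome, but a 60/40 result in one direction or the other is more likely still — exactly the kind of small-sample deviation the law describes.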

—

The laws of large and small numbers trap us in a catch-22. When we judge probability based on a small number of trials, such as a single mission of *XCOM*, the law of small numbers ensures at least some of our observations will stray from the underlying probability. These discrepancies average out as we conduct more trials, but since our brains pay more attention to surprises than confirmed expectations, our memories don’t always reflect the true nature of events. In short, more data doesn’t necessarily yield a more accurate conclusion.

That’s why we yell at our computer screens when our *XCOM* soldiers miss their 90% shots, and we curse the local meteorologist when their prediction of a 10% chance of rain leaves us drenched on the sidewalk without an umbrella. Probability pervades so many facets of our lives, and yet we still frequently misunderstand how it works. With a firmer grasp on the concept of chance, our lives would be a lot more predictable.

Probably.