# What is the Central Limit Theorem (CLT)?

The central limit theorem is the idea that the means (averages) of many samples from a population will follow the shape of a normal distribution.

## 🤔 Understanding the central limit theorem

The central limit theorem (CLT) comes from probability theory (a branch of mathematics dealing with randomness). It states that the distribution of the means (averages) of sufficiently large random samples will approximate a normal distribution, aka a bell curve. Larger sample sizes result in distributions that more closely approximate a normal distribution. The theorem holds regardless of the shape of the distribution from which the samples are taken. Consequently, the average of all equally sized sample means is equal to the average of the population from which the samples were drawn.

The New York Stock Exchange consists of more than 3,000 companies. Each of these companies sees its stock price rise and fall each trading day. Some move up, others move down. If an investor randomly selects five of these companies to invest in, they may have picked the five best stocks. It’s also possible they picked the five worst ones. But it’s more likely that they chose a mix of winners and losers. Now imagine the investor spreading their money across 500 random companies. It becomes far more likely that the investor’s return on investment is something closer to the average performance of the whole stock market. That’s the central limit theorem in action.

## Takeaway

The central limit theorem is like going fishing…

Imagine that you have a small boat you like to take out on the lake for an hour at a time. You throw a line in the water and see what bites. Sometimes, you get lucky and catch five fish. Other times, you strike out and go home with nothing. But, most days, you reel in two or three. The number of fish you catch forms a normal distribution. There are a few bad days, a few good days, and a lot of mediocre days. That is what the central limit theorem says will happen in statistics, too.


## What is the Central Limit Theorem (CLT)?

The central limit theorem states that the distribution of the means of sufficiently large samples will approximate a normal distribution. It is a critical component of statistics, but it can be pretty confusing. To understand it, we need to break down some terms.

First, a sample is a small portion of a larger group, called a population. We usually use the word population to talk about the number of people in a country, state, or city. But a population is the entire set of all things in any group.

For example, say you wanted to study the average size of a conch shell on the beaches of the Caribbean. The population of your study would be every conch shell that exists on those beaches. It would be a monumental task to measure every single shell. Instead, you can take a sample from the population and measure it. Say you collect a sample of 30 shells. The average size of those 30 shells creates one observation. If you went out and collected another 30 shells, the average size of that sample is a second observation.

The central limit theorem says that if you keep taking these same-sized samples, and plot the frequency that the average value comes up, it would look like a normal distribution. That is, the average values would mostly cluster around some central point, with fewer averages appearing as you move away from the center.

At this point, the confusion usually comes in. You are not plotting the actual size of each member in the sample. Instead, you plot the average value of the members in a sample, called the sample mean. You are not attempting to replicate the distribution of the population itself. Rather, you are creating a distribution of the sample means.

The value at the center of that distribution is the mean of the sample means. The CLT says that the mean of the sample means is equal to the mean of the population. That is a powerful conclusion. If the distribution of sample means is a normal distribution, with a mean equal to the population mean, then the average of a sample cannot be too far from the average of the population.
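This idea can be sketched with a quick simulation using only Python's standard library. The "shell sizes" below are invented for illustration (a skewed, deliberately non-normal population), but the punchline holds: the mean of the sample means lands very close to the population mean.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 "shell sizes" drawn from a skewed
# (exponential-like) distribution -- deliberately NOT normal.
population = [random.expovariate(1 / 8.0) for _ in range(10_000)]
pop_mean = statistics.mean(population)

# Take 2,000 samples of 30 shells each and record each sample's mean.
sample_means = [
    statistics.mean(random.sample(population, 30)) for _ in range(2_000)
]

# The mean of the sample means sits very close to the population mean,
# even though the population itself is heavily skewed.
mean_of_means = statistics.mean(sample_means)
print(f"population mean:      {pop_mean:.2f}")
print(f"mean of sample means: {mean_of_means:.2f}")
```

Plotting a histogram of `sample_means` would show the familiar bell shape, even though a histogram of `population` would not.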

In fact, the normality of the distribution allows us to determine just how confident we can be in using the average of the sample to make generalizations about the population. And that assertion holds regardless of the shape of the population's distribution.

## How does the Central Limit Theorem work?

It might be easiest to understand the central limit theorem (CLT) by walking through a simple example. Consider a six-sided die with an equal chance of landing on any side. The chance of landing on any number, one through six, is one-sixth.

Because we know the universe of possible outcomes, and the chances of each occurrence, we know the probability distribution of outcomes. Therefore, we can calculate the real mean outcome of this die. It is the average of 1, 2, 3, 4, 5, and 6, which is 3.5. In the real world, it is usually impossible to know the actual average of anything. But we can use the knowledge we have to demonstrate the CLT.

Imagine rolling this die twice, then recording the average value of the two rolls. You could roll a one and then a one (denoted 1:1 here), in which case the average is 1. Or, you could get 2:4, in which case the average is 3. Here is a chart of all the possible outcomes:

Average | Ways to get it
---|---
1 | 1:1
1.5 | 1:2, 2:1
2 | 1:3, 2:2, 3:1
2.5 | 1:4, 2:3, 3:2, 4:1
3 | 1:5, 2:4, 3:3, 4:2, 5:1
3.5 | 1:6, 2:5, 3:4, 4:3, 5:2, 6:1
4 | 2:6, 3:5, 4:4, 5:3, 6:2
4.5 | 3:6, 4:5, 5:4, 6:3
5 | 4:6, 5:5, 6:4
5.5 | 5:6, 6:5
6 | 6:6

You can start to see the normal distribution take shape. Technically, this is a discrete distribution (a triangular one, in this case) because there are a limited number of possible outcomes. But it has a shape similar to a normal distribution. Remember, we started with a uniform distribution (all outcomes were equally likely).

The most common result in the table above is 3.5. That implies that your sample of two rolls is more likely to average 3.5 than any other number. The average could fall anywhere from one to six, but values close to 3.5 are more likely than values further away. And a large error in either direction (in this case, an average of one or of six) is equally likely. These are characteristics of a normal distribution.
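The table above can be reproduced by enumerating all 36 equally likely outcomes of two rolls, a quick sketch in Python:

```python
from collections import Counter
from fractions import Fraction

# Enumerate all 36 equally likely (first roll, second roll) pairs and
# count how many ways each average can occur.  Fractions keep averages
# like 3.5 exact.
ways = Counter(
    Fraction(a + b, 2) for a in range(1, 7) for b in range(1, 7)
)

for avg in sorted(ways):
    print(f"average {float(avg):>4}: {ways[avg]} way(s)")
```

The counts (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1) match the table, with 3.5 as the single most likely average.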

Now, think about taking the average of 10 rolls of the die. There is still only one way to get an average of one (rolling 10 ones in a row), and rolling ten ones in a row is far less likely than rolling just two.

Meanwhile, there are many more ways to get an average close to 3.5. The shape will still resemble a normal distribution, but more of the results will sit in the middle. In other words, a larger sample is less likely to fall far from the real mean value, so the results are less dispersed. As the sample size increases, it becomes harder to stray too far from the true mean.

This outcome demonstrates precisely what the central limit theorem says. The distribution of sample means approximates a normal distribution whose mean is the population mean. And a larger sample size produces a distribution with a smaller variance.
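A short simulation makes the shrinking spread visible. This sketch compares the averages of 2-roll samples against 10-roll samples; both center near the true mean of 3.5, but the 10-roll averages cluster much more tightly:

```python
import random
import statistics

random.seed(0)

def mean_of_rolls(n):
    """Average of n rolls of a fair six-sided die."""
    return statistics.mean(random.randint(1, 6) for _ in range(n))

# Simulate 5,000 sample means for two different sample sizes.
means_n2 = [mean_of_rolls(2) for _ in range(5_000)]
means_n10 = [mean_of_rolls(10) for _ in range(5_000)]

# Both center near 3.5, but the 10-roll averages are less spread out.
print(f"n=2:  mean {statistics.mean(means_n2):.2f}, "
      f"stdev {statistics.stdev(means_n2):.2f}")
print(f"n=10: mean {statistics.mean(means_n10):.2f}, "
      f"stdev {statistics.stdev(means_n10):.2f}")
```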

## Why is the CLT important?

The central limit theorem (CLT) is important for two reasons. First, it gives us confidence that the average of a simple random sample from a population will reasonably approximate the average of that population. And the larger the sample size is, the more likely it is to represent the entire group. Small samples can be unreliable, but with as few as 30 observations, the CLT starts to take shape.

The fact that measuring a few dozen people, results, or items can provide reliable insights into something much larger is critical to scientific exploration. Imagine if understanding how something worked required the measurement of every single possible observation. For example, imagine if a drug trial on 10,000 people didn’t give us any reliable information about its effectiveness on the rest of the population. If the central limit theorem didn’t hold, we could never really know anything at all.

Second, the CLT is essential because it always approximates a normal distribution. Even if the distribution of the underlying population is anything but ordinary, the CLT says that the distribution of sample means is approximately normal. That fact is valuable because normal distributions adhere to many of the rules of statistical analysis. By being able to say confidently that a number came from a normal distribution, statisticians can rest assured that their work generates valid conclusions.

## How is the CLT used?

The central limit theorem (CLT) is commonly used in inferential statistics (statistics that use small amounts of data to estimate things about bigger datasets) because any sample is prone to some level of sampling error.

For instance, assume we want to know the poverty rate in the state of New Mexico. It might be challenging to look at every resident’s income level. So we could randomly select 100 people who live in the state and ask them how much money they make. The number of people in poverty out of that sample should provide some insight into the rest of the population. However, it’s possible that we randomly selected only rich people. Or, we may have accidentally oversampled the number of low-income earners.

The CLT allows us to determine how much faith we should have in the estimate provided by the sample. For instance, if 19 people in that sample were in poverty, we can assume that the real poverty rate of the population is close to 19%. A statistician might construct a confidence interval (a range of feasible values given a specified level of confidence) from that data to say that the population’s poverty rate is somewhere between 15% and 25%. (This is a fictitious example for illustrative purposes only.)
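The CLT is what justifies the normal-approximation confidence interval for a proportion. Using the fictitious numbers above (19 of 100 people in poverty), a 95% interval works out to roughly 11% to 27%, a slightly wider band than the illustrative range in the text:

```python
import math

# Normal-approximation 95% confidence interval for a proportion,
# using the fictitious numbers from the text: 19 of 100 in poverty.
p_hat = 19 / 100          # sample proportion
n = 100                   # sample size
z = 1.96                  # z-score for 95% confidence

# Margin of error: z times the standard error of the proportion.
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - margin, p_hat + margin
print(f"95% CI: {low:.1%} to {high:.1%}")
```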

The normality of the distribution provided by the CLT allows the use of various hypothesis-testing methods. For instance, statisticians might use the CLT to test a null hypothesis and determine if an observation is statistically significant. One process for doing this is called a t-test, which determines if the difference between outcomes is explainable by randomness or if it indicates something else.
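A proper t-test needs the t distribution (typically via a library such as SciPy); as a simpler stdlib-only sketch of the same idea, here is a one-sample z-test on fictitious die-roll data, asking whether the rolls average significantly more than the fair-die mean of 3.5:

```python
from statistics import NormalDist, mean, stdev
import math

# Fictitious sample of 20 die rolls (for illustration only).
rolls = [4, 6, 3, 5, 4, 2, 6, 5, 4, 3, 5, 6, 4, 5, 3, 6, 4, 5, 2, 6]

x_bar = mean(rolls)                          # sample mean (4.4 here)
se = stdev(rolls) / math.sqrt(len(rolls))    # standard error of the mean
z = (x_bar - 3.5) / se                       # test statistic vs fair-die mean

# Two-sided p-value from the standard normal distribution: a small
# p-value suggests the difference is not just random noise.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```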

## What is the CLT formula?

The central limit theorem (CLT) doesn’t have a formula per se, but there are some things that come out of it. First, the CLT results in one crucial conclusion. The average of all the sample means is equal to the average of the population. Second, the standard deviation (a measure of spread in the dataset) of the population and the distribution of sample means are related. The larger the sample size becomes, the less variance there will be in the distribution of sample means. There is a formula for this relationship.

σx̄ = σ / √n

where:

- σx̄ = standard deviation of the sample means
- σ = standard deviation of the population
- n = sample size

This part of the CLT says that the standard deviation of the distribution of sample means is equal to the population standard deviation divided by the square root of the sample size, n. Put into plain English, if you take bigger samples, the normal distribution of sample means gets narrower.
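The relationship can be checked empirically. For a fair die, the population standard deviation is about 1.708, so the formula predicts that averages of 30 rolls should have a standard deviation of about 1.708/√30 ≈ 0.31. A simulation agrees:

```python
import math
import random
import statistics

random.seed(1)

# Population: the faces of a fair six-sided die.
faces = [1, 2, 3, 4, 5, 6]
sigma = statistics.pstdev(faces)    # population stdev, about 1.708
n = 30                              # sample size

predicted = sigma / math.sqrt(n)    # what the formula says, about 0.31

# Check it empirically: 5,000 sample means of 30 rolls each.
sample_means = [
    statistics.mean(random.choice(faces) for _ in range(n))
    for _ in range(5_000)
]
observed = statistics.stdev(sample_means)

print(f"predicted sigma_xbar: {predicted:.3f}")
print(f"observed  sigma_xbar: {observed:.3f}")
```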

Think about that for just a second. If you take a sample size of one, the average of the sample is the same as the one member in the sample. Randomly selecting samples of one would mean the range of outcomes is the same as the range in the population. Dividing by the square root of one (which is just one) leaves you with the full standard deviation of the population.

If you increase the sample size to 30, it becomes much less likely that you end up with a sample of only the lowest or highest values. That means the average of a larger sample is much less likely to land near the extremes of the population values. In other words, the variance of the average gets smaller as the size of the sample gets larger.

## Does the central limit theorem apply to all distributions?

The great thing about the central limit theorem is that it applies to almost any population distribution you are likely to encounter. The population can consist of data that is heavily skewed in either direction, uniformly distributed, or seemingly random. If you can take random samples from the population and compute the mean of each one, the central limit theorem says those sample means will approximate a normal distribution. That's a big deal in statistics. Having a normal distribution is a huge help when you want to test the data in a variety of ways. The fact that you can get a normal distribution out of a population of any shape is pretty amazing.
