Standard deviation

In probability and statistics, the standard deviation is the most commonly used measure of statistical dispersion. It is defined as the square root of the variance, so that the resulting measure of dispersion is (1) a non-negative number and (2) expressed in the same units as the data.

We distinguish between the standard deviation σ (sigma) of a whole population or of a random variable, and the standard deviation s of a sample. The formulas are given below.

The term standard deviation was introduced to statistics by Karl Pearson (On the dissection of asymmetrical frequency curves, 1894).

Table of contents
1 Interpretation and application
2 Definition and shortcut calculation of standard deviation
3 Rules for normally distributed data
4 Relation between standard deviation and mean
5 Geometric interpretation
6 Standard deviation as a confidence level
7 Related articles

Interpretation and application

Simply put, the standard deviation tells us how far a typical member of a sample or population is from the mean value of that sample or population. A large standard deviation suggests that a typical member is far away from the mean. A small standard deviation suggests that members are clustered closely around the mean.

For example, the sets {0,5,9,14} and {5,6,8,9} each have a mean of 7, but the second set has a much smaller standard deviation.
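This example can be checked with Python's standard library; pstdev computes the population standard deviation (dividing by N):

```python
import statistics

a = [0, 5, 9, 14]
b = [5, 6, 8, 9]

# Both sets share the same mean of 7...
assert statistics.mean(a) == statistics.mean(b) == 7

# ...but the first set is far more spread out.
print(statistics.pstdev(a))  # about 5.148
print(statistics.pstdev(b))  # about 1.581
```

The deviations from 7 in the first set are (-7, -2, 2, 7) versus (-2, -1, 1, 2) in the second, which is why its standard deviation is so much larger.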

Standard deviation is often thought of as a measure of uncertainty. In the physical sciences, for example, the standard deviation of a set of repeated measurements gives the precision of those measurements. When deciding whether measurements agree with a prediction, their standard deviation is of crucial importance: if the mean of the measurements is too far from the prediction (with the distance measured in standard deviations), we consider the measurements as contradicting the prediction. This makes sense, since they fall outside the range of values that could reasonably be expected to occur if the prediction were correct. See prediction interval.

Definition and shortcut calculation of standard deviation

Suppose we are given a population x1,...,xN of values (which are real numbers). The mean of this population is defined as

    μ = (x1 + x2 + ... + xN) / N

(see summation notation) and the standard deviation of this population is defined as

    σ = √( ((x1 − μ)² + (x2 − μ)² + ... + (xN − μ)²) / N )

A slightly faster way to compute the same number is given by the formula

    σ = √( (x1² + x2² + ... + xN²) / N − μ² )
The standard deviation of a random variable X is defined as

    σ = √( E[(X − E(X))²] ) = √( E(X²) − (E(X))² )
Note that not all random variables have a standard deviation, since these expected values need not exist. If the random variable X takes on the values x1,...,xN with equal probability, then its standard deviation can be computed with the formula given earlier.
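The defining formula and the shortcut formula can be checked against each other numerically; here is a small sketch (the data values are illustrative):

```python
import math

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(xs)
mu = sum(xs) / n

# Definition: root of the mean squared deviation from the mean.
sigma_def = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)

# Shortcut: mean of the squares minus the square of the mean.
sigma_fast = math.sqrt(sum(x * x for x in xs) / n - mu ** 2)

print(mu, sigma_def, sigma_fast)  # 5.0 2.0 2.0
```

The shortcut needs only a single pass over the data (accumulating the sum and the sum of squares), which is why it is described as faster, though it can lose precision in floating point when the mean is large relative to the spread.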

Given only a sample of values x1,...,xn from some larger population, many authors define the sample standard deviation by

    s = √( ((x1 − x̄)² + (x2 − x̄)² + ... + (xn − x̄)²) / (n − 1) )

where x̄ denotes the sample mean.
The reason for this definition is that s2 is an unbiased estimator for the variance σ2 of the underlying population. Note however that s itself is not an unbiased estimator for the standard deviation σ; it tends to underestimate the population standard deviation.
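The n − 1 divisor (Bessel's correction) is what Python's statistics.stdev uses, while statistics.pstdev divides by n; a brief comparison sketch:

```python
import math
import statistics

sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(sample)
xbar = sum(sample) / n

# Sample standard deviation: divide the squared deviations by n - 1.
s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))

assert math.isclose(s, statistics.stdev(sample))   # stdev uses n - 1
assert s > statistics.pstdev(sample)               # pstdev divides by n
print(s)
```

Dividing by the larger n would systematically underestimate the population variance, because deviations are measured from the sample mean (which is fitted to the data) rather than the true mean.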

Rules for normally distributed data

In practice, one often assumes that data are approximately normally distributed. If that assumption is justified, then about 68% of the values lie within one standard deviation of the mean, about 95% within two standard deviations, and about 99.7% within three standard deviations. This is known as the "68-95-99.7 rule".
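For a normal distribution, the exact fraction of values within k standard deviations of the mean is erf(k/√2), so the rule can be checked directly:

```python
import math

def within(k):
    """Fraction of a normal distribution within k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(within(k), 4))
# 1 0.6827
# 2 0.9545
# 3 0.9973
```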

Relation between standard deviation and mean

The mean and the standard deviation of a data set go hand in hand and are usually reported together. In a certain sense, the standard deviation is the "natural" measure of statistical dispersion if the center of the data is measured by the mean. The precise statement is the following: suppose x1,...,xN are real numbers and define the function

    σ(r) = √( ((x1 − r)² + (x2 − r)² + ... + (xN − r)²) / N )

Using calculus, it is not difficult to show that σ(r) has a unique minimum at

    r = (x1 + x2 + ... + xN) / N

that is, at the mean of the values, where the minimum value σ(r) is the standard deviation.
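That the minimum sits at the mean can also be checked numerically; a brief sketch with illustrative values:

```python
import math

xs = [1.0, 3.0, 4.0, 8.0]
mu = sum(xs) / len(xs)

def sigma(r):
    """Root mean squared deviation of the data from the point r."""
    return math.sqrt(sum((x - r) ** 2 for x in xs) / len(xs))

# Nudging r away from the mean in either direction increases sigma(r).
assert sigma(mu) < sigma(mu + 0.1)
assert sigma(mu) < sigma(mu - 0.1)
print(mu, sigma(mu))
```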
Geometric interpretation

To gain some geometric insight, we start with a population of three values, x1,x2,x3. This defines a point P = (x1,x2,x3) in R3. Consider the line L = {(r,r,r) : r in R}. It is the "main diagonal" going through the origin. If our three given values were all equal, then the standard deviation would be zero and P would lie on L. So it is not unreasonable to expect that the standard deviation is related to the distance of P from L. And that is indeed the case. Moving orthogonally from P to the line L, one hits the point

    R = (μ, μ, μ), where μ = (x1 + x2 + x3) / 3

whose coordinates are the mean of the values we started out with. A little algebra shows that the distance between P and R (which is the same as the distance between P and the line L) is given by σ√3. An analogous formula (with 3 replaced by N) is also valid for a population of N values; we then have to work in RN.
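This geometric claim is easy to verify numerically: the Euclidean distance from P to its orthogonal projection onto the diagonal equals σ√N. A sketch with an illustrative point:

```python
import math

p = [2.0, 7.0, 9.0]          # the point P = (x1, x2, x3)
n = len(p)
mu = sum(p) / n               # projection onto the diagonal is R = (mu, mu, mu)

# Distance from P to R, and the population standard deviation of the coordinates.
dist = math.sqrt(sum((x - mu) ** 2 for x in p))
sigma = math.sqrt(sum((x - mu) ** 2 for x in p) / n)

assert math.isclose(dist, sigma * math.sqrt(n))
print(dist, sigma)
```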

Standard deviation as a confidence level

In experimental science, the standard deviation is used to quantify the confidence one has that a measured event is the result of a real signal rather than just statistical noise: the result is quoted as the number of standard deviations (sigma) by which it differs from what noise alone would produce. The higher the sigma confidence level, the less likely it is that the measured event is a result of noise.

Related articles