# Standard deviation

In probability and statistics, the **standard deviation** is the most commonly used measure of statistical dispersion. Standard deviation is defined as the square root of the variance. It is defined this way in order to give us a measure of dispersion that 1) is a non-negative number and 2) has the same units as the data.

We distinguish between the standard deviation σ (sigma) of a whole *population* or of a random variable, and the standard deviation *s* of a *sample*. The formulas are given below.

The term standard deviation was introduced to statistics by Karl Pearson (*On the dissection of asymmetrical frequency curves*, 1894).

## Interpretation and application

Simply put, the standard deviation tells us how far a typical member of a sample or population is from the mean value of that sample or population. A large standard deviation suggests that a typical member is far away from the mean. A small standard deviation suggests that members are clustered closely around the mean.

For example, the sets {0,5,9,14} and {5,6,8,9} each have a mean of 7, but the second set has a much smaller standard deviation.
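This can be checked directly with Python's standard-library `statistics` module:

```python
import statistics

# Both sets have mean 7, but very different spreads.
a = [0, 5, 9, 14]
b = [5, 6, 8, 9]

print(statistics.mean(a), statistics.mean(b))  # 7 7
print(statistics.pstdev(a))  # population standard deviation, about 5.15
print(statistics.pstdev(b))  # about 1.58
```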

Standard deviation is often thought of as a measure of uncertainty. In the physical sciences, for example, the standard deviation of a set of repeated measurements gives the precision of those measurements. When deciding whether measurements agree with a prediction, their standard deviation is of crucial importance: if the mean of the measurements is too far away from the prediction (with the distance measured in standard deviations), then we consider the measurements to contradict the prediction, since they fall outside the range of values that could reasonably be expected to occur if the prediction were correct. See prediction interval.
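A minimal sketch of this idea, measuring the distance from the sample mean to a predicted value in units of the sample standard deviation (the helper name and the numbers are our own, chosen only for illustration):

```python
import statistics

def deviations_from_prediction(measurements, predicted):
    """Distance of the sample mean from a predicted value,
    measured in sample standard deviations.
    (Illustrative helper; not a standard library function.)"""
    m = statistics.mean(measurements)
    s = statistics.stdev(measurements)  # sample standard deviation
    return abs(m - predicted) / s

# Hypothetical repeated measurements clustered near 9.80:
data = [9.79, 9.82, 9.81, 9.78, 9.80]

print(deviations_from_prediction(data, 9.81))  # well under 1: agreement
print(deviations_from_prediction(data, 12.0))  # enormous: contradiction
```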

## Definition and shortcut calculation of standard deviation

Suppose we are given a population *x*_{1},...,*x*_{N} of values (which are real numbers). The mean of this population is defined as

$$\overline{x} = \frac{1}{N}\sum_{i=1}^{N} x_i ,$$

and the standard deviation of the population is defined as

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(x_i - \overline{x}\right)^2} .$$

Expanding the square yields the equivalent "shortcut" formula

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2 \;-\; \overline{x}^2} ,$$

which allows σ to be computed in a single pass through the data.

The standard deviation of a random variable *X* is defined as

$$\sigma = \sqrt{\operatorname{E}\!\left(\left(X - \operatorname{E}(X)\right)^2\right)} = \sqrt{\operatorname{E}(X^2) - \left(\operatorname{E}(X)\right)^2} .$$

If *X* takes on the values *x*_{1},...,*x*_{N} with equal probability, then its standard deviation can be computed with the formula given earlier.
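The defining formula and the shortcut formula can be compared on a small population (a quick sanity-check sketch, with an arbitrary data set):

```python
import math

def stdev_definition(xs):
    """Population standard deviation via the defining formula."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / n)

def stdev_shortcut(xs):
    """Single-pass shortcut: sqrt(mean of squares minus square of mean).
    Algebraically equivalent, though it can lose precision when the
    values are large relative to their spread."""
    n = len(xs)
    return math.sqrt(sum(x * x for x in xs) / n - (sum(xs) / n) ** 2)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(stdev_definition(data))  # 2.0
print(stdev_shortcut(data))    # 2.0 (same value)
```

The noted precision caveat is why production code usually prefers the two-pass defining formula or Welford's online algorithm over the naive shortcut.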

Given only a sample of values *x*_{1},...,*x*_{n} from some larger population, many authors define the *sample standard deviation* by

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} \left(x_i - \overline{x}\right)^2} .$$

The reason for dividing by *n* − 1 rather than *n* is that *s*^{2} is then an unbiased estimator for the variance σ^{2} of the underlying population. Note however that *s* itself is *not* an unbiased estimator for the standard deviation σ; it tends to underestimate the population standard deviation.
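Python's `statistics` module exposes both conventions, which makes the difference between the two divisors easy to see:

```python
import statistics

sample = [2, 4, 4, 4, 5, 5, 7, 9]

# Divisor n: treats the data as the entire population.
print(statistics.pstdev(sample))  # 2.0

# Divisor n - 1: the sample standard deviation s defined above;
# always at least as large as the population version.
print(statistics.stdev(sample))   # about 2.138
```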

## Rules for normally distributed data

In practice, one often assumes that data are approximately normally distributed. If that assumption can be justified, then about 68% of the values lie within 1 standard deviation of the mean, about 95% within 2 standard deviations, and about 99.7% within 3 standard deviations. This is known as the "68-95-99.7 rule".
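The rule can be checked empirically by drawing a large sample from a normal distribution (a simulation sketch using the standard library, with a fixed seed for reproducibility):

```python
import random
import statistics

random.seed(0)  # fixed seed so the experiment is reproducible
data = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mu = statistics.mean(data)
sigma = statistics.pstdev(data)

# Fraction of values within k standard deviations of the mean.
for k, rule in [(1, 0.68), (2, 0.95), (3, 0.997)]:
    frac = sum(abs(x - mu) <= k * sigma for x in data) / len(data)
    print(f"within {k} sigma: {frac:.3f}  (rule says about {rule})")
```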

## Relation between standard deviation and mean

The mean and the standard deviation of a data set go hand in hand and are usually reported together. In a certain sense, the standard deviation is the "natural" measure of statistical dispersion if the center of the data is measured by the mean. The precise statement is the following: suppose *x*_{1},...,*x*_{N} are real numbers and define the function

$$\sigma(r) = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(x_i - r\right)^2} .$$

Then σ(*r*) has a unique minimum at the mean *r* = x̄, and its minimum value is the standard deviation σ.
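This minimization property can be illustrated numerically (a sketch with an arbitrary data set of our choosing):

```python
import math

def sigma_r(xs, r):
    """Root-mean-square deviation of the data from the point r."""
    return math.sqrt(sum((x - r) ** 2 for x in xs) / len(xs))

data = [1.0, 3.0, 4.0, 8.0]
mean = sum(data) / len(data)  # 4.0

# sigma_r is smallest at r = mean; any other r gives a larger value,
# and the minimum value is the standard deviation itself.
at_mean = sigma_r(data, mean)
print(at_mean)                              # sqrt(6.5), about 2.55
print(sigma_r(data, mean - 0.5) > at_mean)  # True
print(sigma_r(data, mean + 0.5) > at_mean)  # True
```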

## Geometric interpretation

To gain some geometric insight, we start with a population of three values, *x*_{1},*x*_{2},*x*_{3}. This defines a point *P* = (*x*_{1},*x*_{2},*x*_{3}) in **R**^{3}. Consider the line *L* = {(*r*,*r*,*r*) : *r* in **R**}. It is the "main diagonal" going through the origin. If our three given values were all equal, then the standard deviation would be zero and *P* would lie on *L*. So it is not unreasonable to assume that the standard deviation is related to the *distance* from *P* to *L*. And that is indeed the case. Moving orthogonally from *P* to the line *L*, one hits the point

$$R = (\overline{x}, \overline{x}, \overline{x}),$$

whose coordinates are all equal to the mean of the three values. The distance between *P* and *R* (which is the same as the distance between *P* and the line *L*) is given by σ√3. An analogous formula (with 3 replaced by *N*) is also valid for a population of *N* values; we then have to work in **R**^{N}.
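The identity distance(*P*, *L*) = σ√N can be verified numerically for the three-dimensional case (a sketch with arbitrary sample values):

```python
import math

x = [2.0, 5.0, 11.0]  # a population of N = 3 values, i.e. a point P in R^3
n = len(x)
mean = sum(x) / n
sigma = math.sqrt(sum((v - mean) ** 2 for v in x) / n)

# R = (mean, mean, mean) is the orthogonal projection of P onto the
# diagonal line L, so |P - R| is the distance from P to L.
dist_P_to_L = math.sqrt(sum((v - mean) ** 2 for v in x))

print(dist_P_to_L)           # distance from P to the line L
print(sigma * math.sqrt(n))  # the same number: sigma * sqrt(3)
```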

## Standard deviation as a confidence level

## Related articles

- Chebyshev's inequality
- saturation (color theory)
- root mean square
- mean
- skewness
- kurtosis
- raw score
- standard score