Standard Deviation vs Mean
In descriptive and inferential statistics, several indices are used to describe a data set corresponding to its central tendency, dispersion and skewness. In statistical inference, these are commonly known as estimators since they estimate the population parameter values.
Central tendency locates the center of the distribution of values. Mean, mode and median are the most commonly used indices for describing the central tendency of a data set. Dispersion is the amount of spread of data from the center of the distribution. Range and standard deviation are the most commonly used measures of dispersion. Pearson’s skewness coefficients are used to describe the skewness of a distribution of data. Here, skewness refers to whether the data set is symmetric about its center and, if not, how skewed it is.
What is mean?
Mean is the most commonly used index of central tendency. Given a data set, the mean is calculated by taking the sum of all the data values and then dividing it by the number of data points. For example, the weights of 10 people (in kilograms) are measured to be 70, 62, 65, 72, 80, 70, 63, 72, 77 and 79. Then the mean weight of the ten people (in kilograms) can be calculated as follows. The sum of the weights is 70 + 62 + 65 + 72 + 80 + 70 + 63 + 72 + 77 + 79 = 710. Mean = (sum) / (number of data points) = 710 / 10 = 71 (in kilograms).
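As a quick check, the calculation above can be sketched in Python:

```python
# Weights of the ten people from the example (in kilograms).
weights = [70, 62, 65, 72, 80, 70, 63, 72, 77, 79]

# Mean = sum of the values divided by the number of data points.
mean = sum(weights) / len(weights)
print(mean)  # 71.0
```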
As in this particular example, the mean value of a data set may not be a data point of the set but will be unique for a given data set. Mean will have the same units as the original data. Therefore, it can be marked on the same axis as the data and can be used in comparisons. Also, there is no sign restriction for the mean of a data set. It may be negative, zero or positive, as the sum of the data set can be negative, zero or positive.
What is standard deviation?
Standard deviation is the most commonly used index of dispersion. To calculate the standard deviation, first the deviations of the data values from the mean are calculated. The square root of the mean of the squared deviations is called the standard deviation.
In the previous example, the respective deviations from the mean are (70 − 71) = −1, (62 − 71) = −9, (65 − 71) = −6, (72 − 71) = 1, (80 − 71) = 9, (70 − 71) = −1, (63 − 71) = −8, (72 − 71) = 1, (77 − 71) = 6 and (79 − 71) = 8. The sum of the squared deviations is (−1)² + (−9)² + (−6)² + 1² + 9² + (−1)² + (−8)² + 1² + 6² + 8² = 366. The standard deviation is √(366/10) ≈ 6.05 (in kilograms). From this, it can be concluded that the majority of the data lies in the interval 71 ± 6.05, provided the data set is not greatly skewed, and that is indeed so in this particular example.
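The same computation can be sketched in Python (this is the population standard deviation, dividing by n rather than n − 1, to match the calculation above):

```python
import math

weights = [70, 62, 65, 72, 80, 70, 63, 72, 77, 79]
mean = sum(weights) / len(weights)  # 71.0

# Deviations of each value from the mean, then the sum of their squares.
deviations = [w - mean for w in weights]
sum_sq = sum(d ** 2 for d in deviations)  # 366.0

# Population standard deviation: square root of the mean squared deviation.
std_dev = math.sqrt(sum_sq / len(weights))
print(round(std_dev, 2))  # 6.05
```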
Since the standard deviation has the same units as the original data, it gives a measure of how far the data deviate from the center; the greater the standard deviation, the greater the dispersion. Also, the standard deviation will be a nonnegative value regardless of the nature of the data in the data set.
What is the difference between standard deviation and mean?
• Standard deviation is a measure of dispersion from the center, whereas mean measures the location of the center of a data set.
• Standard deviation is always a nonnegative value, but mean can take any real value.
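The second point can be illustrated with a small hypothetical data set containing negative values (e.g. temperatures in °C), using Python's standard `statistics` module:

```python
import statistics

# Hypothetical winter temperatures in °C: the data (and the mean) are negative.
temps = [-5.0, -3.0, -10.0, -2.0]

print(statistics.mean(temps))    # -5.0: the mean takes the sign of the data
print(statistics.pstdev(temps))  # population standard deviation, always >= 0
```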