Suppose a data set represents the heights of different towers. The mean of this data set gives an idea of a typical height. Calculating the mean can be seen as rearranging the blocks so that all the towers have the same height.
After the blocks are rearranged, the towers all share the same height, and this common height is the mean. If the heights are written as $x_1, x_2, \ldots, x_n$, then the mean is sometimes written as $\bar{x}$ and is calculated as $$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}.$$ The towers' mean height can then be found by substituting their heights into this formula.
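The formula above can be sketched in a few lines of code. The tower heights here are hypothetical, since the article's actual data set is not reproduced.

```python
# Hypothetical tower heights (assumed example values).
heights = [2, 4, 3, 5, 6]

# The mean: add all the values, then divide by the number of values n.
n = len(heights)
mean = sum(heights) / n  # (2 + 4 + 3 + 5 + 6) / 5
print(mean)  # 4.0
```

Rearranging the blocks so every tower has height 4 uses exactly the same total number of blocks, which is why the mean can be pictured this way.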
While on vacation in Mexico, Peter finds a rose species he has never seen before. He decides to study how many petals each flower has. The result of his study is this. How many petals do the flowers have on average?
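Since Peter's actual counts are not reproduced here, the petal counts below are hypothetical; they only illustrate how such an average would be computed.

```python
# Hypothetical petal counts for eight flowers (assumed example values).
petals = [5, 5, 6, 5, 4, 5, 6, 4]

# The average (mean) number of petals per flower.
average = sum(petals) / len(petals)
print(average)  # 5.0
```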
A measure of spread is a way of quantifying how spread out, or different, the points in a data set are. A small spread means the data points are similar, while a large spread means they are different. This is illustrated by the two data sets below. Both have the same mean, median, and mode, but the second data set has a larger spread because its data points differ more from one another.
Some commonly used measures of spread are range, mean absolute deviation, standard deviation, and interquartile range. These are often used together with a measure of center to give an idea both of what a typical value is and how much the data can be expected to deviate from it.
Standard deviation is a commonly used measure of spread. It measures how much a randomly selected value from a data set is expected to differ from the mean. The standard deviation is denoted by the Greek letter $\sigma$, which is read as "sigma." To calculate a standard deviation, the rule $$\sigma = \sqrt{\frac{(x_1-\bar{x})^2 + (x_2-\bar{x})^2 + \cdots + (x_n-\bar{x})^2}{n}}$$ is used, where $n$ is the number of values in the data set and $\bar{x}$ is the mean of the set.
The standard deviation, $\sigma$, of a data set is calculated using the rule $$\sigma = \sqrt{\frac{(x_1-\bar{x})^2 + (x_2-\bar{x})^2 + \cdots + (x_n-\bar{x})^2}{n}},$$ where $n$ is the number of values in the data set and $\bar{x}$ is the mean of the set. Performing this calculation in one step makes for a convoluted expression, so it is best divided into a few smaller steps. Consider the following data set as an example.
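The rule can be translated directly into a short function. This is a minimal sketch of the population standard deviation (dividing by $n$, as in the formula above); the sample data passed in at the end is an assumed example, not the article's data set.

```python
from math import sqrt

def standard_deviation(values):
    """Population standard deviation: the square root of the
    mean of the squared deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    squared_deviations = [(x - mean) ** 2 for x in values]
    return sqrt(sum(squared_deviations) / n)

# Assumed example data: mean is 5, squared deviations sum to 32.
print(standard_deviation([2, 4, 4, 4, 5, 5, 7, 9]))  # 2.0
```

Note that Python's standard library offers the same calculation as `statistics.pstdev`; the version with an $n-1$ denominator, `statistics.stdev`, is the *sample* standard deviation and gives a different result.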
First, the mean, $\bar{x}$, should be calculated by adding the data values and dividing by the number of values in the set, $n$, which is the denominator.
For each data value $x_i$, the deviation from the mean, $x_i - \bar{x}$, can now be calculated and added to a table. This shows how much each data point varies from the mean.
Square the deviations, $(x_i - \bar{x})^2$, and add them to a new column in the table.
The squared deviations should then be added together and divided by the number of data values. In other words, the mean of the squared deviations is found. This value, $\sigma^2$, is called the variance of the data set.
Finally, take the square root of the quotient just found to get the standard deviation. Here, the exact fraction is used instead of the rounded quotient to avoid rounding errors. Thus, a randomly chosen value from this data set is expected to deviate from the mean by roughly this many units.
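The whole procedure above can be followed step by step in code. The data values here are assumed example values, since the article's data set is not reproduced; each step mirrors one stage of the calculation.

```python
from math import sqrt

# Assumed example data set.
data = [1, 2, 4, 5, 8]
n = len(data)

# Step 1: the mean.
mean = sum(data) / n                    # 20 / 5 = 4.0

# Step 2: the deviation of each value from the mean.
deviations = [x - mean for x in data]   # [-3.0, -2.0, 0.0, 1.0, 4.0]

# Step 3: the squared deviations.
squared = [d ** 2 for d in deviations]  # [9.0, 4.0, 0.0, 1.0, 16.0]

# Step 4: the variance — the mean of the squared deviations.
variance = sum(squared) / n             # 30 / 5 = 6.0

# Step 5: the standard deviation — the square root of the variance.
sigma = sqrt(variance)
print(round(sigma, 2))  # 2.45
```

Working with the exact fraction 30/5 until the final square root, as the text recommends, avoids compounding rounding errors from intermediate steps.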