The least-squares regression line is a linear representation of the general trend of our data.
Once we have determined the least-squares regression line, we can use it as a model to predict the likely value of the response variable based on a given value of the explanatory variable.
The process of predicting has two parts:
Substitute the x value into the rule for the least-squares regression line to get the predicted \hat{y} value
Then we need to consider whether our prediction is reliable or not
We can make predictions 'by hand' either using a graph of the line of best fit or substituting into the equation of the least-squares regression line.
The example below shows a scatter plot with the least-squares regression line for the relationship between daily ice-cream sales and the maximum daily temperature.
Although there is a clear trend of increasing sales as the temperature increases, it would be difficult to predict the sales for a given day from the raw data. However, we can use the least-squares regression line.
If we want to predict the sales on a day when the temperature reaches 30 degrees, we could do so with these steps that follow the arrows on the graph:
From 30 degrees on the horizontal axis, draw a vertical line to intersect with the line of best fit.
From the point of intersection, draw a horizontal line to the vertical axis.
Read the predicted sales, approximately 245 ice-creams, from the vertical axis.
When we are making predictions from a graph we should take care to work accurately, but still expect a small amount of variation due to the limited precision of working with graphs.
For the example above, the equation of the least-squares regression line is: S=10\times T-45, where S is the number of ice-creams sold and T is the maximum daily temperature.
To predict the sales on a day when the maximum temperature is 30 degrees, we simply substitute the temperature into the equation: \begin{aligned} S &= 10 \times 30 - 45 \\ &= 245 \end{aligned} So, we can predict that 245 ice-creams will be sold on that day.
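The same substitution can be carried out in software. Below is a minimal sketch, assuming Python; the function name predict_sales is illustrative and not part of the example.

```python
def predict_sales(temperature):
    """Predict ice-cream sales from the regression line S = 10*T - 45."""
    return 10 * temperature - 45

print(predict_sales(30))  # prints 245, matching the prediction above
```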
A bivariate data set has a line of best fit with equation y=-8.71x+6.79.
Predict the value of y when x=3.49.
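As a worked sketch of the substitution (the arithmetic below follows the same method as the ice-cream example, rounded to two decimal places):

\begin{aligned} y &= -8.71 \times 3.49 + 6.79 \\ &= -30.3979 + 6.79 \\ &= -23.6079 \\ &\approx -23.61 \end{aligned}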
An important consideration when we are making predictions is recognising whether the prediction is within the range of data values for which we have actual measurements. If it is, we refer to the prediction as an interpolation. If not, we refer to the prediction as an extrapolation.
The diagram below illustrates these terms.
Interpolation means you have used an x value in your prediction that is within the range of x values in the data that you were working with.
Extrapolation means you have used an x value in your prediction that is outside the range of x values in the data.
The scatter plots below are annotated to show examples of interpolation (left) and extrapolation (right) from the line of best fit.
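To make the distinction concrete, here is a minimal sketch, assuming Python; the helper name is_interpolation and the sample x values are illustrative assumptions, not part of the lesson.

```python
def is_interpolation(x_new, x_data):
    """Return True if x_new lies within the range of observed x values."""
    return min(x_data) <= x_new <= max(x_data)

temperatures = [18, 22, 25, 27, 30, 33, 36]  # illustrative x values only
print(is_interpolation(28, temperatures))    # True: within the data range (interpolation)
print(is_interpolation(45, temperatures))    # False: outside the range (extrapolation)
```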
To judge the reliability of a prediction we need to consider three things:
How strong is the correlation?
Is the data interpolated or extrapolated?
How many points are contained in the data set?
We have already seen how to calculate the correlation coefficient and interpret the strength of a relationship, using this chart.
If the correlation is weak, then the data values are more widely scattered, so there will be greater uncertainty in the prediction than for data with a strong correlation.
When we are extrapolating, we are making a prediction for values for which we have no similar measurements. It is possible that a linear relationship is not valid outside a certain range, so we always have to treat an extrapolation as unreliable.
Reliability of predictions:
If we are interpolating and the correlation is strong, then the prediction will be reliable.
If we are interpolating and the correlation is moderate or weak, then the prediction is less reliable.
If we are extrapolating, the prediction is unreliable, especially if the correlation is weak or we extrapolate far beyond the range of available data.
Predictions from a data set with a large number of points (e.g. more than 30) will be more reliable than predictions from a small data set.
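A minimal sketch of these reliability checks, assuming Python with NumPy; the function name and the cut-off for a 'strong' correlation are illustrative assumptions, not part of the lesson.

```python
import numpy as np

def prediction_reliability(x_new, x_data, y_data):
    """Describe reliability from correlation strength and interpolation vs extrapolation."""
    r = np.corrcoef(x_data, y_data)[0, 1]      # Pearson correlation coefficient
    inside = min(x_data) <= x_new <= max(x_data)
    if not inside:
        return "extrapolation: unreliable"
    if abs(r) >= 0.75:                          # 'strong' cut-off chosen for illustration
        return "interpolation with strong correlation: reliable"
    return "interpolation with moderate or weak correlation: less reliable"
```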
Research was conducted on the number of cigarettes smoked during pregnancy and the birth weights of newborn babies.
\text{Average number of cigarettes per day } (x) | 46.3 | 13.0 | 21.4 | 25.0 | 8.6 | 36.5 | 1.0 | 17.9 | 10.6 | 13.4 | 37.3 | 18.5 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
\text{Birth weight in kilograms } (y) | 3.9 | 5.8 | 5.0 | 4.8 | 5.5 | 4.5 | 7.0 | 5.1 | 5.5 | 5.1 | 3.8 | 5.7 |
Using technology, calculate the correlation coefficient between the average number of cigarettes per day and birth weight. Round your answer to three decimal places.
Choose the description which best describes the statistical relationship between these two variables.
Use technology to form an equation for the least squares regression line of y on x.
Give all values to two decimal places. Give the equation of the line in the form y=mx+c.
Use your regression line to predict the birth weight of a newborn whose mother smoked on average 5 cigarettes per day. Round your answer to two decimal places.
Choose the description which best describes the validity of the prediction in part (d).
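As a rough illustration of how 'technology' could be used for parts (a) to (d), here is a sketch assuming Python with NumPy; this is not prescribed by the question, and any statistics package or calculator with regression functions would do.

```python
import numpy as np

x = np.array([46.3, 13.0, 21.4, 25.0, 8.6, 36.5, 1.0, 17.9, 10.6, 13.4, 37.3, 18.5])
y = np.array([3.9, 5.8, 5.0, 4.8, 5.5, 4.5, 7.0, 5.1, 5.5, 5.1, 3.8, 5.7])

r = np.corrcoef(x, y)[0, 1]     # part (a): correlation coefficient
m, c = np.polyfit(x, y, 1)      # part (c): least-squares slope and intercept
y_pred = m * 5 + c              # part (d): predicted birth weight at x = 5

print(round(r, 3))
print(round(m, 2), round(c, 2))
print(round(y_pred, 2))
```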