 # 2.02 Making predictions


## Making predictions

The least-squares regression line is a linear representation of the general trend of our data.

Once we have determined the least-squares regression line, we can use it as a model to predict the likely value of the response variable based on a given value of the explanatory variable.

The process of predicting has two parts:

• Substitute the $x$ value into the rule for the least-squares regression line to get the predicted $y$ value
• Consider whether the prediction is reliable

We can make predictions 'by hand' either using a graph of the line of best fit or substituting into the equation of the least-squares regression line.

### Predictions from a graph

The example below shows a scatter plot, with the least-squares regression line for the relationship of daily ice-cream sales versus the maximum daily temperature.

Although there is a clear trend of increasing sales as the temperature increases, it would be difficult to predict the sales for a given day from the raw data.  However, we can use the least-squares regression line.

If we want to predict the sales on a day when the temperature reaches $30$ degrees, we could do so with these steps that follow the red arrows on the graph:

1. From $30$ degrees on the horizontal axis, draw a vertical line to intersect with the line of best fit.
2. From the point of intersection, draw a horizontal line to the vertical axis.
3. Read the predicted sales, approximately $245$ ice-creams, from the vertical axis.

When we are making predictions from a graph we should take care to work accurately, but still expect a small amount of variation due to the limited precision of working with graphs.

### Predictions from an equation

For the example above, the equation of the least-squares regression line is

$S = 10\times T - 45$

where $S$ is the number of ice-cream sales and $T$ is the maximum daily temperature.

To predict the sales on a day when the maximum temperature is $30$ degrees, we simply substitute the temperature into the equation:

$S = 10\times 30 - 45 = 255$

So, we can predict that $255$ ice-creams will be sold on that day.
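The substitution above is easy to sketch in code. A minimal example using the regression equation from this lesson (the function name is just for illustration):

```python
def predict_sales(temperature):
    """Predict ice-cream sales from the regression line S = 10*T - 45."""
    return 10 * temperature - 45

print(predict_sales(30))  # 255
```

Any least-squares regression line can be used the same way: substitute the explanatory value and evaluate.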

### Making predictions with CAS

Let's look at some examples to see how we can make a prediction once we've input our data and calculated the least-squares regression line. Select the brand of calculator you use below to work through an example.

#### Casio ClassPad

How to use the CASIO Classpad to complete the following tasks regarding linear regression and making predictions.

Consider the data obtained from a chemical process where the yield of the process is thought to be linearly related to the reaction temperature.

| Temperature ($x$, $^\circ$C) | $50$ | $53$ | $54$ | $56$ | $59$ | $62$ | $65$ | $67$ | $71$ | $74$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Yield ($y$, g) | $122$ | $118$ | $128$ | $125$ | $136$ | $144$ | $142$ | $149$ | $161$ | $168$ |
1. Using your calculator, find an equation for the least-squares regression line of $y$ on $x$.

Give your answer in the form $y=ax+b$. Give all values to two decimal places.

2. What yield is predicted at a reaction temperature of $60^\circ$C?

3. What yield is predicted at a reaction temperature of $85^\circ$C?

4. What temperature gives a predicted yield of $155$ g?

#### TI Nspire

How to use the TI Nspire to complete the following tasks regarding linear regression and making predictions.

Consider the data obtained from a chemical process where the yield of the process is thought to be linearly related to the reaction temperature.

| Temperature ($x$, $^\circ$C) | $50$ | $53$ | $54$ | $56$ | $59$ | $62$ | $65$ | $67$ | $71$ | $74$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Yield ($y$, g) | $122$ | $118$ | $128$ | $125$ | $136$ | $144$ | $142$ | $149$ | $161$ | $168$ |
1. Using your calculator, find an equation for the least-squares regression line of $y$ on $x$.

Give your answer in the form $y=ax+b$. Give all values to two decimal places.

2. What yield is predicted at a reaction temperature of $60^\circ$C?

3. What yield is predicted at a reaction temperature of $85^\circ$C?

4. What temperature gives a predicted yield of $155$ g?
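Outside of a CAS, the same coefficients can be computed directly from the summary statistics. Here is a minimal pure-Python sketch using the temperature and yield data from the table above; `least_squares_line` is a hypothetical helper written for this example, not a library function:

```python
def least_squares_line(xs, ys):
    """Return slope a and intercept b of the least-squares line y = ax + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # corrected sum of products S_xy and corrected sum of squares S_xx
    s_xy = sum(x * y for x, y in zip(xs, ys)) - n * mean_x * mean_y
    s_xx = sum(x * x for x in xs) - n * mean_x ** 2
    a = s_xy / s_xx
    return a, mean_y - a * mean_x

temps = [50, 53, 54, 56, 59, 62, 65, 67, 71, 74]
yields = [122, 118, 128, 125, 136, 144, 142, 149, 161, 168]

a, b = least_squares_line(temps, yields)
print(f"y = {a:.2f}x + {b:.2f}")  # y = 2.02x + 15.86
print(round(a * 60 + b, 2))       # predicted yield at 60 °C
print(round((155 - b) / a, 1))    # temperature giving a predicted yield of 155 g
```

Note that $60^\circ$C sits inside the data range ($50$ to $74$), while predicting at $85^\circ$C requires extrapolating beyond it.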

#### Practice questions

##### Question 1

A bivariate data set has a line of best fit with equation $y=-8.71x+6.79$.

Predict the value of $y$ when $x=3.49$.

## Interpolation and extrapolation

An important consideration when we are making predictions is recognising whether the prediction is within the range of data values for which we have actual measurements.  If it is, we refer to the prediction as interpolation.  If not, we refer to the prediction as extrapolation.

When a bivariate data set generates a line of best fit, the range of the $x$-values in the data can be highlighted. Making predictions within this range is interpolation, and making predictions outside this range is extrapolation.

Interpolation means you have used an $x$ value in your prediction that is within the range of $x$ values in the data that you were working with.

Extrapolation means you have used an $x$ value in your prediction that is outside the range of $x$ values in the data.
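These two definitions amount to a simple range check, which can be sketched as follows (the helper name is mine, chosen for this example, and the data is the temperature set from earlier):

```python
def prediction_type(x, data_x):
    """Classify a prediction at x as interpolation or extrapolation,
    relative to the x-values the model was fitted on."""
    return "interpolation" if min(data_x) <= x <= max(data_x) else "extrapolation"

temps = [50, 53, 54, 56, 59, 62, 65, 67, 71, 74]
print(prediction_type(60, temps))  # interpolation
print(prediction_type(85, temps))  # extrapolation
```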

The scatter plots below are annotated to show examples of interpolation (left) and extrapolation (right) from the line of best fit.

## Reliability of predictions

To judge the reliability of the prediction we need to consider three things:

• How strong is the correlation?
• Is the data interpolated or extrapolated?
• How many points are contained in the data set?

We have already seen how to calculate the correlation coefficient and interpret the strength of a relationship.

If the correlation is weak, then the data values are more widely scattered, so there will be greater uncertainty in the prediction than for data with a strong correlation.

An interpolated prediction is made within the range of $x$-values upon which the model was based. Since the model is designed to align closely to the data in this range, interpolated predictions will be more reliable than extrapolated predictions.

When we are extrapolating, we are assuming that the linear model continues to hold. This assumption is generally valid when making a prediction relatively close to the given data range. A sudden change in trend is unlikely for most of the phenomena we will model, so we will consider such predictions reliable when using a strong model.

When we extrapolate well beyond the data range, we will consider the predictions unreliable even if the linear trend continues. Since we are predicting for values where we have no similar measurements, the underlying linear relationship may be slightly different from the one generated by the given data, and predictions will fare worse the further we go beyond the data range.

There are also many cases where it is clear the linear trend cannot continue well beyond the data range. In such cases, the predictions will be unreliable and may also be unreasonable - that is, they may give values that are not possible. For example, a model may show a negative linear trend between a chicken's age and the number of eggs it lays per week; this linear model cannot continue indefinitely, as it would eventually predict that the chicken lays a negative number of eggs.

Reliability of predictions

If we are interpolating:

• And the correlation is weak, then the prediction will not be reliable.
• And the correlation is moderate, then the prediction is moderately reliable.
• And the correlation is strong, then the prediction will be reliable.

If we are extrapolating:

• And the correlation is weak, then the prediction will not be reliable.
• Just beyond the data range and the correlation is moderate, then the prediction is moderately reliable.
• Just beyond the data range and the correlation is strong, then the prediction will be reliable.
• Well beyond the data range, then the prediction will not be reliable and may be unreasonable.

Predictions from a data set with a large number of points (e.g. more than $30$) will be more reliable than predictions from a small data set.
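The rules above can be collected into one rough classifier. This is only a sketch: the strength cutoffs ($|r|\ge0.75$ strong, $0.5\le|r|<0.75$ moderate) and the "within 10% of the data range" definition of "just beyond" are illustrative assumptions, not fixed conventions.

```python
def reliability(r, x, data_x, near_margin=0.1):
    """Judge the reliability of a prediction at x.
    Strength cutoffs and near_margin are illustrative assumptions."""
    strength = "strong" if abs(r) >= 0.75 else "moderate" if abs(r) >= 0.5 else "weak"
    lo, hi = min(data_x), max(data_x)
    span = hi - lo
    if lo <= x <= hi:
        position = "interpolation"
    elif lo - near_margin * span <= x <= hi + near_margin * span:
        position = "just beyond the data range"
    else:
        position = "well beyond the data range"
    if position == "well beyond the data range" or strength == "weak":
        return position, "not reliable"
    if strength == "moderate":
        return position, "moderately reliable"
    return position, "reliable"

temps = [50, 53, 54, 56, 59, 62, 65, 67, 71, 74]
# with a strong correlation of, say, r = 0.9:
print(reliability(0.9, 60, temps))  # ('interpolation', 'reliable')
print(reliability(0.9, 85, temps))  # ('well beyond the data range', 'not reliable')
```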

#### Worked example

A farmer sprays his wheat fields with a fertiliser. The data below gives the yield of wheat per hectare for various spray concentrations:

| Spray concentration ($x$ ml/l) | $1$ | $2$ | $4$ | $7$ | $8$ | $10$ |
| --- | --- | --- | --- | --- | --- | --- |
| Yield of wheat ($y$ tonnes per hectare) | $1.19$ | $1.50$ | $1.62$ | $1.65$ | $2.05$ | $2.07$ |

The least-squares regression line for this data is $\hat{y}=0.088x+1.212$ and $r=0.929$ (given to $3$ decimal places).

(a) Use the least-squares regression line to make predictions for the yield when using the following concentrations of fertiliser: $5$ ml/l, $10.8$ ml/l and $35$ ml/l. (Give answers to $2$ decimal places.)

Think: Substitute the concentration for the value of $x$ in the least-squares regression line and evaluate.

Do:

| Spray concentration ($x$ ml/l) | Predicted yield of wheat ($\hat{y}$ tonnes/hectare) |
| --- | --- |
| $5$ | $1.65$ |
| $10.8$ | $2.16$ |
| $35$ | $4.29$ |
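The three predictions can be reproduced directly from the given equation (the function name is just for illustration):

```python
def predict_yield(x):
    """Wheat yield predicted by the fitted line y-hat = 0.088x + 1.212."""
    return round(0.088 * x + 1.212, 2)

print(predict_yield(5))     # 1.65
print(predict_yield(10.8))  # 2.16
print(predict_yield(35))    # 4.29
```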

(b) Comment on the reliability of each prediction made in part (a).

Think: Consider the strength of the model and whether the prediction is interpolation, extrapolation just beyond the data range or extrapolation well beyond the data range.

Do:

The correlation coefficient tells us we have a strong model.

The prediction for $5$ ml/l is interpolation on a strong model and hence should be reliable.

The prediction for $10.8$ ml/l is extrapolation just beyond the data range on a strong model and hence should be reasonably reliable.

The prediction for $35$ ml/l is extrapolation well beyond the data range, and we know the linear trend cannot continue indefinitely: a crop of wheat must have a maximum possible yield, and at some point the fertiliser will no longer be effective and may even be toxic. Hence, despite the strong model, this prediction is not reliable and may not even be reasonable - it may be beyond the maximum possible yield.

#### Practice questions

##### Question 2

Research on the number of cigarettes smoked during pregnancy and the birth weights of the newborn babies was conducted.

| Average number of cigarettes per day ($x$) | $45.5$ | $13.2$ | $22.4$ | $24.4$ | $8.4$ | $36.7$ | $1.4$ | $18$ | $10.4$ | $13.3$ | $36.5$ | $19.4$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Birth weight in kilograms ($y$) | $3.7$ | $5.4$ | $4.7$ | $4.6$ | $5.3$ | $4.1$ | $6.7$ | $4.8$ | $5.3$ | $4.9$ | $3.6$ | $5.4$ |
1. Using technology, calculate the correlation coefficient between the average number of cigarettes per day and birth weight.

2. Choose the description which best describes the statistical relationship between these two variables.

A. Strong positive linear relationship
B. Weak relationship
C. Moderate negative linear relationship
D. Moderate positive linear relationship
E. Strong negative linear relationship

3. Use technology to form an equation for the least-squares regression line of $y$ on $x$.

Give all values to two decimal places. Give the equation of the line in the form $y=mx+c$.

4. Use your regression line to predict the birth weight of a newborn whose mother smoked on average $5$ cigarettes per day.

5. Choose the description which best describes the validity of the prediction in part (d).

A. Unreliable due to extrapolation and weak correlation.
B. Despite a strong correlation, unreliable due to extrapolation far from the data range where the linear trend does not continue.
C. Reliable due to interpolation and a strong correlation.
D. Despite an interpolated prediction, unreliable due to a weak correlation.


##### Question 3

During an alcohol education programme, $10$ adults were offered up to $6$ drinks and were then given a simulated driving test where they scored a result out of a possible $100$.

| Number of drinks ($x$) | $3$ | $2$ | $6$ | $4$ | $4$ | $1$ | $6$ | $3$ | $4$ | $2$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Driving score ($y$) | $65$ | $60$ | $43$ | $59$ | $57$ | $73$ | $32$ | $63$ | $55$ | $61$ |
1. Using technology, calculate the correlation coefficient between these variables.

2. Choose the description which best describes the statistical relationship between these two variables.

A. Moderate negative linear relationship
B. Strong negative linear relationship
C. Moderate positive linear relationship
D. Weak relationship
E. Strong positive linear relationship

3. Use your graphing calculator to form an equation for the least-squares regression line of $y$ on $x$.

Give your answer in the form $y=mx+c$. Give all values to one decimal place.

4. Use your regression line to predict the driving score of a young adult who consumed $5$ drinks.

5. Choose the description which best describes the reliability of the prediction in part (d).

A. Unreliable due to extrapolation and weak correlation.
B. Despite a strong correlation, unreliable due to extrapolation far from the data range where the linear trend does not continue.
C. Despite an interpolated prediction, unreliable due to a weak correlation.
D. Reliable due to interpolation and a strong correlation.


## Interpreting correlation

As we have discussed in previous lessons, when a change in the value of one variable quantity seems to be associated with a proportional change in another variable, we say there is a correlation between the two variables.

A strong correlation might seem to indicate a cause-and-effect relationship between the variables.  However, we need to be careful to understand the situation, as this is not always the case.

Correlation does not imply causation

These are common reasons for correlation between variables without a causal relationship:

• Confounding due to a common response to another variable (also described as contributing variables). For example, sales of ice-creams and sunscreens have a strong positive correlation because they both increase in response to hot summer weather.
• Coincidence. It is possible that the data we are analysing shows a correlation purely by chance.
• The causation is in the opposite direction. For example, strong winds are correlated with tree branches waving, but the waving branches don't cause the strong winds.
• The two variables may interplay; that is, the causation may go in both directions. For example, there is a negative correlation between the number of hawks in an area and the number of rodents in that area. In this type of predator/prey relationship the predator numbers affect prey numbers, but prey numbers, i.e. food supply, also affect predator numbers.

When we are asked to analyse a relationship between variables, we should consider whether a causal relationship can be justified.  If not, we should say so, and identify possible non-causal reasons for the association.

#### Practice questions

##### Question 4

The table shows the number of fans sold at a store during days of various temperatures.

| Temperature ($^\circ$C) | $6$ | $8$ | $10$ | $12$ | $14$ | $16$ | $18$ | $20$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Number of fans sold | $12$ | $13$ | $14$ | $17$ | $18$ | $19$ | $21$ | $23$ |
1. Consider the correlation coefficient $r$ for temperature and number of fans sold. In what range will $r$ be?

A. $r=0$
B. $r>0$
C. $r<0$

2. Is there a causal relationship?

A. Yes
B. No


##### Question 5

A study found a strong correlation between the approximate number of pirates out at sea and the average world temperature.

1. Does this mean that the number of pirates out at sea has an impact on world temperature?

A. Yes
B. No

2. Which of the following is the most likely explanation for the strong correlation?

A. Contributing variables - there are other causal relationships and variables that come into play, and these may lead to an indirect positive association between the approximate number of pirates out at sea and the average world temperature.
B. Coincidence - there are no other contributing factors or reasonable arguments to be made for the strong positive association between the approximate number of pirates out at sea and the average world temperature.

3. Which of the following is demonstrated by the strong correlation between the approximate number of pirates out at sea and the average world temperature?

A. If there is correlation between two variables, then there must be causation.
B. If there is correlation between two variables, there isn't necessarily causation.
C. If there is correlation between two variables, then there is no causation.


### Outcomes

#### ACMGM061

use the equation of a fitted line to make predictions

#### ACMGM062

distinguish between interpolation and extrapolation when using the fitted line to make predictions, recognising the potential dangers of extrapolation

#### ACMGM064

recognise that an observed association between two variables does not necessarily mean that there is a causal relationship between them

#### ACMGM065

identify possible non-causal explanations for an association, including coincidence and confounding due to a common response to another variable, and communicate these explanations in a systematic and concise manner