3 Linear Regression
3.1 Conceptual
3.1.1 Question 1
Describe the null hypotheses to which the _p_-values given in Table 3.4 correspond. Explain what conclusions you can draw based on these _p_-values. Your explanation should be phrased in terms of `sales`, `TV`, `radio`, and `newspaper`, rather than in terms of the coefficients of the linear model.
3.1.2 Question 2
Carefully explain the differences between the KNN classifier and KNN regression methods.
3.1.3 Question 3
Suppose we have a data set with five predictors, \(X_1\) = GPA, \(X_2\) = IQ, \(X_3\) = Level (1 for College and 0 for High School), \(X_4\) = Interaction between GPA and IQ, and \(X_5\) = Interaction between GPA and Level. The response is starting salary after graduation (in thousands of dollars). Suppose we use least squares to fit the model, and get \(\hat\beta_0 = 50\), \(\hat\beta_1 = 20\), \(\hat\beta_2 = 0.07\), \(\hat\beta_3 = 35\), \(\hat\beta_4 = 0.01\), \(\hat\beta_5 = -10\).
Which answer is correct, and why?
- For a fixed value of IQ and GPA, high school graduates earn more on average than college graduates.
- For a fixed value of IQ and GPA, college graduates earn more on average than high school graduates.
- For a fixed value of IQ and GPA, high school graduates earn more on average than college graduates provided that the GPA is high enough.
- For a fixed value of IQ and GPA, college graduates earn more on average than high school graduates provided that the GPA is high enough.
Predict the salary of a college graduate with IQ of 110 and a GPA of 4.0.
True or false: Since the coefficient for the GPA/IQ interaction term is very small, there is very little evidence of an interaction effect. Justify your answer.
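For the prediction in the second part, a direct evaluation of the fitted equation (a sketch; the arithmetic is just plugging the given values into the model):

```r
# Fitted model: salary = 50 + 20*GPA + 0.07*IQ + 35*Level + 0.01*GPA*IQ - 10*GPA*Level
gpa <- 4.0; iq <- 110; level <- 1   # college graduate
50 + 20 * gpa + 0.07 * iq + 35 * level + 0.01 * gpa * iq - 10 * gpa * level
# 50 + 80 + 7.7 + 35 + 4.4 - 40 = 137.1 (thousand dollars)
```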
3.1.4 Question 4
I collect a set of data (\(n = 100\) observations) containing a single predictor and a quantitative response. I then fit a linear regression model to the data, as well as a separate cubic regression, i.e. \(Y = \beta_0 + \beta_1X + \beta_2X^2 + \beta_3X^3 + \epsilon\).
- (a) Suppose that the true relationship between \(X\) and \(Y\) is linear, i.e. \(Y = \beta_0 + \beta_1X + \epsilon\). Consider the training residual sum of squares (RSS) for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.
- (b) Answer (a) using test rather than training RSS.
- (c) Suppose that the true relationship between \(X\) and \(Y\) is not linear, but we don’t know how far it is from linear. Consider the training RSS for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.
- (d) Answer (c) using test rather than training RSS.
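A minimal simulation sketch (illustrative data, not part of the question) showing the training/test RSS contrast when the truth is linear:

```r
set.seed(1)  # arbitrary illustrative seed
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)          # the true relationship is linear
x_test <- rnorm(100)
y_test <- 1 + 2 * x_test + rnorm(100)

fit_lin <- lm(y ~ x)
fit_cub <- lm(y ~ poly(x, 3))        # the cubic model nests the linear one

# Training RSS: the cubic fit can never do worse on the data it was fit to
c(lin = sum(resid(fit_lin)^2), cub = sum(resid(fit_cub)^2))

# Test RSS: the extra flexibility typically hurts when the truth is linear
c(lin = sum((y_test - predict(fit_lin, data.frame(x = x_test)))^2),
  cub = sum((y_test - predict(fit_cub, data.frame(x = x_test)))^2))
```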
3.1.5 Question 5
Consider the fitted values that result from performing linear regression without an intercept. In this setting, the \(i\)th fitted value takes the form \[\hat{y}_i = x_i\hat\beta,\] where \[\hat{\beta} = \left(\sum_{i=1}^nx_iy_i\right) / \left(\sum_{i' = 1}^n x^2_{i'}\right).\] Show that we can write \[\hat{y}_i = \sum_{i' = 1}^na_{i'}y_{i'}.\] What is \(a_{i'}\)?
Note: We interpret this result by saying that the fitted values from linear regression are linear combinations of the response values.
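A quick numerical check of this identity in R, using made-up data and the natural candidate for \(a_{i'}\) (a sketch, not part of the question):

```r
set.seed(1)                       # arbitrary illustrative data
x <- rnorm(20)
y <- rnorm(20)

beta_hat <- sum(x * y) / sum(x^2)
A <- outer(x, x) / sum(x^2)       # A[i, i'] = x_i * x_{i'} / sum(x^2)

# Each fitted value x_i * beta_hat equals the linear combination sum_i' A[i, i'] * y[i']
all.equal(as.vector(A %*% y), x * beta_hat)
```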
3.1.6 Question 6
Using (3.4), argue that in the case of simple linear regression, the least squares line always passes through the point \((\bar{x}, \bar{y})\).
3.1.7 Question 7
It is claimed in the text that in the case of simple linear regression of \(Y\) onto \(X\), the \(R^2\) statistic (3.17) is equal to the square of the correlation between \(X\) and \(Y\) (3.18). Prove that this is the case. For simplicity, you may assume that \(\bar{x} = \bar{y} = 0\).
3.2 Applied
3.2.1 Question 8
This question involves the use of simple linear regression on the `Auto` data set.

- Use the `lm()` function to perform a simple linear regression with `mpg` as the response and `horsepower` as the predictor. Use the `summary()` function to print the results. Comment on the output. For example:
  - Is there a relationship between the predictor and the response?
  - How strong is the relationship between the predictor and the response?
  - Is the relationship between the predictor and the response positive or negative?
  - What is the predicted `mpg` associated with a `horsepower` of 98?
  - What are the associated 95% confidence and prediction intervals?
- Plot the response and the predictor. Use the `abline()` function to display the least squares regression line.
- Use the `plot()` function to produce diagnostic plots of the least squares regression fit. Comment on any problems you see with the fit.
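A minimal sketch of the calls this question names, assuming `Auto` is loaded from the `ISLR2` package:

```r
library(ISLR2)  # assumed source of the Auto data

fit <- lm(mpg ~ horsepower, data = Auto)
summary(fit)

# Predicted mpg at horsepower = 98, with 95% confidence and prediction intervals
predict(fit, data.frame(horsepower = 98), interval = "confidence")
predict(fit, data.frame(horsepower = 98), interval = "prediction")

# Scatterplot with the least squares line overlaid
plot(Auto$horsepower, Auto$mpg, xlab = "horsepower", ylab = "mpg")
abline(fit, col = "red")

# Standard diagnostic plots: residuals, Q-Q, scale-location, leverage
par(mfrow = c(2, 2))
plot(fit)
```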
3.2.2 Question 9
This question involves the use of multiple linear regression on the `Auto` data set.

- Produce a scatterplot matrix which includes all of the variables in the data set.
- Compute the matrix of correlations between the variables using the function `cor()`. You will need to exclude the `name` variable, which is qualitative.
- Use the `lm()` function to perform a multiple linear regression with `mpg` as the response and all other variables except `name` as the predictors. Use the `summary()` function to print the results. Comment on the output. For instance:
  - Is there a relationship between the predictors and the response?
  - Which predictors appear to have a statistically significant relationship to the response?
  - What does the coefficient for the `year` variable suggest?
- Use the `plot()` function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?
- Use the `*` and `:` symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?
- Try a few different transformations of the variables, such as \(\log(X)\), \(\sqrt{X}\), \(X^2\). Comment on your findings.
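A sketch of one way to work through these parts; the interaction pairs and transformations below are arbitrary choices, not prescribed by the question:

```r
# Scatterplot matrix and correlations (name is qualitative, so drop it for cor())
pairs(Auto)
cor(subset(Auto, select = -name))

# Multiple regression on everything except name, plus diagnostics
fit <- lm(mpg ~ . - name, data = Auto)
summary(fit)
par(mfrow = c(2, 2))
plot(fit)

# Interactions: `*` expands to main effects plus the interaction; `:` is the interaction alone
summary(lm(mpg ~ displacement * weight, data = Auto))
summary(lm(mpg ~ displacement + weight + year:origin, data = Auto))

# Example transformations
summary(lm(mpg ~ log(horsepower) + sqrt(weight) + I(displacement^2), data = Auto))
```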
3.2.3 Question 10
This question should be answered using the `Carseats` data set.

- (a) Fit a multiple regression model to predict `Sales` using `Price`, `Urban`, and `US`.
- (b) Provide an interpretation of each coefficient in the model. Be careful: some of the variables in the model are qualitative!
- (c) Write out the model in equation form, being careful to handle the qualitative variables properly.
- (d) For which of the predictors can you reject the null hypothesis \(H_0 : \beta_j = 0\)?
- (e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.
- (f) How well do the models in (a) and (e) fit the data?
- (g) Using the model from (e), obtain 95% confidence intervals for the coefficient(s).
- (h) Is there evidence of outliers or high leverage observations in the model from (e)?
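A sketch of the fits involved; dropping `Urban` in the smaller model reflects the usual outcome on this data set, but confirm it against your own summary output:

```r
# (a) Full model: Urban and US are factors, so lm() creates dummy variables
fit_full <- lm(Sales ~ Price + Urban + US, data = Carseats)
summary(fit_full)

# (e) Smaller model keeping only the predictors with evidence of association
fit_small <- lm(Sales ~ Price + US, data = Carseats)
summary(fit_small)

# (g) 95% confidence intervals for the smaller model's coefficients
confint(fit_small)

# (h) Outlier and leverage diagnostics
par(mfrow = c(2, 2))
plot(fit_small)
```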
3.2.4 Question 11
In this problem we will investigate the t-statistic for the null hypothesis \(H_0 : \beta = 0\) in simple linear regression without an intercept. To begin, we generate a predictor `x` and a response `y` as follows.
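The generating code is not reproduced above; reconstructed from the textbook (verify against your copy of ISLR), it is:

```r
set.seed(1)
x <- rnorm(100)
y <- 2 * x + rnorm(100)
```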
- (a) Perform a simple linear regression of `y` onto `x`, without an intercept. Report the coefficient estimate \(\hat{\beta}\), the standard error of this coefficient estimate, and the t-statistic and _p_-value associated with the null hypothesis \(H_0 : \beta = 0\). Comment on these results. (You can perform regression without an intercept using the command `lm(y ~ x + 0)`.)
- (b) Now perform a simple linear regression of `x` onto `y` without an intercept, and report the coefficient estimate, its standard error, and the corresponding t-statistic and _p_-value associated with the null hypothesis \(H_0 : \beta = 0\). Comment on these results.
- (c) What is the relationship between the results obtained in (a) and (b)?
- (d) For the regression of \(Y\) onto \(X\) without an intercept, the t-statistic for \(H_0 : \beta = 0\) takes the form \(\hat{\beta}/SE(\hat{\beta})\), where \(\hat{\beta}\) is given by (3.38), and where \[ SE(\hat\beta) = \sqrt{\frac{\sum_{i=1}^n(y_i - x_i\hat\beta)^2}{(n-1)\sum_{i'=1}^nx_{i'}^2}}. \] (These formulas are slightly different from those given in Sections 3.1.1 and 3.1.2, since here we are performing regression without an intercept.) Show algebraically, and confirm numerically in R, that the t-statistic can be written as \[ \frac{\sqrt{n-1}\, \sum_{i=1}^nx_iy_i} {\sqrt{\left(\sum_{i=1}^nx_i^2\right)\left(\sum_{i'=1}^ny_{i'}^2\right)-\left(\sum_{i'=1}^nx_{i'}y_{i'}\right)^2}}. \]
- (e) Using the results from (d), argue that the t-statistic for the regression of `y` onto `x` is the same as the t-statistic for the regression of `x` onto `y`.
- (f) In `R`, show that when regression is performed with an intercept, the t-statistic for \(H_0 : \beta_1 = 0\) is the same for the regression of `y` onto `x` as it is for the regression of `x` onto `y`.
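A sketch of the numerical confirmation for (d) and the check in (f), assuming `x` and `y` from the setup above:

```r
# (a), (b): no-intercept fits in both directions
summary(lm(y ~ x + 0))
summary(lm(x ~ y + 0))

# (d): the closed-form t-statistic matches the one reported by lm()
n <- length(x)
t_manual <- sqrt(n - 1) * sum(x * y) /
  sqrt(sum(x^2) * sum(y^2) - sum(x * y)^2)
t_lm <- summary(lm(y ~ x + 0))$coefficients[1, "t value"]
all.equal(unname(t_lm), t_manual)

# (f): with an intercept, the slope t-statistics also agree
summary(lm(y ~ x))$coefficients["x", "t value"]
summary(lm(x ~ y))$coefficients["y", "t value"]
```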
3.2.5 Question 12
This problem involves simple linear regression without an intercept.

- (a) Recall that the coefficient estimate \(\hat{\beta}\) for the linear regression of \(Y\) onto \(X\) without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of \(X\) onto \(Y\) the same as the coefficient estimate for the regression of \(Y\) onto \(X\)?
- (b) Generate an example in `R` with \(n = 100\) observations in which the coefficient estimate for the regression of \(X\) onto \(Y\) is different from the coefficient estimate for the regression of \(Y\) onto \(X\).
- (c) Generate an example in `R` with \(n = 100\) observations in which the coefficient estimate for the regression of \(X\) onto \(Y\) is the same as the coefficient estimate for the regression of \(Y\) onto \(X\).
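One possible construction (a sketch; the equality in (c) holds whenever \(\sum_i x_i^2 = \sum_i y_i^2\), which a permutation of `x` guarantees):

```r
set.seed(42)  # arbitrary illustrative seed

# (b) Generic data: sum(x^2) != sum(y^2), so the two slope estimates differ
x <- rnorm(100)
y <- 2 * x + rnorm(100)
coef(lm(y ~ x + 0))  # sum(x*y) / sum(x^2)
coef(lm(x ~ y + 0))  # sum(x*y) / sum(y^2)

# (c) Make y a permutation of x so that sum(x^2) == sum(y^2)
y <- sample(x)
coef(lm(y ~ x + 0))
coef(lm(x ~ y + 0))
```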
3.2.6 Question 13
In this exercise you will create some simulated data and will fit simple linear regression models to it. Make sure to use `set.seed(1)` prior to starting part (a) to ensure consistent results.

- (a) Using the `rnorm()` function, create a vector, `x`, containing 100 observations drawn from a \(N(0, 1)\) distribution. This represents a feature, \(X\).
- (b) Using the `rnorm()` function, create a vector, `eps`, containing 100 observations drawn from a \(N(0, 0.25)\) distribution, i.e. a normal distribution with mean zero and variance 0.25.
- (c) Using `x` and `eps`, generate a vector `y` according to the model \[Y = -1 + 0.5X + \epsilon\] What is the length of the vector `y`? What are the values of \(\beta_0\) and \(\beta_1\) in this linear model?
- (d) Create a scatterplot displaying the relationship between `x` and `y`. Comment on what you observe.
- (e) Fit a least squares linear model to predict `y` using `x`. Comment on the model obtained. How do \(\hat\beta_0\) and \(\hat\beta_1\) compare to \(\beta_0\) and \(\beta_1\)?
- (f) Display the least squares line on the scatterplot obtained in (d). Draw the population regression line on the plot, in a different color. Use the `legend()` command to create an appropriate legend.
- (g) Now fit a polynomial regression model that predicts `y` using `x` and `x^2`. Is there evidence that the quadratic term improves the model fit? Explain your answer.
- (h) Repeat (a)–(f) after modifying the data generation process in such a way that there is less noise in the data. The model (3.39) should remain the same. You can do this by decreasing the variance of the normal distribution used to generate the error term \(\epsilon\) in (b). Describe your results.
- (i) Repeat (a)–(f) after modifying the data generation process in such a way that there is more noise in the data. The model (3.39) should remain the same. You can do this by increasing the variance of the normal distribution used to generate the error term \(\epsilon\) in (b). Describe your results.
- (j) What are the confidence intervals for \(\beta_0\) and \(\beta_1\) based on the original data set, the noisier data set, and the less noisy data set? Comment on your results.
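A sketch of parts (a)–(g); note that `rnorm()` is parameterised by the standard deviation, so variance 0.25 means `sd = 0.5`:

```r
set.seed(1)

# (a)-(c)
x <- rnorm(100)
eps <- rnorm(100, sd = 0.5)        # variance 0.25 => sd = sqrt(0.25) = 0.5
y <- -1 + 0.5 * x + eps            # beta_0 = -1, beta_1 = 0.5

# (d)-(f): scatterplot with the fitted and population lines
fit <- lm(y ~ x)
plot(x, y)
abline(fit, col = "red")
abline(a = -1, b = 0.5, col = "blue")
legend("topleft", legend = c("least squares", "population"),
       col = c("red", "blue"), lty = 1)

# (g): does the quadratic term help?
summary(lm(y ~ x + I(x^2)))

# (j): confidence intervals (repeat after regenerating eps with a smaller or larger sd)
confint(fit)
```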
3.2.7 Question 14
This problem focuses on the collinearity problem.
Perform the following commands in `R`:

```r
set.seed(1)
x1 <- runif(100)
x2 <- 0.5 * x1 + rnorm(100) / 10
y <- 2 + 2 * x1 + 0.3 * x2 + rnorm(100)
```
- (a) The last line corresponds to creating a linear model in which `y` is a function of `x1` and `x2`. Write out the form of the linear model. What are the regression coefficients?
- (b) What is the correlation between `x1` and `x2`? Create a scatterplot displaying the relationship between the variables.
- (c) Using this data, fit a least squares regression to predict `y` using `x1` and `x2`. Describe the results obtained. What are \(\hat\beta_0\), \(\hat\beta_1\), and \(\hat\beta_2\)? How do these relate to the true \(\beta_0\), \(\beta_1\), and \(\beta_2\)? Can you reject the null hypothesis \(H_0 : \beta_1 = 0\)? How about the null hypothesis \(H_0 : \beta_2 = 0\)?
- (d) Now fit a least squares regression to predict `y` using only `x1`. Comment on your results. Can you reject the null hypothesis \(H_0 : \beta_1 = 0\)?
- (e) Now fit a least squares regression to predict `y` using only `x2`. Comment on your results. Can you reject the null hypothesis \(H_0 : \beta_1 = 0\)?
- (f) Do the results obtained in (c)–(e) contradict each other? Explain your answer.
- (g) Now suppose we obtain one additional observation, which was unfortunately mismeasured. Re-fit the linear models from (c) to (e) using this new data. What effect does this new observation have on each of the models? In each model, is this observation an outlier? A high-leverage point? Both? Explain your answers.
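The mismeasured observation itself is not shown above; the values below are reconstructed from the textbook (verify against your copy of ISLR). A sketch of appending it and re-fitting:

```r
# New observation as given in the textbook (check against your copy)
x1 <- c(x1, 0.1)
x2 <- c(x2, 0.8)
y  <- c(y, 6)

# Re-fit the models from (c)-(e)
summary(lm(y ~ x1 + x2))
summary(lm(y ~ x1))
summary(lm(y ~ x2))

# Leverage/outlier diagnostics; the new point shows up in the residuals-vs-leverage plot
par(mfrow = c(2, 2))
plot(lm(y ~ x1 + x2))
```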
3.2.8 Question 15
This problem involves the `Boston` data set, which we saw in the lab for this chapter. We will now try to predict per capita crime rate using the other variables in this data set. In other words, per capita crime rate is the response, and the other variables are the predictors.

- (a) For each predictor, fit a simple linear regression model to predict the response. Describe your results. In which of the models is there a statistically significant association between the predictor and the response? Create some plots to back up your assertions.
- (b) Fit a multiple regression model to predict the response using all of the predictors. Describe your results. For which predictors can we reject the null hypothesis \(H_0 : \beta_j = 0\)?
- (c) How do your results from (a) compare to your results from (b)? Create a plot displaying the univariate regression coefficients from (a) on the \(x\)-axis, and the multiple regression coefficients from (b) on the \(y\)-axis. That is, each predictor is displayed as a single point in the plot. Its coefficient in a simple linear regression model is shown on the \(x\)-axis, and its coefficient estimate in the multiple linear regression model is shown on the \(y\)-axis.
- (d) Is there evidence of non-linear association between any of the predictors and the response? To answer this question, for each predictor \(X\), fit a model of the form \[ Y = \beta_0 + \beta_1X + \beta_2X^2 + \beta_3X^3 + \epsilon \]
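A sketch of one way to organise parts (a)–(d), assuming `Boston` with its `crim` column (as in the `ISLR2` or `MASS` packages):

```r
library(ISLR2)  # assumed source of the Boston data

# (a) One simple regression per predictor; keep each slope estimate
predictors <- setdiff(names(Boston), "crim")
uni_coefs <- sapply(predictors, function(p) {
  coef(lm(reformulate(p, response = "crim"), data = Boston))[2]
})

# (b) Multiple regression on all predictors at once
fit_all <- lm(crim ~ ., data = Boston)
summary(fit_all)

# (c) Univariate coefficients (x-axis) against multivariate ones (y-axis)
plot(uni_coefs, coef(fit_all)[-1],
     xlab = "Simple regression coefficient",
     ylab = "Multiple regression coefficient")

# (d) Cubic fit for one predictor, e.g. nox; repeat over `predictors`
summary(lm(crim ~ poly(nox, 3), data = Boston))
```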