Simple Regression
Multiple Regression
Moderated Regression
Nonlinear Regression
Interpretations &
Formulas of Regression Model
100

Yi = B0 + B1Xi + εi

A simple regression model has two parameters, B0 and B1, each with its own estimate. Identify and interpret the two parameter estimates in a simple regression model based on a single continuous predictor.

What are b0 and b1?

b0 is our prediction of Y when X equals zero (intercept) 

b1 is the predicted change in Y when X increases by one (slope)
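As a minimal sketch (with toy data, not from the course), the two estimates come from the usual least-squares formulas: b1 = Sxy / Sxx and b0 = mean(Y) − b1 · mean(X).

```python
# Minimal sketch: least-squares estimates for a simple regression (toy data).
def simple_regression(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx     # slope: predicted change in Y when X increases by one
    b0 = my - b1 * mx  # intercept: predicted Y when X equals zero
    return b0, b1

# Perfectly linear toy data generated from Y = 2 + 3X
b0, b1 = simple_regression([0, 1, 2, 3], [2, 5, 8, 11])
```

With these toy data the estimates recover the intercept 2 and slope 3 exactly.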



100

Melanie's plan for her master's thesis is to study how children's biological factors influence judgments. Her first plan of action, before studying judgments, is to predict the weights of elementary school children with two predictors: height in inches and height in centimeters.


What is the problem with these two predictors if she were to create this as a multiple regression model?

What is redundancy? *Redundancy makes it difficult to sort out the unique contributions and relative importance of the predictors. Here the two predictors are perfectly redundant (height in centimeters is just height in inches times 2.54), so neither has any unique contribution at all.
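A quick sketch of why these two predictors are hopeless together (hypothetical heights): inches and centimeters carry the same information on different scales, so their correlation is exactly 1.

```python
# Sketch: perfectly redundant predictors correlate at exactly r = 1 (toy data).
import math

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

inches = [45, 48, 50, 52, 55]          # hypothetical child heights
cm = [h * 2.54 for h in inches]        # the same heights, rescaled
r = correlation(inches, cm)            # exactly 1 (up to float error)
```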

100

This model assumes that the contribution of each predictor does not depend on the value of the other predictor. It differs from an interactive model, which allows the simple relationship between one predictor and the outcome to depend on the value of the other predictor.

What is the additive assumption?

100

There are three main rules to remember when calculating the partial derivatives used in multiple regression. Name two of the three rules!

What is...

1. The partial derivative of a sum, with respect to Xj, equals the sum of the partial derivatives of the components of that sum.
2. The partial derivative of aXj^m with respect to Xj is amXj^(m−1), where a can be either a constant or another variable.
3. The partial derivative of a component of a sum, with respect to Xj , is zero whenever that component does not contain Xj .

100

ei = Yi − Ŷi
   = Yi − (b0 + b1Xi)

What does this formula mean?

What is ...

This formula shows the residuals, which are the differences between the actual scores and the model's predictions. The residuals are estimates of the true errors, εi. The raw residuals always sum to zero.
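A minimal sketch (toy data, not from the course) showing the residuals and confirming that they sum to zero when b0 and b1 are the least-squares estimates:

```python
# Sketch: residuals e_i = Y_i - (b0 + b1*X_i) sum to zero for least-squares fits.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.0]   # toy scores

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
b0 = my - b1 * mx

residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
total = sum(residuals)   # ~0, up to floating-point error
```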

200

For a study on productivity, we regressed discussion audition question grades (in percent) on average hours of sleep per night among 10 Cognitive PhD students.

Yi = 30.5 + 6.2Xi

Interpret the regression equation!

What is: for a PhD student getting zero hours of sleep, the predicted grade is 30.5?

What is: the slope indicates that for every additional hour of sleep, discussion audition question grades increase by 6.2?
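A quick check of these interpretations, plugging values into the estimated equation Ŷ = 30.5 + 6.2X from the item above:

```python
# Predictions from the estimated equation Y-hat = 30.5 + 6.2X (X = hours of sleep).
def predicted_grade(hours_sleep):
    return 30.5 + 6.2 * hours_sleep

zero = predicted_grade(0)                        # intercept: 30.5
eight = predicted_grade(8)                       # 30.5 + 6.2*8 = 80.1
diff = predicted_grade(7) - predicted_grade(6)   # one more hour of sleep: 6.2
```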

200

For statistical inferences in a multiple regression, researchers are free to define Model A and Model C however they like.

For the overall model, how many parameters are ALWAYS in model C? How many would model A have?

What would the null hypothesis look like if testing the partial regression coefficients as equal to zero? The alternative hypothesis?

What is...

Model C is the simple model with only one parameter.

Model A is p + 1 parameters: one for each predictor, plus the intercept

H0: B1 = B2 = B3 = ... = 0

HA: at least one of the partial regression coefficients is nonzero

200

When we regress the stats exam score on both hours studied and hours slept, we get the following estimated model:

Scorei = 78.230 + 1.540Hours_Studied - 0.760Hours_Slept

Rewrite the multiple regression equation as a simple relationship between hours studied and statistics exam score, allowing the intercept in the simple regression to vary with changes in hours slept.

Once you have created the model, rewrite this expression for a student who slept only 4 hours.

Score = (78.230 - 0.760Hours_Slept) + 1.540Hours_Studied

Score = (78.230 − 0.760(4)) + 1.540Hours_Studied

Score = 75.19 + 1.540Hours_Studied
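The same rearrangement, sketched in code: the intercept of the simple regression in Hours_Studied is a function of Hours_Slept, and at 4 hours of sleep it comes out to 75.19.

```python
# The multiple regression rewritten as a simple regression in Hours_Studied
# whose intercept depends on Hours_Slept.
def conditional_intercept(hours_slept):
    return 78.230 - 0.760 * hours_slept

def predicted_score(hours_studied, hours_slept):
    return conditional_intercept(hours_slept) + 1.540 * hours_studied

i4 = conditional_intercept(4)   # 75.19 for a student who slept 4 hours
```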



200

Consider this basic interactive model!

Happiness = b0 + b1Income + b2Friends + b3IncomeFriends

Calculate the simple slope for income by calculating the partial derivatives.

Calculate the simple slope for friends by calculating the partial derivatives.

What is...

△Happiness / △Income = b1 + b3Friends

△Happiness / △Friends = b2 + b3Income
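A numeric sanity check of these simple slopes, with hypothetical coefficients (not from the course): a one-unit change in Income should move Happiness by b1 + b3 · Friends, and a one-unit change in Friends by b2 + b3 · Income.

```python
# Sketch: simple slopes of an interactive model, checked numerically
# (b0..b3 are hypothetical coefficients).
b0, b1, b2, b3 = 1.0, 0.5, 0.3, 0.2

def happiness(income, friends):
    return b0 + b1 * income + b2 * friends + b3 * income * friends

income, friends = 10.0, 4.0
slope_income = happiness(income + 1, friends) - happiness(income, friends)   # b1 + b3*friends = 1.3
slope_friends = happiness(income, friends + 1) - happiness(income, friends)  # b2 + b3*income = 2.3
```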

200

Consider the following estimated regression equation...

Time = 18.899 + 0.308Age − 0.069Miles − 0.005AgeMiles

If we mean center the component predictor variables in the interactive variables, we receive this regression:

Time = 23.341 + .166Age0 − .258Miles0 − .005Age0Miles0

What are the differences between the estimated regression equations before and after mean centering the predictors? What are the new interpretations for the mean-centered estimated coefficients?

What is... 

All of the estimated coefficients except the one associated with the product predictor have changed. This change is because they are now estimating different things than they were before.

The intercept (23.341) is the predicted value of Time when all predictors equal 0 (i.e., when Age and Miles are at their mean values).

The coefficient for Age0 (.166) is the simple slope for Age when Miles0 equals 0 (i.e., at the mean value of Miles).

The coefficient for Miles0 (−.258) is the simple slope for Miles when Age0 equals 0 (i.e., at the mean value of Age).

The interaction coefficient does not change: it is the slope of the product of the two component predictors, and that slope does not depend on where the centering takes place (0, the mean, etc.).



300

Mean-centering a predictor only affects one thing in the simple regression model.

What is the estimated intercept, because centering simply shifts the origin of the x-axis?
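A small demonstration of this point (toy data): centering X leaves the slope untouched and moves only the intercept, which becomes the predicted Y at the mean of X.

```python
# Sketch: mean-centering X in a simple regression changes only the intercept.
def fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - b1 * mx, b1   # (intercept, slope)

x = [2, 4, 6, 8]
y = [1.0, 2.0, 2.5, 4.0]                 # toy outcome
xc = [a - sum(x) / len(x) for a in x]    # mean-centered predictor

(b0, b1), (b0c, b1c) = fit(x, y), fit(xc, y)
# b1c == b1; b0c == mean(y), the predicted Y at the mean of X
```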

300

Melanie is interested in seeing if she picks up Taylor's caffeine habits to see if it would increase her productivity.

Suppose I wanted to know whether it is useful to add cups of coffee to the model when hours studying and hours of sleep are already in the model. 

Create model C and model A to test this.

Model C: Yi = B0 + B1(Hours Studying) + B2(Hours Sleep) + εi

Model A: Yi = B0 + B1(Hours Studying) + B2(Hours Sleep) + B3(Cups of Coffee) + εi

300

Consider the following interactive model:

Revenue = b0 + b1Ads_Spent + b2Product_Price + b3(Ads_Spent*Product_price)

If we wanted to focus on the simple relationship between product_price and revenue, how would you rewrite this equation and interpret the coefficients?

What is...

Revenue = (b0 + b1Ads_Spent) + (b2 + b3Ads_Spent)Product_Price

What is...

b0 = the intercept of the product_price-revenue relationship when ads_spent is equal to 0

b1 = the change in the intercept of the product_price-revenue relationship when ads_spent increases by one unit

b2 = the slope of the product_price-revenue relationship when ads_spent is equal to 0

b3 = the change in the slope of the product_price-revenue relationship when ads_spent increases by one unit


300

Consider this complex model

Yi = b0 + b1X1i + b2X2i + b3X1i^2 + b4X1iX2i

Calculate the simple slope for X1i by calculating the partial derivatives.

Calculate the simple slope for X2i by calculating the partial derivatives.

△Yi / △X1i = b1 + 2b3X1i + b4X2i

△Yi / △X2i = b2 + b4X1i
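A numeric check of the first simple slope, with hypothetical coefficients: because of the quadratic term, the derivative with respect to X1 is b1 + 2·b3·X1 + b4·X2, which a small central finite difference should reproduce.

```python
# Sketch: finite-difference check of the simple slope for X1
# (b0..b4 are hypothetical coefficients).
b0, b1, b2, b3, b4 = 2.0, 1.0, 0.5, 0.25, 0.1

def y(x1, x2):
    return b0 + b1 * x1 + b2 * x2 + b3 * x1 ** 2 + b4 * x1 * x2

x1, x2, h = 3.0, 2.0, 1e-6
numeric = (y(x1 + h, x2) - y(x1 - h, x2)) / (2 * h)
analytic = b1 + 2 * b3 * x1 + b4 * x2   # 1 + 1.5 + 0.2 = 2.7
```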

300

Interpret the following model when focusing on the simple relationship between experience and salary:

Salary = b0 + b1(experience) + b2(education) + b3(experience x education)

What is ...

b0 = the intercept for the experience-salary relationship when education is equal to 0

b1 = the slope for experience-salary relationship when education is equal to 0

b2 = the change in the intercept for the experience-salary relationship when education increases by one

b3 = the change in the slope for the experience-salary relationship when education increases by one

400

In simple linear regression, we discussed that the most common form of test occurs when the a priori value in the compact model is set to 0.

Based on this guideline, create your own version of model C (simple model that makes the same prediction for all observations in the data) and model A (a simple regression model which makes conditional predictions based on a continuous predictor) based on someone's research in the department. Don't forget to include the null and alternative hypothesis!

What is ...

Model C: DM = B0 + εi

Model A: DM = B0 + B1(# of Options) + εi

H0: B1 = 0

HA: B1 ≠ 0

400

One topic that we learned about in multiple regression is overall model tests. They are often not very useful, leading to the other tests that we learned about.

Why?

What is: if a model includes many predictors but only one is useful, the F-statistic becomes smaller because the predictive work and error are spread across more parameters, reducing the test's power?

What is: the results are often too ambiguous? If the null hypothesis is rejected, we only know that at least one of the partial regression coefficients is not equal to zero; we do not know which predictor or predictors are useful.

400

Consider the following interactive model:

Revenue = b0 + b1Ads_Spent + b2Product_Price + b3(Ads_Spent*Product_price)

If we wanted to focus on the simple relationship between ads_spent and revenue, how would you rewrite this equation and interpret the coefficients?

What is...

Revenue = (b0 + b2Product_Price) + (b1 + b3Product_Price)Ads_Spent

What is...

b0 = the intercept of the ads_spent-revenue relationship when product_price is equal to 0

b1 = the slope of the ads_spent-revenue relationship when product_price is equal to 0

b2 = the change in the intercept of the ads_spent-revenue relationship when product_price increases by one unit

b3 = the change in the slope of the ads_spent-revenue relationship when product_price increases by one unit

400

Here is the model for friends by income interaction and the quadratic friends term:

Happiness = β0 + β1Income + β2Friends + β3Friends^2 + β4IncomeFriends + ε

The estimated regression equation is:

Happiness = 10.00 + .273Income − .565Friends + .008Friends^2 − .004IncomeFriends

We derived the simple slope for happiness of income and friends:

△Happiness / △Income = .273 − .004Friends

△Happiness / △Friends = −.565 + 2(.008)Friends − .004Income

Interpret the coefficients in the simple slopes for the relationships of income/happiness & friends/happiness 

b1: the simple slope for the income-happiness relationship when friends is equal to 0.

b2: the simple slope for the friends-happiness relationship when income is equal to 0.

b3: half of the change in the simple slope of the friends-happiness relationship when friends increases by one unit.

b4: the change in the simple slope for friends (income) as income (friends) increases by one unit.
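The estimated simple slopes above can be evaluated at particular predictor values; as a sketch, here they are at the hypothetical values Friends = 10 and Income = 50.

```python
# Evaluating the estimated simple slopes from the item above.
def slope_income(friends):
    return 0.273 - 0.004 * friends                          # d(Happiness)/d(Income)

def slope_friends(friends, income):
    return -0.565 + 2 * 0.008 * friends - 0.004 * income    # d(Happiness)/d(Friends)

s1 = slope_income(10)        # 0.273 - 0.04  = 0.233
s2 = slope_friends(10, 50)   # -0.565 + 0.16 - 0.2 = -0.605
```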


400

b1 ± √( Fcrit;1,n−2;α · MSE(A) / SSX )

What is this formula? What does this formula tell us?

What is ...

The confidence interval formula for the slope. It tells us the range of hypothetical values B1 for which we would fail to reject the null hypothesis that β1 = B1 at the α significance level. You can use the CI to decide whether to reject the null hypothesis.
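A sketch of the computation with hypothetical values for b1, MSE(A), SSX, and the critical F (looking up Fcrit for 1 and n−2 df at α is assumed to have been done separately):

```python
# Sketch: CI for the slope, b1 +/- sqrt(Fcrit * MSE(A) / SS_X),
# using hypothetical numbers throughout.
import math

b1 = 6.2
mse_a = 4.0      # hypothetical MSE of the augmented model
ss_x = 25.0      # hypothetical sum of squares of X
f_crit = 5.32    # hypothetical critical F(1, n-2) at alpha = .05

half_width = math.sqrt(f_crit * mse_a / ss_x)
ci = (b1 - half_width, b1 + half_width)

# 0 falls outside the CI, so we would reject H0: beta1 = 0
reject_null = not (ci[0] <= 0 <= ci[1])
```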

500

Researchers can use the CI to decide whether to reject the null hypothesis. Some factors widen the CIs (less precision in estimating the parameter, higher risk of Type II error), and other factors narrow the CIs (greater precision in estimating the parameter, lower risk of Type II error).

How would one know to reject the null hypothesis? 

Name two factors that widen CIs and decrease power.

Name two factors that narrow CIs and increase power.




What is ...

if we tested a null hypothesis that the parameter equals a value that lies within the CI (e.g., B1 = 0), we would fail to reject the null hypothesis. If 0 lies outside the CI, we would reject the null hypothesis.

Widen CIs: decreasing α, increasing model error, decreasing n, and decreasing the variance of X

Narrow CIs: increasing α, decreasing model error, increasing n, and increasing the variance of X

500

A researcher is exploring factors that influence consumer satisfaction with online shopping. Participants are asked to complete surveys measuring their satisfaction levels of product quality, delivery speed, customer service, and ease of website navigation.


Fill out the appropriate columns in the source table. *p-values have been excluded, and R is not needed to complete these formulas.

What is ...

*the completed source table, shown on the final version of the board

500

How would you test whether the interactive model is significantly better than the additive model? 

Create your own models A and C with their respective null and alternative hypotheses. You only need two predictors in your additive model. Then walk through the steps of what you need to do to find significance (you do not need to use fake numbers, but write down all the formulas/steps).

What is...

*Example of what it should look like (*different variables are obviously okay)

Model C: Happiness = B0 + B1(Friends) + B2(Income) + εi

Model A: Happiness = B0 + B1(Friends) + B2(Income) + B3(Friends*Income) + εi

H0: B3 = 0

HA: B3 ≠ 0

Steps:

Calculate SSE(C) and SSE(A) from the residuals of each model. Calculate the proportional reduction in error (PRE) from adding the interaction in Model A. Calculate the F statistic, then determine the p-value or critical value. If the p-value is less than 0.05, reject Model C (the null) and conclude that the interactive model does a significantly better job of predicting happiness than the additive model.
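The arithmetic in those steps can be sketched with hypothetical sums of squared errors and sample size (none of these numbers come from the board):

```python
# Sketch of the model-comparison arithmetic with hypothetical values:
# PRE = (SSE(C) - SSE(A)) / SSE(C)
# F   = (PRE / (PA - PC)) / ((1 - PRE) / (n - PA))
sse_c, sse_a = 500.0, 400.0   # hypothetical sums of squared errors
n, pc, pa = 54, 3, 4          # hypothetical n; 3 vs. 4 parameters

pre = (sse_c - sse_a) / sse_c                   # 0.20
f = (pre / (pa - pc)) / ((1 - pre) / (n - pa))  # 0.20 / (0.80 / 50) = 12.5
```

The resulting F would then be compared to the critical F with (PA − PC) and (n − PA) degrees of freedom.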

500

Consider this three-way interaction and assume that the component predictors have been mean-centered prior to computing the product predictors:

Y = b0 + b1X1 + b2X2 + b3X3 + b4X1X2 + b5X1X3 + b6X2X3 + b7X1X2X3

Compute the simple slopes for X1, X2, and X3.

△Yi / △ X1 = b1 + b4X2 + b5X3 + b7X2X3

△Yi / △ X2 = b2 + b4X1 + b6X3 + b7X1X3

△ Yi / △ X3 = b3 + b5X1 + b6X2 + b7X1X2

500

We discussed three cautions when using multiple regressions. What are the three cautions?

1. Causal Conclusions From Non-Experiments: we cannot interpret the estimated partial regression coefficient bj as representing a causal relationship between Xj and Y.

2. Relative Importance of Predictor Variables: we cannot directly compare the relative magnitudes of the bj because they depend on the units of measurement of the predictor variables. To account for this, some researchers normalize the regression coefficients by multiplying them by the standard deviations of their respective predictor variables.

3. Challenges in Using Automatic Model Building
