Lectures 1 & 2 (Intro & Testing Assumptions)
Lectures 3 & 4 (Correlation & Linear Regression)
Lectures 5 & 6 (Moderation, Mediation, and t-tests)
Lectures 7 & 8 (ANOVA & ANCOVA)
Lectures 9 & 10 (Factorial ANOVA and Chi-Square)
100
What does the Central Limit Theorem tell us?

For a sufficiently large sample size (commonly n ≥ 30), the distribution of sample means will approximate a normal distribution, and the approximation improves as n increases.
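This can be demonstrated with a short simulation. The sketch below is illustrative Python (not part of the course materials) and assumes numpy is available; the population and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# A clearly non-normal (exponential, right-skewed) population with mean 2.0
population = rng.exponential(scale=2.0, size=100_000)

# Take many samples of size n = 30 and record each sample's mean
sample_means = np.array([
    rng.choice(population, size=30).mean() for _ in range(5_000)
])

# The sampling distribution of the mean centers on the population mean,
# and its spread shrinks toward sigma / sqrt(n), as the CLT predicts
print(sample_means.mean())   # close to 2.0
print(sample_means.std())    # close to 2.0 / sqrt(30), about 0.37
```

A histogram of `sample_means` would look bell-shaped even though the population itself is strongly skewed.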

100

What is the difference between correlation and covariance?

Correlation measures the relationship between two variables in standardized (z-score) units, while covariance measures the same relationship in raw-score units.
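The z-score connection can be checked numerically. This is an illustrative Python sketch (not from the course materials), assuming numpy is available and using made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)

# Covariance: average product of deviations, in raw-score units
cov_xy = np.cov(x, y, ddof=1)[0, 1]

# Correlation: the same quantity after converting x and y to z-scores
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
r_from_z = np.cov(zx, zy, ddof=1)[0, 1]

# The covariance of the z-scores equals Pearson's r
print(np.isclose(r_from_z, np.corrcoef(x, y)[0, 1]))   # True
```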

100

What is a moderator?

Moderators show when, or how much, the IV will affect the DV; they change the strength or direction of the IV-DV relationship.

100
The covariate in ANCOVA analyses should be correlated with the ______ and independent from the _______ variable.

dependent; independent

100

What is a two-way independent ANOVA? What is a three-way independent ANOVA?

A two-way independent ANOVA (also called a factorial ANOVA) examines the effects of two independent variables (factors) on a single dependent variable. A three-way independent ANOVA is the same with three IVs.

200

In a negatively skewed sample, which statistic would appear first? 

A) mode, B) median, or C) mean

C) Mean

"The tail tells the tale"

The mean is most affected by extreme scores, so it is pulled toward the tail, which lies to the left in negatively skewed data (mean < median < mode).
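A quick simulation makes this concrete. The sketch below is illustrative Python with hypothetical data (numpy assumed), not part of the course materials:

```python
import numpy as np

rng = np.random.default_rng(9)

# Negate an exponential sample to create a long left (negative) tail
data = -rng.exponential(scale=1.0, size=10_000)

# The left tail drags the mean further left than the median
print(data.mean() < np.median(data))   # True
```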

200

A Pearson's correlation of .33 indicates a ____ effect size.

Medium

±.1 = small effect
±.3 = medium effect
±.5 = large effect

200

Based on this output, is there a significant mediation effect? 

Causal Mediation Analysis
Nonparametric Bootstrap Confidence Intervals

               Estimate  95% CI Lower  95% CI Upper  p-value
ACME           4427.176      3094.002       5598.62    0.002 **
ADE           -1469.583     -3318.940        463.13    0.144
Total Effect   2957.593      1227.475       4688.00   <2e-16 ***
Prop. Mediated    1.497         0.892          3.45     0.002 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Sample Size Used: 90
Simulations: 999

Yes, because the ACME (average causal mediation effect) is significant. It is full mediation because the ADE (average direct effect) is not significant.

200

What do the Bonferroni, Scheffé, and Tukey tests do? Which is the most strict?

They help control experimentwise Type I error (they reduce the cumulative probability of false positives across multiple statistical tests).

Bonferroni is the most strict (it divides the alpha level, e.g. .05, by the number of comparisons).

200

True or false: ANCOVA combines the analysis of variance with regression

True

300

List factors that affect power.

Increasing the alpha level, doing a one-tailed vs. two-tailed test, larger sample size, reduced error variance, and increasing the effect size of the IV.

300

What do B weights and Beta weights tell us? How are they different?

They represent the unique effect of each predictor on the DV. B weights are in raw-score units, while Beta weights (β) are in standardized z-score units.

300

When would one use a Mann-Whitney U test instead of a Wilcoxon signed-rank test?

Use a Mann-Whitney U (or Wilcoxon rank-sum) when you have independent groups and nonparametric data. Use a Wilcoxon signed-rank test when you have dependent groups and nonparametric data.
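In scipy these are two different functions. The sketch below is illustrative (hypothetical data, scipy assumed), not part of the course materials:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Independent groups (different participants): Mann-Whitney U / rank-sum
group_a = rng.exponential(scale=1.0, size=25)
group_b = rng.exponential(scale=1.5, size=25)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)

# Dependent groups (same participants measured twice): Wilcoxon signed-rank
pre = rng.exponential(scale=1.0, size=25)
post = pre + rng.normal(loc=0.3, scale=0.5, size=25)
w_stat, w_p = stats.wilcoxon(pre, post)
```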

300

What is the Kruskal-Wallis test?

the non-parametric counterpart of the one-way independent ANOVA

300

When and why do we use a Chi-square goodness of fit test?

To determine if the observed frequencies of a categorical variable match an expected distribution.

Chi-square tests are used both for goodness of fit (one categorical variable) and for tests of association (two categorical variables).
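As a worked example, a goodness-of-fit test against a uniform expected distribution can be run in a few lines. The counts below are hypothetical die rolls (scipy assumed); this is an illustration, not course data:

```python
from scipy import stats

# Observed counts for the six faces of a die (120 rolls total)
observed = [18, 22, 16, 14, 19, 31]

# Goodness of fit: expected frequencies default to equal counts (20 each)
chi2, p = stats.chisquare(observed)

# chi2 = sum((O - E)^2 / E) = 182 / 20 = 9.1 with df = 5
print(chi2)   # 9.1
```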

400

List all the assumptions for a parametric test.

1. Normally distributed sampling distribution
2. Homogeneity of variance/ Homoscedasticity
3. Interval or ratio data
4. Independence of scores.

400

List all the assumptions for regression.

1. Linearity

2. Normality 

3. Independence of scores

4. Independence of errors (errors should not be correlated)

5. Minimal multicollinearity: The predictors (IVs) should not be highly correlated with each other. Rule of thumb is no higher than r = .80 between predictors. 

6. Homoscedasticity

400

Why do we center continuous variables around the mean in moderation analyses?

To reduce multicollinearity between the predictors and their interaction term.
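The effect of centering can be seen directly in simulated data. This is an illustrative Python sketch (hypothetical variables, numpy assumed), not course material:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(loc=50, scale=10, size=300)   # predictor with a nonzero mean
z = rng.normal(loc=30, scale=5, size=300)    # moderator

# Uncentered: the product term x*z is strongly correlated with x itself
r_raw = np.corrcoef(x, x * z)[0, 1]

# Centered: the product term's correlation with x drops sharply
xc, zc = x - x.mean(), z - z.mean()
r_centered = np.corrcoef(xc, xc * zc)[0, 1]

print(abs(r_centered) < abs(r_raw))   # True
```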

400

What does this output tell us?

Anova Table (Type III tests)

Response: ActivityLevel
                            Sum Sq  Df  F value     Pr(>F)
(Intercept)                 53.542   1  21.9207  9.323e-05 ***
PartnerActivityLevel        17.182   1   7.0346    0.01395 *
dose                        36.558   2   7.4836    0.00298 **
PartnerActivityLevel:dose   20.427   2   4.1815    0.02767 *
Residuals                   58.621  24
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The relationship between the partner's activity level and the participant's activity level differs depending on the dose administered.

400

What is Cramer's V?

A measure of effect size for categorical variables in a chi-squared test; it is interpreted like Pearson's r.

500

How can we assess for normality?

Central Limit Theorem, graphical displays (Q-Q plot, histogram), values of skew and kurtosis, and the Shapiro-Wilk test.

500

What can we say from this output about the variance explained from advertising budgets on sales?

             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept) 1.341e+02   7.537e+00   17.799    <2e-16 ***
adverts     9.612e-02   9.632e-03    9.979    <2e-16 ***

Residual standard error: 65.99 on 198 degrees of freedom
Multiple R-squared: 0.3346, Adjusted R-squared: 0.3313
F-statistic: 99.59 on 1 and 198 DF, p-value: < 2.2e-16

Advertising budget significantly explained 33.1% of the variance in sales (adjusted R² = .331).
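For reference, R² is just the squared correlation in a simple regression. The sketch below uses hypothetical data that loosely mimics the output's coefficients (scipy assumed); it is an illustration, not the original dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
adverts = rng.uniform(0, 2000, size=200)                      # hypothetical budgets
sales = 134 + 0.096 * adverts + rng.normal(scale=66, size=200)

res = stats.linregress(adverts, sales)
r_squared = res.rvalue ** 2   # proportion of variance in sales explained
```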

500

What is bootstrapping and why do we use it?

Bootstrapping is a resampling technique (repeatedly sampling with replacement from the observed data) used to estimate standard errors and confidence intervals when assumptions like normality are broken.
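A minimal percentile-bootstrap confidence interval looks like this; the sketch is illustrative Python with hypothetical skewed data (numpy assumed), not part of the course materials:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.exponential(scale=2.0, size=50)   # a small, skewed sample

# Resample with replacement many times; each resample yields one estimate
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2_000)
])

# Percentile 95% confidence interval for the mean, no normality assumed
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo < data.mean() < hi)   # True
```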

500

What is a simple effect in a factorial ANOVA?

The variability among treatment means associated with one IV at a particular level of another IV

500

List two assumptions specific to a chi-squared test.

1. Independence: Each person, item or entity contributes to only one cell of the contingency table.
2. The expected frequency in each cell should be greater than 5.
