What is the difference between reliability and validity in measurement?
Reliability is the consistency of a measure; validity is whether it measures what it is intended to measure.
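As a concrete illustration of reliability as consistency, here is a minimal sketch of internal-consistency reliability (Cronbach's alpha), assuming NumPy is available and using a small hypothetical respondents × items array:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a respondents x items array."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from 4 people on a 3-item scale
responses = np.array([[4, 5, 4],
                      [2, 2, 3],
                      [5, 4, 5],
                      [3, 3, 3]])
print(round(cronbach_alpha(responses), 2))  # ~0.92: items hang together
```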
What is the main distinction between random sampling and random assignment?
Random sampling selects who is in the study (supports generalizability); random assignment places participants into conditions (supports internal validity).
What is the purpose of descriptive statistics in research?
To summarize or describe data (e.g., mean, median, mode).
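A minimal sketch using Python's standard library on hypothetical exam scores:

```python
from statistics import mean, median, mode

scores = [70, 85, 85, 90, 100]  # hypothetical exam scores
print(mean(scores), median(scores), mode(scores))  # 86 85 85
```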
Which scale of measurement involves ranking items but not assuming equal intervals between ranks?
Ordinal scale
Which research method involves observing people in their natural environments without interference?
Naturalistic observation (field observation)
Which type of validity assesses whether a study’s results can be generalized to other populations and settings?
External validity
What is a pretest-posttest control group design? Why is it better than a one-group pretest-posttest design?
Two groups are measured before and after; one receives the treatment, the other does not. The control group rules out threats such as history, maturation, and testing effects that a one-group pretest-posttest design cannot.
What is the difference between confidence interval and confidence level?
Interval = the range of plausible values for the parameter; level = how often intervals built this way would contain the true value in repeated sampling (e.g., 95%).
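A worked sketch for a 95% interval around a mean, using hypothetical data and the normal critical value 1.96 for simplicity (with a sample this small, a t critical value would be more precise):

```python
import math
from statistics import mean, stdev

data = [5.1, 4.8, 5.4, 5.0, 4.9, 5.2, 5.3, 4.7]  # hypothetical measurements
m, s, n = mean(data), stdev(data), len(data)
half_width = 1.96 * s / math.sqrt(n)  # 95% confidence level -> z = 1.96
print(f"95% CI: ({m - half_width:.2f}, {m + half_width:.2f})")
```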
What is the operational definition of a variable, and why is it important?
It specifies exactly how the variable is measured or manipulated; needed for clarity and replication.
What is a key limitation of using focus groups in applied communication research?
Groupthink or dominant voices can bias results; lack of generalizability.
Name one threat to internal validity and give a brief example.
Example: Maturation – participants naturally change over time, such as becoming more tired or experienced.
In survey design, what is a filter question and how is it used effectively?
It screens respondents (e.g., “Do you own a smartphone?” → only owners see follow-up questions).
Define skewness and give an example of a positively skewed distribution.
Skewness = asymmetry of a distribution; positively skewed = the tail stretches to the right, with most values clustered at the low (left) end, like income data.
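A minimal sketch, assuming SciPy is installed, on hypothetical income-like values:

```python
from scipy.stats import skew

incomes = [25, 30, 32, 35, 38, 40, 45, 50, 120, 400]  # hypothetical, in $1000s
print(skew(incomes))  # positive value: long right tail from a few high earners
```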
Give an example of a double-barreled survey question and explain why it's problematic.
“Do you like your instructor and the class?” → two ideas in one; unclear which one is being rated.
Describe the basic logic of grounded theory research.
Theory emerges inductively from patterns found in qualitative data.
How do you improve construct validity when designing a measurement instrument?
Use established scales, pilot testing, and clear operational definitions.
Define and give an example of a factorial experiment.
A study that includes two or more IVs simultaneously, allowing tests of main effects and interactions. Ex: effect of ad color (red/blue) and viewer gender (M/F) on recall, a 2 × 2 factorial.
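One way such a design could be analyzed is a two-way ANOVA. A sketch assuming pandas and statsmodels are available, with hypothetical recall scores; `C(color) * C(gender)` fits both main effects and their interaction:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical recall scores for each color x gender combination
df = pd.DataFrame({
    "color":  ["red", "red", "red", "red", "blue", "blue", "blue", "blue"],
    "gender": ["M", "M", "F", "F", "M", "M", "F", "F"],
    "recall": [7, 8, 9, 8, 5, 6, 6, 5],
})
model = ols("recall ~ C(color) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction
```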
A study finds a significant p-value (p < .05). What does this mean in terms of hypothesis testing?
If the null hypothesis were true, results at least this extreme would occur less than 5% of the time, so we reject the null hypothesis.
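A minimal sketch of this logic with SciPy's independent-samples t-test on hypothetical group scores:

```python
from scipy.stats import ttest_ind

control   = [3.1, 2.8, 3.5, 3.0, 2.9, 3.2]  # hypothetical scores
treatment = [3.9, 4.2, 3.8, 4.5, 4.0, 4.1]
t, p = ttest_ind(treatment, control)
verdict = "reject" if p < 0.05 else "fail to reject"
print(f"p = {p:.4f}: {verdict} the null hypothesis")
```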
What’s the difference between manifest and latent content in content analysis?
Manifest = surface-level (e.g., word count); Latent = underlying meaning (e.g., tone, values).
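For manifest content, coding can be fully automated; a minimal sketch counting word frequencies in a hypothetical text (latent coding, by contrast, requires human judgment about meaning):

```python
from collections import Counter
import re

text = "The ad promises freedom. Freedom sells, and the ad knows it."  # hypothetical
words = re.findall(r"[a-z']+", text.lower())
print(Counter(words).most_common(3))  # surface-level frequencies only
```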
How does triangulation enhance the credibility of a research study?
It uses multiple methods or sources to cross-check findings, increasing trustworthiness.
A researcher finds high reliability across items in a scale but poor validity. What might be happening, and why is it problematic?
The scale consistently measures something, but not the intended concept. Like a clock that is reliably five minutes fast, it is consistent yet wrong, so every conclusion drawn from it is systematically off.
What is a major threat to internal validity in a field experiment, and how might it be addressed?
Lack of control over the environment (e.g., history effects from outside events). Can be addressed with random assignment, matched groups, or measuring and statistically controlling extraneous variables.
How do Type I and Type II errors differ, and what are the implications for interpreting research findings?
Type I: false positive (rejecting a true null); Type II: false negative (failing to reject a false null). Lowering α to guard against Type I errors raises the risk of Type II errors, so interpretation depends on which error is more costly.
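A simulation sketch of the trade-off, assuming NumPy/SciPy, normal data, and α = .05: when the null is true, about 5% of tests reject it (Type I); when a real effect exists, the tests that fail to reject commit Type II errors:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha, runs, n = 0.05, 2000, 20

def reject_rate(effect: float) -> float:
    """Share of simulated experiments that reject the null at alpha."""
    rejections = 0
    for _ in range(runs):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        if ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / runs

print("Type I rate (no true effect):", reject_rate(0.0))    # ~0.05
print("Type II rate (true effect):", 1 - reject_rate(0.8))  # nontrivial miss rate
```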
Explain the process of concept explication with an example, including conceptual and operational definitions.
Conceptual: define abstract idea (e.g., stress = mental strain). Operational: how it’s measured (e.g., survey score or cortisol level).
In a mixed methods study, what distinguishes a sequential design from a concurrent design? Give an example.
Sequential = one phase informs the next (e.g., survey results shape follow-up interviews); Concurrent = both are collected at the same time and the findings are merged.