What are suboptimal and ideal approaches for measuring the dependent variable?
Suboptimal approaches to measuring the dependent variable rely on self-reported attitudes or intentions, which may be biased or weakly linked to behavior. Ideal approaches involve observing actual behavior or performance, as these measures provide stronger validity and reduce reliance on subjective reporting.
What are surveys, and why are they quantitative?
Surveys are structured instruments that collect standardized responses from a large number of respondents. They are quantitative because they transform perceptions, attitudes, or characteristics into numerical data that can be analyzed statistically and compared across individuals or groups.
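To make the quantification concrete, here is a minimal sketch of how ordinal survey answers become numbers that can be analyzed statistically (the labels and the 1-to-5 coding are illustrative assumptions, not a fixed standard):

```python
# Map Likert-style response labels to numeric codes (labels and coding are illustrative).
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

answers = ["agree", "neutral", "strongly agree", "agree"]  # one respondent's raw answers
scores = [LIKERT[a] for a in answers]                      # quantified responses
mean_score = sum(scores) / len(scores)                     # now amenable to statistics
print(scores, mean_score)
```

Once coded this way, responses can be averaged, compared across groups, and fed into the statistical analyses described below.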
What are internal and external validity?
Internal validity refers to whether observed effects can be confidently attributed to the variables studied rather than to confounding factors. External validity refers to the extent to which findings can be generalized to other contexts, populations, or settings.
What is power analysis, and why is it important?
Power analysis is used to determine the minimum sample size required to detect an expected effect with a given level of confidence. It helps researchers balance statistical rigor with practical constraints and reduces the risk of inconclusive results.
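A minimal sketch of such a calculation for a two-sided, two-sample comparison, using the normal approximation (the function name and default values are illustrative; exact t-based calculations give slightly larger answers):

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Minimum n per group to detect a standardized effect (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the Type I error rate
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A "medium" effect (Cohen's d = 0.5) needs roughly 63 participants per group;
# a "small" effect (d = 0.2) needs roughly 393 per group.
print(sample_size_per_group(0.5), sample_size_per_group(0.2))
```

The steep increase in required sample size for smaller effects is exactly why power analysis must happen before, not after, data collection.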
What is the difference between correlation and causation?
Correlation indicates that two variables move together, but it does not imply that one causes the other. Causation requires evidence that changes in one variable directly produce changes in another, ruling out confounding variables, reverse causality, and spurious relationships.
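A small seeded simulation illustrates the point: two variables that share a confounder correlate strongly even though neither causes the other (the variable labels in the comments are hypothetical):

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(42)
confounder = [random.gauss(0, 1) for _ in range(2000)]   # e.g. firm size
x = [c + random.gauss(0, 0.5) for c in confounder]       # e.g. training budget
y = [c + random.gauss(0, 0.5) for c in confounder]       # e.g. employee retention

# x and y are strongly correlated, yet neither has any causal effect on the other:
print(round(pearson(x, y), 2))
```

Conditioning on the confounder (or randomizing it away in an experiment) is what separates a causal claim from a correlational one.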
What are manipulation checks?
Manipulation checks are measures used to verify whether participants perceived the experimental manipulation as intended. They help ensure that any observed effects can be attributed to the manipulation rather than misunderstanding or inattention, thereby strengthening internal validity.
What response options can be used in surveys?
Survey responses may take the form of nominal categories, ordinal rankings, interval or ratio scales, open-ended answers, or non-response options. The choice of response format affects both measurement precision and the types of statistical analyses that can be performed.
What are the implications of pragmatism?
Pragmatism emphasizes choosing methods based on what best answers the research question rather than adhering strictly to one philosophical stance. It supports methodological flexibility and often underpins mixed-methods research.
What are effect size and error probabilities?
Effect size measures the magnitude of a relationship or difference, while error probabilities refer to the risk of false positives (Type I error) and false negatives (Type II error). Together, they determine the statistical power of a study.
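As a minimal sketch, the most common standardized effect size is Cohen's d, the mean difference divided by a pooled standard deviation (the sample data below are invented for illustration):

```python
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

treatment = [6, 7, 5, 8, 7, 6]
control = [5, 4, 6, 5, 4, 5]
# By Cohen's conventions, values around 0.2 are "small", 0.5 "medium", 0.8+ "large".
print(round(cohens_d(treatment, control), 2))
```

This is the effect-size input that a power analysis combines with the chosen Type I and Type II error rates.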
How do experiments, surveys, and archival data differ in their ability to establish causality, and when would each be most appropriate?
Experiments are best suited for establishing causal relationships because they involve manipulation of an independent variable and random assignment, which minimizes confounding factors and maximizes internal validity. Surveys are appropriate when the goal is to measure attitudes, perceptions, or self-reported behavior across larger samples, but they typically allow only correlational inference. Archival data rely on pre-existing records collected for other purposes and are valuable for studying real-world behavior at scale, offering high external validity but limited control over variables and weaker causal inference. The choice depends on whether the research aims to test causality, describe patterns, or analyze naturally occurring outcomes.
What is the difference between between-subjects and within-subjects designs?
In a between-subjects design, each participant is exposed to only one experimental condition, which reduces carryover effects but typically requires larger samples. In a within-subjects design, the same participants are exposed to multiple conditions, increasing statistical power but introducing risks such as order and learning effects. Each design involves trade-offs between control, efficiency, and validity.
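An illustrative simulation (all distributions and variance values are assumptions) shows where the within-subjects power gain comes from: stable individual baselines cancel out of paired differences, while they add noise to a between-subjects comparison:

```python
import random
from statistics import stdev

random.seed(1)
true_effect = 0.5  # assumed treatment effect
n = 500

# Within-subjects: each participant (with a stable personal baseline) sees both conditions.
baselines = [random.gauss(0, 2) for _ in range(n)]
cond_a = [b + random.gauss(0, 1) for b in baselines]
cond_b = [b + true_effect + random.gauss(0, 1) for b in baselines]
paired_diffs = [y - x for x, y in zip(cond_a, cond_b)]

# Between-subjects: different participants per condition, so baselines stay in the noise.
group_a = [random.gauss(0, 2) + random.gauss(0, 1) for _ in range(n)]

# Paired differences remove between-person variance, so their spread is much smaller
# than the spread of raw between-subjects observations (about sqrt(2) vs sqrt(5) here).
print(round(stdev(paired_diffs), 2), round(stdev(group_a), 2))
```

Less noise around the same effect means fewer participants are needed, at the cost of the order and learning risks noted above.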
What are prerequisites for high-quality surveys?
High-quality surveys use clear and precise language, rely on everyday terms, avoid assumptions, and ensure neutrality. Questions should focus on one issue at a time, avoid suggestive wording and superlatives, and be designed to minimize misunderstanding and bias.
What is archival data, and why is it suitable for quantitative research?
Archival data consist of pre-existing records collected for purposes other than the current research question, such as organizational databases, financial reports, or government statistics. These data are suitable for quantitative research because they are typically pre-structured, large-scale, and allow statistical analysis of real-world phenomena.
What considerations matter when selecting participants?
Participant selection should align with the research question while balancing practical constraints, generalizability, and ethical considerations. Researchers often use proxies for the population of interest but must justify why the proxy is appropriate.
Why is power analysis important in quantitative sampling, and how does it relate to effect size and error probabilities?
Power analysis is important because it determines the minimum sample size needed to detect an effect of a given magnitude with acceptable statistical confidence. It balances the risk of Type I errors (false positives) and Type II errors (false negatives) by considering the expected effect size, significance level, and desired statistical power. Without sufficient power, studies risk producing inconclusive or misleading results even when true effects exist.
Explain the role and limitations of incentives in experiments.
Incentives are used to motivate participants to take tasks seriously and to align experimental behavior with real-world decision-making. However, incentives can also distort behavior if participants misunderstand how rewards are earned or focus narrowly on maximizing payoffs rather than behaving naturally. Misunderstood incentives may therefore undermine validity rather than improve it.
What survey biases may arise, and how can they be addressed?
Surveys may suffer from halo effects, social desirability bias, common method bias, and order effects. These biases can be mitigated through careful question design, anonymity, separation of measures, randomization of question order, and thoughtful survey structure.
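Order effects, for instance, can be addressed by giving each respondent an independently shuffled question order; a minimal sketch (the question wordings are placeholders):

```python
import random

questions = [
    "How satisfied are you with your current role?",
    "How fairly do you feel you are paid?",
    "How likely are you to recommend this employer?",
]

def randomized_survey(questions, seed=None):
    """Return an independently shuffled question order for one respondent."""
    rng = random.Random(seed)
    order = questions[:]  # copy so the master list stays intact
    rng.shuffle(order)
    return order

# Each respondent sees a different order, so order effects average out across the sample.
print(randomized_survey(questions, seed=7))
```

Most survey platforms offer this per-respondent randomization as a built-in option; the sketch simply shows what it does.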
What is an exogenous shock, and how is it used in research?
An exogenous shock is an external, unexpected event that affects some units but not others independently of their characteristics. Researchers exploit such shocks as quasi-experiments to strengthen causal inference when randomized experiments are not feasible.
Why is pilot testing important?
Pilot testing allows researchers to identify problems in design, measurement, and procedures before full-scale data collection. It improves clarity, reduces error, and enhances overall study quality.
What are the key trade-offs between internal validity and external validity across experiments and surveys?
Experiments typically prioritize internal validity by tightly controlling conditions and isolating causal effects, but this can reduce external validity if the setting is artificial. Surveys often achieve higher external validity because they are conducted in natural contexts and can reach broader populations, but they sacrifice causal control and are more susceptible to biases such as self-reporting and common method bias. Researchers must therefore trade off causal precision against generalizability depending on the research question.
Explain the concept of scales and why they are appropriate.
Scales are standardized sets of items used to measure abstract constructs such as attitudes, preferences, or moral values. They are appropriate because they allow latent concepts to be measured reliably and consistently across respondents. Validated scales have been tested for reliability and validity, reducing measurement error and enhancing comparability across studies.
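The reliability of a multi-item scale is commonly summarized with Cronbach's alpha; a minimal sketch of the computation (the three-item scale and the responses are invented for illustration):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores has one inner list per respondent, one column per item."""
    k = len(item_scores[0])                                 # number of items in the scale
    item_vars = [variance(col) for col in zip(*item_scores)]  # variance of each item
    total_var = variance([sum(row) for row in item_scores])   # variance of the scale sums
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Five respondents answering a hypothetical three-item scale on 1-5 Likert points:
responses = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 4],
]
print(round(cronbach_alpha(responses), 2))
```

Values around 0.7 or higher are conventionally taken as acceptable internal consistency; the highly consistent toy data above score well beyond that.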
What must be considered when sampling for surveys?
Survey sampling requires careful definition of the population and sampling frame, attention to representativeness, and ideally the use of random sampling. These considerations determine whether findings can be generalized beyond the sample.
What are the main strengths and limitations of using surveys compared to archival data in quantitative research?
Surveys are well suited for measuring attitudes, perceptions, and self-reported behaviors, allowing researchers to design variables that directly align with theoretical constructs and research questions. This provides high construct validity and flexibility, but surveys are vulnerable to biases such as social desirability, common method bias, and non-response. Archival data, in contrast, consist of pre-existing, naturally occurring records that are often large-scale and less prone to self-report bias, which strengthens objectivity and external validity. However, archival data are constrained by how variables were originally collected, limiting control over operationalization and making causal inference more challenging. The choice between surveys and archival data therefore involves a trade-off between theoretical precision and researcher control versus scale, realism, and objectivity.
What is the role of attention checks?
Attention checks identify respondents who are disengaged or inattentive. Excluding such responses improves data quality and reduces noise and bias in analysis.
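In practice this amounts to filtering out respondents who fail an instructed-response item; a minimal sketch (the field names and data are hypothetical):

```python
# "ac" holds the answer to an instructed-response item such as
# "Please select 'agree' for this question." Field names are illustrative.
responses = [
    {"id": 1, "ac": "agree", "satisfaction": 4},
    {"id": 2, "ac": "disagree", "satisfaction": 5},  # failed the attention check
    {"id": 3, "ac": "agree", "satisfaction": 2},
]

clean = [r for r in responses if r["ac"] == "agree"]   # keep only attentive respondents
failure_rate = 1 - len(clean) / len(responses)         # report this rate transparently
print(len(clean), round(failure_rate, 2))
```

The exclusion rule and the resulting failure rate should be decided in advance and reported, since a high rate can itself signal design problems.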
How do attention checks, manipulation checks, and pilot testing jointly contribute to data quality in quantitative research?
Attention checks ensure that respondents are engaged and providing meaningful answers, manipulation checks verify that experimental treatments are perceived as intended, and pilot testing identifies design flaws before full data collection. Together, they improve data quality by reducing noise, detecting misunderstanding, and strengthening internal validity. High failure rates in any of these checks signal potential issues with survey design, task complexity, or respondent fatigue and must be addressed transparently.