The Belmont Report outlines three main ethical principles. What are they?
Respect for persons, beneficence, and justice.
A study observes untreated syphilis in 400 Black men without informing them of the true purpose. What major ethical issue does this highlight?
Lack of informed consent.
A researcher is conducting a study on life satisfaction. They ask participants to fill out a survey where they rate their life satisfaction from 1 to 7 on various statements. What type of measurement is the researcher using?
Self-report measure.
Which type of variable would be used to categorize individuals into male, female, or non-binary in a psychological study?
Categorical (nominal) variable.
A new assessment claims to measure leadership skills, but many participants feel that the test items, which focus on basic math problems, don’t seem to match what they expect leadership to involve. What kind of validity is likely to be questioned based on this perception?
Face validity.
A survey asks, "How often do you use social media?" but doesn’t clearly define what "often" means. How might this affect the survey's results?
It may harm construct validity, as respondents might interpret "often" differently, leading to unclear or inconsistent data.
A study asks participants to report on their gender identity, income, and life satisfaction. In this case, why might self-report data be the best method for collecting this information?
Self-report data is good in this case because gender identity, income, and personal life satisfaction are subjective experiences that are best understood directly from the individual.
A researcher conducts a study on road rage using a sample of drivers from a small town. What question would you ask to assess if the results can apply to drivers across the entire country?
"Does this sample of drivers adequately represent the population of American drivers?" This addresses generalizability and external validity.
A researcher surveys 1,000 voters about an election. Why might we be able to predict the election results from this survey, even though we didn’t ask everyone?
Because the sample might be representative of the larger voting population, allowing for accurate generalization.
A researcher conducts a study where participants are not given a full explanation of the risks involved. One participant later finds out that the study involved a potential health risk they were unaware of. Which ethical principle from the Belmont Report has been violated?
Respect for persons (informed consent was not properly obtained).
A study involves minimal risk, such as participants completing an anonymous online survey. Should the IRB meet in person to review this study, and why or why not?
No. Minimal-risk studies typically qualify for expedited review, in which one or a few IRB members evaluate the protocol without convening the full board in person, provided the study does not involve vulnerable populations.
A research team is studying gratitude. They define gratitude as the appreciation a person feels when receiving help. They then ask participants how often they thank their partner when they receive help. What type of definition is "how often they thank their partner"?
Operational definition.
In a study, participants take an IQ test twice, one month apart. What type of reliability is being assessed by comparing the consistency of their scores over time?
Test-retest reliability.
A psychologist develops a new test to measure creativity by assessing how many different colors a person can name in 30 seconds. Although the test provides consistent results, many experts argue that color-naming doesn’t accurately reflect creative thinking. What type of validity is in question?
Construct validity.
A political poll asks, "Do you support Candidate A or Candidate B?" without allowing for a neutral option. What kind of question format is this?
This is a forced-choice question, which limits respondents to two options and doesn’t allow for neutrality.
During a study, a manager is told that a certain group of employees are "high performers." As a result, the manager unknowingly gives them more positive feedback, and their performance improves. What is this called?
This is an example of the observer effect (also known as the expectancy effect), where the observer’s expectations influence the participants' behavior.
A study selects students by taking every 5th person from a list of all students at a school. What is this sampling technique called?
Systematic sampling, where participants are chosen at regular intervals from a larger population.
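The procedure can be sketched in a few lines of Python (a minimal illustration; the student list, the interval of 5, and the random starting point are assumed for the example, not taken from any particular study):

```python
import random

def systematic_sample(population, k):
    """Take every k-th member of the list, starting from a random
    position within the first interval so that each member has an
    equal chance of selection."""
    start = random.randrange(k)
    return population[start::k]

students = [f"student_{i}" for i in range(100)]
sample = systematic_sample(students, 5)
print(len(sample))  # 20: one student from every block of 5
```

Starting at a random position within the first interval is what keeps the technique probability-based rather than arbitrary.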
If a pollster only surveys people who are easy to reach, like students on campus or people in their neighborhood, what type of sampling is this, and why is it problematic?
This is convenience sampling, which can lead to biased results because the sample may not be representative of the entire population.
A research team is planning a medical study where participants will be given a new treatment that has potential side effects. The team believes the treatment could be beneficial but hasn’t fully evaluated the risks. What ethical principle should they carefully assess before proceeding?
Beneficence (they need to ensure the benefits outweigh the risks and protect participants from harm).
In a study, participants are told they are completing a task to improve memory, but the true purpose of the experiment is to test their reactions to stress. What ethical standard is being applied when participants are not fully informed about the study's real purpose?
Deception (Standard 8.07), but the researcher must ensure the deception does not cause harm and should debrief participants afterward.
Researchers studying social behavior ask participants how many times they smiled during a day. They also observe participants in a social setting and count the actual number of smiles. What two types of measures are they using?
Self-report and observational measures.
A researcher uses a scale to measure subjective well-being, with items worded differently but all designed to measure the same concept. What type of reliability is the researcher assessing?
Internal reliability.
A gym uses a new fitness assessment to predict how well clients will perform in an upcoming marathon. Later, they compare the assessment scores to the actual marathon finishing times and find that higher fitness scores didn’t correspond with faster times. What type of validity is lacking?
Criterion validity (specifically predictive validity): the assessment failed to predict the outcome it was designed to predict.
A survey asks, "Do you agree that the company should improve work conditions and increase pay?" Why is this a poor question, and how could it be improved?
It’s a double-barreled question because it asks about two different things (work conditions and pay). It should be split into two separate questions.
A researcher asks participants to recall their experience of a natural disaster that happened five years ago. Many provide specific details that differ from their earlier accounts given right after the event. What does this tell us about self-reporting on memories?
Self-reporting on memories can be unreliable, as memories can change over time, influenced by emotions, media, or other people’s accounts.
A researcher selects only people who responded to an email survey. Why might this sample not be generalizable to the entire population?
This creates a self-selection bias, where only those who are willing and available respond, making the sample unrepresentative of the entire population.
A study includes 100 South Asian Canadians in a sample of 1,000, even though this group only makes up 4% of the population. What is this technique called, and why might it be used?
Oversampling, used to ensure that smaller groups are adequately represented for statistical accuracy.
Researchers are studying a new medical treatment, but they only test it on individuals from a low-income neighborhood. Meanwhile, the treatment is intended to benefit people from all backgrounds. Which principle of research ethics is being ignored, and how?
Justice (the burden of risk is unfairly placed on one group while others benefit).
A professor publishes a study claiming to have collected data from hundreds of participants, but in reality, no participants were involved, and the data was entirely invented. What term describes this unethical behavior?
Data fabrication, where the researcher creates data that was never collected.
Researchers measure body temperature in degrees Celsius. What type of scale is this?
Interval scale.
A correlation coefficient of r = .93 is found between two measurements. What does this indicate about the strength of the relationship?
The relationship is very strong.
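For reference, the correlation coefficient can be computed by hand; here is a minimal pure-Python sketch using made-up score lists (the data are illustrative, not from the study described):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear data yields r = 1.0; r = .93 is close to this ceiling.
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 2))  # 1.0
```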
A new scale measuring job satisfaction is compared to a measure of physical health. The researcher finds a very low correlation between the two measures, supporting what type of validity for the job satisfaction scale?
Discriminant validity.
A respondent answers "Strongly agree" to every question on a life satisfaction survey, regardless of the content. What bias is this an example of, and how does it affect the survey’s results?
This is an example of acquiescence bias (yea-saying), where respondents agree with everything. It reduces the construct validity of the survey by failing to measure their true opinions.
A researcher wants to observe the natural behavior of employees in a workplace but notices that productivity spikes whenever the employees know they are being watched. What can the researcher do to reduce reactivity?
The researcher could use unobtrusive observation methods, such as hidden cameras or long-term observation, where participants get used to being observed and return to their normal behavior.
A university randomly selects 100 freshmen, 100 sophomores, 100 juniors, and 100 seniors to participate in a study on academic stress. What kind of sampling method is this?
This is stratified random sampling, where participants are divided into subgroups (strata), and random samples are taken from each group.
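A minimal sketch of stratified random sampling, assuming illustrative class rosters and a fixed seed for reproducibility (the roster sizes and names are made up):

```python
import random

def stratified_sample(strata, n_per_stratum, seed=0):
    """Draw a simple random sample of the same size from each stratum."""
    rng = random.Random(seed)
    return {name: rng.sample(members, n_per_stratum)
            for name, members in strata.items()}

classes = {
    "freshmen":   [f"fr_{i}" for i in range(500)],
    "sophomores": [f"so_{i}" for i in range(450)],
    "juniors":    [f"jr_{i}" for i in range(400)],
    "seniors":    [f"sr_{i}" for i in range(350)],
}
sample = stratified_sample(classes, 100)
print({name: len(group) for name, group in sample.items()})
```

Because selection within each stratum is random, this remains a probability sampling method; quota sampling looks similar on the surface but drops the random-selection step.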
A researcher asks participants with Crohn’s disease to invite other people they know with the condition to join the study. What sampling method is this, and what is a key limitation?
This is snowball sampling, which may lead to biased results because the sample is not random and may not be representative of the larger population.
A study is recruiting participants from a local care home, where many elderly people have cognitive impairments. The researchers do not take special care to explain the study in a way these individuals can understand. Which group from the Belmont Report’s principles is at risk in this scenario, and what should the researchers have done?
Vulnerable populations (the researchers should provide extra protection and ensure informed consent is obtained in a way participants can understand).
A psychologist plans to conduct a study on rats to test a new drug but hasn’t yet proven that using animals is necessary. What ethical standard must the researcher meet before proceeding with the study?
The researcher must justify why animal subjects are necessary, as required by federal animal-welfare regulations and guided by the "Three Rs" (Replacement, Reduction, Refinement).
In a survey, participants are asked to rank their satisfaction from "very satisfied" to "not satisfied at all." What type of measurement scale does this represent?
Ordinal scale.
What is considered an acceptable value for Cronbach’s alpha to indicate a reliable scale?
.70 or higher is commonly treated as acceptable; .80 or higher is often preferred.
A researcher designs a stress survey and administers it to two groups: a group of fire-fighters just after a long shift and a group of students on a calm vacation. The fire-fighters score higher on the survey. What type of validity is demonstrated by this comparison?
Known-groups evidence for criterion validity.
A professor is rated on a scale from "Profs get F's too" (1) to "A real gem" (5). What type of scale is being used here?
This is a semantic differential scale, where respondents rate an object along a continuum between two opposing adjectives.
A researcher expects that a new teaching method will improve student performance, and this expectation leads the teacher to give the students more attention during the study. What is the difference between observer bias and observer effects in this scenario?
Observer bias occurs when the researcher interprets the data to fit their expectations, while observer effects occur when the participants' behavior changes due to the researcher’s presence or actions, like giving more attention to students.
In a study, researchers randomly assign participants to different treatment groups. What is this process called, and how does it differ from random sampling?
This is random assignment, which enhances internal validity by ensuring treatment groups are comparable. It differs from random sampling, which enhances external validity by making the sample representative of the population.
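The distinction can be made concrete with a short sketch of random assignment (participant labels, group names, and the fixed seed are illustrative assumptions):

```python
import random

def random_assignment(participants, groups=("treatment", "control"), seed=0):
    """Shuffle the participants, then deal them round-robin into
    groups of near-equal size."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assigned = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        assigned[groups[i % len(groups)]].append(person)
    return assigned

people = [f"p{i}" for i in range(10)]
result = random_assignment(people)
print(len(result["treatment"]), len(result["control"]))  # 5 5
```

Note that this operates on people already in the study (internal validity); random sampling would instead govern how those 10 people were drawn from the population in the first place (external validity).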
In a study, the researcher sets a target to survey 80 Asian Americans, 80 African Americans, and 80 Latinx participants but does not select them randomly. What type of sampling is being used?
This is quota sampling, where specific subgroups are targeted but selected nonrandomly.
A research team wants to study how video games affect decision-making in children under 12. To attract more participants, they offer each child $500 for joining the study, a very large sum for a child. Why might this be considered unethical?
Offering a large sum of money could unduly influence minors and violate respect for persons, particularly vulnerable populations like children.
A research lab is inspected every 6 months to ensure animals are treated ethically and according to the rules. What organization conducts these inspections, and what happens if the lab violates these rules?
The IACUC conducts the inspections, and if the lab violates rules, the IACUC or government agencies can shut down the experiment or withdraw funding.
What statistical measure combines the average inter-item correlation and the number of items in a scale to evaluate internal reliability?
Cronbach’s alpha.
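That description matches the standardized form of Cronbach's alpha, which can be written directly in terms of the number of items k and the average inter-item correlation; a minimal sketch with an illustrative 10-item scale:

```python
def standardized_alpha(k, mean_r):
    """Standardized Cronbach's alpha from the number of items (k)
    and the average inter-item correlation (mean_r)."""
    return (k * mean_r) / (1 + (k - 1) * mean_r)

# e.g., 10 items whose average inter-item correlation is .30
print(round(standardized_alpha(10, 0.30), 2))  # 0.81
```

The formula shows why adding more items of comparable quality raises alpha even when the average inter-item correlation stays the same.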
A bathroom scale consistently shows the same weight, but the weight is always off by 10 pounds. What does this scenario demonstrate?
High reliability, but low validity.
A researcher develops a pedometer that consistently provides the same reading every day, but it doesn’t match the actual number of steps walked. Which is the pedometer showing: reliability or validity?
Reliability (but not validity).
A researcher gives respondents the option to answer "neutral" on every question in a survey about environmental policies. Why might this lead to inaccurate data, and how could the researcher prevent this behavior?
Fence sitting might lead to inaccurate data by not capturing true opinions. The researcher could remove the neutral option or use forced-choice questions to prevent this.
A survey asks students how many hours they study each week. The students know their professors will see the results, so they exaggerate their study time. Why is self-report data not a good choice in this case?
Self-report data would not be good in this case because of social desirability bias. Students might report what they think looks good rather than their actual behavior.
Researchers ensure that a smaller ethnic group makes up at least 20% of their sample, even though this group is less than 10% of the population. After the sample is collected, participants are split into treatment and control groups. What combination of techniques might they be using?
Oversampling and random assignment.
A researcher studying a rare medical condition recruits participants from a specific clinic known for treating this condition. After the initial participants are recruited, they are asked to refer others they know with the same condition. What combination of sampling techniques is being used?
Purposive sampling and snowball sampling.