What is the primary difference between an exploratory and an explanatory research question?
Exploratory = understanding / generating ideas
Explanatory = testing relationships or causes
A nonprofit reports that clients who completed their job-readiness program had higher employment rates six months later, so the program was declared successful.
What research design issue is present here, and what cannot be concluded from this information?
No comparison or control group
Cannot claim the program caused the improvement
Alternative explanations (selection, motivation, time)
What is one key difference between experimental and observational studies?
Random assignment vs no random assignment
Why is ethical research more than just following IRB rules?
Ongoing responsibility
Interpretation and use matter
What is operationalization?
Defining how a concept is measured
A group of authors deliberately submitted nonsensical or ideologically loaded papers to academic journals to test whether they would be accepted through peer review.
True or False: This actually happened?
True!
Refers to the “grievance studies” hoax by James Lindsay, Peter Boghossian, and Helen Pluckrose.
What is the difference between N and n?
Total sample vs analytic sample
Your supervisor sends you an article in an email with the statement: "The study proves that CBT is ineffective for trauma survivors."
How would you revise this claim to reflect what a study can actually support?
“This study did not find evidence of effectiveness…”
Naming sample limits
Avoiding universal language (“proves,” “ineffective for all”)
Which type of research question is most appropriate when little prior research exists?
Exploratory
A headline reads: “New Study Proves Social Media Causes Anxiety in Teens.” The study surveyed teens once about social media use and anxiety symptoms.
Identify the mismatch between the study design and the claim being made.
Cross-sectional design
Association/Correlation is not Causation
Overclaiming in media
Why does random assignment strengthen causal claims?
Reduces selection bias
Balances confounders
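To make the "balances confounders" point concrete, here is a minimal Python simulation sketch with entirely hypothetical data (a made-up "motivation" confounder, not from any study):

```python
import numpy as np

rng = np.random.default_rng(42)
motivation = rng.normal(loc=50, scale=10, size=1000)  # hypothetical confounder

assigned = rng.permutation(1000) < 500  # randomly flag half as "treatment"
print(motivation[assigned].mean())      # treatment group's baseline motivation
print(motivation[~assigned].mean())     # control group's baseline motivation
# The two group means come out nearly identical: random assignment spreads
# the confounder evenly. With self-selection (e.g., the most motivated
# clients enrolling), no such balance is guaranteed.
```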
What are the three components of Evidence-Based Practice?
Best research evidence
Clinical expertise
Client values/context
What is the difference between reliability and validity?
Reliability = consistency
Validity = accuracy/meaning
Some journals have been found to accept articles with obvious methodological flaws or plagiarized content after charging authors large publication fees.
True or False: This is a documented problem in academic publishing?
True. These are known as predatory journals.
Why does sampling method affect generalizability?
Determines who findings apply to
Bias risk
You are a clinician working with a client experiencing panic symptoms. During supervision, your supervisor references a recent study summarized online that found “limited effectiveness” of the intervention you’re currently using. The summary does not mention sample size, population, or outcome measures. Your client reports feeling more stable for the first time in months.
What is the most responsible immediate clinical response, and what should not be done at this stage?
Do not abruptly change treatment
Integrate evidence cautiously
Seek original study details
Center client progress and values
Why must research questions be clearly defined before choosing methods?
Methods must align with the question
Prevents mismatch and invalid conclusions
The intervention group showed statistically significant improvement compared to baseline (p < .05), although effect sizes were small and confidence intervals were wide.
How should this finding be interpreted responsibly, and what language should be avoided?
Result may be real but small
Wide CIs indicate uncertainty
Avoid strong claims about impact or effectiveness
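A minimal Python sketch with hypothetical summary numbers (not from the study described) showing how a result can clear p < .05 while the effect is small and the confidence interval wide:

```python
import math
from scipy import stats

# Hypothetical: two groups of 400, outcome in SD units, so the mean
# difference equals Cohen's d.
n1 = n2 = 400
mean_diff = 0.15                       # d = 0.15: a "small" effect
se_diff = math.sqrt(1 / n1 + 1 / n2)   # standard error (pooled SD = 1)

t_stat = mean_diff / se_diff
df = n1 + n2 - 2
p_value = 2 * stats.t.sf(abs(t_stat), df)
t_crit = stats.t.ppf(0.975, df)
lo, hi = mean_diff - t_crit * se_diff, mean_diff + t_crit * se_diff

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")              # p ~ .034
print(f"95% CI for the difference: [{lo:.2f}, {hi:.2f}]")  # ~[0.01, 0.29]
# The CI barely excludes zero, so the true effect could be trivially small:
# "statistically significant" is not the same as "clinically meaningful".
```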
Name one reason a randomized study might still produce misleading results.
Attrition
Poor measurement
Implementation failure (i.e., poor treatment fidelity)
Why can technically correct research still cause harm?
Misuse
Overgeneralization
Ignoring context
(Bonus: even sound research does no good if it isn't read, shared, or applied!)
Why might a reliable measure still be inappropriate for a study?
Poor construct alignment
Lacks relevance to lived experience
A randomized clinical trial was retracted after it was discovered that researchers invented an entire dataset to support a popular therapy approach, including fake participants and fabricated outcomes.
True or False: This specific type of scandal is a documented case in mainstream social work or clinical psychology research?
False - fortunately!
Data fabrication exists, but this exact “entire RCT fully invented to support a therapy” scenario is intentionally exaggerated.
What is statistical power in plain language?
Ability to detect an effect if it exists
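A minimal sketch using statsmodels' power tools, with hypothetical numbers, to put that definition in code:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Chance of detecting a true "medium" effect (d = 0.5) with 30 per group:
print(analysis.power(effect_size=0.5, nobs1=30, alpha=0.05))  # ~0.47

# Per-group n needed to reach the conventional 80% power target:
print(analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05))  # ~64
```

Roughly a coin flip at n = 30 per group: even a real medium-sized effect would often go undetected.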
Your agency is revising its treatment guidelines. A meta-analysis reports statistically significant but small average effects for a trauma intervention. The populations included were mostly white, insured adults. Leadership wants to remove the intervention from the approved list for all clients.
What methodological and ethical considerations should inform this decision before any guideline changes are made?
Effect size vs clinical relevance
External validity
Risk of erasing benefit for subgroups
Ethics of premature removal
A study asks, “Does this program reduce reentry?”
Is this descriptive, explanatory, or evaluative - and why?
Evaluative (program evaluation)
Focused on program impact
A city council cites a study with N = 3,200 to justify cutting funding for a harm-reduction program. The key results are based on n = 180 participants who completed follow-up surveys.
What sampling and interpretation concerns should be raised before using this study to justify policy change?
N vs n distinction
Attrition bias
Limited generalizability
Ethical risk of overgeneralization
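The arithmetic behind the N vs n concern, as a tiny Python sketch using the numbers from the scenario:

```python
N = 3200  # enrolled sample cited in the headline claim
n = 180   # analytic sample that actually completed follow-up

print(f"Follow-up rate: {n / N:.1%}")  # ~5.6%
# Results rest on under 6% of enrollees; if completers differ systematically
# from dropouts, the estimates may not describe the enrolled population.
```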
Why can't cross-sectional studies establish causality, even with strong associations?
No temporal order
Confounding factors
Why is transparency about limitations an ethical obligation?
Prevents misuse
Supports informed decision-making
What is a proxy measure, and why is it sometimes used?
Indirect indicator
Used when direct measurement isn’t possible
A research team selectively reported outcomes that showed statistically significant results while downplaying or omitting null findings, leading to an exaggerated perception of effectiveness for an intervention.
True or False: This practice has been identified as a systemic problem across multiple fields?
True. Outcome switching, publication bias, and p-hacking do occur.
Note: pre-registration is meant to help with that!
Why should null findings from small samples be interpreted cautiously?
Low power
Increased uncertainty
A client brings in a TikTok claiming that “therapy doesn’t work for people like me,” citing a study that found no statistically significant effect for a subgroup similar to the client. The video does not mention sample size, confidence intervals, or study design.
As a clinician, how do you respond in a way that is evidence-informed and clinically supportive?
Validate concern
Contextualize limitations
Avoid invalidating lived experience
Maintain therapeutic alliance
How can poorly framed research questions lead to ethical or practical problems later in the research process?
Misuse of data
Harmful or irrelevant conclusions
Wasted resources
A randomized study finds no statistically significant effect of a trauma intervention for undocumented immigrants. The subgroup sample size was n = 27. The authors conclude the intervention "does not work for this population."
Identify at least two methodological or ethical problems with this conclusion.
Low statistical power
Subgroup analysis limitations
Risk of false negatives
Ethical harm and erasure of marginalized groups
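A minimal power check in Python (assuming the 27 participants split roughly evenly across the two arms, which the scenario does not specify) shows why this null result is weak evidence:

```python
from statsmodels.stats.power import TTestIndPower

# ~14 vs ~13 per arm if the subgroup of 27 split evenly; d = 0.5 is a
# conventionally "medium" effect.
power = TTestIndPower().power(effect_size=0.5, nobs1=14, ratio=13 / 14,
                              alpha=0.05)
print(f"Power to detect d = 0.5: {power:.2f}")  # ~0.24
# With roughly a 1-in-4 chance of detecting even a medium true effect,
# "no significant effect" cannot support "does not work for this population".
```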
Why can't strong statistical results fix a weak study design?
Design determines what claims are valid
Statistics cannot correct structural flaws
What does it mean to translate research findings responsibly into practice or policy?
Naming uncertainty
Avoiding overclaiming
Considering consequences
How can poor measurement undermine otherwise strong research findings?
Misrepresents the construct
Leads to misleading conclusions
True or False: Diagnoses are added to the DSM only when the supporting research evidence is cited and verified?
False! The DSM contains no references to research; final inclusions are based on clinical committee consensus and voting.
Why might marginalized groups be especially affected by low power in research studies?
Smaller subgroup sizes
Effects harder to detect
Risk of erasure (the "other" category)
A county health department plans to defund a community-based mental health program after a large study reported no statistically significant effects. Follow-up data were available for only 20% of participants, and the authors noted wide confidence intervals. Community members report perceived benefits that were not captured in the outcome measures.
What arguments should be made against using this study alone to justify defunding the program?
Attrition bias
Low precision
Measurement limits
Ethical risk of harm
Need for mixed-methods or additional data