Always a pessimist, Popper held that experiments are performed in order to do this to a theory.
What is falsify (or disprove)?
Looking for the effects of college life on infatuation? Better have an age-matched control to avoid this threat to internal validity.
What is maturation?
If your database has a lot of "infinity" values, this type of validity may be compromised.
What is data evaluation validity?
Latin for "I shall be pleasing", this is what we may call sugar pills if they are given as a control for medical or psychological interventions.
What is a placebo?
A little bird told me this is a place where you can summarize your research in 280 characters, or a thread of 280-character messages.
What is Twitter?
Hypotheses are beliefs about how things are. They can inspire predictions--i.e., beliefs about this.
What is "how things will be in a particular situation, such as an experiment"?
This threat to external validity comes from having too little variance in your experimental stimuli.
What is narrow stimulus sampling?
Whether or not your students play in a band, changes in how they score the data can lead to this threat to internal validity.
What is instrumentation threat?
Even without an experiment, you can make a comparison using this type of control--though watch out for construct validity threats.
What is a case control?
Look on the bright side! This negative result is much more interpretable because you have this type of control.
What is a positive control?
Whether or not you're counting things is beside the point; this type of research emphasizes the observation stage of science.
What is qualitative research?
DAILY DOUBLE: This picture depicts the locations of bullet holes in World War II planes that returned to base. Initially, the military thought these were the parts of the plane that should be reinforced with more armor. This is the flaw in that logic.
What is "survivorship bias" (i.e., sampling bias due to attrition)?
Just because your result is significant doesn't mean it's important. This is the term used to describe the magnitude of a result.
What is effect size?
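As an aside, here is a minimal Python sketch of one common effect-size measure, Cohen's d (the standardized mean difference); the example numbers are made up purely for illustration.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference between two groups."""
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    # Pooled standard deviation (assumes roughly equal group variances)
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return mean_diff / pooled_sd

# A "significant" p-value with a tiny d may matter far less than a large d
treatment = [5.1, 5.4, 4.9, 5.6, 5.2]
control = [4.8, 5.0, 4.7, 5.1, 4.9]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```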
Pre-tests are valuable for interpretation, but if you really want to be thorough you can use a Solomon four-group design, which helps test for these.
What are testing effects?
When the variance of one variable is associated with the variance of another--it's a hint, but not proof, that one may cause the other.
What is a correlation?
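As an aside, here is a minimal Python sketch of the Pearson correlation coefficient; the hours/scores data are made up, and even a strong r here would be a hint, not proof, of causation.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient r, ranging from -1 to 1."""
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    # Sum of cross-products and sums of squared deviations (the shared n cancels out)
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5, 6]
scores = [52, 58, 61, 70, 73, 80]
print(f"r = {pearson_r(hours, scores):.2f}")  # strong association, not proof of cause
```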
You have a result!
But if you don't do this,
you may be in for a crisis.
What is replicate?
Not to be confused with a mathematical proof, this is when you're not actually looking for external validity.
What is proof of concept?
Back in the 1930s, Hans Selye found that ovarian hormones caused peptic ulcers. On closer look, it turned out that the ulcers were caused by the stress of his inexpert injections. Putting it all together, we see that there was this type of validity threat.
What is construct validity?
A technique used to ensure that even if participants within a group are variable, the variance is roughly equivalent between groups. Can also refer to groups wearing similar clothing.
What is matching?
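As an aside, here is a minimal Python sketch of matched-pairs assignment: participants are sorted on a covariate (age here, purely hypothetical) and each adjacent pair is split at random between the two groups so the groups end up comparably variable.

```python
import random

# Hypothetical participants with a covariate (age) we want balanced across groups
participants = [("P1", 19), ("P2", 34), ("P3", 21), ("P4", 33), ("P5", 25), ("P6", 26)]

# Sort by the covariate, then randomly split each adjacent pair between conditions
participants.sort(key=lambda p: p[1])
treatment, control = [], []
for i in range(0, len(participants), 2):
    pair = list(participants[i:i + 2])
    random.shuffle(pair)
    treatment.append(pair[0])
    if len(pair) > 1:
        control.append(pair[1])

print("Treatment:", treatment)
print("Control:  ", control)
```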
Don't understand the physics of the effect? This is one reason why studies demonstrating ESP are unconvincing.
What is the lack of a plausible mechanism?
You can do research in many ways, but doing this--often the focus of a research design class--is especially helpful for assessing the contingency between one variable and another.
What is conduct an experiment?
In the small town of Middlebury, a study showed there were fewer accidents on larger roads. But a follow-up study found that this was because larger roads had medians separating those traveling one way from those traveling another. We would use this term to describe the role of road medians in the original effect.
What is a mediator?
This phrase, which is abbreviated by its acronym "WEIRD", has been used to refer to potential limits in the external validity of many research experiments.
What is "Western, educated, industrialized, rich and democratic"?
These two words are used to describe how consistent and how accurate (or predictive) a measure is.
What are reliability and validity?
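As an aside, here is a minimal Python sketch of one common reliability estimate, Cronbach's alpha (internal consistency); the questionnaire data are made up, and validity would still require comparing scores against an external criterion.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability of a multi-item scale.
    item_scores: list of items, each a list of one score per respondent."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # total score per respondent
    item_var = sum(statistics.variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / statistics.variance(totals))

# Hypothetical 3-item questionnaire answered by 5 respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```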
If you want to convince reviewers you haven't been p-hacking, it can be helpful to communicate the science before you conduct the study by doing this.
What is pre-register?