Quantitative and Qualitative Measurement
Reliability and Validity
Scales and Indexes
Survey Questionnaires
Survey Interviews
100
These three features separate quantitative from qualitative approaches to measurement
What are timing, the data itself, and how we connect concepts with data? (p. 200)
100
These are three types of reliability.
What are stability reliability (reliability across time), representative reliability (reliability across subpopulations or different types of cases), and equivalence reliability (when a construct is measured with multiple specific measures)? (pp. 208-209) Which are characteristic of your study?
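Equivalence reliability, as defined above, asks whether multiple specific measures of one construct hang together. A common statistic for this is Cronbach's alpha (not named in the source; the data below are made-up illustration values), sketched here:

```python
# Sketch: Cronbach's alpha, a common statistic for equivalence
# reliability (internal consistency across multiple indicators).
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per indicator (rows = indicators)."""
    k = len(items)
    total_scores = [sum(vals) for vals in zip(*items)]      # per-respondent totals
    item_var = sum(pvariance(vals) for vals in items)       # sum of item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(total_scores))

# Three hypothetical indicators answered by four respondents.
items = [
    [4, 5, 3, 2],
    [4, 4, 3, 1],
    [5, 5, 2, 2],
]
print(round(cronbach_alpha(items), 2))  # → 0.94
```

Values near 1 suggest the indicators move together; values near 0 suggest they do not measure one construct.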
100
________ scales provide an ordinal-level measure of a person's attitude.
What is Likert? (pp. 226-230)
100
These are three errors to avoid when engaging in survey research.
What are errors in selecting the respondents, errors in responding to survey questions, and errors in survey administration? (p. 313)
100
The ___________ interview is less structured than a standard interview, and is treated as a social situation in which respondents must interpret the meaning of a survey question.
What is conversational interviewing? (p. 341)
200
Five ways through which we construct conceptual definitions.
What are thinking carefully, observing directly, consulting with others, reading what others have said, and trying possible definitions? (p. 201)
200
These four things will improve reliability.
What are (1) clearly conceptualizing constructs, (2) using a precise level of measurement, (3) using multiple indicators, and (4) using pilot tests? (p. 209)
200
To avoid the problem of the response set, do this in creating a Likert Scale.
What is switching the direction of the scoring order? (p. 228) Bonus: why?
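Reverse-worded items only work if they are re-scored so all items run in the same direction. A minimal sketch on a 5-point Likert scale (the item names and responses are hypothetical illustrations):

```python
# Sketch: reverse-scoring a negatively worded Likert item so that a
# respondent who agrees with everything (a response set) stops
# inflating the total score in one direction.

def reverse_score(response, points=5):
    """Flip a Likert response: 5 -> 1, 4 -> 2, 3 -> 3, etc."""
    return points + 1 - response

# A yea-sayer answers "strongly agree" (5) to both items,
# but neg_item is worded in the opposite direction.
responses = {"pos_item": 5, "neg_item": 5}

scored = {
    "pos_item": responses["pos_item"],
    "neg_item": reverse_score(responses["neg_item"]),
}
print(scored)  # → {'pos_item': 5, 'neg_item': 1}
```

After reverse-scoring, the yea-sayer's two answers contradict each other (5 vs. 1) instead of piling up, which is why switching the scoring direction guards against the response set.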
200
Name some of the ten problems to avoid when writing survey questions.
What are:
1. avoid jargon, slang, and abbreviations
2. avoid ambiguity, confusion, and vagueness
3. avoid emotional language and prestige bias
4. avoid double-barreled questions
5. avoid leading questions
6. avoid questions beyond respondents' capabilities
7. avoid false premises
8. avoid asking about distant future intentions
9. don't not avoid double negatives ;)
10. avoid overlapping or unbalanced response categories
(pp. 314-317)
200
A _______ is a neutral request to clarify an ambiguous answer, to complete an incomplete answer, or to obtain a relevant response.
What is a probe? (p. 345)
300
________________ links a conceptual definition to a set of measurement techniques or procedures.
What is operationalization? (p. 203)
300
____________ ______________ tells us how well the conceptual and operational definitions mesh with one another.
What is measurement validity? (p. 211)
300
__________ scaling begins with a large number of statements that cover all shades of opinion.
What is Thurstone? (Measures agreement/disagreement but not the intensity thereof; assumes agreement about where statements fall in the rating system; time-consuming and costly; possible to get the same score in several ways.) (p. 231)
300
Four ways to increase honest answering about sensitive topics.
What are 1. create comfort and trust 2. use enhanced phrasing 3. establish a desensitizing context 4. use anonymous questioning methods (320-321)
300
These are the six categories of interview bias.
What are 1. errors by the respondent 2. unintentional errors or interviewer sloppiness 3. intentional subversion by the interviewer 4. influence due to the interviewer's expectations 5. failure of an interviewer to probe 6. influence on the answers due to the interviewer's appearance? (p. 347)
400
This is the sequence of quantitative measurement.
What is conceptualization, then operationalization, then application of the operational definition (i.e., the collection of data)? (p. 204)
400
These are four types of measurement validity.
What are:
1. face validity (consensus that the indicator measures the construct)
2. content validity (the content is specified, all areas of the definition are sampled, and the indicators reach all parts of the definition)
3. criterion validity (the indicator is compared with another measure of the same construct in which a researcher has confidence; can be concurrent, as with MAP or ACT tests, or predictive, like the SAT)
4. construct validity (do the various indicators operate in a consistent manner? May be convergent: multiple measures of the same construct hang together or operate in similar ways; or discriminant: indicators are negatively associated with opposing constructs)
(pp. 212-214)
400
_________ scaling is used to determine whether a structured relationship exists among a set of indicators (whether multiple indicators about an issue have an underlying single dimension or cumulative intensity).
What is Guttman? (p. 235)
400
An ________-_______ question asks a question to which respondents can give any answer.
What is open-ended? (as opposed to closed-ended; pp. 323-326)
400
The ____________ __________ model views all human encounters as highly dynamic, complex mutual interactions in which even minor, unintended forms of feedback have an influence.
What is the collaborative encounter model? (p. 349)
500
True or false: in qualitative studies, operationalization comes before conceptualization.
What is true? (p. 205) And why? (p. 206)
500
Consistency/plausibility and truthfulness/detailed evidence-gathering are the ways that _____________ researchers approach reliability and validity.
What is qualitative? (pp. 214-216)
500
A __________ ____________ allows us to test whether a patterned hierarchical relationship exists in the data gathered via Guttman scaling.
What is a scalogram analysis? (p. 235)
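The hierarchical pattern that scalogram analysis tests for can be sketched in a few lines (a simplified illustration, assuming the indicators are already ordered from easiest to hardest to endorse):

```python
# Sketch: scalogram analysis checks whether responses to a set of
# indicators fit a cumulative (Guttman) pattern: once a respondent
# fails to endorse an item, they should not endorse any harder item.

def is_scale_type(pattern):
    """True if all 1s (endorsements) precede all 0s in the pattern."""
    seen_zero = False
    for answer in pattern:
        if answer == 0:
            seen_zero = True
        elif seen_zero:  # a 1 after a 0 breaks the hierarchy
            return False
    return True

patterns = [(1, 1, 1), (1, 1, 0), (1, 0, 0), (0, 0, 0), (0, 1, 1)]
for p in patterns:
    print(p, is_scale_type(p))
```

Of the five patterns above, only (0, 1, 1) violates the hierarchy; in a full scalogram analysis, the share of such "error" patterns across respondents feeds a coefficient of reproducibility.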
500
One type of survey and its advantages and disadvantages.
What is one of:
1. mail/self-administered. Pro: can give them directly to respondents; offers anonymity. Con: low response rate; lack of control.
2. telephone. Pro: can sample from lists; high response rates; computer-assisted telephone interviewing. Con: sharp drop-off rate; moderately high cost; inconvenience to the interviewee.
3. face-to-face. Pro: highest response rates; allows the longest, most complex questionnaires. Con: high cost; interviewer bias.
4. web surveys. Pro: fast, inexpensive; flexible design; can be static or interactive. Con: low quality control; coverage, privacy/verification, and design concerns.
(pp. 337-340)
500
These six methods of pilot testing can improve questionnaires and interviews.
What are: 1. Think-aloud interviews 2. Retrospective interviews and targeted probes 3. Expert evaluation (like AC 653) 4. Behavior coding 5. Field experiments 6. Vignettes and debriefing? (p. 351)