Definitions of Reliability
How Do I Use It?
Types of Scores
Types of Errors
100

The quality of a test such that it produces consistent scores. 

What is Reliability? 

100

Correlate the scores from a test given at Time 1 with the same test given at Time 2.

What is Test-Retest Reliability?
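
A minimal sketch, assuming made-up Time 1 / Time 2 scores and NumPy (neither comes from the game itself), of how that correlation could be computed:

```python
# Test-retest reliability: correlate the same people's scores
# from two administrations of the same test.
import numpy as np

time1 = np.array([78, 85, 62, 90, 71, 88, 64, 95])  # hypothetical Time 1 scores
time2 = np.array([80, 83, 65, 92, 70, 85, 60, 96])  # hypothetical Time 2 scores

# The Pearson correlation between the two administrations
# serves as the test-retest reliability coefficient.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")
```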

100

Observed Score, True Score, and Error Score

What are Test Scores?

100

Errors within the individual taking the test.

What is Trait Error?

200

Reliability that examines consistency over time.

What is Test-Retest Reliability? 

200

Examine the percentage of agreement between raters.

What is Inter-Rater Reliability? 
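
A minimal sketch, with made-up ratings from two hypothetical raters, of how the percentage of agreement could be computed:

```python
# Inter-rater reliability as percent agreement:
# two raters scored the same 10 responses (1 = correct, 0 = incorrect).
rater_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical ratings
rater_b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # hypothetical ratings

# Percentage of agreement = matching ratings / total ratings.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a) * 100
print(f"Inter-rater agreement: {percent_agreement:.0f}%")
```
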
200

The actual score you get on a test.

What is Observed Score?

200

Errors within the testing situation.

What is Method Error?

300

A type of reliability that examines the consistency of raters. 

What is Inter-Rater Reliability?

300

Use Cronbach’s alpha to correlate every item with every other item on the test, which gives an average correlation among all parts of the test.

What is Internal Consistency Reliability?
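
A minimal sketch, assuming a made-up matrix of item responses and NumPy, of how Cronbach's alpha could be computed from the item and total-score variances:

```python
# Cronbach's alpha for a small, hypothetical item-response matrix:
# rows are test-takers, columns are items on the test.
import numpy as np

scores = np.array([
    [4, 5, 3, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 4, 5],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of total test scores

# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```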

300

The hypothetical score of a person’s real ability / knowledge / skill.

What is True Score?

300

The difference we’re seeing is due to an ACTUAL difference and NOT error.

What is Classical Test Theory?

400

When you want to know if performance on one part of the test is similar to performance on another part of the test. 

What is Internal Consistency Reliability?

400

Correlate the scores from one form of the test with scores from a second form of the test.

What is Parallel Forms Reliability? 
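
A minimal sketch, with made-up scores on two hypothetical forms of the same test, of how that correlation could be computed:

```python
# Parallel forms reliability: correlate scores from Form A
# with scores from Form B of the same test.
import numpy as np

form_a = np.array([72, 88, 65, 91, 79, 84])  # hypothetical Form A scores
form_b = np.array([70, 90, 68, 89, 75, 86])  # hypothetical Form B scores

r = np.corrcoef(form_a, form_b)[0, 1]
print(f"Parallel forms reliability: r = {r:.2f}")
```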

400

The difference between a person’s observed and true score.

What is Error Score?

400

Observed score = True score + (Trait error score + Method error score)

What is the Classical Test Theory Equation?
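
A tiny worked example of the equation, with purely hypothetical numbers:

```python
# Classical test theory: Observed = True + (Trait error + Method error).
true_score = 80     # the person's (unobservable) real level of knowledge
trait_error = 3     # e.g., the person guessed well that day
method_error = 2    # e.g., a generous scorer on an essay item

observed_score = true_score + (trait_error + method_error)
print(f"Observed score = {observed_score}")  # prints 85
```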

500

When you want to know if different forms of a test give equivalent scores.

What is Parallel Forms Reliability? 
