Researchers must demonstrate that their instruments are reliable: without reliability, results obtained with an instrument are not replicable, and replicability is fundamental to the scientific method. Reliability is the correlation of an item, scale, or instrument with a hypothetical one that truly measures what it is supposed to measure. Since the true instrument is not available, reliability is estimated in one of four ways:
• Internal consistency: Estimation based on the correlation among the variables comprising the set (typically, Cronbach's alpha)
• Split-half reliability: Estimation based on the correlation of two equivalent forms of the scale (typically, the Spearman-Brown coefficient)
• Test-retest reliability: Estimation based on the correlation between two (or more) administrations of the same item, scale, or instrument at different times, locations, or populations, when the two administrations do not differ on other relevant variables (typically, the Spearman-Brown coefficient)
• Inter-rater reliability: Estimation based on the correlation of scores between/among two or more raters who rate the same item, scale, or instrument (typically, intraclass correlation, of which there are six types discussed below).
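As a minimal sketch of the first approach (not from the original text, and using hypothetical data), Cronbach's alpha can be computed directly from its variance formula, alpha = k/(k-1) × (1 − Σ item variances / total-score variance):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item Likert scale (1-5) answered by 6 respondents
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 5, 4],
    [3, 2, 3, 3, 3],
])
print(round(cronbach_alpha(scores), 3))  # prints 0.955
```

Because the hypothetical respondents answer all five items consistently, the items covary strongly and alpha is high; items answered at random would drive it toward zero.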
These four estimation methods are not necessarily mutually exclusive, nor need they lead to the same results. All reliability coefficients are forms of correlation coefficients, but the multiple types discussed below represent different meanings of reliability, and more than one may be used in a single research setting.
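To illustrate how a second estimate can sit alongside internal consistency in the same setting, here is a hedged sketch (again on hypothetical data) of odd-even split-half reliability with the Spearman-Brown step-up correction, r_SB = 2r / (1 + r):

```python
import numpy as np

def split_half_spearman_brown(items: np.ndarray) -> float:
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    odd = items[:, 0::2].sum(axis=1)   # total score on odd-numbered items
    even = items[:, 1::2].sum(axis=1)  # total score on even-numbered items
    r = np.corrcoef(odd, even)[0, 1]   # correlation of the two half-tests
    return 2 * r / (1 + r)             # step up to full-length reliability

# Same hypothetical 5-item, 6-respondent score matrix as above
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 5, 4],
    [3, 2, 3, 3, 3],
])
print(round(split_half_spearman_brown(scores), 3))
```

The correction is needed because each half-test is shorter (hence less reliable) than the full scale; the two estimates need not agree exactly, which is the point of the paragraph above.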
The full content is available from Statistical Associates Publishers.
Below is the table of contents.
Table of Contents

Overview
Key Concepts and Terms
    Scores
    Number of scale items
    Models
    SPSS
    SAS
    Triangulation
    Calibration
Internal consistency reliability
    Cronbach's alpha
        Overview
        Interpretation
        Cut-off criteria
        Formula
        Number of items
    Cronbach's alpha in SPSS
        Example 1
            Alpha if deleted
            Item-total correlation
            R-squared
            Negative alphas
            KR20
        Example 2
            Standardized item alpha
    Cronbach's alpha in SAS
        SAS syntax
        SAS output
    Other internal consistency reliability measures
        Ordinal reliability alpha
        Raykov's reliability rho (ρ)
        Armor's reliability theta
        Spearman's reliability rho
Split-half reliability
    Split-half reliability in SPSS
        Overview
        SPSS menu selections
        Spearman-Brown split-half reliability coefficient
        Guttman split-half reliability coefficient
        Guttman's lower bounds (lambda 1-6)
    Split-half reliability in SAS
    Odd-Even Reliability
        Overview
        SPSS
Test-retest reliability
    Overview
Inter-rater reliability
    Overview
    Cohen's kappa
        Kappa in SPSS
        Example
        Interpretation
        Weighted Kappa
    Intraclass correlation (ICC)
        Example
        Sample size: ICC vs. Pearson r
        Data setup
        Interpretation
        Obtaining ICC in SPSS
        Single versus average measures
        ICC Models
        ICC use in other contexts
Assumptions
    Additivity
    Independence
    Uncorrelated error
    Consistent coding
    Random assignment of subjects
    Equivalency of forms
    Equal variances
    Similar difficulty of items
    Same assumptions as for correlation
Frequently Asked Questions
    How is reliability related to validity?
    How is Cronbach's alpha related to factor analysis?
    How is reliability related to attenuation in correlation?
    How should a negative reliability coefficient be interpreted?
    What is Cochran's Q test of equality of proportions for dichotomous items?
    What is the derivation of intraclass correlation coefficients?
    What are Method 1 and Method 2 in the SPSS RELIABILITY module?
Bibliography