The appropriate internal consistency (reliability) statistic for these Insight Assessment critical thinking skills tests is the KR-20 coefficient, used for instruments with dichotomously scored items. Reliability coefficients range from .77 to .83, extremely high for a measure of an attribute as complex as critical thinking. Scale score statistics demonstrate similar strength.
The appropriate internal consistency reliability coefficient for the reasoning skills instruments is the Kuder-Richardson formula (KR-20) because scoring on these instruments is dichotomous. However, this coefficient is known to underestimate the actual reliability of an instrument when there are fewer than 50 items and when the construct being measured is not highly homogeneous.
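As a concrete illustration of the statistic being discussed, KR-20 can be computed directly from a matrix of dichotomous (0/1) item scores. This is a minimal sketch, not Insight Assessment's scoring code; the function name and the choice of sample variance (ddof=1) are assumptions, as conventions differ on whether population or sample variance is used in the denominator.

```python
import numpy as np

def kr20(scores):
    """Kuder-Richardson Formula 20 reliability coefficient.

    scores: 2-D array-like (respondents x items) of dichotomous 0/1 item scores.
    KR-20 = (k / (k - 1)) * (1 - sum(p_j * q_j) / var(total scores)),
    where p_j is the proportion answering item j correctly and q_j = 1 - p_j.
    """
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    p = scores.mean(axis=0)                       # proportion correct per item
    q = 1.0 - p                                   # proportion incorrect per item
    total_var = scores.sum(axis=1).var(ddof=1)    # sample variance of total scores (assumed convention)
    return (n_items / (n_items - 1)) * (1.0 - (p * q).sum() / total_var)

# Example with 4 respondents and 3 items (hypothetical data):
data = [[1, 1, 1],
        [1, 1, 0],
        [1, 0, 0],
        [0, 0, 0]]
print(kr20(data))  # -> 0.9375
```

Note that with fewer items the coefficient shrinks toward underestimating reliability, which is the behavior described above for short, non-homogeneous instruments.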
KR-20 values of .70 or above are deemed evidence of strong internal consistency in non-homogeneous measures. This level of internal consistency is the standard used in the development of Insight Assessment critical thinking skills instruments. The OVERALL Scores of all versions of the reasoning skills tests meet or exceed this .70 criterion in the validation samples and in large model population samples. KR statistics in this range are typically observed in independent samples when the sample size and variance are adequate. Factor loadings for items range from .300 to .770.
The traditional delineation of reasoning into deductive or inductive cuts across the APA Delphi Report’s list of core critical thinking skills. This means that any given inference, analysis, or interpretation, for example, might be classified as deductive or as inductive, depending on how the theoretician conceives of these more traditional and somewhat contested categories. Conceptually, the skills in the Delphi list are not necessarily discrete cognitive functions either; in actual practice they are used in combination during the process of forming a reasoned judgment, that is, critical thinking. In some contexts a given skill can be considered foremost, even though other skills are also being used. For example, a given test question may call heavily upon a test taker’s numeracy skills while at the same time requiring the correct application of the person’s analytical and interpretive skills. For these reasons, and others relating to test design and cognitive endurance, the questions on the CCTST in its various versions may or may not be used on more than one scale. As a result, although the specific skill scores reported have internal consistency reliability, test-retest reliability, and strong value as indicators of specific strengths and weaknesses, they are not independent factors. This is theoretically appropriate to the holistic conceptualization of critical thinking as the process of reasoned and reflective judgment, rather than simply a list of discrete skills.
The discussion of the Kuder-Richardson statistic applies to the following measures of cognitive skills: the CCTST, BCTST, TER, HSRT, Quant-Q, BRT, the EDUCATE INSIGHT Reasoning Mindset series, and the second parts of the LSRP, MDCTI, College Student Success, and the INSIGHT assessments.