Evidence of gains in overall critical thinking skills
Our cross-sectional data from the California Critical Thinking Skills Test (CCTST), although uncontrolled for the many changes that have occurred in higher education over the past decade and a half, provide evidence of gains in overall critical thinking skills in the college student population (undergraduate and graduate) over the past 15 years.
This evidence is very modest, and yet it is encouraging when we consider the still uncontrolled influences on this analysis.
Table 1. Comparison of CCTST overall mean scores by student population over time
Compared with 2005, educational institutions currently enroll a higher percentage of high school graduates and offer a wider variety of programs to an ever more diverse student population (e.g. working vs. not; full-time vs. part-time; caring for children vs. not; FTC vs. re-entry, etc.).
Since 2012, student enrollment at for-profit institutions has also increased disproportionately, and online programs have become increasingly prevalent; the influence of these variables was still undetermined at the time of this writing.
So, although the news is positive about potential gains in critical thinking skills in the college student population, we need a deeper look into whether students are improving over time.
The Impact of Teaching Critical Thinking
There is one other important difference: educators have focused considerable attention on the development of critical thinking. Does teaching for thinking make a difference at the macro level?
Today, publications on ways to teach for critical thinking in different disciplines abound, as compared to twenty or thirty years ago. Meta-analyses, like those conducted by Abrami and colleagues1 (2008 and 2015), provide a wealth of support for the claim that students gain strength in critical thinking after an effective training program.
Our discussions with researchers, dissertation students, and employee development professionals lead us to conclude that initiatives to improve critical thinking are occurring in at least 50 countries around the world. And, as the publication of peer-reviewed papers studying the effectiveness of case-based learning, human simulators, reflective journaling, concept mapping, and various other learning approaches indicates, studies conducted in many countries document critical thinking gains in various national populations.2
Is all this attention to critical thinking having a measurable overall impact?
Table 2. Comparison of CCTST overall score distribution in college undergraduates.
Table 2 compares the groups, and the accompanying graphic verifies that the samples were normally distributed. The change in CCTST Overall score demonstrates an average gain of 1.4 points. This gain is statistically significant (t = 9.10, p < .001). More important, it is educationally significant, demonstrating that those in the 2019 sample were better able to reason to an accurate response and avoid common human reasoning errors. This supports the assertion that the educational emphasis on training reasoning skills is paying off.
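As a minimal sketch of how a reported t statistic relates to its p-value, the calculation below uses the normal approximation to the t distribution, which is reasonable for large samples. The sample sizes behind the reported comparison are not reproduced in this excerpt, so this is illustrative only, not a reconstruction of the original analysis.

```python
import math

def two_tailed_p_normal_approx(t_stat: float) -> float:
    """Approximate two-tailed p-value for a t statistic using the
    standard normal distribution (large-sample approximation)."""
    # Standard normal CDF via the error function
    cdf = 0.5 * (1.0 + math.erf(t_stat / math.sqrt(2.0)))
    return 2.0 * (1.0 - cdf)

# Reported statistic for the CCTST Overall score comparison
p = two_tailed_p_normal_approx(9.10)
print(p < 0.001)  # True: consistent with the reported p < .001
```

A t statistic of 9.10 lies far in the tail of the reference distribution, so the approximate p-value is vanishingly small, which is consistent with reporting it simply as p < .001.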
The figure below illustrates the positive shift in scores over this time frame.
These findings provide some basis for confidence that improvements are occurring in the critical thinking skills of baccalaureate students. Going deeper than the CCTST Overall score, we found statistically significant growth in all the cognitive skill metrics the original versions of the CCTST assessed: Analysis (t = 7.84, p < .001), Inference (t = 7.96, p < .001), Evaluation (t = 5.66, p < .001), Induction (t = 11.78, p < .001), and Deduction (t = 5.55, p < .001).
Recent forms of the CCTST have expanded the score package to include Interpretation, Explanation, and Numeracy. Differences are commonly observed in relation to particular skill areas.
Stronger scores are more typically seen for Analysis (to analyze problem situations and identify the significance of the critical data), Inference (to base conclusions on evidence and reasoning), Explanation (to provide a reason or justification for an action or belief), and Induction (to confirm or disprove hypotheses using evidence and reasoning).
Weaker scores are more common for Interpretation (identifying the critical details of the problem), Evaluation (determining the quality of an analysis, inference, judgment, etc.), Deduction (reasoning in logically precise contexts), and Numeracy (reasoning in contexts that involve numbers, proportions, probability, flow rates and other quantitative conditions).