In a previous blog entry, I shared a list of AI fears. But when I ran the list through a megaprompt built on Trecek-King’s FLOATER acronym, the AI suggested that while the fears were well reasoned, they lacked evidence. Of course, it’s not that the fears lacked evidence, but that MY original post was missing evidence. This is part 3, where I share an AI fear and an AI’s response with AI evidence. Skeptical? Good.

Read the series here:

Confirmation Bias

One of the dangers of AI (and this blog post and subsequent ones may well prove it) is that one starts with the desired conclusion (fear of AI), then uses AI to analyze low-quality, incomplete evidence that supports that conclusion in a fit of motivated reasoning. That is confirmation bias in action: coming to a conclusion first, then marshaling evidence and “motivated” reasoning to support it.

“We think we follow evidence to a conclusion. In reality, we come to our beliefs in irrational ways, then work backwards to find evidence to rationalize the belief.” -Melanie Trecek-King, ThinkingIsPower.com

So, with that in mind, here’s the fear and the AI evidence. In Part 2 of this blog entry, I relied on Perplexity.ai. In this one, I use Claude.ai.

Is the conclusion merited or not?

FEAR #2: AI REPLACES THE HUMAN REASONING PROCESS

AI will replace critical thinking and reasoning. Learning how to reason is hard work. Learning about logical fallacies, and how to overcome our own biases, is harder still. Reasoning involves following evidence to a logical conclusion. Melanie Trecek-King (ThinkingIsPower.com) describes it:

Evidence -> Reasoning/Logic -> Conclusion

AI makes it very easy to rationalize, that is, to select evidence to justify a conclusion. Rationalizing might be described as: Desired Conclusion -> Motivated Reasoning (confirmation bias) -> Incomplete, low-quality evidence

AI makes it a simple matter to construct motivated reasoning from any evidence to reach the desired conclusion. But the fear with students using AI is that they NEVER develop a way of gathering evidence, figuring out what it says, and then coming to a conclusion. Writing has long been the way students practice that process.

AI’s Response

There are valid concerns that AI language models like myself could potentially hinder the development of critical thinking and reasoning skills in students if not used thoughtfully. Here is some of the key evidence to consider:

1. Ease of rationalizing desired conclusions: As you note, AI makes it very easy for students to cherry-pick evidence to support whatever conclusion they want to reach. They can prompt the AI to generate arguments for a particular view, without doing the hard work of gathering and evaluating evidence objectively. This risks reinforcing confirmation bias.

2. Outsourcing the reasoning process: If students rely too heavily on AI to analyze information and construct arguments for them, they may not fully develop their own skills in logical reasoning, evaluating evidence quality, recognizing fallacies, etc. Learning to reason well requires practice.

3. Potential to skip evidence gathering: A worry is that students will jump straight to generating a full essay with AI based on their existing views, without first doing a thorough investigation of evidence on the topic. Sound reasoning has to start with comprehensive, high-quality evidence.

4. Impact on motivation to learn: Some fear students will be less motivated to put in the hard work to develop reasoning capabilities if they can take an “easy route” with AI. Writing has traditionally been key for building these skills.

However, I would argue that while these are real risks to mitigate, AI can also potentially support critical thinking if used as a tool in the right ways - for example, by helping students more easily see different viewpoints on an issue to analyze, or by critiquing gaps in reasoning or evidence in student writing.

The key is for educators to very intentionally guide AI use as part of the learning process, with the goal of strengthening (not replacing) each student’s own thinking and reasoning capabilities. Students still need to be taught and assessed on their information gathering, logical reasoning, and drawing sound conclusions from evidence. AI should be a supplement to this educational process, not a substitute for it.

Proper instruction, support and assessment will be essential to ensure the next generation develops strong critical thinking faculties even in an age of AI assistance. The human reasoning process remains paramount. But AI, used carefully as an educational aid rather than a crutch, has potential to help cultivate these crucial skills. Research and experimentation will be important to figure out the best approaches.