In an online course I co-developed, an educator used AI to write the following question:

What are we so afraid of in education that would make us not embrace this new technology in order to educate the future leaders of our world?

A quick aside: Wow, it’s been so long since I posted something that would end up in a forum (a walled garden), so I’m putting it on my blog to make sure I don’t lose it.


My response appears below.

Update: The FLOATER analysis at the end of this blog entry suggested that, while well-reasoned, this entry lacked evidence. So I asked AI for evidence, which will appear in subsequent blog entries. Read the series here:

The real fear isn’t that teachers will use AI to make their own work less burdensome and time-consuming. There are several fears about the use of AI in education. Here are the ones I’ve observed.


Students will increasingly rely on AI-generated creations to get the desired output for a grade. But the desired output, the product that represents students’ knowledge, isn’t the real end of education. What matters is the process students’ brains go through to arrive at the product that is graded. Student AI use jeopardizes that process, and it yields output indistinguishable from actual student work.


AI will replace critical thinking and reasoning. Learning how to reason is hard work. Learning about logical fallacies, and how to overcome our own biases, is harder still. Reasoning involves following evidence to a logical conclusion. Melanie Trecek-King (Thinking Is Power) describes it:


AI makes it very easy to rationalize, that is, to select evidence to justify a conclusion. Rationalizing might be described as: Desired Conclusion -> Motivated Reasoning (confirmation bias) -> Incomplete, low-quality evidence

AI makes it a simple matter to construct motivated reasoning from any evidence to reach the desired conclusion. But the fear is that students who use AI will never develop a way of gathering evidence, figuring out what it says, and then coming to a conclusion, because writing has long been the way that skill is developed.


One of the tough things to comprehend is that AI can replace the work trained professionals do, and with the right guidance and the right prompts, it can do so effortlessly. This means that people’s work product, what they get paid to produce (as opposed to being paid to reason), is easy to replace. With one person and an AI, an employer may get away with cutting a few other jobs.

It’s not hard to imagine AI robots eventually taking over physical tasks that are dull, repetitive, and which employ low-skilled workers. Worse, we can easily imagine AI using its own power to leapfrog from advancement to advancement to replace jobs that involve creation of some work product.

Those are three of the fears I see in people who resist AI. Whether they are supported by evidence is another question, and that is what the FLOATER analysis below addresses.


What I try to do now is run my own reasoning through an AI application of Melanie Trecek-King’s FLOATER, an acronym (Falsifiability, Logic, Objectivity, Alternative explanations, Tentative conclusions, Evidence, Replicability) for a rubric for sorting through misinformation. You can learn about it here.

This is what a FLOATER-powered AI (Perplexity) had to say about the fears I raise above using a FLOATER megaprompt (a.k.a. long prompt) I prepared.

To evaluate the claims about the fears of AI in education using the FLOATER model, we will assign a score to each letter of the acronym based on the provided sources and the reasoning behind each score. The scores will range from 1 to 5, with 1 being the lowest (least satisfactory) and 5 being the highest (most satisfactory).

FLOATER Evaluation Table

| Criteria | Score | Reasoning |
| --- | --- | --- |
| Falsifiability | 4 | The fears about AI replacing student brain processing, human reasoning, and people’s functions are falsifiable through empirical studies and observations. However, some aspects, like long-term societal impacts, are harder to test. |
| Logic | 4 | The fears are logically structured and follow a clear argument. They are based on reasonable assumptions about how AI might impact education and employment. |
| Objectivity | 3 | The fears are subjective and reflect a particular viewpoint. While they are logically presented, they may be influenced by biases and do not fully consider the potential benefits of AI. |
| Alternative Explanations | 3 | The fears do not fully explore alternative explanations or potential benefits of AI in education. They focus primarily on negative outcomes without considering how AI might be integrated to support learning. |
| Tentative Conclusions | 3 | The conclusions are presented as definitive fears rather than tentative possibilities. A more balanced approach would acknowledge the complexity and evolving nature of AI in education. |
| Evidence | 2 | The fears are not supported by specific evidence within the query. While they are plausible, they would be strengthened by citing studies or data that demonstrate the impact of AI on education and employment. |
| Replicability | 4 | The concerns about AI’s impact on education can be studied and tested in various educational settings. Research on the effects of AI on student learning, critical thinking, and job markets can provide replicable evidence. |
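The scores above can be summarized with a small sketch. To be clear, the averaging and "weakest criterion" idea here are my own illustration, not part of Trecek-King's FLOATER rubric:

```python
# Illustrative only: the FLOATER scores from the table above, on a 1-5 scale.
floater_scores = {
    "Falsifiability": 4,
    "Logic": 4,
    "Objectivity": 3,
    "Alternative Explanations": 3,
    "Tentative Conclusions": 3,
    "Evidence": 2,
    "Replicability": 4,
}

# A simple average, plus the lowest-scoring criterion, shows at a glance
# where the argument most needs shoring up.
average = sum(floater_scores.values()) / len(floater_scores)
weakest = min(floater_scores, key=floater_scores.get)

print(f"Average score: {average:.2f}")   # Average score: 3.29
print(f"Weakest criterion: {weakest}")   # Weakest criterion: Evidence
```

In this case the weakest criterion is Evidence, which matches the update at the top of this post: the argument reads well but needs sources behind it.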


The fears outlined in the query about AI in education are logically presented but lack empirical evidence and objectivity. They do not fully consider alternative explanations or potential benefits of AI. To adhere to the FLOATER model, it would be necessary to support these fears with empirical data, consider alternative perspectives, and present conclusions tentatively, acknowledging the complexity and evolving nature of AI in education.
