Over the last few weeks, I’ve been reflecting on the issue of AI. My problem is that my reflections occur at odd moments, like while I’m watching television or reading a book. Or working on a project. It takes forever for these drops of insight to accumulate into a pool. I don’t feel anything about AI. I’m not passionately supportive of it, or vehemently against it. I don’t even blame Big Tech for wanting to make money off of me (and I dutifully pay a monthly fee for Perplexity AI…it’s nothing to get the benefit of answers more accurate than Google).
Some people like it, some people do not. There are many valid reasons not to. But since I must rely on AI in my daily work (you don’t believe me, I can tell), I am less surprised when AI makes a mistake or doesn’t quite deliver the desired output.
I was delighted to read this talk by Dr. Linda McIver, From Hypnotized to Heretic: Immunising Society Against Misinformation.
Solution Shaped
I got a laugh out of this keynote speaker’s explanation of AI results. Having had many similar conversations with Claude AI myself, I found this spot on regarding Claude’s apologies for not following directions:
It’s humanising, anthropomorphic language. I apologise. I appreciate your patience. I made careless errors. I failed to carefully double check MY work. I made an oversight. You are holding ME accountable. I will strive to be more diligent.
But despite the apology, let’s be clear. Claude was working exactly as intended. Because LLMs are not designed to produce facts, right answers, or anything at all accurate or reliable. They are designed to produce statements that are plausible. As Lilly Ryan puts it, their statements are not facts, they are Fact Shaped.
And I would go further. Their summaries are not summaries. They are summary shaped. Their solutions are not solutions. They are solution shaped.
As I read Dr. McIver’s words, they resonated. After all, it is for this reason that I dumped Claude AI (I did have a pro account for a while and even developed an online, self-paced course that no one has bought yet) after weeks of suffering its mistakes, its lack of internet access, and overtaxed servers that always seemed to cut any detailed discussion short.
In Claude AI’s favor, it’s the best coder of the AI chatbots. But it’s not a finisher…it starts things but doesn’t finish them, running out of juice about the time my tokens give out, just short of the finish, like that game at the carnival.
That said, who doesn’t know someone who starts projects well, but then never finishes them? I have been blamed for that myself. Having Claude as my AI intern taught me how irritating I could be.
Three-Legged Puffin?
One of the examples Dr. McIver gives is that of a 3-legged puffin:
It didn’t matter which system I tried. It could show me a puffin, but never a 3 legged one. I tried a range of prompts, starting with “3 legged puffin” and increasing in desperation through “3 legged puffin with 3 legs” to “3 legged puffin with 3 legs that actually has 3 legs” through to “3 legged puffin with 3 actual legs with the third leg on its stomach.”
But, as another stark reminder of the fact that these systems do not think, it had no way of producing a 3 legged puffin.
Try as I might, none of my prompts yielded a 3-legged puffin, either. Dr. McIver went on to explore more reasons why AI was a failure, a big hoax. Like a gullible crowd at the carnival, we kept coming back to give our tokens to play rigged games.
How Could It Be Better?
Then the conversation turns to what we could do better, what we must do to avoid getting suckered. What is that? Critical thinking, critical evaluation of information, and asking tough questions to fuel our reasoning.
Dr. McIver points out that what she wants education systems to produce is:
- Critical Thinkers
- Creative Problem Solvers WHO CRITICALLY EVALUATE THEIR OWN SOLUTIONS
- Challengers of the Status Quo – people who ask WHY
- Evidence based policy makers
Her conclusion is:
What we can build this way is a world where policy is evidence based. Where we make data informed decisions, while understanding that the data isn’t perfect. Where kids are empowered to learn all of the skills they need to solve problems in their own communities. Where technological solutions are rationally evaluated, rather than uncritically worshipped.
In fact, that desire for evidence-based everything is what started me down my current path. But I keep reminding myself that all I’m doing is lowering the uncertainty of a hypothesis, not gaining certainty, until it becomes a theory I can mostly rely on. The only people who have certainty are those who walk by faith alone, but that’s only self-deception, isn’t it?
Moving from Uncertainty to Certainty
Reading Dr. McIver’s words, I can definitely support critical thinking and the importance of solving problems. I’m not so sure you can be evidence-based and believe in anything with certainty…you can accept the reality that everything is subject to change. Maybe more physics and philosophy should be in everyone’s curriculum.
In fact, what if we all embraced that and taught our kids that from the very beginning? The problem? I didn’t really critically evaluate my false gods until middle age. My kids were grown and indoctrinated already. My life had pretty much been lived already.
The Truth isn’t the truth. I’d rather we had a lot more of the truth, but even that is tough to ascertain given what we are (human beings who interpret the world as we are, not as it is).
The Desire to Believe
Unfortunately, I’m not sure evidence-based approaches are in the cards. It comes back to what we are in the first place: human beings who, as the speaker points out, want to believe. And to reach a critical mass of unbelievers, you have to overcome so many obstacles. Santa Claus. Jesus. Saints. Devils and demons. Fairy tales taught to us as gospel when we were children.
You know, if I had to go back and change one thing, I would have asked my parents (and myself as a parent) not to teach children fantasy and pretend as if it were the literal truth. But that happens every day, doesn’t it?
But we have to try. Not trying isn’t an option. We have to send the message that there are truths out there obscured by the Truth we are taught growing up by well-meaning family and friends who were indoctrinated and brainwashed the same as we were.
Skeptical Thinking
I really like Melanie Trecek-King’s ThinkingIsPower.com approach. From FLOATER to systematic disconfirmation, there’s a lot to learn and embrace.
The goal, maybe, is to lower the level of uncertainty about ideas, to bring order to the chaos of living. If I feel sad about the lack of certainty, it’s because I was taught there was certainty about things that aren’t so.
Now, I’m a little more skeptical. AI isn’t perfect. It’s a long way from it. It’s another resource I can use, one that might twist in my hand and bite me (metaphorically speaking), but that’s OK.
It may be that AI never works perfectly, that 3-legged puffins never become a reality. But that’s OK. What AI is able to do is better than some interns I’ve worked with, and it can save effort and time. It gets things right often enough that its output comes close to the work of a human intern.
Anyways, check out Dr. McIver’s talk. Well worth the read.