Leonardo da Vinci failed, if his hope was to keep the men and women aboard ships safe:
… I do not publish nor divulge [methods of building submarines] by reason of the evil nature of men who would use them as means of destruction at the bottom of the sea, by sending ships to the bottom, and sinking them together with the men in them.
– Leonardo da Vinci
Considering my friend’s recent requests about image timestamp modification, I found myself thinking about the ethics of technology how-tos and their impact on the world. When formulating a sense of ethics, one might ask, “How does this harm human beings? What is the impact of a technology, or a scientific process, on people and the things that keep them afloat?”
I found this quote from Arthur Galston in Kaj Sotala’s A brief history of ethically concerned scientists quite interesting. Galston’s plant-growth research was later twisted into the development of Agent Orange:
I used to think that one could avoid involvement in the antisocial consequences of science simply by not working on any project that might be turned to evil or destructive ends. I have learned that things are not all that simple, and that almost any scientific finding can be perverted or twisted under appropriate societal pressures. In my view, the only recourse for a scientist concerned about the social consequences of his work is to remain involved with it to the end.
His responsibility to society does not cease with publication of a definitive scientific paper. Rather, if his discovery is translated into some impact on the world outside the laboratory, he will, in most instances, want to follow through to see that it is used for constructive rather than anti-human purposes.
Of course, anyone who relies on technology or deals in information (like a journalist) faces this dilemma. Should one provide information that may be used against others? That may harm others? Where’s the line in the sand?
What About AI?
While government collusion with aliens, zombies, and other bogeymen has been ruled out as ridiculous, a lot of folks are worried about AI and its impact. An article from The Digital Learning Institute offers, as do many others, a sanguine opinion of AI’s use in education:
Simply put, educators need to have an open mind as to what they can do with this tool, and how it can assist both teachers and students to reduce their workloads. Seeing it as an opportunity can open several doors for learning opportunities – and more.
Naysayers and doomcriers are either hiding amidst the outpouring of support for AI in every human endeavor, or embracing their role as modern-day Cassandras, ignored no matter how accurate their predictions.
Value-laden technology refers to technological artifacts that have inherent values due to their design, functions, and goals. In some cases, scientists and engineers focus on the technical aspects of creating such technology without considering the moral implications, which can lead to ethical dilemmas.
The genie is, as usual, out of the bottle. There’s no stopping this, right? Even if you could pull a Da Vinci on AI, there’s too much money in it:
[It is] not a recommended general approach, as it may limit the progress of society by withholding valuable knowledge. Instead, ethical scientists should evaluate each situation individually and ensure safety and responsible use of technology while encouraging its beneficial applications. In the context of AI and machine learning, such as with ChatGPT, it is crucial to consider the ethical implications of its use in education.
While it can offer numerous benefits, such as personalized learning, virtual tutoring, language learning, exam preparation, and writing assistance, there are also potential shortcomings, such as providing wrong answers, biased translations, repetitive copy, lack of depth, and long-winded sentences.
Therefore, it is essential for educators and students to use AI tools like ChatGPT responsibly, understanding their limitations and ensuring the correct data is given. AI should act as a supporting role in developing students' creativity, critical thinking, and authentic content creation. Source: Conversation with Perplexity Pro
Focus on Technical, Not Ethical
In my own efforts, I have focused on the “how-to” of using AI without giving much thought to the “ethics.” The how-to is a focus on the technical side of using AI to make mundane tasks easier to navigate. But is that really all there is for K-12 education?
Worse, how can such conversations even be had in the politically charged environment that K-12 schools are today? Just the other night during my walk, I was listening to the podcast Those who can’t teach…anymore. This fascinating podcast focuses on how politically charged school environments have become, leading to the exit of long-time, award-winning educators.
In the podcast, one of the teachers (Atkinson) shares:
“…There were immense changes in education, in our society, and politics in the way that people thought about each other and treated each other, and it started to come into my classroom…I used to be able to have a conversation with kids about a current issue. And 100% explore both sides of that issue…How can you really teach kids to think of the world in which they live if you can’t broach these subjects?”
Now, take AI in schools. I bet most of those bringing AI into schools, and every other human endeavor, are spending little to no time considering its impact. Beyond a fight-or-flight reaction, it’s just not a safe topic. No teacher is going to say to an administrator, “I don’t want to learn MagicSchool.ai or XYZ tool,” because to do so sends a message about them.
Are You a Stick in the Mud?
In reflecting on my own adoption of AI, which I love, I have kept an eye on the impact AI has on people, the climate, and more. But at a certain point, you realize: this is coming. The best I can do is ride this bomb down to the ground and, as Galston says, follow through to see that it is used for constructive rather than anti-human purposes. That is, to be constructive and minimize the harm done to human beings.
But it makes me wonder. Don’t we end up just following our technology from one disaster to the next, cleaning up (if that) the previous disaster even as the new one arrives? How many times have you read…
The use of artificial intelligence (AI) can contribute to the fight against climate change. Existing AI systems include tools that predict weather, track icebergs and identify pollution. AI can also be used to improve agriculture and reduce its environmental impact, the World Economic Forum says.
Yeah. A lot. It makes me think of the bomb-riding goblin from Supercell’s Clash Royale.