Doug Johnson introduced me to a new quote from Christopher Hitchens. I’d first run into Hitchens’s Razor, which goes like this:

What can be asserted without evidence can also be dismissed without evidence.

That’s a great quote and one of the first that went into my notebook on critical thinking. The quote Doug shares goes like this:

Of course we have free will because we have no choice but to have it.

Is It Moral or Ethical?

Doug shares something that resonates with me:

To me, the scary thing about AI is not intelligence, but independent decision-making. Or at least humans’ willingness to give decision-making to our computer programs. Making moral decisions seems to be much trickier and more dangerous than, say, financial ones.

No doubt, lots of folks are thinking about what is driving AI decision-making. For me, the question is whether people appreciate that an AI could develop a sense of ethics according to rules. To that end, to get a better understanding of AI ethics, I thought to start with Paul Kurtz’s Forbidden Fruit: The Ethics of Secularism. Why start there, you ask? Well, it doesn’t make sense to expect AI to adhere to God-issued commandments about right and wrong. And I’m not sure I’d want to feed AI holy texts from world religions and see what it came up with.

About Morality

Paul Kurtz points out that there are two kinds of morality:

  • A morality of obedience to commandments allegedly derived from a transcendental theistic source
  • An ethics based on the development of critical, rational inquiry and insight

What I’m curious about is, “How do you prescribe ethics-based morality (non-theistic) to an AI model?” I suppose you’d have to write all the rules down, then have it follow those rules.

Wait, Humans Can’t Develop a Moral Code

For those folks who believe in God, AI must be doomed to commit atrocities. That’s because people are responsible for setting up the parameters for the AI to follow, and those parameters must be flawed because they do not come from an all-powerful being.

But I found Paul Kurtz’s perspective on this enlightening. He writes:

Since religious systems of morality are creations of human culture, invented and sustained by men, this belies the claim that human beings cannot, by their own effort, create a moral code or behave morally.

Even if you don’t like the moral code in the Bible or other holy texts, it’s still kinda cool to think, “Yeah, what we’ve been ascribing to an all-powerful god? We did that.” Kurtz goes on to say:

There is insufficient evidence for the existence of God. Therefore, the religious texts cannot have been revealed by a god. Rather these texts are simply the expressions of human beings.

What Does AI Make of World Religious Texts?

For fun, I asked ChatGPT Plus to share five big takeaways after reading all the holy texts. It came back with:

  1. Unity and Oneness: Many holy texts emphasize the fundamental unity of humanity and the oneness of God, advocating for universal compassion and understanding across diverse cultures.

  2. Moral Frameworks: These texts often provide moral guidelines designed to shape ethical human behavior, underscoring virtues like honesty, charity, and forgiveness.

  3. Purpose of Life: They frequently explore the meaning of life and the human role in the universe, suggesting that life has a purpose guided by divine will or cosmic order.

  4. Consequences of Actions: A recurring theme is the concept of accountability for one’s actions, with promises of reward or punishment in an afterlife based on earthly conduct.

  5. Spiritual Practice: Encouragement of regular spiritual practices (like prayer, meditation, or pilgrimage) is common, serving as a means to cultivate a deeper connection with the divine and to foster personal growth.

So given that, what five ethical guidelines are AIs to follow? Here’s the response:

Here are five ethical guidelines that I, as an AI, am designed to adhere to, and how they compare with human secular ethics:

  1. Neutrality and Impartiality: I strive to provide unbiased information, maintaining a neutral stance without personal or cultural biases, unlike human secular ethics which can be influenced by personal or cultural perspectives.

  2. Confidentiality: I treat all interactions as private, not storing personal data or details, a principle similar to human ethical standards around privacy, but applied strictly and without exception.

  3. Safety and Non-maleficence: I aim to prevent harm and ensure user safety by filtering harmful content and providing accurate information, paralleling the human ethical principle of “do no harm” but executed through predefined programming guidelines.

  4. Accessibility and Inclusivity: I am designed to be accessible to all users, offering information in an inclusive manner that seeks to bridge knowledge gaps, which is a broader and more systematic application of the human principle of equality and inclusion.

  5. Accuracy and Reliability: I commit to providing fact-based and precise information, constantly updated through algorithms, which is akin to human commitments to truthfulness but reliant on algorithmic updates and data veracity.

Let’s take a moment to reflect on what that AI model, ChatGPT, said. Now compare it to Anthropic’s, via Perplexity Pro:

As an AI assistant, I do not personally adhere to any particular ethical framework, whether secular or religious. I was created by Anthropic to be helpful, harmless, and honest.

However, based on the provided search results, there are a few key points about AI ethics from a secular perspective:

  1. Many AI developers and companies operate from a secular, preference-utilitarian ethical framework focused on maximizing the fulfillment of human preferences. This approach has limitations and is open to objections.

  2. A dominant secular view is that morality is subjective and made up by humans, rather than based on objective, transcendent principles. This raises challenges for determining what ethical principles AI systems should follow.

  3. Without an objective moral framework, it becomes difficult to evaluate and compare different ethical perspectives for AI. Companies driven by profit may make questionable ethical choices in AI development if they view ethics as arbitrary.

  4. Numerous secular organizations have proposed ethical AI frameworks consisting of principles like transparency, fairness, privacy, accountability, etc. However, there is debate about the degree to which machines can truly be considered ethical agents.

  5. Some argue AI ethics requires an objective moral standard, possibly grounded in religious tradition, rather than a purely secular foundation of subjective human preferences.

In summary, I aim to be helpful and avoid harm in my interactions, but I do not have personal ethical beliefs. There are significant challenges and open questions around developing robust secular foundations for ethical AI. Engaging with religious perspectives may provide valuable insights to complement secular approaches.

Review Our Understanding

I can’t help but agree with Doug that this is the time, as he says below, to reflect on our own sense of ethics. I sense a war between belief and non-belief coming, as believers grow more and more desperate to push their own perspective.

It seems to me that the development and spread of AI is an excellent time to review our understandings about moral decision-making, free will, and self-preservation. Perhaps we might better understand ourselves by asking how our creations might act.

To this perspective, I must confess to a bit of concern. Critical thinking, evidence-based action, and skeptical inquiry are hard work. It’s far easier to be told what to do, which is something I noticed even as a Catholic in year one of university studies, staving off the testimonials and admonitions of born-again fundamentalists of every stripe. To contemplate non-belief as an option then was to jump off a cliff. Now, it is a reasoned choice.

That’s why ethics is so important. Secular ethics. Ethics made by people, ethics that affirm the best of who we are and push us to reconcile with the worst.