Can AI and Ethics Get Along? Can We Survive If They Don’t?
Any kid these days with two functioning brain cells, a YouTube video of the Challenger space shuttle explosion, and firsthand experience of how often his own computer crashes is going to think twice about becoming an astronaut.
Now many are starting to suggest that AI should govern us. What could possibly go wrong?
AI is skilled and efficient at calculation beyond our wildest dreams. It’s also able to seek, find and organize existing knowledge at blinding scope and speed.
But a coherent ethical structure and the practical morality of applying it…? AI only gets what it’s programmed to get…from humans. And humans don’t have a coherent moral theory by which to operate.
Now things are coming to a head on this issue because the AI company Anthropic has guardrails for its software that preclude its use for domestic surveillance of U.S. citizens or for guiding autonomous weaponry: instruments of death that would decide for themselves when and whom to kill, without real-time human input.
President Trump threw a tantrum about this. “The U.S. will never allow a radical left, woke company to dictate how the military fights and wins wars. That decision belongs to your commander in chief, and the leaders I appointed to run our military.”
Anthropic still pulled out, but the next day the Iran invasion was still making full use of the Claude software.
Then Sam Altman, a giant question mark at best in the virtue department, stepped up and said that his company OpenAI would take a shot at the art of that deal. He said that his company’s ChatGPT had the same ethical guidelines as Anthropic’s, but somehow he and Trump were able to come into alignment. Those details remain murky, and my sources tell me subscribers are leaving ChatGPT in droves over this mess.
Should there be such a thing as ethics for AI? What would it look like? On what principles would it be based?
Should this be decided by a community, a nation, all of humanity? Should any person, much less an egomaniacal U.S. president, be able to single-handedly define the moral behavior of the artificial intelligence software we use to make military decisions?
A couple of years ago I was asked by an AI company to create a Declaration of Ethics for Artificial Intelligence. With the assistance of many of my most skilled colleagues in the philosophy of liberty, over several months we developed as simple a document as we could. See my blog on this.
A universal morality, in my opinion, has been the primary missing element for a thriving humanity throughout history. We have the opportunity now to fill that void with a moral theory that is coherent, logical, commonsensical, scientifically validated, intuitively and spiritually sound, completely fair, and universal throughout time and space in its application. This was the punchline of both of our THRIVE films, of hundreds of articles, short videos, lectures and interviews. I will be spending a lot of time on this topic in the coming months, spelling it out in great detail, and will feature it in my upcoming book. The key point is this: if we don’t align around true moral guidance incorporated into our AI, we may well be programming the demise of our own species.
In a nutshell, the only serious candidate for a Universal Morality is what’s called the Non-Aggression Principle (NAP). It states that “The initiation of the use of force, other than in genuine defense of self, other or private property, is prohibited.”
This seems very obvious, and it immediately bans assault, theft, rape, and murder. It also precludes taxation (theft), coercive government, and wars of aggression. It is not “pacifism”; it justifies the use of force for genuine protection.
The big questions arise around what counts as “genuine defense.”
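To make the problem concrete, here is a minimal sketch in Python of what encoding the NAP as a machine-checkable rule might look like. Every name in it (`ActionProposal`, `is_genuine_defense`, and so on) is hypothetical, invented purely for illustration; this is not anyone’s actual product. Notice where all the difficulty collects: the entire judgment collapses into the one function nobody yet knows how to write.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    PERMITTED = auto()   # no initiation of force involved
    PROHIBITED = auto()  # initiates force against person or property
    REVIEW = auto()      # defense status unclear; escalate to humans

@dataclass
class ActionProposal:
    description: str
    uses_force: bool       # does the action apply force at all?
    claimed_defense: bool  # is it claimed to defend self, others, or property?

def is_genuine_defense(action: ActionProposal) -> bool | None:
    """The hard part. Distinguishing genuine defense from aggression
    dressed up as defense is exactly the open problem; no reliable
    classifier for it exists. Returns None when uncertain."""
    if not action.claimed_defense:
        return False
    return None  # placeholder: the honest answer today is "we don't know"

def nap_check(action: ActionProposal) -> Verdict:
    """Apply the Non-Aggression Principle to a proposed action."""
    if not action.uses_force:
        return Verdict.PERMITTED
    defense = is_genuine_defense(action)
    if defense is True:
        return Verdict.PERMITTED   # force in genuine defense is allowed
    if defense is False:
        return Verdict.PROHIBITED  # initiation of force is banned
    return Verdict.REVIEW          # ambiguous cases go to humans
```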
A rule like this needs to be built into any AI as an intrinsic constraint, especially for defense weapons systems. And then, as an additional safeguard, a well-selected team of proven ethical humans needs to be able to review the logic and actions the AI recommends, so that a glitch, malware, or unethical programming doesn’t arbitrarily and unjustly destroy lives…or potentially all life on our globe.
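Here is an equally hypothetical sketch of that second safeguard: recommendations are held in a queue, and nothing executes until a designated human reviewer explicitly signs off. Again, every name is invented for illustration; this describes no existing system, only the shape of the architecture.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Recommendation:
    action: str
    rationale: str  # the AI's stated logic, exposed for human review

@dataclass
class HumanReviewGate:
    """Holds AI recommendations until an accountable human signs off.
    `reviewers` stands in for the well-selected team of proven
    ethical humans described above."""
    reviewers: list[str]
    pending: list[Recommendation] = field(default_factory=list)

    def submit(self, rec: Recommendation) -> None:
        # Nothing executes on submission; the recommendation only queues.
        self.pending.append(rec)

    def approve(self, rec: Recommendation, reviewer: str,
                execute: Callable[[str], None]) -> None:
        if reviewer not in self.reviewers:
            raise PermissionError(f"{reviewer} is not an authorized reviewer")
        if rec not in self.pending:
            raise ValueError("recommendation was never submitted for review")
        self.pending.remove(rec)
        execute(rec.action)  # force applied only after explicit human approval

if __name__ == "__main__":
    gate = HumanReviewGate(reviewers=["oversight-team-lead"])
    rec = Recommendation(action="disable incoming drone",
                         rationale="inbound weapon targeting civilians")
    gate.submit(rec)
    gate.approve(rec, "oversight-team-lead", execute=print)
```

The design choice that matters is that execution is unreachable except through `approve`: the AI can propose, but only an accountable human can release force.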
When Trump was kidnapping Maduro and taking over Venezuela’s oil, he was asked about the nature of his moral guardrails. Seemingly surprised even by the question, he said it was simply “whatever I think is right, and fortunately I am a very moral person.” Not very well thought through for an individual in charge of the most powerful army and most destructive weaponry in the history of planet Earth.
So please have a look at the declaration cited above and let me know how you think it should apply to current U.S. leadership and Artificial Intelligence deployment.
The time is now for perhaps the most important exploration and debate in human history.