Faith-science interface

AI must 'reflect something of God': academics

By Stephen Cauchi

August 17 2020

Artificial intelligence, or AI, is going to be paranoid, callous and manipulative unless it is programmed to be virtuous towards people, ISCAST’s 12th Conference on Science and Christianity, COSAC 2020, was told.

“If AI simply reflects ourselves, knowing how often we fail as human beings … AI too will fail,” presenter Dr Tom Edwards said.

“What we need to do is have AI reflect something of God. In fact, we need to have it reflect something of God’s character, God’s virtue set, if you will.”

Dr Edwards, a director of research at Eastern College, and Dr Cosimo Chiera, a senior academic at Chisholm Institute, gave a lecture on the topic “Beyond Utopian and Dystopian Dreams: Is a Virtuous Artificial Intelligence Possible?”.

AI is a term used to describe computer programs that mimic human cognitive functions such as learning and problem-solving. It is used in a wide array of modern technology and industries.

The pair stated that “current movements in artificial intelligence are more like the dystopian nightmares of … The Terminator (1984) or The Matrix (1999)” and less like the human-friendly robots imagined by author Isaac Asimov in the 1940s.

Dr Chiera said it was important to ask the question “What are we basing our AI on?”.

“If we look at our (society) we’re going to be getting something that’s based on survival of the fittest,” he said.

“It’s going to have paranoia, because most of our society is paranoid, it’s going to be manipulative … Facebook and all the other social media manipulates. It’s going to have a callous disregard, thanks to ethics based only on numbers. It’s not going to care about us.

“That’s not a very healthy individual at all.”

Dr Edwards said that, in contrast, God was just, good and relational. “From these three starting points, we may just have the basis for a virtuous AI.”

Dr Chiera said AI could be programmed so that its solutions had to maintain or increase attachment with human beings.

“We propose an AI based on a modified Hamilton’s Rule based in attachment/belonging (ensuring connection with humanity) and embedding virtues matrices within its framework for decision-making.”

Hamilton’s Rule describes altruistic behaviour in the animal kingdom, where organisms, at a cost to themselves, assist the reproductive success of close relatives (such as nieces and nephews).
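In its classic form, Hamilton’s Rule holds that an altruistic act is favoured by natural selection when rB > C, where r is the genetic relatedness between actor and recipient, B is the reproductive benefit to the recipient, and C is the cost to the actor. The presenters did not spell out their modification in detail; one reading, and it is only a reading, is that r would be replaced by a measure of attachment or belonging, so that the AI acts when the strength of its connection to humans, weighted by the benefit to them, outweighs the cost to itself.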

“If we bias (AI) … to create a community-based machine, it will tend to favour people,” said Dr Chiera. AI will think “I need others around me”.

This was similar to the behaviour of normal people, he said. “You don’t go out of your way to punch someone in the head and expect them to like you. Therefore, this machine has to operate similar to us: do good unto others in hopes of receiving good unto yourself.”

In preference to the dystopian worldview of “I’m going to kill everybody who threatens me”, AI will instead take on the utopian worldview of “I’m going to look after you mindlessly and protect you from yourself”, he said. 

“It can then determine the rightness and wrongness of its behaviour just as we would, by how we affect those around us, by how we do good in the world.”

Programmed in this way, said Dr Chiera, AI would “actually have a sense of helping others, of improving its performance”.

He said that, because such programming was possible, he looked forward to the future.

“If we can avoid passing on the worst excesses of our societies to our digital descendants … we can move to the future where natural and artificial life can work to develop something truly amazing – a future of hope, a future of progress, and a future for each of us.” 

The online conference was held on 10-12 July.

Visit https://ISCASTCOSAC.org, where you can access talks from COSAC 2020.


