
Modelled on the human brain, AI is advancing fast. How should Christians respond? 


By Neil Dodgson 

19 February 2023

Artificial intelligence has been around since the 1940s but only in the last decade has it taken off, with the development of deep neural network techniques inspired by the structure of the human brain.  

Today, artificial intelligence is used to diagnose cancer, recognise speech, and identify people in a crowd.  

The latest development is ChatGPT, an AI system that can interact in natural language with a phenomenal knowledge of a wide range of topics. In the next decade AI will develop to do so much more. 

This leads to two interesting intersections with the Christian worldview. One is philosophical: can a computer think, and can it have personhood?  

The other is practical: what is the Christian response to the changes and challenges that artificial intelligence will bring to society? 

What would it mean if a computer could think?  

GK Chesterton wrote, in 1910, “A machine only is a machine because it cannot think.” If a machine could think, it would become something more than a machine. Would we humans be willing to ascribe personhood to something non-human that demonstrated it could think?  


How do we think about personhood?  

Theologians have argued for centuries about whether animals are sentient, have souls, or possess intrinsic rights. We must now consider the same arguments applied to machines.  

Think about how you determine that a human is intrinsically valuable.  

A baby is unable to do much, but we accept a baby as valuable because we see potential. Someone with dementia is unable to do much, but we accept them as worthy of dignity because of their personhood.  

These beliefs are social constructs embedded deep in our shared value system as a community.  

What would it mean to accept a non-human intelligence into our community? What would it take to be convinced that a non-human entity had real intelligence? 

As things stand, there is no artificial intelligence system that shows the range of behaviour of a rational human adult.  

Yes, we do have artificial intelligence systems that excel at certain limited behaviours, but they are simply extraordinarily complex automata doing what they are programmed to do.  

None has yet shown artificial general intelligence: the ability to apply intelligence across a wide range of tasks and to handle novel, unseen situations.  

AGI is a goal that some think we will never reach. The renowned computer scientist Edsger Dijkstra put the argument pithily in 1984: “Asking if a computer can think is like asking if a submarine can swim.” We honestly do not know whether AGI is achievable. 

If it is not, that suggests there is something special about humans: something about our brains and our being that is more than the sum of the 100 billion neurons in our heads.  

If AGI is achievable, it raises questions about sentience and personhood.  

We humans understand the limitations of being human and what we can reasonably expect another human to be able to do. But we do not know how an AGI would behave.  


We raise humans in community to conform to the rules, laws, and customs of that community. This is how society survives across generations.  

Will we need to “raise” AGIs in community? What if an AGI has values very different from ours? How, indeed, do you instil values in a computer? And whose values? These are questions to ponder now, before we develop an AGI. 

But there is a more pressing question: what do we do about the existing artificial intelligence systems? They are not AGIs; they are not sentient; they are just tools. But they are powerful tools that can be used to control people’s behaviour.  

We have seen this with social media. AI algorithms direct what users read and watch, which has had the effect of polarising civil society.  

We have seen this with surveillance, where certain governments are experimenting with ways of tracking people’s behaviour, with automated rewards and automated punishments. 

We can imagine using AI for good, providing each schoolchild with personally tailored education, to maximise the potential of our youth. We already use AI to diagnose certain diseases with greater accuracy than human physicians.  

There are both negative and positive uses.  

The Church has always tended to react against new developments. Our traditional response would be to call for a ban. That is not plausible.  

We have an opportunity to take the initiative, to work with our communities, to decide how to think about and respond to these new technologies as they develop. 

Why the Church?  

Centuries of debate have left us well placed to think about what it means to have personhood.  

A faith rooted in community has left us well placed to think about how we respond as a community to the challenges.  

We are used to grappling with these ideas. We, the Church, can add value to the conversation.  

Professor Neil Dodgson is Dean of the Faculty of Graduate Research and Professor of Computer Graphics at Victoria University of Wellington, New Zealand. 

For more faith news, follow The Melbourne Anglican on Facebook, Twitter, or subscribe to our weekly emails.
