Time will judge AI’s ethical implications

Gayle Woloschak

18 August 2023

Gayle Woloschak is a fellow of ISCAST (Christianity and Science in Conversation) and Professor of Radiation Oncology at Northwestern University, USA. She has taught religion and science courses for more than 20 years and has a keen interest in ethics. Here she pens some thoughts on the capabilities, concerns, and benefits of artificial intelligence, and the constraints needed for this rapidly developing technology. 

The new AI platform, ChatGPT, was asked to write a short limerick summarising the advantages and disadvantages of AI: 

In a world with AI by our side, 

Advantages we cannot hide, 

Efficiency’s high,  

Tasks simplified, Oh my! 

Yet concerns do arise, worldwide. 

It seems to come so easily for ChatGPT to “think” about its place in the world. But how do we react to this limerick and to the fact that a non-human entity, ChatGPT, can evaluate its own merits or disadvantages? Amused? Or disturbed? This may be our spontaneous response to AI, but the critical need is for a considered and studied response to this new technology. 

Most Christians view technology as neither good nor bad, but believe it is a technology’s purpose and application that determine its ethical implications. There is, however, more to it than that. As Christians considering AI, we need reliable, high-quality knowledge about the technology before we can begin to reflect theologically and develop a Christian ethics of AI. It is within the science of AI that such knowledge potentially lies. 

Even though it may not be apparent, science itself can drive the ethics behind decision-making. For example, when AIDS was first discovered, it was considered ethical to quarantine AIDS-positive people away from others, based on the very real possibility that AIDS was spreading by aerosol. Then, when scientists discovered that AIDS was caused by the virus HIV-1 – a virus not spread by aerosol – the ethics of how best to handle AIDS patients changed: quarantine was no longer needed.  

Because science can and does drive ethics, and even our Christian ethics, a careful handling of our knowledge of AI is very important. For example, what do we know of AI’s response to ethically-charged situations? Will AI be compassionate? Or, for instance, will AI give humans answers that reflect an unhealthy internet bias? This short article outlines some of my concerns about AI, some of the possible advantages, and offers some guidelines for the “AI-future”.  

Concerns over adopting ChatGPT too broadly  

One of the most popular current forms of AI is ChatGPT, a system that generates text for stories, articles, and so on from only a limited amount of human input. It is trained to follow an instruction and provide a detailed response.  

The resulting text can fool even the experts. For instance, scientists were unable to distinguish scientific article abstracts written by AI from those written by real people, according to a 2023 article in the journal Nature by H. Else. It is easy to see the potential problems for the integrity of scientific endeavour were AI to pervade the scientific method. 

On a broader note, AI poses a genuine threat to academic integrity. On my university campus, cases have been found of students using ChatGPT to write their papers. Software to detect such cases reliably is needed, but what exists is not yet dependable. ChatGPT will be challenging for education.  

Apart from academic integrity, another concern is that ChatGPT lacks a “feel” for how we humans frame our communication. So often with human communication, the “life” surrounding what we say adds extra nuance and deeper levels of meaning – but ChatGPT leaves this out by generating generic responses. For example, a group of theologians asked ChatGPT to write a sermon on the Feast of the Transfiguration. Most of the resulting sermon was accurate, with an appropriate description of the feast. But the ChatGPT-generated sermon was generic – it was applicable in a broad variety of situations, whereas sermons by pastors are typically written to suit a particular parish on a particular day.  

Similar observations about the limitations of ChatGPT can be made of its creation of art and movie scripts. To date, the art has been generic with little originality, has lacked the detail of human art, and has suffered from the biases of the internet. Further, a movie script written by an AI bot in 2016 and developed into a film lacked logical links: it is not clear how one line follows from another.  

A major concern is the inherent bias that comes either from the AI itself or from the training datasets used to generate it. For example, when an AI was asked to reconstruct a blurry photo of Barack Obama, it depicted him as a white man rather than being true to his dark skin color. Another concern is that training datasets can be curated for the world someone thinks “should be” instead of the world that is. 

Difficulties in controlling AI have also been noted. Some scholars are worried about the vast knowledge that AI can amass. In a May 2023 BBC news article, AI “godfather” Geoffrey Hinton warned of the dangers AI can pose to humanity. In particular, he was concerned about AI generating humor and “laughing” at issues insensitively, or generating insults and expressing sarcasm heartlessly.  

Some other concerns include the difficulties in regulating AI, predicted job loss (although data has not shown this yet), and questions about the rights of artificial or electronic persons.  

These considerations deepen our understanding of AI. More importantly, they can shape our ethics around AI and may well guard against adopting its use too broadly.   

Does AI have advantages? 

Despite the concerns, there are advantages that make AI worthwhile as a human endeavor. With AI, one can apply all human online knowledge to a problem rather than just what a single brain can provide. Analysis of huge datasets about millions of individuals, which has not been possible with other tools, is possible with AI. The information gained from this has vastly enhanced medical studies, diagnostic work, banking, cyber security, and much more. A typical example is when hundreds of thousands of radiological results from mammograms are given to computers to identify patterns that better distinguish cancer from normal tissue, without the need for a second, more detailed examination of patients. Diagnostic accuracy is enhanced while patient and doctor time is reduced. 
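To give a concrete, purely illustrative sense of the pattern-finding described above, here is a minimal sketch in Python. It uses entirely synthetic stand-in data and a generic classifier, since the article does not describe any particular system: a model is trained on feature vectors standing in for measurements derived from imaging results, then checked on cases it has not seen.

```python
# Minimal, hypothetical sketch of learning patterns from many labeled cases.
# The data are synthetic stand-ins, not real mammogram results.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic feature vectors standing in for imaging-derived measurements,
# with the "cancer" class deliberately made the rarer one.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

# Hold out part of the data to test how well the learned patterns generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# AUC summarizes how well the model separates the two classes on unseen cases.
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, probs):.3f}")
```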

Goals for an ‘AI-future’ 

In light of these considerations, future goals have been advanced at AI conferences. These goals may help to shape our Christian ethics of AI:  

  1. Maintain human verification. AI makes many mistakes, so human checking that its information is valid remains important. 
  2. Develop rules for accountability. Some programs exist that can check for AI activity, but these are not yet fully accurate. 
  3. Invest in truly open systems. Most ChatGPT-style systems in use today are proprietary, which limits their use and their ability to be examined and applied more widely. 
  4. Widen the debate to include those who might be impacted by AI yet who are not direct users. 
  5. Improve transparency. There is little information on the training data fed to AI, and thus the biases that may be part of the system are not clear. Are only data from North America included? Are data from selected racial populations included? Are particular groups excluded from AI training datasets, and so on?

Even with these goals, few of us would think the “AI-future” is certain. Doubtless more guidelines are needed.  

AI itself was asked to generate two short limericks, one on a future with AI and one on the future dangers of AI: 

In the future, AI’s prowess will bloom, 

A world of wonder it will consume 

With algorithms so bright 

And data as its light 

New frontiers it will constantly groom. 

Or: 

In the future, beware of AI’s might, 

As it ventures into the night, 

With intelligence keen, 

A power unseen, 

Its dangers could cause quite a fright. 

Which is more accurate? Time will judge if either is correct. 
