Artificial Intelligence: Theologians, philosophers and Catholic thinkers weigh in

Like paper, print, steel and the wheel, artificial intelligence is a revolutionary technology that can reshape how we work, play and love.

It is already doing so in ways we can and cannot perceive.

As Facebook, Apple and Google pour billions into A.I. development, there is a fledgling branch of academic ethical study—influenced by Catholic social teaching and encompassing thinkers like the Jesuit scientist Pierre Teilhard de Chardin—that aims to examine its moral consequences, contain the harm it might do and push tech firms to integrate social goods like privacy and fairness into their business plans.

“There are a lot of people suddenly interested in A.I. ethics because they realize they’re playing with fire,” says Brian Green, an A.I. ethicist at Santa Clara University. “And this is the biggest thing since fire.”

The field of A.I. ethics includes two broad categories.

One is the philosophical and sometimes theological questioning about how artificial intelligence changes our destiny and role as humans in the universe; the other is a set of nuts-and-bolts questions about the impact of powerful A.I. consumer products, like smartphones, drones and social media algorithms.

The first is concerned with what is termed artificial general intelligence.

A.G.I. describes the kind of powerful artificial intelligence that not only simulates human reasoning but surpasses it by combining computational might with human qualities like learning from mistakes, self-doubt and curiosity about mysteries within and without.

A popular word—singularity—has been coined to describe the moment when machines become smarter, and maybe more powerful, than humans.

That moment, which would represent a clear break from traditional religious narratives about creation, has philosophical and theological implications that can make your head spin.

But before going all the way there—because it is not all that clear that this is ever going to happen—let us talk about the branch of A.I. ethics more concerned with practical problems, like whether it is O.K. that your phone knows when to sell you a pizza.

“For now, the singularity is science fiction,” Shannon Vallor, a philosophy professor who also teaches at Santa Clara, tells me. “There are enough ethical concerns in the short term.”

The ‘Black Mirror’ factor

While we ponder A.G.I., artificial narrow intelligence is already here: Google Maps suggesting the road less traveled, voice-activated programs like Siri answering trivia questions, Cambridge Analytica crunching private data to help swing an election, and military drones choosing how to kill people on the ground.

A.N.I. is what animates the androids in the HBO series “Westworld”—that is, until they develop A.G.I. and start making decisions on their own and posing human questions about existence, love and death.

Even without the singular, and unlikely, appearance of robot overlords, the possible outcomes of artificial narrow intelligence gone awry include plenty of apocalyptic scenarios, akin to the plots of the TV series “Black Mirror.”

A temperature control system, for example, could kill all humans because that would be a rational way to cool down the planet, or a network of energy-efficient computers could take over nuclear plants so it will have enough power to operate on its own.

The more programmers push their machines to make smart decisions that surprise and delight us, the more they risk triggering something unexpected and awful.

The invention of the internet took most philosophers by surprise.

This time, A.I. ethicists view it as their job to keep up.

“There’s a lack of awareness in Silicon Valley of moral questions, and churches and government don’t know enough about the technology to contribute much for now,” says Tae Wan Kim, an A.I. ethicist at Carnegie Mellon University in Pittsburgh. “We’re trying to bridge that gap.”

A.I. ethicists consult with schools, businesses and governments.

They train tech entrepreneurs to think about questions like the following.

  • Should tech companies that collect and analyze DNA data be allowed to sell that data to pharmaceutical firms in order to save lives?
  • Is it possible to write code that offers guidance on whether to approve life insurance or loan applications in an ethical way?
  • Should the government ban realistic sex robots that could tempt vulnerable people into thinking they are in the equivalent of a human relationship?
  • How much should we invest in technology that throws millions of people out of work?

Tech companies themselves are steering more resources into ethics, and tech leaders are thinking seriously about the impact of their inventions.

A recent survey of Silicon Valley parents found that many had prohibited their own children from using smartphones.

Mr. Kim frames his work as that of a public intellectual, reacting to the latest efforts by corporations to show they are taking A.I. ethics seriously.

In June, for example, Google, seeking to reassure the public and regulators, published a list of seven principles for guiding its A.I. applications.

It said that A.I. should be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles.

In response, Mr. Kim published a critical commentary on his blog.

The problem with promising social benefits, for example, is that “Google can take advantage of local norms,” he wrote.

“If China allows, legally, Google to use AI in a way that violates human rights, Google will go for it.” (At press time, Google had not responded to multiple requests for comment on this criticism.)

The biggest headache for A.I. ethicists is that a global internet makes it harder to enforce any universal principle like freedom of speech.

The corporations are, for the most part, in charge. That is especially true when it comes to deciding how much work we should let machines do.

  • John W. Miller is a Pittsburgh-based former Wall Street Journal staff reporter and co-director of the PBS film “Moundsville.”