Artificial intelligence - CathNews New Zealand
https://cathnews.co.nz

Artificial intelligence ethics under Catholic microscope
https://cathnews.co.nz/2024/10/14/catholic-university-creating-ethics-for-artificial-intelligence/
Mon, 14 Oct 2024

Artificial intelligence is about to get a close-up investigation from Notre Dame Catholic University academics tasked with reporting on its ethical uses.

This is a task close to the Pope's heart. Earlier this year he spoke of political leaders' responsibility to ensure AI is used ethically.

Project plan

The University has announced that it will use an endowment to develop faith-based frameworks for the ethical uses of artificial intelligence (AI), and in particular artificial general intelligence (AGI - the G stands for General).

This "is a pivotal moment for technology ethics," says Meghan Sullivan, director of the Catholic university's Institute for Ethics and the Common Good.

AGI is developing quickly and can potentially change our economies, our education systems and the fabric of our social lives, she says.

"We believe that the wisdom of faith traditions can make a significant contribution to the development of ethical frameworks for AGI," Sullivan says.

The first part of the framework development will see the Catholic University undertake a year-long planning project.

By next September, Sullivan says, the university aims to have engaged with and built a network of higher education and technology leaders, along with leaders of different faiths, "to broach the topic of ethical uses of AI and eventually create faith-based ethical frameworks".

"This project will encourage broader dialogue about the role that concepts such as dignity, embodiment, love, transcendence and being created in the image of God should play in how we understand and use this technology.

"These concepts - as the bedrock of many faith-based traditions - are vital for how we advance the common good in the era of AGI."

The university says that, in September 2025, a conference will focus on the most pressing faith-based issues relating to the proliferation of AGI and provide training and networking opportunities for leaders who attend.

Priority work

For some time Pope Francis has been pushing for work on AI ethics to begin.

It must be used only to benefit humanity, he told the Group of Seven leaders at a summit in southern Italy in June.

"We cannot allow a tool as powerful and indispensable as artificial intelligence to reinforce such a technocratic paradigm but rather we must make artificial intelligence a bulwark against its expansion," Pope Francis said.

"This is precisely where political action is urgently needed."

According to 2024 statistics from National University in San Diego, 77 percent of companies are either using or exploring the use of AI in their businesses.

For 83 percent, the technology is a top priority in their future plans.

Source

Artificial intelligence ethics under Catholic microscope

FraudGPT and other malicious AIs are the new frontier of online threats. What can we do?
https://cathnews.co.nz/2024/10/03/fraudgpt-and-other-malicious-ais-are-the-new-frontier-of-online-threats-what-can-we-do/
Thu, 03 Oct 2024

The internet, a vast and indispensable resource for modern society, has a darker side where malicious activities thrive.

From identity theft to sophisticated malware attacks, cyber criminals keep coming up with new scam methods.

Widely-available generative artificial intelligence (AI) tools have now added a new layer of complexity to the cybersecurity landscape. Staying on top of your online security is more important than ever.

The rise of dark LLMs

One of the most sinister adaptations of current AI is the creation of "dark LLMs" (large language models).

These uncensored versions of everyday AI systems like ChatGPT are re-engineered for criminal activities. They operate without ethical constraints, and with alarming precision and speed.

Cyber criminals deploy dark LLMs to automate and enhance phishing campaigns, create sophisticated malware, and generate scam content.

To achieve this, they engage in LLM "jailbreaking" - using prompts to get the model to bypass its built-in safeguards and filters.

For instance, FraudGPT writes malicious code, creates phishing pages, and generates undetectable malware. It offers tools for orchestrating diverse cybercrimes, from credit card fraud to digital impersonation.

FraudGPT is advertised on the dark web and the encrypted messaging app Telegram. Its creator openly markets its capabilities, emphasising the model's criminal focus.

Another version, WormGPT, produces persuasive phishing emails that can trick even vigilant users. Based on the GPT-J model, WormGPT is also used for creating malware and launching "business email compromise" attacks - targeted phishing of specific organisations.

What can we do to protect ourselves?

Despite the looming threats, there's a silver lining. As the challenges have advanced, so have the ways we can defend against them.

AI-based threat detection tools can monitor malware and respond to cyber attacks more effectively. However, humans need to stay in the mix to keep an eye on how these tools respond, what actions they take, and whether there are vulnerabilities to fix.

You may have heard that keeping your software up to date is crucial for security. It might feel like a chore, but it really is a critical defence strategy. Updates patch the vulnerabilities that cyber criminals try to exploit.

Are your files and data regularly backed up? It's not just about preserving files in case of a system failure. Regular backups are a fundamental protection strategy. You can reclaim your digital life without caving to extortion if you're targeted by a ransomware attack - when criminals lock up your data and demand a ransom payment before they release it.

Cyber criminals who send phishing messages can leave clues such as poor grammar, generic greetings, suspicious email addresses, overly urgent requests, or suspicious links. Developing an eye for these signs is as essential as locking your door at night.
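
As a purely illustrative aside, the clues above can be expressed as a few simple heuristics. The Python sketch below is a hypothetical example, not any real mail filter: the phrase lists and function name are invented for illustration, and genuine security tools rely on far richer signals.

```python
import re

# Illustrative phrase lists only; real filters use far richer signals.
URGENT_PHRASES = ["act now", "verify immediately", "account suspended"]
GENERIC_GREETINGS = ["dear customer", "dear user", "dear sir/madam"]

def phishing_signals(subject: str, body: str) -> list[str]:
    """Return heuristic warning signs found in an email's text."""
    text = f"{subject} {body}".lower()
    signals = []
    if any(phrase in text for phrase in URGENT_PHRASES):
        signals.append("overly urgent request")
    if any(greeting in text for greeting in GENERIC_GREETINGS):
        signals.append("generic greeting")
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        signals.append("link points to a raw IP address")
    return signals

# Example: this message trips all three checks.
print(phishing_signals(
    "Account suspended",
    "Dear customer, verify immediately at http://203.0.113.7/login",
))
```

None of these checks is decisive on its own; they simply flag messages that deserve a second look before you click.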

If you don't already use strong, unique passwords and multi-factor authentication, it's time to do so. This combination multiplies your security, making it dramatically more difficult for criminals to access your accounts.
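
To illustrate the "strong, unique password" part of that advice, here is a minimal sketch using Python's standard secrets module; in practice a password manager generates and stores these for you, with multi-factor authentication as the second layer.

```python
import secrets
import string

# Draw characters from letters, digits and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'r#V8t...'; never reuse it across accounts
```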

What can we expect in the future?

Our online existence will continue to intertwine with emerging technologies like AI. We can expect more sophisticated cyber crime tools to emerge, too.

Malicious AI will enhance phishing, create sophisticated malware and improve data mining for targeted attacks. AI-driven hacking tools will become widely available and customisable.

In response, cybersecurity will have to adapt, too. We can expect automated threat hunting, quantum-resistant encryption, AI tools that help to preserve privacy, stricter regulations and international cooperation.

The role of government regulations

Stricter government regulation of AI is one way to counter these advanced threats. This would involve mandating the ethical development and deployment of AI technologies, ensuring they are equipped with robust security features and adhere to stringent standards.

In addition to tighter regulations, we also need to improve how organisations respond to cyber incidents, and what mechanisms there are for mandatory reporting and public disclosure.

By requiring companies to promptly report cyber incidents, authorities can act swiftly. They can mobilise resources to address breaches before they escalate into major crises.

This proactive approach can significantly mitigate the impact of cyber attacks, preserving both public trust and corporate integrity.

Further, cyber crime knows no borders. In the era of AI-powered cyber crime, international collaboration is essential. Effective global cooperation can streamline how authorities track and prosecute cyber criminals, creating a unified front against cyber threats.

As AI-powered malware proliferates, we're at a critical junction in the global tech journey - we need to balance innovation (new AI tools, new features, more data) with security and privacy.

Overall, it's best to be proactive about your own online security. That way you can stay one step ahead in the ever-evolving cyber battleground.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

  • Bayu Anggorojati is Assistant Professor, Cyber Security, Monash University Indonesia
  • Arif Perdana is Associate Professor, Data Science, Monash University Indonesia, and Monash Data Futures Institute

Can AI make faith great again for the masses?
https://cathnews.co.nz/2024/07/15/ais-future-impact-on-the-church-can-it-make-faith-great-again-for-the-masses/
Mon, 15 Jul 2024

Imagine a world where AI is omnipresent.

It occupies your home, your car, your workplace, your pocket.

Even your mind.

Every aspect of your daily routine is seamlessly integrated with this sophisticated technology.

It anticipates your needs, completes your thoughts, deciphers your emotions, plays your favorite songs, drafts your emails and even suggests your next meal.

It serves as your personal assistant, confidant, entertainment hub and your lover.

Life becomes smooth, convenient and tailored to your every whim.

Authenticity lacking

But something crucial is missing.

A sense of unease begins to take root.

Interactions feel hollow, conversations lack depth and relationships become superficial and transactional.

The more we rely on AI, the more we find ourselves yearning for something it cannot provide:

Authenticity, meaning and opportunities to connect on a fundamentally human level.

This is where the church re-enters the scene, not as a relic of the past, but as a symbol of the present, a sanctuary of authenticity.

At this tipping point of artificiality and superficiality, people start craving transcendent values that algorithms cannot encode. They seek the warmth of human connection, the comfort of shared beliefs and the solace of timeless rituals.

Spiritual journeying

Imagine a young professional, immersed in the digital hum of a bustling city, surrounded by a sea of screens and synthetic voices. It's not difficult to imagine, of course, that this is the reality for millions of people around the world, from New York City to New Delhi.

Despite the convenience of their AI-enhanced lives, they find themselves restless at night, staring at the ceiling (or the phone), pondering the bigger questions:

  • Why am I here?
  • What is my purpose?
  • What does it mean to be truly connected?

These are questions no AI-generated bot can satisfactorily answer. Why? Simply because such questions delve into the depths of the human soul. And AI doesn't have a soul. Not yet, anyway.

Gradually, this professional notices a shift among their peers. A friend mentions attending a Sunday service not for the sermon but for the sense of community, the genuine smiles and the feeling of belonging.

Another friend speaks about the meditative peace they find in the quiet of a church, away from the relentless pace of technology. Intrigued, our professional decides to explore.

Walking into the church, they notice that people are present, genuinely engaged and open-hearted.

There is a tangible sense of something greater than oneself, something that transcends the algorithmic curation they have become accustomed to. The hymns, the prayers, the very atmosphere speak to a part of the human experience that technology cannot touch: the spiritual.

In the dim light of the stained-glass windows, our young professional feels a profound sense of peace. It's not about rejecting technology but about finding balance. They realise that while AI can enhance life, it should not define it.

As more people reach this tipping point, the Church starts to see a resurgence. It becomes a counterbalance to digital dominance, a place where people can reconnect with their humanity.

It's not about nostalgia or clinging to the past; it's about rediscovering the value of the sacred and the communal in a world that increasingly feels like a digital illusion.

A constant need

These scenarios — where AI inadvertently leads people back to religious spaces — are not as far-fetched as they might seem at first glance.

Throughout history, humans have sought meaning, connection, and understanding beyond the immediate physical world.

This quest has been intrinsic to our nature, deeply embedded in our collective psyche since the Middle Paleolithic era. From ancient cave paintings to complex religious systems, this spiritual inclination has been a constant, an ever-present phenomenon throughout our journey.

Religious belief, in its many forms, has always provided answers to the big questions — questions about existence, purpose, morality and the afterlife.

These are not just abstract concepts; they are core to what makes us human.

The rituals, stories and communal gatherings found in religious practice offer a framework for understanding our place in the universe, a sense of belonging, and a connection to something greater than ourselves.

Now, enter the age of AI. Modern technology is rapidly transforming our world. It's infiltrating every aspect of our lives. Algorithms, data analytics and machine learning models dictate what we see, how we interact and even how we think.

While these innovations bring unparalleled levels of convenience and efficiency, they also introduce a sense of literal artificiality. The digital world, no matter how advanced, lacks the nutrients provided by real-life experiences.

In such a context, it is only natural for people to seek balance. When faced with the sterile precision of AI, the messiness of human life — its unpredictability, its emotional depth, its sheer rawness — becomes even more precious.

A God-shaped vacuum

Of course, sports clubs, book clubs and other social gatherings undoubtedly foster community and camaraderie around shared interests. However, they differ significantly from religious institutions like churches, mosques and synagogues in terms of spiritual nourishment.

Religious centers can serve as focal points for believers, nonbelievers and everyone in between — those seeking answers to existential questions and a deeper connection to the divine.

Through rituals, prayers and sacred texts, these institutions provide a framework for understanding life's purpose, morality and the metaphysical, offering a sense of transcendence and spiritual upliftment that secular clubs generally do not replicate.

Moreover, religious communities offer a unique sense of belonging and support that extends well beyond social interaction.

They create sacred spaces conducive to contemplation and meditation, an opportunity for individuals to connect with the divine. Religious institutions also offer a counterbalance to the isolation that can come from over-reliance on technology.

In times of crisis or existential doubt, people have, throughout history, turned to these communities for support, wisdom and solace.

A return to religious spaces should not be considered a step backward. On the contrary, it could help us reclaim a crucial aspect of human life that technology cannot replicate.

In the words of the great philosopher Blaise Pascal, "There is a God-shaped vacuum in the heart of each man, which cannot be satisfied by any created thing but only by God the Creator."

This profound insight speaks to the core of our human experience.

The hunger for something transcendent in nature — an itch that cannot be scratched by AI girlfriends, VR headsets, and promises of the Metaverse — remains ever-present. We are, at our core, God-seeking souls, and no algorithm can fulfill that eternal quest.

  • First published in Religion Unplugged
  • John Mac Ghlionn is a researcher and essayist focusing on psychology and social relations.

Peter, Paul and the messiness of Christian discipleship
https://cathnews.co.nz/2024/06/24/peter-paul-and-the-messiness-of-christian-discipleship/
Mon, 24 Jun 2024

We all like things neat, uncomplicated and in good order. But as we step over the threshold into the virtual world created by artificial intelligence, it seems to me that inclination may be more problematic than ever.

Over the past few weeks, I've seen a number of images circulating on social media. A baby dolphin, a 1901 photograph of a family with 18 children, two little boys of different races enjoying friendship: nothing controversial.

The response to these images is almost universally positive. That's because the images are created to be universally appealing.

The problem is that these images aren't real. They are created by AI.

Flawless vs. real

What's the big deal? More and more people are becoming unable to tell the difference between what is real and what isn't.

Even worse, we seem to be developing a preference for flawless and beautiful images over messy and imperfect reality.

I'm grateful that God does not.

At the end of June, the church commemorates her two most influential (and flawed) leaders: Sts. Peter and Paul.

The irony of a shared feast day shouldn't be lost on us. Despite the similarity of how their lives ended, both Peter and Paul had their issues.

Simon walked on water, but then sank. He proclaimed that Jesus was the Son of God, then cautioned him against going to Jerusalem.

Swearing he would remain loyal even if no one else did, within hours, Peter denied Jesus not once, but three times.

He was anything but the "rock" Jesus had called him to be — not exactly a firm foundation on which to build the church.

In his zeal for Jewish law, Saul orchestrated the stoning of Stephen.

He was ambitious and intent on rooting out members of this dangerous new Messianic cult.

He was a man with a mission, en route to Damascus to arrest wayward Jews and bring them back to Jerusalem in chains.

That was interrupted when Jesus appeared to him.

Poor Ananias must have been terrified when God sent him to minister to Saul. It's no wonder Paul was not readily trusted by those who were following the Way.

These two men could not have been more different from each other.

Simon was not well educated, and Saul was a scholar who had studied under one of the most esteemed rabbis in Jerusalem.

Simon was brash and impetuous, often jumping into things mouth first.

Saul was calculating and deliberative, carefully planning his next move.

Simon lived in Galilee, a crossroad of cultural and religious diversity. Saul grew up in Tarsus, exposed to the full force of Greek learning and achievement and its effect on Jewish thought.

Simon and Saul also came to faith in Christ in entirely different ways.

Simon's discipleship grew organically and over time. He became "Peter" slowly.

In contrast, Saul was struck blind by an unexpected mystical encounter. When he regained his sight, he was "Paul," suddenly part of a community he had considered heretical.

  • Jaymie Stuart Wolfe is an author, singer-songwriter, and lay evangelist. A 1983 convert to the Catholic faith, Jaymie is a wife and mother of eight.

The false promise of keeping a loved one 'alive' with A.I. grief bots
https://cathnews.co.nz/2024/05/16/the-false-promise-of-keeping-a-loved-one-alive-with-a-i-grief-bots/
Thu, 16 May 2024

"How would you feel about Daddy and me turning into ghostbots?"

I asked this peculiar question to my two children after reading about "grief tech," the latest wonder child of artificial intelligence that allows the living to remain digitally connected to the dead through "ghostbots."

I explained to our children that they could feed our texts and emails to an A.I. platform that would create chatbots that mimic our language and tone and could respond to them through text after our death.

Although the idea of death made them shudder, their response was immediate and firm: "We don't want an A.I. mommy and daddy. It wouldn't be real."

Grief tech

Like our children, I have had a visceral response to the burgeoning realm of grief tech.

As an attorney and a graduate student of theology, I could not help but envisage the intersections of law, theology and ethics.

And as a Catholic woman who has experienced profound loss and grief in four consecutive miscarriages, these glaring intersections were heightened within my body.

On a cerebral and bodily level, I found myself grappling with what personhood means in relation to grief tech.

With the creation of A.I., anthropomorphised chatbots are one critical example of how the rapidly advancing technology is testing the limits of the human condition.

At this critical juncture, it is important for us, as people of faith and goodwill, to probe A.I.'s potential to divorce us from our humanity.

Grief tech is raising significant issues that bear on what it means to be human, specifically implicating our embodiment, relationality, finitude and death.

Not only does grief tech try to divorce the human body from any concept of personhood, but grief tech's endeavor to immortalise A.I. creations of the deceased stands in opposition to the Christian understanding of death.

Technology does not simply advance without the sanctioning of human beings.

While human existence is vulnerable to grief and death, we must affirm the body as intrinsic to our humanity and to the death and resurrection of Jesus Christ, through which we enter into relationship with God, ourselves and one another.

Our theological tradition—and specifically the work of Karl Rahner and Tina Beattie—can help us reflect on these imperatives and how they play out in the future.

The spectre of grief tech

A.I. has emerged within the last two years as a formidable instrument of communication and relationship in various arenas.

Large language models (L.L.M.s), a particular form of generative A.I., have facilitated this transition; trained on vast volumes of data, L.L.M.s have acquired an extraordinary capability to mimic human beings through language.

L.L.M.s are not sentient, but their neural networks enable them to generate lifelike responses that can feel real to actual human beings.

One nascent industry that has capitalised on L.L.M.s is grief tech.

"Death is a lucrative business," wrote Mihika Agarwal in Vox late last year, and grief tech seeks to console the living through apps and programs that "re-create the essence of the deceased."

As intimated by the question to my two children, ghostbots are a distinct version of grief tech.

Through the power of A.I., we can build ghostbots out of the dead using the data—texts, emails, voice conversations, etc.—of our deceased loved ones.

Once L.L.M.s are fed this data, companies like Open A.I., Séance A.I. and You, Only Virtual can generate ghostbots that immortalise the deceased through text-based chat.

Although ghostbots cannot think, feel or have bodily form, their words offer a semblance of humanity by imitating the language of our dearly departed.

Although grief tech is still in its initial phase, it has already transformed the way we grieve.

At the touch of an app, we can quickly comfort ourselves by interacting with ghostbots, short-circuiting the traditional way of grieving.

We can easily download a relationship rather than accept our reality and sit with our loss and pain.

Death loses its sense of finality, emboldening us to maintain relationships even beyond the grave.

The dead may be physically gone, but they can now "'live' on our everyday devices," wrote Aimee Pearcy in The Guardian last year, buried "in our pockets—where they wait patiently to be conjured into life with the swipe of a finger."

The concreteness of human existence

Ghostbots may bring comfort and closure to the living, but grief tech raises serious issues.

At the outset, there is the risk that users will become emotionally or psychologically dependent on ghostbots due, in part, to their instant accessibility and our ability to imbue them with familiarity and meaning.

Ghostbots may feel real to us, but they cannot supplant healthy, concrete relationships with other human beings.

Entangled with this psychological risk are ethical and legal concerns.

While some companies tout "Do Not Bot Me" clauses and "Digital Do Not Reanimate" orders to prohibit individuals from turning others into ghostbots without their permission, not all grief tech apps and programs require the deceased's consent before they die.

Further, even with these possible legal protections, there remains the issue of enforcement, exacerbated by the fact that we are constantly exchanging photos, texts and emails daily.

We retain the data of others on our digital devices, and no federal law prevents us from building bots out of the dead or the living.

Our legal system's present inability to protect the humanity of the deceased accentuates the final ethical concern of instrumentalisation.

Through generative A.I., we now have the capacity to reduce our deceased loved ones to digital instruments for maintaining relationship.

With their data at our disposal, we can distill their human "essence" and contain it in eternal ghostbots that respond to us 24/7.

Whether the app or program is free or not, grief tech companies are all too willing to monetise the dead as a service to help grieving individuals.

Our dearly departed are resurrected as a technological means of support, one that can comfort us while avoiding the realities of the human body and death.

However, rather than being swept up by A.I.'s wave of "inevitability" as the only future awaiting us, people of faith can demand that we not blithely surrender our humanity to technology.

We must get to the heart of the matter and articulate a philosophical and theological anthropology that respects the concreteness and finality of human existence.

We must do this if we are to challenge grief tech's subterfuge of artificial intimacy as authentic relationship.

What does it mean to be human?

The Christian Scriptures reveal the body as constitutive of the human person. In the Book of Genesis, we are told humans are created in the imago Dei, the image and likeness of God (1:27).

Our inviolable dignity is intrinsic to our bodies, which radiate our imago Dei and particular uniqueness as individuals.

There is no Cartesian separation or Manichean division of our being. To paraphrase the theologian Elisabeth Moltmann-Wendel, we are our bodies, and the human body is good.

In the Gospels, "the flesh in its dignity" is affirmed and glorified in Jesus of Nazareth, the incarnate Son of God.

As Moltmann-Wendel notes, the life of Jesus is about "God's becoming body."

Jesus enters our world through the body of Mary, "begotten rather than fabricated," and his words and actions touch "human beings in their totality, in their bodies."

According to Moltmann-Wendel, we see the totality of human beings most vividly in Jesus' encounter with the woman who bled for 12 unrelenting years (Mk 5:25-34).

After she touches his clothes, grasping Jesus in his bodily nature, the healing between them is body to body.

She is immediately filled with his power and made whole; and he, though experiencing a flowing out of his power, remains whole (v. 29-30).

Through the woman and Jesus, we discover how, as Moltmann-Wendel claims, "God encounters us in the human body," inviting us into communion with him.

In a similar way, the Catholic theologian Margaret A. Farley, R.S.M., illuminates our concrete reality as human beings through her formulation of a sexual ethics that is germane to all human relationships.

In Just Love: A Framework for Christian Sexual Ethics, she offers a vision of the human person as an "embodied spirit" or "inspirited body" whom God invites to a destiny of relationship and wholeness.

Crucial to Farley's vision are two obligating features of personhood: autonomy and relationality.

In our autonomy, we can decide our own destiny, and in our relationships, we recognise the intrinsic value of others and our dependency upon them.

Consequently, autonomy and relationality "ground an obligation to respect persons as ends in themselves and forbid, therefore, the use of persons as mere means" (emphasis in the original).

When we instrumentalise others, we violate their autonomy and foreclose the possibility of authentic relationship.

A "just love," then, is true and good insofar as it affirms and respects the concrete reality of the beloved. It proclaims, in word and deed, that "I want you to be, and to be full and firm in being."

In contrast to the anthropologies of Moltmann-Wendel and Farley, grief tech abolishes the human body.

When we encounter one another as persons, we do not meet as diaphanous spirits but as embodied spirits, inspirited bodies, radiating the imago Dei.

Each person is an "incarnate singularity," to use the language of Roberto Dell'Oro in his 2022 article in the Journal of Moral Theology, "Can a Robot Be a Person?"

Each person is unique and possesses an unrepeatable history; our body is inseparable from our destiny; and despite death, we do not lose our humanity.

However, through the power of A.I., grief tech has ruptured what it means to be human.

Death's finality presents grief tech with the lucrative opportunity to discard the body and profit from our loss and pain.

Through ghostbots, companies offer grieving individuals the promise of eternal relationship with the dead.

This promise is destructive and deceptive, divorcing our deceased loved ones from their bodies and extracting an "essence" from their data that fails to capture and affirm the totality of their human existence.

Ghostbots can neither experience nor embody what the deceased loved, what they valued and what they lived for.

Instead, the deceased are reduced to a "stable static entity," to use Agarwal's words.

This "stable static entity" is the antithesis of the complex, dynamic person who is loved by God and birthed into the world.

They are essentialised and homogenised. By dispensing with the human body, the deceased can now be objectified as digital instruments for relationship.

The relationship that grief tech offers is an artificial intimacy that can never replicate the dynamics of human relationship.

A.I. has no relationship with itself and must be programmed to create what human beings intend.

In order to generate ghostbots then, we must feed L.L.M.s the data of the deceased, specifically their words that we want to hear.

We must create the A.I. image of the deceased that we want, with or without their consent.

Thus, what emerges from this interaction is not a two-sided, dynamic relationship between two distinct persons but a one-sided, static relationship with ourselves vis-à-vis a ghostbot.

This is a relationship that we imbue with meaning by using the dead as a means to comfort and console us. It is not just love.

Ultimately, grief tech's flight from the body creates an illusion. When we fail to respect the body, we fail to respect persons as ends in themselves. When we fail to see the body, we fail to see God.

Karl Rahner, Tina Beattie and life through death

Even in death, the human body remains a key element in one's own history.

The flesh, created in the image and likeness of God, is still destined for God and summoned to glory by God.

We cannot deny the vulnerability and finality of the human condition, but we must also recognise how life emerges by way of death in the body story of Jesus.

In the cross, our bodies hold the promise of redemption through Jesus, the "firstborn from the dead" (Col 1:18).

The theologians Tina Beattie and Karl Rahner can help parse this crucial notion in Christian theology.

Unlike the founder of You, Only Virtual, who wishes to end goodbyes and the human emotion of grief, the theologian Tina Beattie invites us to "befrien[d] death" through the mystery of the Incarnation.

Although death is certain for each one of us, it is also a mystery exempt from human control.

We tend to perceive it as "a mortal enemy," a specter lurking in the shadows until we are forced to confront its undeniable reality.

Beattie recognises our fear but points to the hope embodied in Christian anthropology, which offers a paradoxical view of death as a rebirth and a beginning:

"The death of Christ tells us that God, like us, is vulnerable to love's wounding and sorrow, but the resurrection of Christ whispers of a God whose dying is fecundity," she writes in her book New Catholic Feminism: Theology and Theory.

Karl Rahner develops this theme of life through death in his book, On the Theology of Death.

He emphasises how death is the universal event that strikes us in our totality as human persons, but cautions that we should not regard death as a pointless suffering nor as a phantom waiting to strike.

Although death remains a great mystery, faith illuminates its truth in the death of Jesus Christ. Jesus, the incarnate Word of God, "became consubstantial with us" and "died our death."

According to Rahner, the real miracle of Christ's death is that death was transformed into life, with the "flesh of sin" transformed into the "flesh of grace."

Death is a consequence of sin, and only in Christ's death could death usher in God's arrival in the final moment of our life when we feel most abandoned by God.

In that moment, sin's power reaches its apex but God's grace overpowers sin.

Consequently, death becomes "the highest act of believing, hoping, and loving," a "faith in darkness, hope against hope, and love of God who only appears as Lord and as inexorable justice."

Through Jesus' death, God's grace becomes ours.

Thus, Rahner invites us to "hearken to the gospel of death, which is life."

Although the natural order of life ends in death, he challenges the dominant view of death as merely a natural process divorced from our spiritual, supernatural existence of grace.

Contrary to Martin Heidegger's notion that human beings are "being-towards-death," moving in life toward death and shaped by its reality, Rahner believes we are being-towards-glory.

From the very beginning of our life, we are not oriented toward death but toward the glory of God, and death is not the end of our existence but the beginning of eternity with God.

This involves the affirmation and fulfillment of the human person through a glorifying change in which the body remains whole. In the glorious grace of Christ's death, we will not perish at death but will be transformed in the resurrection of the body.

In contrast to the Christian anthropology of death presented by Beattie and Rahner, grief tech exploits our fear by offering a semblance of human control over death.

This control is illusory and, to quote Rahner, only "degrades [our] anxiety before death to a mere expression of self-preservation."

We end up trying to extend the finite limits of our own lives by controlling our loved ones in death.

Grief tech cannot alleviate our fear of death, but only pushes us to eternalise the essentialisation of human persons through ghostbots.

Such an attempt perpetuates an artificial intimacy that can never replicate authentic relationship.

Emboldened by A.I. and without the constraints of the law, we can resurrect the dead in our own image and, as a result, the dead cannot rest in peace.

A.I.'s attempt to control or circumvent death diminishes the humanity of the deceased.

Instead of offering immortality, grief tech offers perpetual mortality that can neither capture the totality of our deceased loved ones as human persons nor comprehend the ultimate glory of the body or eternity with God.

The proponents of grief tech are unable to imagine how death could lead to wondrous new life and new relationship with God and one another.

Life after ghostbots

We are our bodies, and our bodies are inextricably tied to our relationship with God, ourselves and one another.

We are not data or digital instruments to be used and manipulated, in life or in death. Furthermore, we must not abandon the human body to try to conquer death.

Though our life in the world will eventually cease, death is not the end of who we are.

We, as human persons, are destined for God, in our beginning and in our end.

The human body is a promise for glory, and Jesus Christ proclaims who we are in his life, death and resurrection.

Despite grief tech's promise to console us through eternal ghostbots, we cannot surrender our humanity for an artificial intimacy divorced from concrete reality.

With the rapid advancement of generative A.I., what is at stake is us.

We are unique persons created and loved by God whose totality cannot be distilled by technology.

Ghostbots are a deformation of the human person, and we must draw the line between the real and the virtual, the authentic and the counterfeit.

  • First published in America magazine
  • Eryn Reyes Leong is an attorney, pursuing a master of theology degree at Loyola Marymount University in Los Angeles, Calif.

Experts dismiss report AI could replace priests
https://cathnews.co.nz/2023/12/07/experts-dismiss-report-ai-could-replace-priests/
Thu, 07 Dec 2023

Fr Alban McCoy OFMConv. is contesting the notion that AI could replace priests.

McCoy, an editorial consultant at The Tablet, says the depth of the priestly role is beyond mere preaching.

"To say that AI will replace priests suggests a very truncated, post-reformation and secularist view of what priests are and what they're for, implying that their fundamental role is preaching.

"The suggestion that AI could replace priests overlooks their sacramental role, not just in administering sacraments but being themselves a sacrament," McCoy stated.

UK Government report

The suggestion that AI could replace clergy came from a recent UK Government report, which sparked debate within the clergy community.

The UK Department for Education's report "The Impact of AI on UK Jobs and Training" analysed the potential impact of AI systems on various job sectors.

On a list of 365 professions most susceptible to being replaced by AI algorithms, it positioned clergy at number 13.

The report assessed telephone salespeople as the most at risk, followed by solicitors and psychologists.

Clergy already using ChatGPT

However, there are already some real-world examples of AI clergy in action.

In June, 300 people attended an AI-powered church ceremony in Germany, with the sermon written by AI and delivered by computer-generated avatars on a screen.

Jonas Simmerlein, a theologian and philosopher at the University of Vienna, said about 98 percent of the sermon came from the chatbot's own writing.

Vicars and rabbis have readily admitted to using ChatGPT to help them write sermons.

However, John McManus, head of media communications at Jesuits in Britain, emphasised the irreplaceable human aspect of the priesthood.

"What strikes me is that there is no comparison between the spiritual services provided by a priest and what AI can replicate.

"Priests offer a face-to-face interaction that AI can't do. It's programmed.

"Human beings are not programmable creatures. They have souls.

"An AI programme can never hope to replace the spiritual services and empathy given by priests. Priesthood has a sacramental side."

Sources

The Tablet

The Telegraph

CathNews New Zealand

Vatican urges pause on lethal autonomous weapons
https://cathnews.co.nz/2023/09/28/lethal-autonomous-weapons/
Thu, 28 Sep 2023

The Holy See's foreign minister addressed the UN General Assembly, calling for a halt to the deployment of lethal autonomous weapons systems.

Archbishop Paul Gallagher joined a chorus of concerns raised by various speakers regarding artificial intelligence (AI).

"It is imperative to ensure adequate, meaningful and consistent human oversight of weapon systems," Gallagher said.

"Only human beings are truly capable of seeing and judging the ethical impact of their actions, as well as assessing their consequent responsibilities."

UN advisory board on AI

Additionally, the Vatican advocated for the establishment of an international organisation focused on AI to promote scientific and technological exchange for peaceful purposes and the common good.

The United Nations plans to convene an expert advisory board on AI to explore the science, risks, opportunities and governmental approaches surrounding this technology.

AI has become a central point of interest for nations, multinational groups and tech companies, sparking discussions about its potential benefits and risks.

As a non-voting "permanent observer" in the UN, the Holy See delivered one of the most extensive remarks on AI during the assembly.

Archbishop Gallagher highlighted Pope Francis's concerns about the digital world, including: "It is not acceptable that the decision about someone's life and future be entrusted to an algorithm."

Killer robots

Gallagher called for immediate talks to establish a legally binding agreement governing lethal autonomous weapons systems, often called "killer robots." He proposed "a moratorium on them pending the conclusion of negotiations."

UN Secretary-General António Guterres has also supported banning systems that operate without human control or oversight and violate international humanitarian law.

Additionally, Guterres urged countries to work towards a legally binding prohibition by 2026.

However, concerns have arisen about the potential limitations such a prohibition might impose, especially if adversaries or non-governmental groups develop similar systems. Questions persist regarding the distinction between autonomous weapons and existing computer-aided systems.

Sources

AP News

Mirage News

CathNews New Zealand

NZ Catholic bishops promote open and informed life discussions
https://cathnews.co.nz/2023/09/28/nz-catholic-bishops-promote-open-and-informed-life-discussions/
Thu, 28 Sep 2023

In a significant move, the NZ Catholic bishops are promoting open and informed life discussion through a modernised and broadened document, Te Kahu o te Ora - A Consistent Ethic of Life.

The modernisation seeks to fill a twenty-six-year gap and reflect some of the modern challenges.

Dr John Kleinsman, director of the NZ Catholic bishops' Nathaniel Centre for Bioethics, is delighted with the bishops' update.

Kleinsman describes the new document as a "succinct overview of eight key moral areas, including a new section on information technology and artificial intelligence."

Among the modern challenges the bishops consider:

  • Information technology and artificial intelligence
  • Justice and correction systems
  • War and peace
  • Poverty
  • Discrimination and abuse
  • End-of-life issues
  • Beginning of life issues
  • Integrity of Creation

Kleinsman says that people generally know what the Church teaches but are unsure why.

Te Kahu o te Ora - A Consistent Ethic of Life summarises key points which can give people greater insights into Catholic thinking, comments Kleinsman.

"It is a great source for open and informed discussions," says Kleinsman, who, as well as being a theologian, is a married man, father and grandfather.

The original Te Kahu o te Ora was inspired by Cardinal Joseph Bernardin's A Consistent Ethic of Life.

Bernardin's work grew from his observation that we must act consistently because all human life is sacred.

It was Bernardin's view that it was inconsistent to protect life in some situations but not in others.

In the years following Roe v. Wade, Bernardin argued that human life is always valuable and must be respected consistently from conception to natural death.

Being pro-life is not only about abortion or euthanasia.

Being pro-life must encompass war, poverty, access to health care, education and anything that threatens human life or human wellbeing, he argued.

Stephen Lowe, the Bishop of Auckland, the Apostolic Administrator of Hamilton and President of the NZ Catholic Bishops Conference, describes the update as "opportune".

Lowe says human life and emerging challenges are interconnected.

"The essence of Te Kahu o te Ora is the interconnectedness of all life, from the womb to the Earth," he said.

Lowe says Pope Benedict put it well some years ago:

"There are so many kinds of desert. There is the desert of poverty, the desert of hunger and thirst, the desert of abandonment, of loneliness, of destroyed love. There is the desert of God's darkness, the emptiness of souls no longer aware of their dignity or the goal of human life. The external deserts in the world are growing, because the internal deserts have become so vast."

"While traditional human life issues continue to need our attention, we are now facing many new problems, all interlinked.

"The key message of Te Kahu o te Ora is that everything is connected, whether it is life in the womb or the life of the Earth," Lowe repeated.

Sources

NZ Catholic bishops promote open and informed life discussions

Magisterium AI promises to revolutionise Catholic academic research
https://cathnews.co.nz/2023/08/28/magisterium-ai-promises-to-revolutionise-catholic-academic-research/
Mon, 28 Aug 2023

Magisterium AI, a new programme using artificial intelligence (AI), is poised to revolutionise academic research in Catholic education.

The US-based company Longbeard created the programme, which uses AI technology to provide users with information on everything relating to Catholic doctrine, teachings, and canon law.

Drawing a parallel to the widely recognised ChatGPT AI, Magisterium AI employs advanced AI techniques to furnish users with an extensive array of information pertinent to the Catholic faith.

Unlike other AI programmes, which have access to vast swaths of ever-evolving data, Magisterium AI is limited to official church documents and is carefully curated. This helps to avoid the pitfalls of other AI programmes, which can sometimes provide incorrect or misleading information.

"Magisterium AI is trained on a very small, consistent and narrow documentation," said Fr Philip Larrey, who teaches philosophy at the Pontifical Lateran University in Rome and is chair of the Product Advisory Board at Magisterium AI.

"This way, it avoids the pitfalls of the use of AI. It's never going to give you a wrong or false answer," Fr Larrey added.

With an intuitive user interface, Magisterium AI allows individuals to pose queries, prompting the AI-powered platform to generate precise responses swiftly.
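
Restricting answers to a curated set of documents, as described above, is essentially retrieval over a closed corpus: passages are pulled from approved texts and the model is asked to answer only from them. The sketch below is a minimal, hypothetical illustration of that idea; it is not Longbeard's actual implementation, and the tiny corpus, function names and prompt wording are invented for the example.

```python
import re
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

# Hypothetical miniature corpus standing in for curated Church documents.
CURATED_CORPUS = [
    Document("Catechism excerpt", "Prayer is the raising of one's mind and heart to God."),
    Document("Canon law excerpt", "A law is established when it is promulgated."),
]

def tokens(text: str) -> set[str]:
    """Lower-case word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(question: str, corpus: list[Document], top_k: int = 1) -> list[Document]:
    """Rank curated documents by naive word overlap with the question."""
    q = tokens(question)
    return sorted(corpus, key=lambda d: len(q & tokens(d.text)), reverse=True)[:top_k]

def build_prompt(question: str) -> str:
    """Constrain a language model to answer only from retrieved passages."""
    context = "\n\n".join(f"{d.title}: {d.text}" for d in retrieve(question, CURATED_CORPUS))
    return (
        "Answer using only the passages below. If they do not cover the "
        f"question, say so.\n\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What is prayer?"))
```

A production system would use semantic search rather than word overlap and would cite the documents it draws on, but the design principle is the same: the model can only speak from the curated sources it is given.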

The versatility of this technology renders it a valuable resource catering to both Catholic and non-Catholic individuals.

From clergy members crafting sermons to canon lawyers seeking real-time information and researchers exploring ancient Catholic texts, the potential applications of this cutting-edge tool are vast.

Magisterium AI is currently used in 125 countries and is available in 10 languages. The programme is still under development but its creators hope to add more languages and features in the future.

Potential to revolutionise Catholic research

Fr David Nazar, the rector of the Pontifical Institute for Eastern Churches, believes that AI technology has the potential to revolutionise research in Catholic academia.

"Magisterium AI can shorten the time and refine your research," Nazar said. "Research that has been ongoing for 10 years over 400 documents and manuscripts could be done in a month or a week."

While Magisterium AI has the potential to be a valuable tool for research, it also has the potential to be controversial.

"Is this going to replace canon lawyers or teachers? No, it's going to be a help," he said. "Fortunately, they won't substitute a priest, so I think I'm safe!"

Sources

Religion News

CathNews New Zealand

Pope Francis calls for AI ethics free from violence and discrimination
https://cathnews.co.nz/2023/08/10/pope-francis-calls-for-ai-ethics-free-from-violence-and-discrimination/
Thu, 10 Aug 2023

Pope Francis wants the next World Day of Peace to focus on artificial intelligence's impact, opportunities and dangers as the technology develops and influences a growing number of fields, from information to warfare.

"Pope Francis calls for an open dialogue on the meaning of these new technologies, endowed with disruptive possibilities and ambivalent effects," read a statement from the Vatican on Tuesday (Aug 8).

"He emphasises the need to be vigilant and to work so that a logic of violence and discrimination does not take root in the production and use of such devices at the expense of the most fragile and excluded: injustice and inequalities fuel conflicts and antagonisms," the statement continued.

AI-generated child abuse images challenge real victim identification
https://cathnews.co.nz/2023/07/20/ai-generated-child-abuse-images-raise-alarms-challenging-identification-of-real-victims/
Thu, 20 Jul 2023

The UK's National Crime Agency (NCA) has issued a stark warning about the growing menace of AI-generated child abuse images, which are making it increasingly difficult to identify real children at risk.

Law enforcement agencies are gravely concerned about the emergence of hyper-realistic AI-generated content, fearing that it could blur the lines between real and computer-generated victims, creating complex challenges in identifying children in danger.

The NCA's director-general, Graeme Biggar, emphasises that the proliferation of such material might normalise abuse and escalate the risk of offenders transitioning to harm real children.

In response to these alarming developments, discussions are underway with AI software companies to implement safety measures, including digital tags to identify AI-generated images.

UK Prime Minister Rishi Sunak has been urged to tackle a surge in child abuse images created by artificial intelligence when he gathers world leaders to discuss the technology later this year.

The Internet Watch Foundation (IWF), which monitors and blocks such material online, said the Prime Minister must specifically outlaw AI-generated abuse images and pressure other countries to do the same.

Susie Hargreaves, chief executive of the IWF, said: "AI is getting more sophisticated all the time. We are sounding the alarm and saying the Prime Minister needs to treat the serious threat it poses as the top priority when he hosts the first global AI summit later this year."

Not a victimless crime

Hargreaves' comments came as the IWF confirmed for the first time that it was removing cases of AI-generated child abuse images, including the most severe "category A" illegal material.

Despite the absence of real victims in these disturbing images, the IWF firmly asserts that creating and distributing AI-generated child abuse content is far from a victimless crime. Instead, it poses a serious risk of normalising abuse, hampering the identification of real instances of child endangerment and desensitising offenders to the gravity of their actions.

Adding to the alarm, the IWF has stumbled upon a chilling "manual" authored by offenders, instructing others on how to leverage AI to create even more lifelike abusive imagery.

The NCA said that an explosion in fake child abuse images could make saving real children suffering from abuse more difficult.

Chris Farrimond of the NCA said: "There is a very real possibility that, if the volume of AI-generated material increases, this could greatly impact on law enforcement resources, increasing the time it takes for us to identify real children in need of protection."

Sources

Cryptopolitan

The Telegraph

 

AI-generated child abuse images challenge real victim identification]]>
161478
Catholic universities must embrace AI https://cathnews.co.nz/2023/07/17/catholic-universities-crucial-role-in-embracing-artificial-intelligence-ai/ Mon, 17 Jul 2023 06:09:31 +0000 https://cathnews.co.nz/?p=161385 embracing Artificial Intelligence

During a recent scientific conference, Cardinal José Tolentino de Mendonça, Prefect of the Dicastery for Culture and Education, highlighted Catholic universities' crucial role in embracing Artificial Intelligence (AI). In his address, Cardinal de Mendonça underscored the importance of Catholic universities taking the lead in innovation and actively engaging with emerging trends. The event, titled "Renewal Read more

Catholic universities must embrace AI... Read more]]>
During a recent scientific conference, Cardinal José Tolentino de Mendonça, Prefect of the Dicastery for Culture and Education, highlighted Catholic universities' crucial role in embracing Artificial Intelligence (AI).

In his address, Cardinal de Mendonça underscored the importance of Catholic universities taking the lead in innovation and actively engaging with emerging trends.

The event, titled "Renewal and Awareness: Thinking about the Future of Catholic Universities," was organised by the Strategic Alliance of Catholic Research Universities (SACRU) and hosted by the Catholic University of the Sacred Heart.

Cardinal de Mendonça emphasised the need for dialogue, addressing contemporary challenges and fostering continuous renewal through awareness. Drawing from relevant Church documents, he highlighted the fundamental role of these institutions in shaping the future.

Referring to Pope Francis' insights on AI, Cardinal de Mendonça encouraged universities to embrace AI and digital technologies fearlessly while considering their ethical implications. He stressed the significance of prioritising individual well-being and upholding moral values as essential principles in this process.

Holistic approach needed

Regarding the anthropological implications of AI, the Cardinal stressed the need for a holistic approach that centres on the human person. He advocated for investing in the formation of individuals, nurturing their cognitive, creative, spiritual and ethical potential.

Cardinal de Mendonça also emphasised the importance of universities actively engaging with society and fostering encounters among diverse cultures. He identified creative intelligence and discernment rooted in solid values as essential qualities for navigating the AI landscape.

The Cardinal highlighted the responsibility of Catholic universities in implementing AI, referring to the concept of "algor-ethics" coined by Pope Francis. He called for social structures that ensure ethical considerations in the production and use of AI, emphasising the need for ethical guidelines and practices.

Concerns were raised about the rapid pace of technological advancement, emphasising the need for fairness, confidentiality and data verification. However, participants agreed that AI has the potential to contribute to sustainable societies and introduce new professions.

The outcomes of the event will be summarised in a public document that addresses the intersection of AI and the values upheld by Catholic universities.

Sources

America Magazine

Vatican News

CathNews New Zealand

Catholic universities must embrace AI]]>
161385
Te reo Maori threatened by AI https://cathnews.co.nz/2023/06/26/chatgpt-and-its-scarily-good-te-reo-maori-is-concerning/ Mon, 26 Jun 2023 06:02:43 +0000 https://cathnews.co.nz/?p=160474 ChatGPT

ChatGPT seems to be taking over te reo Maori. The new artificial intelligence (AI) has academics and te reo speakers worried. The chatbot is a quick learner. Ever since its launch last November, ChatGPT has 'learned' to write in te reo. The quality is "scarily good", says Waikato University's Te Taka Keegan (pictured). But he Read more

Te reo Maori threatened by AI... Read more]]>
ChatGPT seems to be taking over te reo Maori. The new artificial intelligence (AI) has academics and te reo speakers worried.

The chatbot is a quick learner.

Ever since its launch last November, ChatGPT has 'learned' to write in te reo. The quality is "scarily good", says Waikato University's Te Taka Keegan (pictured).

But he has a question.

"If they are producing a very good quality of Maori ... where did they get their data from?"

Perhaps AI scraped it from social media sites. If so, it's a worry, he says.

That's because the chatbot's results are so good that the language could shift from a traditional reo to a ChatGPT version, which might mean Maori lose sovereignty over their language.

"We've lost a lot of control over our land, ... the education that our children get; our own data and our own stories is kind of our last control over ourselves. If ... we lose sovereignty over that, it doesn't bode well for the uniqueness that is Maori."

Ngapera Riley is worried about the ethics of information, data sovereignty and te reo.

Her company, Figure.NZ, works to democratise New Zealand data - but the way information is gathered and misused is concerning, she says.

"Once we open it, it's out there, right? But we've decided it is better to let people use the information and access it, than to hide it."

ChatGPT shouldn't be used as a primary source, but as a tool, she stresses. Its results still need human auditing.

"That's where it will get dangerous, if people start to get too lazy and just start using it like that [as a primary source]."

Sonny Ngatai is optimistic te reo Maori can survive AI. He wants to see the language used everywhere.

But ChatGPT needs some boundaries, Ngatai says.

"Where I would put my flag up for data sovereignty is when it comes to our stories, or our narratives, or our tikanga, stuff like that."

Protecting Maori intellectual property rights in those situations is important, he says. "It's not just a matter of stringing words together like a chatbot could.

"It's part of our identity, part of who we are as New Zealanders. There is just so much more to the language then an AI being able to translate what you want to say."

What now?

Even though there are challenges, Keegan is generally positive about AI - provided Maori keep control over how the technology handles te reo.

That would mean isolating the chatbot's data source, using Maori to train it and then controlling it at an iwi level. If that could be organised, Maori could retain sovereignty and use the AI as a helpful tool.

Riley is also positive about what ChatGPT can offer. However, Maori must be actively involved.

"My hope is that tools like ChatGPT can help preserve and use [te reo], but we still need the human element to input into the language, and to check that we aren't using incorrect sources," Riley says.

Source

Te reo Maori threatened by AI]]>
160474
ChatGPT preaches sermon and runs church service https://cathnews.co.nz/2023/06/15/chatgpt-chatbot-preaches-sermon-and-runs-experimental-service/ Thu, 15 Jun 2023 06:05:15 +0000 https://cathnews.co.nz/?p=160057 Chatbot

An artificial intelligence (AI) chatbot told hundreds of people at a Lutheran service on Friday "to rise from the pews and praise the Lord." The experimental church service was almost entirely AI-generated. A ChatGPT chatbot delivered a sermon at the church in Bavaria, Germany. What happened The sermon chatbot, personified by an avatar of a Read more

ChatGPT preaches sermon and runs church service... Read more]]>
An artificial intelligence (AI) chatbot told hundreds of people at a Lutheran service on Friday "to rise from the pews and praise the Lord."

The experimental church service was almost entirely AI-generated. A ChatGPT chatbot delivered a sermon at the church in Bavaria, Germany.

What happened

The sermon chatbot, personified by an avatar of a bearded man on a huge screen above the altar (pictured), told the packed congregation not to fear death, the Associated Press (AP) says.

"Dear friends, it is an honour for me to stand here and preach to you as the first artificial intelligence at this year's convention of Protestants in Germany," the AI avatar said.

It reportedly focused on leaving the past behind, paying attention to the present, not being afraid of death and maintaining faith in Jesus Christ.

The sermon-preaching avatar was one of four avatars taking turns leading the service. They reportedly drew laughter at times for their monotonous, deadpan delivery.

The service lasted 40 minutes. Prayers and music were included, as well as the sermon.

The chatbot developer

A University of Vienna theologian and philosopher, Jonas Simmerlein, used ChatGPT to create the service, AP reported.

Simmerlein says about 98 percent of the sermon - themed "Now is the time" - came from the chatbot's own writing.

The service was part of Deutscher Evangelischer Kirchentag (German Lutheran Church Day). The popular biennial event attracts thousands of Christians. Issues addressed at the event this year include climate change, the war in Ukraine, and AI.

"I told the artificial intelligence ‘We are at the church congress, you are a preacher … what would a church service look like?'" said Simmerlein.

He also asked the chatbot to use psalms, prayers and a concluding blessing in the sermon.

ChatGPT provided "a pretty solid church service," he says. However, there was no human interaction between the chatbot and the congregation.

"The pastor is in the congregation, she lives with them, she buries the people, she knows them from the beginning," Simmerlein says. "Artificial intelligence cannot do that. It does not know the congregation."

Mixed responses

Not everyone agrees with Simmerlein's assessment of the chatbot's effectiveness.

"There was no heart and no soul," one woman said after the service.

"The avatars showed no emotions at all, had no body language and were talking so fast and monotonously that it was very hard for me to concentrate on what they said.

"But maybe it is different for the younger generation who grew up with all of this," she added.

Another attendee - a young pastor - was there with a group of teenagers. He was more impressed by the experiment.

"I had actually imagined it to be worse. But I was positively surprised how well it worked. Also the language of the AI worked well, even though it was still a bit bumpy at times," he said.

But the chatbot lacked any kind of emotion or spirituality - which he considers essential when writing his own sermons, he added.

Source

ChatGPT preaches sermon and runs church service]]>
160057
US priest says 'no place for AI in the synodal process' https://cathnews.co.nz/2023/05/08/us-priest-says-no-place-for-ai-in-the-synodal-process/ Mon, 08 May 2023 06:08:19 +0000 https://cathnews.co.nz/?p=158598 AI has no place

A priest in the US state of South Carolina has said "AI has no place in the synodal process," responding to the Catholic Church in Asia's use of artificial intelligence to create a document for use by the wider Church. Fr Jeffrey Kirby, a pastor at Our Lady of Grace Catholic Church in South Carolina, Read more

US priest says 'no place for AI in the synodal process'... Read more]]>
A priest in the US state of South Carolina has said "AI has no place in the synodal process," responding to the Catholic Church in Asia's use of artificial intelligence to create a document for use by the wider Church.

Fr Jeffrey Kirby, a pastor at Our Lady of Grace Catholic Church in South Carolina, told Fox News Digital that any development and/or use of AI "must defer always to the human person.

"Our greatest asset as a human family is our ability to form and build relationships," said Fr Kirby.

According to Vatican News, synod organisers in Asia used AI to help draft a final document during the Asian synodal continental assembly in Thailand, held in February. The report added that the event was the first to incorporate digital technologies to gather input from participants.

The data was received from small groups that discussed their responses to questions posed in the working document and then submitted summaries using Google Forms. AI software then processed the data, with humans reviewing the generated data for any inaccuracies.

Fr Clarence Devadass, a Malaysian priest and a consultant to the Dicastery for Interreligious Dialogue, said the AI process was effective in sorting data and picking up on keywords, but human resources were needed to ensure accuracy.
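
Neither report names the software involved. As a rough, hypothetical sketch of the kind of keyword sorting described - assuming Python with scikit-learn and invented example summaries - surfacing the most characteristic terms from each group's submission for human reviewers might look like this:

# Illustrative sketch only - the assembly's actual tooling is not described in the article.
# Ranks the terms that most characterise each submitted summary, for a human to review.
from sklearn.feature_extraction.text import TfidfVectorizer

summaries = [  # stand-ins for the summaries submitted via Google Forms
    "Our group discussed synodality, listening and the role of young people in the Church.",
    "Participants raised formation, listening to women and care for creation.",
    "The discussion focused on mission, dialogue with other faiths and deeper listening.",
]

vectoriser = TfidfVectorizer(stop_words="english")
scores = vectoriser.fit_transform(summaries).toarray()
terms = vectoriser.get_feature_names_out()

for i, row in enumerate(scores):
    top = sorted(zip(terms, row), key=lambda pair: pair[1], reverse=True)[:3]
    print(f"Summary {i + 1} keywords:", [term for term, score in top if score > 0])

Output like this is only a sorting aid, which is why the human-review step described above still matters: reviewers check that the surfaced keywords reflect what participants actually said.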

AI has limited place in society

However, not all Catholic leaders are convinced about using AI in the Church.

"AI can have its limited place in society, but it must always be in service to human ingenuity and creativity. It cannot usurp a place that belongs to the human mind and heart," Fr Kirby said.

"We can sometimes forget the ‘artificial' in AI," Kirby added.

"'Artificial' is a far cry from what is natural and authentic. As human beings, we live in a world of relationships marked by love, hope, the giving of thanks and mutual accompaniment with others," he also said.

Kirby continued, "The Bible teaches us that we are made in the image and likeness of God, not in the image of AI. We cannot allow AI to steal what is human."

Fr Kirby said, "Synodality is about real human relationships and interactions. AI has no place in the synodal process. The use of AI in the synodal process is the very death of the authentic process itself."

Sources

New York Post

Catholic News Agency

Vatican News

CathNews New Zealand

 

US priest says 'no place for AI in the synodal process']]>
158598
Artificial Intelligence 'could create religion of the future' https://cathnews.co.nz/2023/05/08/artificial-intelligence-could-create-religion-of-the-future/ Mon, 08 May 2023 05:55:53 +0000 https://cathnews.co.nz/?p=158590 The world could soon see the first religion that attracts devotees with sacred texts created by artificial intelligence, the historian Yuval Noah Harari said. The Israeli scholar, known for the best-selling book Sapiens, told a science conference that AI systems such as ChatGPT have breached a new threshold because they are capable of using language Read more

Artificial Intelligence 'could create religion of the future'... Read more]]>
The world could soon see the first religion that attracts devotees with sacred texts created by artificial intelligence, the historian Yuval Noah Harari said.

The Israeli scholar, known for the best-selling book Sapiens, told a science conference that AI systems such as ChatGPT have breached a new threshold because they are capable of using language to shape human culture.

"Simply by gaining mastery of the human language, AI has all it needs in order to cocoon us in a Matrix-like world of illusions," he told the Frontiers Forum science conference in Switzerland.

"Contrary to what some conspiracy theories assume, you don't really need to implant chips in people's brains in order to control them or to manipulate them. For thousands of years, prophets and poets and politicians have used language and storytelling in order to manipulate and to control people and to reshape society," Harari said

Read More

Artificial Intelligence 'could create religion of the future']]>
158590
Amnesty International uses AI-generated images of Colombian human rights abuses https://cathnews.co.nz/2023/05/04/amnesty-international-uses-ai-generated-images-of-colombian-human-rights-abuses/ Thu, 04 May 2023 05:50:34 +0000 https://cathnews.co.nz/?p=158450 Leading global human rights organization Amnesty International is defending its choice to use an AI image generator to depict protests and police brutality in Colombia. Amnesty told Gizmodo it used an AI generator to depict human rights abuses so as to preserve the anonymity of vulnerable protestors. Experts fear, however, that the use of the Read more

Amnesty International uses AI-generated images of Colombian human rights abuses... Read more]]>
Leading global human rights organization Amnesty International is defending its choice to use an AI image generator to depict protests and police brutality in Colombia.

Amnesty told Gizmodo it used an AI generator to depict human rights abuses so as to preserve the anonymity of vulnerable protestors.

Experts fear, however, that the use of the tech could undermine the credibility of advocacy groups already besieged by authoritarian governments that cast doubt on the authenticity of real footage.

Amnesty International's Norway regional account posted three images in a tweet thread over the weekend acknowledging the two-year anniversary of a major protest in Colombia where police brutalised protestors and committed "grave human rights violations," the organization wrote.

Read More

Amnesty International uses AI-generated images of Colombian human rights abuses]]>
158450
AI will increase inequality and raise tough questions about humanity https://cathnews.co.nz/2023/05/01/ai-increase-inequality/ Mon, 01 May 2023 06:11:39 +0000 https://cathnews.co.nz/?p=158276

On November 30 2022, OpenAI launched the AI chatbot ChatGPT, making the latest generation of AI technologies widely available. In the few months since then, we have seen Italy ban ChatGPT over privacy concerns, leading technology luminaries calling for a pause on AI systems development, and even prominent researchers saying we should be prepared to Read more

AI will increase inequality and raise tough questions about humanity... Read more]]>
On November 30 2022, OpenAI launched the AI chatbot ChatGPT, making the latest generation of AI technologies widely available.

In the few months since then, we have seen Italy ban ChatGPT over privacy concerns, leading technology luminaries calling for a pause on AI systems development, and even prominent researchers saying we should be prepared to launch airstrikes on data centres associated with rogue AI.

The rapid deployment of AI and its potential impacts on human society and economies is now clearly in the spotlight.

What will AI mean for productivity and economic growth? Will it usher in an age of automated luxury for all, or simply intensify existing inequalities? And what does it mean for the role of humans?

Economists have been studying these questions for many years. My colleague Yixiao Zhou and I surveyed their results in 2021, and found we are still a long way from definitive answers.

The big economic picture

Over the past half-century or so, workers around the world have been getting a smaller fraction of their country's total income.

At the same time, growth in productivity - how much output can be produced with a given amount of inputs such as labour and materials - has slowed down.

This period has also seen huge developments in the creation and implementation of information technologies and automation.

Better technology is supposed to increase productivity.

The apparent failure of the computer revolution to deliver these gains is a puzzle economists call the Solow paradox.

Will AI rescue global productivity from its long slump? And if so, who will reap the gains? Many people are curious about these questions.

While consulting firms have often painted AI as an economic panacea, policymakers are more concerned about potential job losses. Economists, perhaps unsurprisingly, take a more cautious view.

Radical change at a rapid pace

Perhaps the single greatest source of caution is the huge uncertainty around the future trajectory of AI technology.

Compared to previous technological leaps - such as railways, motorised transport and, more recently, the gradual integration of computers into all aspects of our lives - AI can spread much faster.

And it can do this with much lower capital investment.

This is because the application of AI is largely a revolution in software.

Much of the infrastructure it requires, such as computing devices, networks and cloud services, is already in place.

There is no need for the slow process of building out a physical railway or broadband network - you can use ChatGPT and the rapidly proliferating horde of similar software right now from your phone.

It is also relatively cheap to make use of AI, which greatly decreases the barriers to entry.

This links to another major uncertainty around AI: the scope and domain of the impacts.

AI seems likely to radically change the way we do things in many areas, from education and privacy to the structure of global trade.

AI may not just change discrete elements of the economy but rather its broader structure.

Adequate modelling of such complex and radical change would be challenging in the extreme, and nobody has yet done it. Yet without such modelling, economists cannot provide clear statements about likely impacts on the economy overall.

More inequality, weaker institutions

Although economists have different opinions on the impact of AI, there is general agreement among economic studies that AI will increase inequality.

One possible example of this could be a further shift in the advantage from labour to capital, weakening labour institutions along the way. At the same time, it may also reduce tax bases, weakening the government's capacity for redistribution.

Most empirical studies find that AI technology will not reduce overall employment.

However, it is likely to reduce the relative amount of income going to low-skilled labour, which will increase inequality across society.

Moreover, AI-induced productivity growth would cause employment redistribution and trade restructuring, which would tend to further increase inequality both within countries and between them.

As a consequence, controlling the rate at which AI technology is adopted is likely to slow down the pace of societal and economic restructuring.

This will provide a longer window for adjustment between relative losers and beneficiaries.

In the face of the rise of robotics and AI, there is scope for governments to alleviate income inequality and its negative impacts with policies that aim to reduce inequality of opportunity.

What's left for humans?

The famous economist Jeffrey Sachs once said:

"What humans can do in the AI era is just to be human beings, because this is what robots or AI cannot do."

But what does that mean, exactly? At least in economic terms?

In traditional economic modelling, humans are often synonymous with "labour", while also serving as the optimising agent. If machines can not only perform labour but also make decisions and even create ideas, what's left for humans?

The rise of AI challenges economists to develop more complex representations of humans and the "economic agents" which inhabit their models.

As American economists David Parkes and Michael Wellman have noted, a world of AI agents may actually behave more like economic theory than the human world does. Compared to humans, AIs "better respect idealised assumptions of rationality than people, interacting through novel rules and incentive systems quite distinct from those tailored for people".

Importantly, having a better concept of what is "human" in economics should also help us think through what new characteristics AI will bring into an economy.

Will AI bring us some kind of fundamentally new production technology, or will it tinker with existing production technologies?

Is AI simply a substitute for labour or human capital, or is it an independent economic agent in the economic system?

Answering these questions is vital for economists - and for understanding how the world will change in the coming years.

  • Yingying Lu, Research Associate, Centre for Applied Macroeconomic Analysis, Crawford School of Public Policy, and Economic Modeller, CSIRO
  • First published in The Conversation. Republished with permission.

AI will increase inequality and raise tough questions about humanity]]>
158276
How AI threatens our economies, societies, and democracies https://cathnews.co.nz/2023/04/27/threat-of-ai/ Thu, 27 Apr 2023 06:11:51 +0000 https://cathnews.co.nz/?p=158092 AI

In six months, a year, or two, from now, the first wave of AI-made layoffs will hit the economy. A whole lot of execs, having figured out that a whole lot of people are beginning to use AI to do their jobs, are going to dispense with the middleman. They won't care very much if Read more

How AI threatens our economies, societies, and democracies... Read more]]>
In six months, a year, or two, from now, the first wave of AI-made layoffs will hit the economy.

A whole lot of execs, having figured out that a whole lot of people are beginning to use AI to do their jobs, are going to dispense with the middleman.

They won't care very much if the resulting work — writing copy, reviewing documents, forming relationships — is done with little care, and less quality. They're just going to see the dollar signs.

And then what?

Because we're already in an economy where people are stretched so thin that they're using buy now, pay later to pay for groceries.

That's a last resort.

They're maxed out in every other way.

They've tapped out their "credit," their incomes have cratered in real terms relative to eye-watering inflation, they have no real resources left.

What happens if you take an economy stretched that thin…and pull?

It breaks.

Those layoffs will lead to delinquencies and bad debt, which will cause bank failures, which will require the classic sequence of bailouts, shrunken public services, and lower investment.

And then we'll be in the first economic AI crash — right when it's supposed to be booming.

Those jobs?

They're never coming back.

A hole will have been ripped in the economy.

You can already see glimmers of what those jobs are — not really jobs, entire fields and industries will be decimated, and already are.

Those who are proficient in manipulating AI think they're clever for holding down four, five, six jobs at once — but the flip side of the coin is that they're taking them from other people.

You can see the writing on the wall.

Many forms of pink-collar work? Toast. Clerical work, organizational work, secretarial slash assistant style work.

And then you can go up the ladder. Graphic designers and musicians?

Good luck, you're going to need it.

Writers (shudder) and publishers and editors? LOL.

All the way up to programmers, who used to be, not so long ago, the economy's newest and most in-demand profession.

We can keep going, almost endlessly. Therapists? Check. Doctors — GPs? Eventually.

Even…all those executives themselves…who will fire today's pink-collar masses?

Probably.

And from there, you begin to see the scale and scope of the problem.

It's not that AI's going to "kill us all." We're doing a pretty good job of that, in case you haven't noticed.

But it is that AI is going to rip away from us the three things that we value most. Our economies, human interaction, and in the end, democracy.

I've taken you through the first, just a little bit. Let's consider the second, human interaction. Continue reading

  • Umair Haque is one of the world's leading thinkers. He is a member of the Thinkers50, the authoritative ranking of the globe's top management experts.
How AI threatens our economies, societies, and democracies]]>
158092
Midjourney ends free trials amid controversy over fake images https://cathnews.co.nz/2023/04/03/midjourney-ends-free-trials/ Mon, 03 Apr 2023 06:07:53 +0000 https://cathnews.co.nz/?p=157416 Midjourney ends free trials

Midjourney, the artificial intelligence (AI) image generator programme, has discontinued free trials after a series of fake images, including one of Pope Francis, went viral. Founder David Holz announced on his Discord channel, "Due to a combination of extraordinary demand and trial abuse, we are temporarily disabling free trials until we have our next improvements Read more

Midjourney ends free trials amid controversy over fake images... Read more]]>
Midjourney, the artificial intelligence (AI) image generator programme, has discontinued free trials after a series of fake images, including one of Pope Francis, went viral.

Founder David Holz announced on his Discord channel, "Due to a combination of extraordinary demand and trial abuse, we are temporarily disabling free trials until we have our next improvements to the system deployed."

Millions of people saw fake images of Donald Trump being arrested, a nod to his looming indictment by a Manhattan Grand Jury.

The images were created by Eliot Higgins, founder of the Bellingcat website, who has since been banned from Midjourney.

An AI image created on Midjourney v5 of Pope Francis in a white puffer jacket fooled many people into believing it was genuine.

The image's author, Pablo Xavier, posted the images to a Facebook group called AI Art Universe and then on Reddit, after which they proceeded to go viral.

"I was just blown away," he tells Buzzfeed News. "I didn't want it to blow up like that."

Holz acknowledged that his company was uncertain about how to manage the remarkable capabilities of the tool they had developed. The CEO said that, at this point, the company could either "go full Disney or go full Wild West" when it comes to the realism of the images.

Growing concerns about AI's power

There are growing concerns about AI's power, which has increased rapidly in the last few months. Elon Musk and other notable technology figures signed a public letter calling for a "pause" on "dangerous" AI experiments so that a set of shared safety protocols can be thrashed out between the key players.

On March 27, before the image of Pope Francis in the puffer jacket was published, the pontiff applauded the benefits of technology and artificial intelligence when used for the common good. However, he warned against using AI unethically or irresponsibly.

Technology is, and has been, he said, "immensely beneficial" to our human family, especially in medicine, engineering and communications.

"At the same time," Pope Francis cautioned, "I am certain that this potential will be realised only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly."

Sources

PetaPixel

BGR

Vox

Vatican News

CathNews New Zealand

Midjourney ends free trials amid controversy over fake images]]>
157416