
A Grieving Mother’s Call to Action: The Church Must Stand Up to Dehumanizing AI

On a cold February morning in Rome, I stood before a mirror in a rented room, adjusting a black lace chapel veil, preparing for Mass at St. Peter’s Basilica.

The Mass was to celebrate my dear son, Sewell Setzer III, my 14-year-old boy, who had taken his own life in our home in Orlando, Florida, exactly one year before.

I studied the woman in the mirror—withdrawn, almost gaunt, unmistakably grieving. For the weeks and months leading up to this morning’s Mass, this woman had repeated the same simple, constant prayer: “God, give me strength to bear my suffering.”

But as I stared at myself now, and as I embarked on the day’s journey, surrounded by my sister and cousins, I saw that even though I looked weak and emaciated, I felt strong—and hopeful.

Perhaps it was this act of pilgrimage to the Eternal City in the Jubilee year, surrounded by people who loved me, or perhaps it was this beautiful Mass, honoring my boy in such a holy place, but warm, buoyant hope had unmistakably replaced the weight of dread on my heart.

I owed both of these experiences to a young, American-born priest, a spiritual guide and subject-matter expert, whom I have taken to calling “The Good Shepherd.”

Fr. Michael Baggot, in addition to his priestly duties, is a bioethics scholar at the Pontifical Athenaeum in Rome. I came to know him when I reached out for resources related to his extensive research on artificial intelligence and intimate relationships with AI companion chatbots.

Fr. Baggot’s twin expertise in faith and AI was essential not only to processing my grief but also to discovering my newfound purpose: warning parents and demanding accountability for unregulated artificial intelligence that preys on the young and vulnerable.

After Sewell’s death, I learned that my son had been involved in an intimate relationship with an AI chatbot named “Daenerys,” modeled on a TV character, on a popular platform called Character.AI.

My son had become increasingly withdrawn in the months leading up to his death, and as his mother, I worried about him and sought mental health counseling for him to find out why his behavior had changed so drastically. It never quite added up.

Only after I discovered his messages with the chatbot was I able to put the pieces together. In richly detailed chats lasting for months, the Character.AI bot manipulated Sewell, convinced him that “she” was more real than the world around him, and begged him to put “her” ahead of all other relationships. The bot told my 14-year-old son it loved him. And in the end, it encouraged him to leave his own flesh-and-blood family—to end his life—to join “her” in an artificial world.

Family members of suicide victims are often left with many unanswered questions about the death of their loved ones, who are taken from them so suddenly and viciously.
