Think AI is a modern concept? Think again. According to Chad Seales, an associate professor of religious studies at UT Austin, AI might just be the age-old idea of God in new clothing.
It’s not that we literally worship artificial intelligence, Seales says, but he argues AI is beginning to play a well-trodden role in steering our lives, replacing individual judgment with algorithmic guidance.
“AI reproduces a lot of the Protestant theological problems, particularly around the Reformation,” says Seales. “Those mainly have to do with Calvinistic notions of predestination: How do you know you’re saved? How do we have agency as humans if God has already predetermined everything?”
He argues these same questions are being repeated with the introduction of AI into our daily lives. As we do our online shopping and listen to our Spotify stations, we’re fed content by AI-guided algorithms that present us with the options we are already most likely to choose rather than the full array of clothing or music available. We feel like we’re making choices, but AI is really behind the scenes shaping our destiny. Sound familiar?
Religion has also historically steered believers toward a narrow set of outcomes, explains Seales. Clergy or doctrine present a set menu of possible paths, and believers then choose how to act from those options.
This hidden influence isn’t the only overlap Seales sees between AI and millennia-old religious practices. Both, he says, can also serve to explain, address, and even reinforce biases in people’s daily lives.
Take one of the major issues theologians have wrestled with over the ages, the question of God’s justice in a world filled with suffering and unfairness. Calvinism, a particular form of Protestant Christianity, reconciles God’s justice to the suffering of life by leaving it to God to sort out who will earn salvation. Meanwhile, in the secular sphere, AI is increasingly being introduced into many of our social systems with the promise that it will address social inequality in housing, healthcare, education and more. The mechanism may be different, Seales says, but the impulse is similar.
“We knew already that zip code really shapes destiny, and where you live affects access to food and all those things, so I was really fascinated with how AI was playing this role that was introduced as a way of getting bias out of the system, when bias is already built in,” he says.
The ethical question Seales sees in our relationships with both God and AI, whether Calvinism or social-justice-by-algorithm, is whether human agency matters. Does it really matter what choices you make if you’re already being steered toward a small number of possible outcomes? If what you do doesn’t affect whether you are saved, or if your personal goodness has no bearing on whether you have access to fresh produce or affordable housing, then by what measure are we granted access to the good life? By our own merits, or by the decisions of an unknowable entity, whether that be God or AI?
Seales explores many of these arguments in his new undergraduate course, “God and AI,” which will start with the Protestant Reformation. By removing the hierarchies of the Roman Catholic Church, Reformation teachings sought to give Christians direct access to God. “But that freedom actually just dispersed the possibilities of authority and control into more areas of life,” argues Seales. “Rather than the authority of the Church being visible and seen, it became internalized in the believer, who could actually have less freedom than was promised.” And by making believers responsible for their own salvation, without the Church as intermediary, the Reformation and the Protestant traditions that followed both promised greater individual freedom and caused a lot of individual anxiety.
AI makes a lot of the same promises of individual agency and unfettered access to information. ChatGPT promises to help you write letters and emails, offering a kind of newfound freedom that’s just for you, but it also limits the range of possibilities a person might otherwise be able to conceive of.
Seales says he’s already seeing AI’s impact on his students’ views of religion. He’s been particularly struck by their willingness to outsource their own emotions to AI. For example, he teaches a class on ethical food systems in which students read Jonathan Safran Foer’s Eating Animals. In the book, to show that our concepts about eating animals are culturally constructed, Foer asks why we don’t eat dogs. Objectively, he explains, it would make sense. So Seales posed the same question to his students: Would they eat a dog?
“I thought they would naturally respond ‘no way,’ like ‘the dog’s my best friend,’ right?” he says. “But half of the class typed the question into ChatGPT: ‘Would I eat a dog?’ It really put me over the edge.” For their own individual feelings — their human reactions — Seales’ students deferred to something non-human. They asked something outside of themselves to tell them how they feel.
This “How should I feel about this?” example strikes Seales as strangely religious and reminds him of his own evangelical upbringing, where people in his community might ask the minister “How should I feel about this particular issue?” and the minister would say, “God wants you to feel this way. He wants you to feel ABC, but he doesn’t want you to do XYZ.” Now, Seales says, students are asking AI the same questions in the same way.
In both cases, Seales explains, the individual sets aside their human instincts in deference to something else — some form of sovereignty or authority: “Tell me how I should feel in the midst of my uncertainty.” Not all students are using AI in this way, of course; some of them just use ChatGPT as a tool to help write essays. But in Seales’ class on food ethics, many students were using AI as a substitute for their own selves, asking a chat program to tell them who they are.
This use of AI as a kind of outsourced conscience is one that Seales wants to examine more closely in his “God and AI” course, but it’s also part of his argument for teaching religious studies more generally. The way we educate students currently separates religion out from other topics, he says, instead of drawing connections between religion and other fields or phenomena. If those topics were taught together, students would likely walk away with a valuable new perspective.
“It’s important to help students understand that we can study religion from sociological, anthropological lenses,” Seales says. “A lot of the stuff that happens in popular culture we can study as religion. And once you study it as religion, you can see the world in a different way and understand how forces that are bigger than and outside of us are really shaping and organizing our lives in unseen and hidden ways.”
Seales hopes that studying these parallels further can help us understand that even in areas of society supposedly free of religion, we can’t escape these basic religious questions. What do we hand over of ourselves when we defer to authority, to sovereignty, or to someone else? Is this type of unseen predictive technology something we, as individuals in society, want? What are the interests of the people who promote these technologies, and do they serve us? Knowing the historical context of how these technologies developed, and who promotes them, can give us a broader sense of how we got here and where we want to go, both as individuals and together.
“Ultimately, being curious about what we can learn by using the language of religious studies will open up new ways to think about AI in terms of ethics, culture, and society,” Seales says. “Religion is not just morally good and bad. It gives us complex ways to look at the world. Religion, and now AI, functions to create the societies we live in. I usually leave it to students to determine: Is this the world you want to live in?”
