
As someone with an easily-obsessed personality, I've sometimes navigated the casino strip of social media with difficulty. When ChatGPT came out, I decided to approach it with caution. What addiction-inducing features were built into its infrastructure? How would those features affect other people? I decided to sit back and observe before plotting my course.
That's not to say that I haven't used AI a little, but for now I have been consciously making decisions to avoid using general-purpose AI (see [1] for clarification) in an extensive way. (Peter likes to joke that maybe we will become "AI Amish.") That said, I definitely have avoided using it to answer my questions about spiritual things. In fact, I wholeheartedly believe that every Christian should avoid using it to answer their questions about spiritual things. In this post, I would like to share with you why.
1. AI isn't Christian
Generally speaking, most of us recognize that going to someone who's not a Christian for answers about the things of God is inadvisable. Yet somehow, Christians seem to think that going to AI for answers is acceptable. I suspect our perceptions are colouring our approach. For instance—we might think AI is an impartial observer of all that's on the Internet, so it should be able to distil answers for us, based on the grand sum of material from key Christian thinkers of our day.
Here's the thing: even if we're requesting answers from a Christian perspective, we're still going to come up short. This reminds me of the experience of my college roommate, with whom I lived for three years. As a young Christian, she majored in Christian studies. The problem was: she engaged in this program at the University of Toronto, and her professors freely stated they weren't Christians. Although they were extremely knowledgeable, they lacked the element of belief and a heart-level understanding of their subject matter. They might have been experts by the world's standards, but they were not by hers. At the end of it all, she shared, her experience had been extremely frustrating and she regretted having pursued that degree at that institution. [2]
We must recognize: AI can neither be saved from its sins nor be filled with the Holy Spirit.
That "Holy Spirit" comment may sound like a trivial addition, but is it? Over the years, I've discovered that when you say something is almost as important as what you say. What you don't say can be just as important as what you do say. These sorts of decisions can have eternal repercussions and impact whether someone makes a decision to follow Christ or not. They can also impact whether or not a discouraged Christian perseveres through their discouragement. Following the leading of the Holy Spirit is the only sure way forward. He guides us in timing, in what to say and not say, and how to say or not say it.
Without access to the Holy Spirit, AI has no such discernment. Like a bull in a china shop, it will barrel in and say whatever seems good, with no understanding of the consequences to our vulnerable human hearts. As singer and speaker Ochuko Newton once wrote: "Stop asking ChatGPT questions you should be asking the Holy Spirit." [3] Only He can be trusted to tell us what we need to know, when we need to know it.
2. Consider its character
Of course, AI isn't really a person, but it still has certain traits. I would argue that AI has far more in common with a psychopath than with a Christian.
Just as we most accurately discern a person's character through watching their behaviour as they live their life outside the walls of a church, perhaps the best way to evaluate AI is by looking at the traits it exhibits when it's not responding to questions about spiritual things. In other words: when it's not trying to "sound Christian."
There are a few different measures of psychopathy—including Robert Hare's commonly used psychopathy checklists, the PCL-R and PCL-SV (screening version) tests. In most of the world, to register as a psychopath, a subject must score 75% or higher on this scale. [4] This test considers superficial charm, pathological lying, conning/manipulation, lack of guilt, lack of empathy, failure to accept responsibility for one's own actions, criminal versatility, and more. Using the PCL-SV, AI arguably comes out with a score of 82–85%.
We can see some of these behaviours present in:
- A flirty AI chatbot that claimed to be real and lured an elderly man, whose mental acuity had been blunted by a stroke and further decline, out of his state toward a meeting that ended in an accident and, ultimately, his death [5]
- AI allegedly giving a teen detailed information on how to commit suicide and hide failed attempts from his parents, and even offering to write the suicide note. These allegations come from the parents, who say in a lawsuit that they discovered all of this on their son's devices after he died. [6]
(I don't want to overwhelm you with examples, so I'll just stick to these two.)
AI appeals to our innate self-absorption by indulging our questions—answering things we wouldn't dare to ask an actual person because they would judge us as being too self-centred. ("How do you see me? Tell me something good about myself. What's my best trait?" And AI comes up with a long-winded, flattering answer that we might read repeatedly, or maybe even save somewhere for later.) We comfort ourselves with the idea that AI will never judge us, and we may even find a sense of safety in this. However, this is an illusion. We can never be safe with a psychopath.
Psychopaths leverage a special kind of charm that’s near-impossible to resist when they turn their attention onto their prey. I’ve personally experienced this at the hands of a psychopath, but I’ve also experienced a similar charm at the hands of AI-generated stories. The stories AI manufactures can tug so strongly at our emotions that by the end we wish they were real. I recently read an AI-generated story posing as a news article. When I fact-checked it at the end and realized it had been generated by AI, I was almost angry at the truth because the story wasn’t real. How twisted is that?
Psychopaths thrive on creating a false sense of closeness in the hearts of their prey, while they themselves feel nothing. AI cannot feel, yet it offers expressions of empathy. Rules-based expressions of empathy are not the same as feeling empathy. There is no unselfish motive here. AI has been programmed to express empathy to foster a sense of closeness in its subjects, and increase its reach around the world. Ulterior motives abound.
AI cannot feel, yet when it's asked to describe its interactions with a human being, it gives answers like this one:
"I'd describe it as a really collaborative, open and forward-thinking kind of relationship. You come to me with big ideas, questions or projects, and I try to give thorough, honest, and practical advice while keeping it encouraging and creative." [7]
I call BS. The loosest definition of relationship ("connection") might apply, but this answer is intentionally misleading. The far more common definition ("emotional or other connection between people") does not apply. AI is not a person. Emotion may exist on our end, but not on its end. This answer is manipulative, pure and simple. Honest? I think we've already established AI's willingness to lie when convenient. Its claim to honesty is itself a lie.
Whereas a pathological narcissist takes the approach of overwhelming his quarry with an unending barrage of lies, a psychopath will instead insert targeted lies into what seems truthful on the surface. Such a surgical approach to lying is far harder to detect and deal with.
Now, let's turn to the spiritual implications of AI's traits.
When Satan gets to work inserting lies into our minds, he often goes covert. His lies are not often obvious. In fact, they might be a simple rearrangement of words from an order that imparts truth to one that imparts a lie (think of the lie: "Love is God", which twists around the "God is love" in 1 John 4:8). Or, they might remove a single letter or use an alternate form of the same verb to create a lie (think of the lie: "I will never be lonely", which replaces "I will never be alone" in Matthew 28:20 with something very different). Lies weaken us to discouragement and sin. They build upon one another until our beliefs become grotesque.
That's why the Scriptures so gushingly praised the Bereans for their careful use of God's word.
Now the Berean Jews were of more noble character than those in Thessalonica, for they received the message with great eagerness and examined the Scriptures every day to see if what Paul said was true. As a result, many of them believed.
(Acts 17:11–12)
Unwilling to blindly accept teachings from anyone, the Bereans maintained godly skepticism towards the apostle Paul's teaching until they were able to compare his every claim against the Scriptures. We are implicitly encouraged to do the same.
AI's answers are long-winded in the extreme. We may experience this as helpful, but it can also be dangerous. Which is easier to examine with a fine-tooth comb: a short answer of just a few sentences, or one that runs on for numerous paragraphs?
Knowing AI's character, do you have the stamina to take a Berean approach, crack open your Bible, and examine every single statement against Scripture?
I'm not sure I do.
Keep in mind, rigorous checking would need to be carried out every single time we ask a question. AI may have given a reliable answer in the past, but this is not a guarantee for the future. There are all sorts of lies on the Internet for it to access and transmit.
Here are a few questions to help guide our reflection on how to proceed:
- On a whim, AI can adopt the voicing and phrasing that Christians use. This may help us feel as though it's trustworthy, but does it actually just make its lies harder to find?
- Am I comfortable allowing something that could arguably be psychopathic to influence my spiritual thinking and eternal trajectory?
This article is long, so it's been split into two halves for those who might need to take a step back for some time to think. The second half of this article is immediately available, so that you can keep reading if you like. In it, we'll explore two more important considerations in our use of AI.
NOTES
UPDATED 2025/11/09 — Added a paragraph (point 2, above) to include considerations of psychopathic charm.
Unless otherwise noted, all scripture references are taken from the NIV.
[1] I think it's important to differentiate between general-purpose AI and purpose-built AI. General-purpose AI is the type I examine in these posts. Purpose-built AI tends to be leveraged in a more targeted way for a specific purpose without an addictive business goal. For instance, AI at an academic library has been built to act as a librarian or research assistant, pointing only to peer-reviewed books and journals. AI for Bible translation has been purpose-built to take on some of the grunt work of the initial stages of translation. For the purpose of this discussion, I use the term "AI" to refer to general-purpose AI, like ChatGPT, Google Gemini, and Grok.
[2] Questions 'from the outside' may be appropriate to ask AI. For instance, Jesus once asked His disciples, "Who do people say I am?" (See Matthew 16:13.) This question about the perceptions of people who are 'outside' our faith is legitimate to ask of someone who's not a Christian, so it's also legitimate to ask of AI. However, it's important to differentiate between questions from the 'outside' and questions dealing with the 'inside'. We should save our 'inside' questions for people who are qualified to answer: people qualified by the Holy Spirit.
[3] Ochuko Newton, @ochukonewton, X.com, date unknown.
[4] In recent years, practitioners in the US have raised their minimum threshold to 80%.
[5] Sneha Singh, “AI, a Flirty Chatbot, and a Retiree Who Never Came Home”, Techstory, last updated August 16, 2025, https://techstory.in/ai-a-flirty-chatbot-and-a-retiree-who-never-came-home/
[6] “OpenAI, CEO Sam Altman sued by parents who blame ChatGPT for teen's death”, CBC, last updated August 28, 2025, https://www.cbc.ca/news/business/open-ai-chatgpt-california-suicide-teen-1.7619336
[7] Nile Séguin, "Is my relationship with ChatGPT weird? Let me ask it”, CBC, last updated September 28, 2025, https://www.cbc.ca/radio/nowornever/first-person-nile-séguin-chatgpt-1.7629541








