PsyPost

Remarkable study examines the impact of real-life “robot preachers”

by Vladimir Hedrih
February 24, 2024
(Photo credit: OpenAI's DALL·E)

A trio of studies, one conducted in a recently automated Buddhist temple, another in a Taoist temple, and a third online, revealed that religious followers perceive robot preachers, and the institutions employing them, as less credible than their human counterparts. This perception stems from the belief that robots lack minds, which diminishes their credibility as preachers. The research was published in the Journal of Experimental Psychology: General.

The past decade has seen a great increase in the capabilities of automated and artificial intelligence (AI) systems. This has begun to transform the global society and economy to such an extent that many believe we are witnessing the start of an AI revolution that will completely change how we work and live.

Artificial intelligence systems are now used to great effect in many domains traditionally considered exclusively human. While automated systems have been used in manufacturing and gaming for decades, recent advancements have enabled AI systems to enter areas such as medicine, journalism, psychotherapy, and even prostitution. Many studies show that people find it very hard, or even impossible, to distinguish art created by AI from art made by humans through mere observation.

Recently, attempts have been made to have robots take on the roles of religious professionals. Mindar, a robot designed to look like the Buddhist deity of mercy, made headlines in 2019 after it began giving religious services at a temple in Japan. While still rare, robots have been installed to perform similar roles in several Christian churches of different denominations.

In their new study, Joshua Conrad Jackson of the University of Chicago and his colleagues hypothesized that although robot preachers may effectively convey religious content, they may struggle with credibility. Robots are perceived as lacking a mind, devoid of the ability to feel, understand, think, or make decisions. Consequently, robots are seen as incapable of genuinely believing in supernatural entities, failing to demonstrate the deep commitment to faith expected of religious professionals. To explore this, the researchers conducted three studies.

In the first study, the researchers surveyed individuals leaving the Kodaiji Temple after a sermon. Kodaiji is a large Buddhist temple in Kyoto's Higashiyama district in Japan, where the famous robot preacher Mindar was installed. Jackson and his colleagues surveyed individuals after sermons given by Mindar and after sermons given by a human preacher; the two delivered their sermons in different buildings. During the six-week observation period, the study authors surveyed 498 participants. Their average age was 46 years, and 228 were women.

The researchers offered 1000 yen (just under $7) to prospective survey participants. Once they accepted, participants were given the option to donate a portion of this money to the temple. The amount donated served as an indicator of their religious commitment. Additionally, participants rated the preacher's credibility (e.g., "The robot [human] priest acts as a good religious role model") and completed an assessment of their religious beliefs and moral values.

The second study was an experiment conducted in a Taoist temple in Singapore, where the researchers randomly assigned sermons to be delivered either by a robot or a human preacher. A total of 239 temple visitors participated in the study.


The researchers assessed participants' religious commitment and had them rate the credibility of the preacher using the same measures as in study 1. Additionally, these individuals completed an assessment of perceptions of robots' minds, i.e., their capacity for experience (e.g., "Robots can have desires") and their agency (e.g., "Robots can think").

Study 3 was an online experiment in which participants were told that a sermon they were shown was either generated by an advanced AI program or written by a human preacher. Participants were 300 Amazon MTurk workers who read an excerpt from a sermon allegedly written by an AI or a human. Afterward, they completed an assessment of religious commitment (e.g., "I would consider donating money to my church"; "I would consider telling strangers to join my place of worship").

The participants also rated the credibility of the alleged author of the sermon; the author's perceived possession of a mind (e.g., "The person [AI] who wrote this sermon is probably capable of thinking and planning"; "The person [AI] who wrote this sermon is probably capable of hunger and thirst"); the author's charisma and likability ("The person [AI] who wrote this sermon is probably likable"; "The person [AI] who wrote this sermon is probably charismatic"); the credibility of the person who trained the robot; and the extent to which they believed AI could be human-like (AI anthropomorphism).

Results of study 1 showed that participants who viewed Mindar were less likely to believe in God than participants who viewed a human preacher. They also donated, on average, only 26% of their compensation to the temple, compared with 44% donated by those who saw a human preacher. Participants also found Mindar substantially less credible than a human preacher. Further analysis showed that whether the preacher was a robot or a human was associated with whether individuals donated to the temple at all, but not with how much they donated.

Results of study 2 largely mirrored those of the first study. Participants rated the robot preacher as less credible than the human preacher, and this extended to the credibility of the temple as a whole, which was rated lower after participants attended a robot-led sermon. Participants donated less after a robot-led sermon in this study as well.

Results of study 3 confirmed the previous findings: AI authors of sermons were perceived as less credible, and as having less mind, than human authors. Participants also reported less religious commitment after reading the AI-attributed sermon.

The study authors tested a statistical model proposing that information about the nature of the sermon's alleged author shaped perceptions of the author's mind; these perceptions determined the credibility assigned to the author, which in turn influenced participants' religious commitment. The data were consistent with this model.

Another model suggested that perceptions of the sermon author’s mental capacity influenced their likability and credibility, which then enhanced religious commitment. The data supported this relationship as well.
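The chain of relationships tested in these models is a serial mediation: the effect of preacher type on commitment is carried through intermediate variables, and the indirect effect is the product of the regression slopes along the path. As a toy illustration with simulated data (this is not the authors' actual analysis, and all coefficients below are made up for demonstration), the logic can be sketched as:

```python
import numpy as np

# Hypothetical simulation of the mediation chain described above:
# preacher type (robot=0, human=1) -> perceived mind -> credibility -> commitment.
rng = np.random.default_rng(0)
n = 500
human = rng.integers(0, 2, n)                 # 0 = robot, 1 = human preacher
mind = 0.8 * human + rng.normal(0, 1, n)      # humans perceived as having more mind
cred = 0.7 * mind + rng.normal(0, 1, n)       # mind perception drives credibility
commit = 0.6 * cred + rng.normal(0, 1, n)     # credibility drives commitment

def beta(x, y):
    """Slope from a simple least-squares regression of y on x."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Indirect (mediated) effect: product of the path coefficients.
indirect = beta(human, mind) * beta(mind, cred) * beta(cred, commit)
# Total effect: direct regression of commitment on preacher type.
total = beta(human, commit)
print(f"indirect effect ~ {indirect:.2f}, total effect ~ {total:.2f}")
```

In a full mediation analysis the indirect effect would be tested with bootstrapped confidence intervals rather than read off point estimates, but the product-of-slopes structure is the core idea.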

“Our research reveals how recent insights from psychological research on social learning and cultural transmission can help predict which occupations can be successfully automated, and which need remain human. Domains like religion, which rely on agents modeling their epistemic and moral commitment to belief systems and each other, may not be easily outsourced to robots,” the study authors concluded.

The study makes an important contribution to the scientific study of how humans perceive AI agents. However, it also has limitations that need to be taken into account. Notably, the study authors did not explore conditions under which robots could be made more credible or likable as religious figures. It is possible that the results do not apply to all robots, and that modifying certain characteristics of robots, or the context of their use, might lead to different results.

The paper, “Exposure to Robot Preachers Undermines Religious Commitment,” was authored by Joshua Conrad Jackson, Kai Chi Yam, Pok Man Tang, and Ting Liu.

(c) PsyPost Media Inc
