The study shows lower acceptance of euthanasia decisions made by AI


The role of AI in medical decision making elicits different responses from people than the same role played by human doctors. A new study used stories describing medical cases to examine the situations in which acceptance differs, and why.

People are less accepting of euthanasia decisions made by robots and AI than of those made by human doctors, a new study finds. The international study, led by the University of Turku in Finland, examined people's moral judgments about end-of-life care decisions made by AI and robots for patients in comas. The research team conducted the study in Finland, the Czech Republic, and the United Kingdom by presenting research subjects with stories that described medical cases.

The project's lead investigator, university lecturer Michael Laakasuo of the University of Turku, explains that the phenomenon in which humans hold some decisions made by AI and robots to a higher standard than similar decisions made by humans is called the human-robot moral judgment asymmetry effect.

"However, it is still a scientific mystery which decisions and situations give rise to the moral judgment asymmetry effect. Our team examined various situational factors related to the emergence of this phenomenon and to the acceptance of moral decisions."

Michael Laakasuo, University of Turku

Humans are seen as more competent decision makers

According to the research results, the asymmetry appeared when people evaluated euthanasia decisions made by an AI or a robot: these were accepted less readily than the same decisions made by a human doctor, regardless of whether the machine acted in an advisory role or as the actual decision maker. When the decision was instead to keep the patient on life support, there was no judgment asymmetry between human and AI decisions. In general, however, research subjects favored decisions to turn off life support over decisions to continue it.

The difference in acceptance between human and AI decision makers disappeared in situations where the patient in the story was awake and had personally requested euthanasia by lethal injection.

The research team also found that the moral judgment asymmetry is driven, at least in part, by people perceiving AI as a less competent decision-maker than humans.

“AI’s ability to explain and justify its decisions has been viewed as limited, which may explain why people are less likely to embrace AI in clinical roles.”

Experience with AI plays an important role

According to Laakasuo, the results suggest that patient autonomy is crucial when applying AI in healthcare.

"Our research illuminates the complex nature of moral judgments when considering AI decision-making in medical care. People perceive AI's involvement in decision-making very differently compared to when a human is in charge," he says.

"The implications of this research are significant as the role of AI in our society and medical care expands every day. It is important to understand the experiences and reactions of ordinary people so that future systems can be perceived as morally acceptable."


Journal reference:

Laakasuo, M., et al. (2025). Moral psychological exploration of the asymmetry effect in AI-assisted euthanasia decisions. Cognition. https://doi.org/10.1016/j.cognition.2025.106177