2026 Conference on Medicine and Religion

AI and Moral Skill in Medicine: Toward An Apologia for Sympathizing with Human Weakness
Ethan Schimmoeller, MD, Riverside Methodist Hospital

Already in the 1960s, electronic medical record pioneer Warner Slack of Harvard was asked whether computers might one day replace physicians, provoking the following aphorism in response: “Any doctor who can be replaced by a computer deserves to be replaced by a computer.” Though Slack meant to underscore the rich range of cognitive, affective, and relational skills embodied as physician meets patient, capacities technologically unimaginable at the time, doctors today worry they may become another line of white-collar work rendered redundant by Silicon Valley. Indeed, emerging empirical evidence suggests doctors are already experiencing technical deskilling through their use of AI. One recent study showed that physicians’ skills declined after only three months of using an AI tool to aid in recognizing pre-cancerous polyps during colonoscopies. What remains invisible to tech enthusiasts and doomers alike, however, are the moral skills at stake in tandem with technical skills.
Virtue theorist Shannon Vallor presents precisely this problem: moral character is always ambiguous, awaiting formation through our modes of engagement with technology, for good or for ill. Examining closely the truism that modern technical progress does not equal, and in fact often obfuscates, moral progress, she makes the case that handing over to technology the activities necessary to form moral skills and virtue risks moral deskilling. For example, emerging AI tools aiming to support or optimize pain management may limit human opportunities to cultivate care as a moral skill in addition to a technical skill. Indeed, the core competencies of hospice and palliative medicine physicians are at least as moral as they are technical: addressing suffering and distress, caring for the imminently dying, communicating in a way that attends to emotion, and prognosticating each embody the moral skill of care. The real question about an algorithm, in other words, is less whether it is good or bad than whether it forms or deforms us. A care-less doctor deserves to be replaced by a computer. Today, AI serves as a mirror in which we behold ourselves, or at least a very particular version of ourselves, reflecting the narratives we tell about ourselves and forcing us to ask what conditions must obtain to render the spectre of care-less medicine plausible.

This paper will follow Vallor’s virtue analysis from the moral surface of technology to the anthropological and cultural tides moving our creation of, and engagement with, AI in medicine, focusing on pain management in palliative medicine as representative of medicine as a whole. Additionally, this paper will question whether virtue theory, especially as related to the care ethics implicit in palliative medicine, can save medicine from AI, or whether virtue rather requires a more robust, specifically Christian image of what it means to be human, one that can speak prophetically to emerging technologies. “For we do not have a high priest who is unable to sympathize with our weaknesses, but we have one who in every respect has been tested as we are, yet without sin” (Hebrews 4:15).