Angels, Demons, AI, and Reciprocity in Healthcare Ethics
Dallas Gingles, PhD, MTS, Southern Methodist University
This paper joins the ever-expanding conversation about generative AI, but it does so by focusing strictly on the task of moral reasoning and by drawing attention to an unlikely analog for AI in that task.
The best analog for AI in the broad theological/philosophical tradition is angels and demons: non-bodily, superintelligent creatures with an under-defined agential status. But according to that tradition, angels and demons do not reason morally. They either perfectly will what God wills for them or utterly will against God's will. If the analogy holds, outsourcing human moral agency to AI is a category error: even if AI is something like "superintelligent," it cannot reason morally.
This comparison draws attention to an even more important argument. At the core (or very near the core) of most great moral traditions is reciprocity (e.g., the Golden Rule). That reciprocity is possible because of our shared finitude, not only because of our shared rationality. Finitude is especially important in moral reasoning in healthcare, where we entrust ourselves to other finite creatures not least because we know that they intimately understand the terror of finitude. A primary example is the question patients frequently ask doctors first: "If you were in my shoes, what would you do?" Outsourcing moral deliberation to AI, then, is not only a category mistake; it threatens the very core of morality, because there is no reciprocity between AI and human creatures. Relations of trust, especially between patients and practitioners, rest not exclusively, and perhaps not even primarily, on the superior intelligence of the practitioner, but on our ability to see our own finitude and fragility reflected in the practitioner.