I’m Afraid – NHSEB 2022-2023 Regionals Case 1 Analysis: Artificial Intelligence Connections to Early Human Life

NHSEB Case 1 invites teams to think about the moral implications of creating sentient artificial life, and proposes a standard of cautious protectionism. It’s a cool topic on its own, but I’m noticing unexpected parallels with early human life that could inform an interesting all-things-considered view or inspire a nice judges’ Q&A question. And with NHSEBAcademy hosting a discussion with one of the case’s quoted philosophers this Thursday (click here to chat with Dr. Sebo live), now’s a great time to think harder about this case.

When, if ever, A.I. neural networks will become sophisticated enough to generate conscious awareness is unknown. We have enough trouble explaining ordinary consciousness, and what would constitute clear evidence of artificial sentience is even more contested. It's also unclear whether consciousness can be replicated apart from organic material at all. Just as a genuine fire cannot be replicated in a computer simulation (no matter how fancy the algorithm, 1s and 0s modeling fire do not constitute actual fire), perhaps consciousness operates similarly, forever precluding non-organic minds.

However, since we have a prima facie obligation to consider the interests of any entity capable of suffering, perhaps we should assume certain advanced A.I. is sentient in order to avoid facilitating great pain. Or so the NHSEB Case Committee suggests. They quote philosopher Jeff Sebo as arguing that "turning an A.I. off [and beforehand causing it to dread its death] can be wrong even if the risk of the A.I. being sentient is low… we should extend moral consideration to A.I.s not when A.I.s are definitely sentient or even probably sentient, but rather when they have a non-negligible chance of being sentient, given the evidence." The writers go on to infer that the implicit moral principle "is that creating something with the capacity for sentience would also mean we created something that deserves moral consideration." This seems uncontroversial enough. If there's a credible risk that Action A may harm a being capable of sentience, that's at the very least a reason to reconsider Action A.

Philosopher John McClellan once informally argued for similar caution on a completely different topic. Imagine that you're hunting deer and hear a rustling sound in a bush. It might be a deer, but it might also be another hunter. Since killing a person would be a great moral wrong, we should of course await visual confirmation that it's a deer before shooting. Well, McClellan argued that if we agree it would be immoral to shoot into a bush when there's a reasonable risk that doing so might kill a person, we should apply similar logic to the status of Unborn Developing Humans (UDHs) when thinking about the morality of abortion. While some argue that UDHs are morally insignificant, others argue they possess great moral worth for a variety of reasons, such as their unique capacity to develop into a full person and, later in pregnancy, their possession of several features of personhood, including conscious awareness. McClellan argued that so long as such reasons (or others) are sufficient to generate a non-negligible risk that UDHs are morally significant, abortion is extremely morally risky and thus only justifiable, if ever, in the most extreme circumstances (e.g., when the mother's life is in danger).

How does Sebo's standard that something with "the capacity for sentience… deserves moral consideration" relate to McClellan's standard that when there's a risk of destroying something with high moral value, we should err on the side of caution? How should our judgments about the treatment of potentially sentient A.I. inform and mesh with our judgments about the treatment of Unborn Developing Humans – entities that definitely possess the capacity for eventual sentience and, in the later stages of gestation, already are sentient? For one, maybe logical consistency demands that if we argue in favor of caution when it comes to possibly sentient A.I., we should adopt similar caution when dealing with Unborn Developing Humans.

Agree? Or do you see relevant differences that would justify treating one with more respect than the other? Either way, considering this angle should enrich a team's overall understanding, and could also serve as a fantastic judge's question. And if you think the case is cool and would like to discuss it with Dr. Sebo himself, be sure to take advantage of the town hall event happening this Thursday, December 15th at 7 EST. Attendance is free, but pre-registration is required. Click here for more info.