2025-2026 NHSEB Regional Case 11 Calling Dr. Alexa

With strong similarities to last season's "My Pal Hal," NHSEB regional case 11 is about using AI for psychological support. In "Calling Dr. Alexa," high school senior Grace uses an AI therapy app to manage the stress of high-pressure studies and the drama of navigating her transition into adulthood. While seeing a personal psychologist would be nice, the app is available on-demand 24 hours a day and is much more affordable, so Grace uses it regularly. She feels like it's helping, but worries the guidance she's receiving might be cookie-cutter slop, and also that the intimate details she shares could one day be exposed.

There are many angles a team could take in analyzing this one, but there's some overlap with my team's thoughts on case 5. "Grade Expectations" is about educators leveraging AI for lesson prep and even grading, which might be hypocritical if they've banned student AI use, or simply less effective than fully human teaching. When we discussed it, my team thought there was something morally relevant about students' shared desire to have their schoolwork thoughtfully engaged by another human mind, as opposed to an unthinking algorithm. However, what if algorithms were proven to achieve better learning outcomes in terms of higher test scores? The third discussion question for "Calling Dr. Alexa" broaches this "better outcomes using AI than humans" possibility.

Discussion Question 3: If an AI system reliably outperforms average therapists on key outcomes, is there still a moral reason to prefer human care for some patients?

In crafting this question, the case committee thoughtfully included the phrases "moral reason" and "some patients" to ensure teams think through a finely nuanced answer. But it also points to an important consideration when it comes to preferring human over AI labor in many contexts. Currently, it's probably not the case that AI does a better job than average human therapists. But as AI improves, that could soon change, assuming we could agree on standards (patient self-reports of contentment, reduced need for prescriptions, etc.). And if or when that time comes, would we still have a moral reason to prefer human care for some patients? Likewise, if an AI system reliably outperformed average educators on key outcomes, would we reach the same conclusion? What about human engineers, human clergy, human politicians?

Happy discussing! Below is a superb study guide from superstar coach Michael Andersen, generously shared with the global Ethics Bowl community.