2025-2026 NHSEB Regionals Case 6: Mission Admission

If you do something because it helps others, but also because it helps you, does that dilute the praiseworthiness of the action? In other words, are more selfless acts morally better? On the other hand, could pursuing good for others + good for you actually amplify an action’s praiseworthiness – make it a better action overall? Or would your whys have little impact on an action’s praiseworthiness? Perhaps outcomes are all that matter – intentions be darned?

Why all the questions? Because NHSEB case 6 is about 17-year-old college hopeful Erin, who founds a nonprofit to spread literacy partly because it will help others, but also because it will look really good on her college admission applications.

There’s intuitive appeal in Erin acting because it will help others. But it’s hard to blame her for also wanting to improve her chances of getting into the college of her choice. All things considered, we probably wouldn’t criticize Erin for helping to cultivate her community’s love of reading. But if we had reason to think 95% of her motive was to get into Yale and only 5% was to promote literacy, we’d probably think less of her than if those percentages were reversed. The questions are: how much less would we think of her, why, and how should our judgments about Erin influence the motives that we ourselves suppress or nurture in our own decision-making?

As you begin to think about the specifics of Erin’s case (always read the specifics), as well as related areas good judges might ask you to tackle, consider the “Very Helpful (But Optional) Resources for Further Exploration” section in coach Michael Andersen’s excellent study guide below – and indeed the entire guide. Thanks as always for the awesome study guide, coach Michael!

2025-2026 NHSEB Regionals Case 8 and MSEB Regionals Case 3 Fido as Feed

Case 8 in the high school set and case 3 in the middle school set (the same case: “Fido as Feed”) invite teams to weigh the nourishment of zoo animals against the emotions of pet owners. Or, at least, that’s one way to frame the actions of a zoo in Denmark that invited donations of “unwanted but otherwise healthy animals” to be used as food for its carnivores.

The idea is to allow the zoo animals to enjoy the whole meal – fur, organs, bones, and all – as they would in the wild. And zoo officials specified that they weren’t requesting cats or dogs, which was probably a smart PR move, but rather “chickens, rabbits, or guinea pigs,” as well as horses. The Ethics Bowl case doesn’t mention horses, but that species was indeed included in the zoo’s request.

My high school team said this case made them sad. And they did indeed look and sound sad while discussing it. But it also gave them a chance to think about how we treat similar species differently. The same person who provides a cushy indoor life for their beloved cat might add bacon to their cheeseburger without a second thought. And when it comes to how that cat might be treated as it nears the end of its life, it seems a person would either have to be very callous or very enlightened to volunteer it to be ripped apart by a tiger, even with reassurances that it would be humanely euthanized first.

I sense fruitful connections to how we treat the cadavers of people who donate their bodies to science. But before making that leap, check out coach Michael’s excellent-as-always study guide below, tailored to work for either a high school or a middle school Ethics Bowl audience. Enjoy!

2025-2026 NHSEB Regional Case 2 Paving the Way

Should a public park trail be paved to make it easier for people in wheelchairs to navigate, even though this might harm nearby plants and exacerbate erosion? NHSEB regional case 2 essentially pits respect for the natural habitat against improving accessibility for humans. However, there may be technological solutions that could balance both.

As your team thinks about this one and works through coach Michael Andersen’s study guide below, consider searching for “sustainable” trail options that might protect wildlife and foliage while simultaneously improving humans’ ability to enjoy nature.

However, be sure to seriously engage the case’s discussion questions, too, because even if a crushed stone or reclaimed wood trail might solve the immediate problem, Bowl organizers may very well pose a competition question that asks teams to balance human and nonhuman interests more generally (see coach Michael’s recommended video #1, Whose Life is More Valuable? for guidance). This is true for all Ethics Bowl cases. Good teams should always be ready to pivot into nearby philosophical territory, for if an initial question doesn’t stray from the case details, judge Q&A probably will.

Bowls Behind Bars

Several cases this IEB and NHSEB season involve treatment of incarcerated persons – whether prisoners’ religious dietary needs should be accommodated, whether they should be allowed to trade organs or bone marrow for reduced sentences, at what age (if any) life without the possibility of parole might be a just punishment. It would be understandable for teams with little experience with the prison system to base their judgments on what they’ve learned from movies and television, or to think only about criminals’ victims. So, here are two resources to help expand their empathy and enhance their views – a remarkable video of incarcerated students actually doing Ethics Bowl, and an excerpt from Ethics Bowl to the Rescue! chapter 12: Bowls Behind Bars.

One place you might not expect to find Ethics Bowls is in prisons. Then again, there was once a somewhat famous philosopher who did some of his best work while behind bars. We know this because conversations with friends who came to visit were later published. One friend tried to convince him to escape, even offering to help, which led to a discussion on the nature of justice and citizens’ duties.

On the final day, talk turned to logical arguments concerning the immortality of the soul. The imprisoned philosopher concluded that our soul most likely does survive bodily death, which might have made his ultimate sentence a little easier to bear. Anyway, you may have heard of him—Socrates?

While Socrates’s dialogues with Crito, Phaedo, Simmias, and others may not have constituted an Ethics Bowl, Ethics Bowls have been held in prisons in at least five U.S. states. And as you might imagine, they’re an opportunity not only to enhance moral reasoning, but to humanize, and to teach empathy and compassion, for all involved.

San Quentin Pioneers

In the first known instance, University of California Santa Cruz philosophy professor, IEB coach, and Northern California HSEB organizer Kyle Robertson coached a group of students at San Quentin State Prison (later renamed San Quentin Rehabilitation Center) in late 2017, then brought his IEB team to hold a friendly match in early 2018. Writing for UC Santa Cruz, Scott Rappaport covered the event, as well as the background leading up to it.

Twice a month from last September to February, UC Santa Cruz philosophy lecturer Kyle Robertson woke up early, dropped his kids off at school, drove north for one hour and fifty minutes, crossed the Richmond Bridge, and went to San Quentin.

He would park in the prison lot, walk past a gift shop selling art created by death row inmates, and enter the main gate, where he would sign in at the first of three consecutive checkpoints. Finally entering the prison yard, he would walk past prisoners playing on the basketball courts and others engaged in games of chess, to get to the education center of the prison.

Robertson was there to teach a course in Ethics Bowl—a non-confrontational alternative to the traditional competitive form of debate—in collaboration with the Prison University Project (PUP). At the same time, he was also teaching an undergraduate course and coaching a team in Ethics Bowl at UC Santa Cruz. He soon suggested and arranged a very unusual debate between seven philosophy students from UC Santa Cruz and a team of prison inmates from San Quentin. It took place in the prison chapel—in front of an audience of nearly 100 inmates. [1]

UC Santa Cruz IEB team member Pedro Enriquez was there that day. He was a junior at the time and recalled his initial unease.

I thought it was going to be a lot more like the movies where they’re locked down, and you know, they’re going to be hollering or whatever. So when we walked in after we passed the security and they were just walking around, I was like, “Wait, is anybody gonna do anything? Like, where are all the cops? What if they do something?”[2]

Enriquez and his teammates quickly realized they were safe. And apart from an interruption for a mandatory headcount, the rounds progressed as usual. The San Quentin team took the trophy, the UC Santa Cruz IEB team returned the next year, and word soon spread.

Contagious Compassion

Among the judges that day was none other than Ethics Bowl creator Bob Ladenson, who had moved to California to be closer to his grandkids after retiring from the Illinois Institute of Technology. At his side was the IEB director at the time, professor Richard Greene from Weber State University in Utah. Greene spoke with many of the imprisoned students and was so impressed by their seriousness and dedication that he worked with Rachel Robison-Greene of Utah State University to found a similar program in Utah. By the spring of 2020, they had an Ethics Bowl class in both the men’s and women’s state prisons.

COVID derailed their efforts temporarily. But they restarted in 2023, and after an eight-week class, two Utah IEB teams, one from Weber State and another from Utah State, visited for a friendly at the women’s facility. Greene had nothing but good things to say about the event, as well as his experience working with the students… [continued with sections on Ethics Bowl in prisons in Washington, Maryland, and Massachusetts].


[1] “How to Find Truth in Today’s Partisan World” by Scott Rappaport for UC Santa Cruz’s Center for Public Philosophy, reports.news.ucsc.edu/ethics-bowl

[2] Ibid.

2025-2026 NHSEB Regional Case 1 Whose Germline is it Anyway?

NHSEB regional case 1, “Whose Germline is it Anyway?” (also included in Oregon’s MSEB cases) invites us to consider the permissibility of editing human genes in heritable ways. It’s one thing when the health risks of CRISPR gene editing would only directly impact an autonomous, volunteering adult. It’s another when we’re editing the genes of the unborn. And it’s yet another matter when the edits could be passed to offspring and incorporated into the broader human gene pool.

When my team discussed this case, they worried about the unknown health risks, but thought those could be overridden when the ailment being addressed was sufficiently severe. However, given that this is case #1, and so the first they discussed, they may change their minds as we return to it.

And thanks to coach Michael Andersen’s excellent study guide below, they’ll have a lot more to think about this time around!

2025-2026 NHSEB Regional Case 5 Grade Expectations

I sometimes use AI to plan my philosophy classes. Should I feel guilty? Should I disclose it to my students? Should I stop?

NHSEB case 5 is all about educators using AI. One college student catches her professor using it to create presentation slides, which is extra scandalous because the professor had forbidden students from using AI in the class. Another student catches their professor using AI not only to grade their essay, but to generate feedback.

As with most issues, whether an educator’s AI use is OK probably depends on the details. We’d need to ask separately: is it OK for teachers to use AI to help prepare their lecture notes? To brainstorm assignments? Draft exam questions? What about grading? Would a teacher’s experience and background make a difference (consider a 1st-year trainee vs. a 20th-year leader in their field)? Would the subject make a difference (algebra vs. English, physics vs. philosophy)? Would the grading method matter? Multiple choice bubble sheets have made teachers’ lives easier via auto-grading since the 1970s. Today, online learning platforms do the same. However, while Scantron machines can score multiple choice answers, they’re incapable of analyzing narrative essays. BrightSpace’s auto-grading features can’t author tailored feedback (not yet, anyway). But modern gen AI can.

One relevant factor concerns consistency, for teachers might have some obligation to practice what they preach. In deciding whether an educator should or shouldn’t use AI, and whether they should disclose doing so if they do, their own demands of and expectations for their students would seem to make a difference. For example, below is a note included in my college philosophy class syllabi, followed by a prompt I recently gave ChatGPT in preparing for a class on Aristotle’s political philosophy.

Professor Matt’s Syllabus AI Note: You’re welcome and encouraged to use generative AI as a personal tutor on any topic we cover. If you’ve not dabbled with ChatGPT (it’s free), start before the world leaves you behind. However, on all graded assignments, do your own reading, thinking, writing and test-taking. In other words, ask AI questions you’d ask me, such as, “I read x article and I think the author was arguing y. Is that right?” Then ask follow ups. “Ok. But what about the section where he mentions z? That seems inconsistent with his overall view.” Really, it’s a wonderful on-demand, free personal tutor. Use it for that purpose alone and you’ll speed your learning and amplify your skill. Use it as a CheatBot to do your work for you and you’ll wind up no smarter than when you arrived, and ashamed of rather than proud of your diploma. If you have any questions about legit vs. not legit usage of AI in my class, please ask.

And the prompt I gave ChatGPT, then used to develop an in-class exercise: I’m teaching a class on Aristotle’s thoughts on the role of the state in nurturing flourishing. It’s on a brief selection where he suggests marriage and birthing ages, what women should do while pregnant, and also how kids should be shielded from corrupting images and plays. What might be some good class exercises to complement that?

Interestingly, one of ChatGPT’s suggestions was an Ethics Bowl case! It was on various states’ attempts to age check internet pornography viewers, which tied beautifully to Aristotle’s strict guidance on what kids should and shouldn’t consume. I decided to use it, and after some initial blushing, discussions went quite well.

However, I didn’t disclose that I got the idea from ChatGPT. My students know I love Ethics Bowl and we use cases often, so no one questioned it. But should I feel sneaky about consulting with a chatbot to supplement and improve my teaching? It could be subconscious rationalization, but I wouldn’t think so, because I wouldn’t feel an obligation to disclose other class prep strategies, either. For example, if I had gotten the Ethics Bowl pornography age verification case idea from a philosophy colleague, I wouldn’t have felt compelled to share: “This wasn’t my idea, but Dr. Bock’s.” If I had come across the idea in an issue of Teaching Ethics, I wouldn’t have, either. Similarly, when I consult with AI, yet use my human judgment to decide and customize what the class does, that feels like a praiseworthy rather than shameful act. It confirms that I want my students to have an enjoyable, worthwhile experience, and that I’m willing to invest extra time to ensure they do. And since I’m balancing obligations to my students with obligations to others (to play with and be a good dad to my kids, to devote time and attention to my wife, to do a little to better the world by promoting Ethics Bowl), becoming a better professor faster by leveraging AI seems OK.

That said, I heavily customized the assignment, had already created my own lecture notes based on my direct reading of the passage, and picked out a brief video to watch. It’s not like this was a new subject to me or I had AI generate a full script which I followed word-for-word. That would have been dishonest and irresponsible since AI tends to “hallucinate” (get things wrong).

Also, as you read above, I encourage my students to use AI as a personal tutor and thought partner. If I enforced a strict student AI ban (or tried to enforce one—preventing and proving AI abuse is very difficult), secretly using it myself would indeed seem hypocritical. But perhaps since I’m already an expert in my field and they’re still learning, it could be OK for me to use external resources, yet insist my students not? Maybe.

Cool, timely topic—thank you, case committee! Below is a study guide from coach Michael Andersen. Between it and the kickstart ideas above, coaches, teams, and judges should have more than enough to make quality progress on this one. And don’t neglect overlap with regional case 11: Calling Dr. Alexa, on the benefits, drawbacks, permissibility, and risks of using AI as a personal therapist. I’m less enthusiastic about current AI’s potential in that area, but you be the judge.

2025-2026 NHSEB Regional Case 11 Calling Dr. Alexa

With strong similarities to last season’s “My Pal Hal,” NHSEB regional case 11 is about using AI for psychological support. In “Calling Dr. Alexa,” high school senior Grace uses an AI therapy app to abate stress caused by high-pressure studies and the drama of navigating her transition into adulthood. While a personal psychologist would be nice, the app is on-demand 24 hours per day and much more affordable, so Grace uses it regularly. She feels like it’s helping, but worries the guidance she’s receiving might be cookie-cutter slop, and also that the intimate details she shares could one day be exposed.

There are many angles a team could take in analyzing this one, but there’s some overlap with my team’s thoughts on case 5. “Grade Expectations” is about educators leveraging AI for lesson prep and even grading, which might be hypocritical if they’ve banned student AI use, or simply less effective than 100% human teaching. When we discussed it, my team thought there was something morally relevant about students’ shared desire to have their schoolwork thoughtfully engaged by another human mind, as opposed to an unthinking algorithm. However, what if algorithms were proven to achieve better learning outcomes in terms of higher test scores? The third discussion question for “Calling Dr. Alexa” broaches this “better outcomes using AI than humans” possibility.

Discussion Question 3: If an AI system reliably outperforms average therapists on key outcomes, is there still a moral reason to prefer human care for some patients?

In crafting this question, the case committee thoughtfully included the phrases “moral reason” and “some patients” to ensure teams think through a finely nuanced answer. But this alludes to an important consideration when it comes to preferring human versus AI labor in many contexts. Currently, it’s probably not the case that AI does a better job than average human therapists. But as AI improves, that could soon change, assuming we could agree on standards (patient self-reports of contentment, reduced need for prescriptions, etc.). And if/when that time comes, would we still have a moral reason to prefer human care for some patients? Similarly, if an AI system reliably outperformed average educators on key outcomes, might we conclude similarly? What about human engineers, human clergy, human politicians?

Happy discussing! Below is a superb study guide from superstar coach Michael Andersen, so generously shared for the global Ethics Bowl community.

2025-2026 NHSEB Case 15 Dead Men DO Tell Tales

Footage of murder victim Chris Pelkey played in court during sentencing of his killer

This past May, a judge in Arizona viewed AI-generated “testimony” from murder victim Chris Pelkey during the sentencing of his killer. If you watch the video above, you’ll see that the technology is pretty basic. It’s Mr. Pelkey’s likeness and apparent voice, which viewers can compare against real footage, recorded prior to his death, that’s included in the video. But the special effects aren’t seamless – it’s pretty obvious it’s a deepfake, which makes sense since Mr. Pelkey had been killed four years prior.

Interestingly, Mr. Pelkey’s likeness speaks of mercy and forgiveness. He says that he and his killer might have been friends in another life. And at the end, there’s footage of him fishing, suggesting a peaceful afterlife. Rather than his avatar’s script being generated by AI, it was written by his sister who had been thinking about what she’d say in her post-trial sentencing impact statement for at least two years. In crafting the script, she interviewed Mr. Pelkey’s elementary schoolteachers, his prom date, fellow Army servicemembers, friends, and other family.

My team discussed this case briefly, and one worry was that judges might put more stock in testimony of this nature than is deserved. We can’t know for sure what murder victims might have wanted. Judges realize this, but might be inappropriately swayed to grant more weight to speculation delivered by an AI-generated video figure than to the same speculation from a family member. In this case, Mr. Pelkey’s avatar suggested a possible desire for leniency. But there’s no reason to think that would always be the case were this to become common practice. And perhaps driven by Mr. Pelkey’s avatar’s warmth and personability, the judge ultimately sentenced his killer to 10.5 years in prison, which was 1 year more than prosecutors requested.

Further, as broached by the case writers, usage of speculative testimony from a deceased victim opens the door to using the same from deceased, uncooperative, or hard-to-locate witnesses. A prosecutor or defense attorney could imagine what a witness might have said were they able to testify (and probably imagine testimony that would be most helpful for their case), and pass off hearsay as firsthand testimony using a similar AI-avatar technique. The jury could be reminded that such a deepfake witness wasn’t real. But they’d likely still be more emotionally swayed than warranted.

As you continue to think about the stakeholders, benefits, drawbacks and risks of allowing deepfakes in the courtroom, work through coach Michael Andersen’s excellent study guide below, which includes a link to a Court TV interview with Mr. Pelkey’s sister.

2025-2026 IEB Regionals Case 1 and NHSEB Regionals Case 12 A Pound of Flesh

Discussion on the proposal featuring two former prisoners – clip from Coach Michael’s attached study guide

IEB case 1 and NHSEB case 12, “A Pound of Flesh” (yep, same case), is about the Massachusetts legislature’s proposal to knock time off of prisoners’ sentences in exchange for organ and bone marrow donations. We could put “donations” in scare quotes because, depending on their environment, incarcerated individuals may not be making sufficiently free choices. But that’s just one factor to consider – here are several more based on discussions with my IEB and NHSEB teams, followed by an excellent study guide by coach Michael Andersen. If you’re open to it, please share your thoughts on this case in a comment.

  • The proposed law would cap sentence reductions at 1 year, presumably awarding more time off for more invasive/dangerous/long-term detrimental donations and/or more needed organs.
  • There’s a shortage of organs for certain minority groups, and this program could rectify that unfairness, making it less difficult for minorities in need to receive an organ.
  • Any reduction in criminals’ sentences could be perceived to dishonor their victims or victims’ loved ones.
  • Allowing inmates to donate organs could serve their rehabilitation and inspire additional character growth, igniting a habit of giving and expanding their concern for others.
  • Such donations could be likened to organ selling given prisoners’ less than ideal circumstances, but we typically endorse monetary compensation for blood and gamete donations, as well as for surrogate mothering, so this wouldn’t necessarily render the practice unacceptable (though there are differences – blood regenerates and wombs can gestate multiple times, whereas kidneys do not grow back).

Finally, something my teams didn’t consider, but The Young Turks commentators brought up in one of the clips shared in coach Michael’s study guide below, is that were this proposal to become law, there’s a risk that sentences might become longer or prison conditions worse in order to coerce more “donations.” Hopefully the prison system would not do this. But given the unmet needs of many waiting for various transplants, it’s definitely a risk.

2025-2026 NHSEB Regionals Case 3 Public Record, Private Lives

Case 3 in the NHSEB case set winds up being largely about interpersonal ethics, but begins with a privacy policy frame. Coach Michael Andersen in Washington shared the excellent study guide below, which recommends the privacy-related video above by Ethics Bowl researcher and advocate Michael Vazquez at UNC. My team discussed the case the week before last, then revisited it briefly last week after watching the video – excellent context that helped sharpen their view. Thanks to both Michaels!