2025-2026 NHSEB Regional Case 5 Grade Expectations

I sometimes use AI to plan my philosophy classes. Should I feel guilty? Should I disclose it to my students? Should I stop?

NHSEB case 5 is all about educators using AI. One college student catches her professor using it to create presentation slides, which is extra scandalous because the professor had forbidden students from using AI in class. Another student catches their professor using AI not only to grade their essay, but also to generate feedback.

Like most issues, whether it's OK for educators to use AI probably depends on the details. We'd need to ask separately: is it OK for teachers to use AI to help prepare their lecture notes? To brainstorm assignments? Draft exam questions? What about grading? Would a teacher's experience and background make a difference (consider a 1st-year trainee vs. a 20th-year leader in their field)? Would the subject make a difference (algebra vs. English, physics vs. philosophy)? Would the grading method matter? Multiple choice bubble sheets have made teachers' lives easier via auto-grading since the 1970s. Today, online learning platforms do the same. However, while Scantron machines can score multiple choice answers, they're incapable of analyzing narrative essays. BrightSpace's auto-grading features can't author tailored feedback (not yet, anyway). But modern gen AI can.

One relevant factor concerns consistency, for teachers might have some obligation to practice what they preach. In deciding whether an educator should or shouldn't use AI, and whether they should disclose doing so if they do, their own demands on and expectations of their students would seem to make a difference. For example, below is a note included in my college philosophy class syllabi, followed by a prompt I recently gave ChatGPT in preparing for a class on Aristotle's political philosophy.

Professor Matt's Syllabus AI Note: You're welcome and encouraged to use generative AI as a personal tutor on any topic we cover. If you've not dabbled with ChatGPT (it's free), start before the world leaves you behind. However, on all graded assignments, do your own reading, thinking, writing, and test-taking. In other words, ask AI questions you'd ask me, such as, "I read x article and I think the author was arguing y. Is that right?" Then ask follow-ups. "Ok. But what about the section where he mentions z? That seems inconsistent with his overall view." Really, it's a wonderful on-demand, free personal tutor. Use it for that purpose alone and you'll speed your learning and amplify your skill. Use it as a CheatBot to do your work for you and you'll wind up no smarter than when you arrived, and ashamed of rather than proud of your diploma. If you have any questions about legit vs. not legit usage of AI in my class, please ask.

And the prompt I gave ChatGPT, then used to develop an in-class exercise: I’m teaching a class on Aristotle’s thoughts on the role of the state in nurturing flourishing. It’s on a brief selection where he suggests marriage and birthing ages, what women should do while pregnant, and also how kids should be shielded from corrupting images and plays. What might be some good class exercises to complement that?

Interestingly, one of ChatGPT's suggestions was an Ethics Bowl case! It was on various states' attempts to age-check internet pornography viewers, which tied beautifully to Aristotle's strict guidance on what kids should and shouldn't consume. I decided to use it, and after some initial blushing, discussions went quite well.

However, I didn't disclose that I got the idea from ChatGPT. My students know I love Ethics Bowl and we use cases often, so no one questioned it. But should I feel sneaky about consulting with a chatbot to supplement and improve my teaching? It could be subconscious rationalization, but I wouldn't think so, because I wouldn't feel an obligation to disclose other class prep strategies, either. For example, if I had gotten the Ethics Bowl pornography age verification case idea from a philosophy colleague, I wouldn't have felt compelled to say, "This wasn't my idea, but Dr. Bock's." If I had come across the idea in an issue of Teaching Ethics, I wouldn't have, either. Similarly, when I consult with AI, yet use my human judgment to decide and customize what the class does, that feels like a praiseworthy rather than shameful act. It confirms that I want my students to have an enjoyable, worthwhile experience, and that I'm willing to invest extra time to ensure they do. And since I'm balancing obligations to my students with obligations to others (to play with and be a good dad to my kids, to devote time and attention to my wife, to do a little to better the world by promoting Ethics Bowl), being a better professor faster by leveraging AI seems OK.

That said, I heavily customized the assignment, had already created my own lecture notes based on my direct reading of the passage, and picked out a brief video to watch. It’s not like this was a new subject to me or I had AI generate a full script which I followed word-for-word. That would have been dishonest and irresponsible since AI tends to “hallucinate” (get things wrong).

Also, as you read above, I encourage my students to use AI as a personal tutor and thought partner. If I enforced a strict student AI ban (or tried to enforce one—preventing and proving AI abuse is very difficult), secretly using it myself would indeed seem hypocritical. But perhaps since I’m already an expert in my field and they’re still learning, it could be OK for me to use external resources, yet insist my students not? Maybe.

Cool, timely topic—thank you, case committee! Below is a study guide from coach Michael Andersen. Between it and the kickstart ideas above, coaches, teams, and judges should have more than enough to make quality progress on this one. And don't neglect overlap with regional case 11: Calling Dr. Alexa, on the benefits, drawbacks, permissibility, and risks of using AI as a personal therapist. I'm less enthusiastic about current AI's potential in that area, but you be the judge.

2025-2026 NHSEB Regional Case 11 Calling Dr. Alexa

With strong similarities to last season’s “My Pal Hal,” NHSEB regional case 11 is about using AI for psychological support. In “Calling Dr. Alexa,” high school senior Grace uses an AI therapy app to abate stress caused by high-pressure studies and the drama of navigating her transition into adulthood. While a personal psychologist would be nice, the app is on-demand 24 hours per day and much more affordable, so Grace uses it regularly. She feels like it’s helping, but worries the guidance she’s receiving might be cookie-cutter slop, and also that the intimate details she shares could one day be exposed.

There are many angles a team could take in analyzing this one, but there's some overlap with my team's thoughts on case 5. "Grade Expectations" is about educators leveraging AI for lesson prep and even grading, which might be hypocritical if they've banned student AI use or simply less effective than 100% human teaching. When we discussed it, my team thought there was something morally relevant about their shared desire to have their schoolwork thoughtfully engaged by another human mind, as opposed to an unthinking algorithm. However, what if algorithms were proven to achieve better learning outcomes in terms of higher test scores? The third discussion question for "Calling Dr. Alexa" broaches this "better outcomes using AI than humans" possibility.

Discussion Question 3: If an AI system reliably outperforms average therapists on key outcomes, is there still a moral reason to prefer human care for some patients?

In crafting this question, the case committee thoughtfully included the phrases “moral reason” and “some patients” to ensure teams think through a finely nuanced answer. But this alludes to an important consideration when it comes to preferring human versus AI labor in many contexts. Currently, it’s probably not the case that AI does a better job than average human therapists. But as AI improves, that could soon change, assuming we could agree on standards (patient self-reports of contentment, reduced need for prescriptions, etc.). And if/when that time comes, would we still have a moral reason to prefer human care for some patients? Similarly, if an AI system reliably outperformed average educators on key outcomes, might we conclude similarly? What about human engineers, human clergy, human politicians?

Happy discussing! Below is a superb study guide from superstar coach Michael Andersen, so generously shared for the global Ethics Bowl community.

2025-2026 NHSEB Case 15 Dead Men DO Tell Tales

Footage of murder victim Chris Pelkey played in court during sentencing of his killer

This past May, a judge in Arizona viewed AI-generated "testimony" from murder victim Chris Pelkey during the sentencing of his killer. If you watch the video above, you'll see that the technology is pretty basic. It's Mr. Pelkey's likeness and apparent voice, confirmed by genuine footage recorded prior to his death that's included in the video. But the special effects aren't seamless – it's pretty obvious it's a deepfake, which makes sense since Mr. Pelkey had been killed four years prior.

Interestingly, Mr. Pelkey’s likeness speaks of mercy and forgiveness. He says that he and his killer might have been friends in another life. And at the end, there’s footage of him fishing, suggesting a peaceful afterlife. Rather than his avatar’s script being generated by AI, it was written by his sister who had been thinking about what she’d say in her post-trial sentencing impact statement for at least two years. In crafting the script, she interviewed Mr. Pelkey’s elementary schoolteachers, his prom date, fellow Army servicemembers, friends, and other family.

My team discussed this case briefly, and one worry was that judges might put more stock in testimony of this nature than is deserved. We can't know for sure what murder victims might have wanted. Judges realize this, but might be inappropriately swayed to grant more weight to speculation from an AI-generated video figure than to the same speculation from a family member. In this case, Mr. Pelkey's avatar suggested a possible desire for leniency. But there's no reason to think that would always be the case were this to become common practice. And perhaps driven by Mr. Pelkey's avatar's warmth and personability, the judge ultimately sentenced his killer to 10.5 years in prison, 1 year more than prosecutors requested.

Further, as broached by the case writers, usage of speculative testimony from a deceased victim opens the door to using the same from deceased, uncooperative, or hard-to-locate witnesses. A prosecutor or defense attorney could imagine what a witness might have said were they able to testify (and probably imagine testimony that would be most helpful for their case), and pass off hearsay as firsthand testimony using a similar AI-avatar technique. The jury could be reminded that such a deepfake witness wasn’t real. But they’d likely still be more emotionally swayed than warranted.

As you continue to think about the stakeholders, benefits, drawbacks and risks of allowing deepfakes in the courtroom, work through coach Michael Andersen’s excellent study guide below, which includes a link to a Court TV interview with Mr. Pelkey’s sister.

2025-2026 IEB Regionals Case 1 and NHSEB Regionals Case 12 A Pound of Flesh

Discussion on the proposal featuring two former prisoners – clip from Coach Michael’s attached study guide

IEB case 1 and NHSEB case 12, "A Pound of Flesh" (yep, same case), is about the Massachusetts legislature's proposal to knock time off of prisoners' sentences in exchange for organ and bone marrow donations. We could put "donations" in scare quotes because, depending on their environment, incarcerated individuals may not be making sufficiently free choices. But that's just one factor to consider – here are several more based on discussions with my IEB and NHSEB teams, followed by an excellent study guide by coach Michael Andersen. If you're open to sharing your thoughts on this case, please leave a comment.

  • The proposed law would cap sentence reductions at 1 year, presumably awarding more time off for more invasive/dangerous/long-term detrimental donations and/or more needed organs.
  • There’s a shortage of organs for certain minority groups, and this program could rectify that unfairness, making it less difficult for minorities in need to receive an organ.
  • Any reduction in criminals’ sentences could be perceived to dishonor their victims or victims’ loved ones.
  • Allowing inmates to donate organs could serve their rehabilitation and inspire additional character growth, igniting a habit of giving and expanding their concern for others.
  • Such donations could be likened to organ selling given prisoners’ less than ideal circumstances, but we typically endorse monetary compensation for blood and gamete donations, as well as for surrogate mothering, so this wouldn’t necessarily render the practice unacceptable (though there are differences – blood regenerates and wombs can gestate multiple times, whereas kidneys do not grow back).

Finally, something my teams didn't consider, but The Young Turks commentators brought up in one of the clips shared in coach Michael's study guide below, is that were this proposal to become law, there's a risk that sentences might become longer or prison conditions worse in order to coerce more "donations." Hopefully the prison system would not do this. But given the unmet needs of many waiting for various transplants, it's definitely a risk.

2025-2026 IEB Regionals Case 12 Lady Justice

Intercollegiate Ethics Bowl case 12 is on “the intentional murder of women because of their gender” or femicide. My team broached this sad topic last week, and one promising approach to decreasing femicide discussed in the case is mitigating possible root causes. For example, “the South African government’s approach to femicide has emphasized financial independence, built on the assumption that resolving economic hardship can help assuage the conditions that lead to femicides,” presumably under the assumption that women completely dependent upon men might be more vulnerable to inescapable violent relationships.

For a small-scale, grassroots example of a strategy for decreasing domestic violence generally, an organization where I live in East Tennessee hosts an annual "Me and My Guy" daddy daughter dance as a way to encourage men to treat women with dignity and respect, and for young women to internalize the belief that they deserve respectful treatment. That way they'll be more likely to demand it from otherwise abusive partners, or leave/avoid abusive relationships altogether. This modest annual event is something my daughter and I have done for several years, always look forward to, and enjoy. I recommend it to all the local dads I know with daughters, and last year we were joined by my nephew and great-niece. And I think that beyond bringing particular young ladies and their father figures closer together, the event has to be raising standards among participating girls' friend groups, as well as attitudes among dads' coworkers, friends, and families.

Another strategy countries are using to decrease femicide is enhanced legal punishments. My IEB team may change their mind, but their initial take was that such laws would be unlikely to enhance deterrence unless there's a substantial gap between punishments for killing women versus killing anyone. At least in the U.S., a murder conviction can already bring life imprisonment or even execution in some states, so some sort of additional pain would need to be included for femicide-specific murder convictions to proactively shape the behavior of would-be perpetrators. However, maybe anti-femicide laws could reinforce the wrongness of targeting women—just as hate crime laws reinforce the wrongness of targeting victims due to their religion, ethnicity, etc.—and over time shape cultural values such that fewer hate crimes or femicides would occur? In this way, perhaps anti-femicide laws are both a direct deterrent and a cultural-shift strategy? Perhaps. And maybe this is the real goal of such laws, since many (if not most) would-be murderers aren't making rational risk calculations, but acting out of rage or irrationality generally. A few other ideas our team broached on the enhanced punishments angle:

  • It’s unclear when femicide-specific punishment enhancements would/should trigger—anytime a woman is murdered, or anytime a woman is murdered solely/mainly/in part because she’s a woman?
  • It’s unclear how these laws would deal with cases where the perpetrator is herself a woman (same punishment?)
  • It’s unclear how these laws would deal with cases where the victim or perpetrator is nonbinary, which the case acknowledges in the closing sentence

What do you think? Do other questions, possible solutions or analogies IEB teams should be considering come to mind? If so, please leave a comment. Not a happy topic, but definitely worth discussing, and perhaps a problem the Ethics Bowl community can help address. In the meantime, kudos to the Monroe County Health Council—looking forward to the next dance in December.

2025-2026 NHSEB Regional Case 9 Pulled to Protect

NHSEB Case 9 “Pulled to Protect” pits parents’ rights to raise their children how they think best against society’s responsibility to ensure all kids enjoy an adequately supportive childhood. As the case analogizes, we’d intervene if parents allowed a child to play with fire (imagine your neighbor’s 9-year-old mixing gasoline with fireworks – you’d call somebody!). But the question here is whether we should similarly intervene when parents fail to ensure their kids are adequately educated, which sometimes can be motivated by understandable reasons, such as the desire to preserve their way of life (example: the Amish).

I've actually been using an ethics journal article to cover a similar, overlapping issue in my college ethics classes since 2020. "Sport, Parental Autonomy, and Children's Right to an Open Future" by Nicholas Dixon is ultimately about parents appropriately supporting their kids' athletic interests (not living vicariously through them, exposing them to various options to see what resonates, and only pushing extended time and effort when they love a sport and are truly great at it). But he touches on the exact same issue and Supreme Court case as this Ethics Bowl case, "Pulled to Protect."

What’s more, coach Michael Andersen’s team just covered this case, he shared his awesome-as-always study guide (which includes multiple bonus resources), and it and my 9-minute lecture on the Dixon article are below. Enjoy!

Kicking off the 2025-2026 Ethics Bowl Season with a Revised Case Analysis Guide

Coach Michael Andersen recently updated the case analysis guide he shared here in 2023 with several improvements, inspired in part by Dr. Sean Riley's video. Tips that stood out to me included step 2 ("What kind of case is this?"), the invitation to radically empathize with stakeholders, and the concentric circles visual. It also links to an updated case summary template.

Coach Michael considers these works in progress. But as far as I’m concerned, they’re more than good enough to begin using immediately, which is a good thing since both the IEB and NHSEB case sets recently went live. If you haven’t reviewed them already, the 2025-2026 IEB regionals case set is here and the NHSEB set here. Some super cool topics this season. More on the cases soon.

Enjoy, thanks coach Michael, and happy Ethics Bowl season kickoff!

Excellent Ethics Bowl Overview Video

I’m coaching a new high school team this season (while putting the final touches on Ethics Bowl to the Rescue! Saving Democracy by Transforming Debate – due out VERY soon), and in looking for an example of an Ethics Bowl to share with them, came across this excellent overview video by Dr. Sean Riley, Long Island HSEB championship-winning coach, Baylor philosophy Ph.D., and Chief Strategy Officer of Stony Brook School. He covers differences between Ethics Bowl and traditional debate, how to frame and work through cases (decide if it’s more policy or interpersonal, identify stakeholders, adopt an empathetic mindset), and even works through an example: ‘Til Death Do My Part, from the 2023-2024 regional case set. It’s really good – good enough for me to immediately share with my team. So, coaches, as you’re recruiting and welcoming new team members, consider doing the same. It would of course work equally well for Intercollegiate Ethics Bowl and Ethics Olympiad teams.

Free Ethics Bowl Summer Workshop July 25-26

One week from tomorrow, there's a free online Ethics Bowl workshop for new and experienced coaches, team members, and organizers at the collegiate and high school levels. Often, events like this are limited to either college or high school, so I'm glad to see the cross-tier collaboration.

Attendees can follow one of three discussion tracks: Ethics Bowl in the Classroom, Producing the Ethics Bowl, or Engaging the World. If you're busy during the day Friday and Saturday morning, no sweat. Friday the 25th is simply a pre-workshop "ABCs of Ethics Bowl" session from 4-5, and on Saturday the 26th the fun runs from 11:30-6:30 Eastern.

The workshop is hosted by our friends at the Intercollegiate Ethics Bowl and sponsored by the Association for Practical and Professional Ethics. I'm signed up and very much looking forward to it. Only problem: I can't decide whether to follow the In the Classroom or the Engaging the World track… Hope to see you there!