Extended Interview with Archie Stapleton

Special thanks to Archie Stapleton of the Modus Ponens Institute and TKEthics for the superb extended interview with Yours Truly. If you have the time and interest, check out the whole thing. Otherwise, Archie has kindly hyperlinked to the various topics, so feel free to jump straight to the section on the critique that philosophy encourages indecision, or my take on the role of religious moral reasoning in Ethics Bowl (and the public sphere generally), or on metaethics (in what way do moral claims have objective truth values – more on my thoughts on that here), or AI in education and Ethics Bowl. Enjoy!

00:00 Who Is Dr. Deaton? + Ethics Bowl to the Rescue

01:37 How Dr. Deaton Got Into Philosophy

05:17 What Is Ethics Bowl?

08:42 Saving Democracy by Transforming Debate

12:11 Is Philosophy Too Passive?

16:20 Religious Reasoning in Ethics Bowl

25:25 Metaethics: Are Moral Claims True?

35:30 Nihilism, Free Will, and Moral Meaning

39:16 AI Ethics: Superintelligence & Alignment

47:25 AI in Education: Cheating & Assessment

53:38 Should Ethics Bowl Teams Use AI?

55:54 Case Analysis: Professor Using ChatGPT

1:03:16 Do Ethical Theories Belong in Competition?

1:10:27 Is It Wrong for Professors to Use AI?

1:15:30 AI in Essay Competitions

1:20:13 What’s Next for MPI & TKEthics?

1:23:31 Closing Thoughts

2025-2026 NHSEB Regional Case 5 Grade Expectations

I sometimes use AI to plan my philosophy classes. Should I feel guilty? Should I disclose it to my students? Should I stop?

NHSEB case five is all about educators using AI. One college student catches her professor using it to create presentation slides, which is extra scandalous because the professor had forbidden students from using AI in the class. Another student catches their professor using AI not only to grade their essay, but also to generate feedback.

Like most issues, whether it’s OK for educators to use AI probably depends on the details. We’d need to ask separately: Is it OK for teachers to use AI to help prepare their lecture notes? To brainstorm assignments? Draft exam questions? What about grading? Would a teacher’s experience and background make a difference (consider a first-year trainee vs. a 20-year leader in their field)? Would the subject make a difference (algebra vs. English, physics vs. philosophy)? Would the grading method matter? Multiple-choice bubble sheets have made teachers’ lives easier via auto-grading since the ’70s. Today, online learning platforms do the same. However, while Scantron machines can score multiple-choice answers, they’re incapable of analyzing narrative essays. BrightSpace’s auto-grading features can’t author tailored feedback (not yet, anyway). But modern gen AI can.

One relevant factor concerns consistency, for teachers might have some obligation to practice what they preach. In deciding whether an educator should or shouldn’t use AI, and whether they should disclose doing so if they do, their own demands on and expectations of their students would seem to make a difference. For example, below is a note included in my college philosophy class syllabi, followed by a prompt I recently gave ChatGPT in preparing for a class on Aristotle’s political philosophy.

Professor Matt’s Syllabus AI Note: You’re welcome and encouraged to use generative AI as a personal tutor on any topic we cover. If you’ve not dabbled with ChatGPT (it’s free), start before the world leaves you behind. However, on all graded assignments, do your own reading, thinking, writing and test-taking. In other words, ask AI questions you’d ask me, such as, “I read x article and I think the author was arguing y. Is that right?” Then ask follow ups. “Ok. But what about the section where he mentions z? That seems inconsistent with his overall view.” Really, it’s a wonderful on-demand, free personal tutor. Use it for that purpose alone and you’ll speed your learning and amplify your skill. Use it as a CheatBot to do your work for you and you’ll wind up no smarter than when you arrived, and ashamed of rather than proud of your diploma. If you have any questions about legit vs. not legit usage of AI in my class, please ask.

And the prompt I gave ChatGPT, then used to develop an in-class exercise: I’m teaching a class on Aristotle’s thoughts on the role of the state in nurturing flourishing. It’s on a brief selection where he suggests marriage and birthing ages, what women should do while pregnant, and also how kids should be shielded from corrupting images and plays. What might be some good class exercises to complement that?

Interestingly, one of ChatGPT’s suggestions was an Ethics Bowl case! It was on various states’ attempts to age-check internet pornography viewers, which tied beautifully to Aristotle’s strict guidance on what kids should and shouldn’t consume. I decided to use it, and after some initial blushing, discussions went quite well.

However, I didn’t disclose that I got the idea from ChatGPT. My students know I love Ethics Bowl and we use cases often, so no one questioned it. But should I feel sneaky about consulting with a chatbot to supplement and improve my teaching? It could be subconscious rationalization, but I wouldn’t think so, because I wouldn’t feel an obligation to disclose other class prep strategies, either. For example, if I had gotten the Ethics Bowl pornography age verification case idea from a philosophy colleague, I wouldn’t have felt compelled to share, “This wasn’t my idea, but Dr. Bock’s.” If I had come across the idea in an issue of Teaching Ethics, I wouldn’t have, either. Similarly, when I consult with AI, yet use my human judgment to decide and customize what the class does, that feels like a praiseworthy rather than shameful act. It confirms that I want my students to have an enjoyable, worthwhile experience, and that I’m willing to invest extra time to ensure they do. And since I’m balancing obligations to my students with obligations to others (to play with and be a good dad to my kids, to devote time and attention to my wife, to do a little to better the world by promoting Ethics Bowl), being a better professor faster by leveraging AI seems OK.

That said, I heavily customized the assignment, had already created my own lecture notes based on my direct reading of the passage, and picked out a brief video to watch. It’s not like this was a new subject to me or I had AI generate a full script which I followed word-for-word. That would have been dishonest and irresponsible since AI tends to “hallucinate” (get things wrong).

Also, as you read above, I encourage my students to use AI as a personal tutor and thought partner. If I enforced a strict student AI ban (or tried to enforce one—preventing and proving AI abuse is very difficult), secretly using it myself would indeed seem hypocritical. But perhaps since I’m already an expert in my field and they’re still learning, it could be OK for me to use external resources, yet insist my students not? Maybe.

Cool, timely topic—thank you, case committee! Below is a study guide from coach Michael Andersen. Between it and the kickstart ideas above, coaches, teams, and judges should have more than enough to make quality progress on this one. And don’t neglect overlap with regional case 10: Calling Dr. Alexa, on the benefits, drawbacks, permissibility, and risks of using AI as a personal therapist. I’m less enthusiastic about current AI’s potential in that area, but you be the judge.

2025-2026 NHSEB Case 15 Dead Men DO Tell Tales

Footage of murder victim Chris Pelkey played in court during sentencing of his killer

This past May, a judge in Arizona viewed AI-generated “testimony” from murder victim Chris Pelkey during the sentencing of his killer. If you watch the video above, you’ll see that the technology is pretty basic. It’s Mr. Pelkey’s likeness and apparent voice, confirmed by included footage recorded prior to his death. But the special effects aren’t seamless – it’s pretty obvious it’s a deepfake, which makes sense since Mr. Pelkey had been killed four years prior.

Interestingly, Mr. Pelkey’s likeness speaks of mercy and forgiveness. He says that he and his killer might have been friends in another life. And at the end, there’s footage of him fishing, suggesting a peaceful afterlife. Rather than his avatar’s script being generated by AI, it was written by his sister who had been thinking about what she’d say in her post-trial sentencing impact statement for at least two years. In crafting the script, she interviewed Mr. Pelkey’s elementary schoolteachers, his prom date, fellow Army servicemembers, friends, and other family.

My team discussed this case briefly, and one worry was that judges might put more stock in testimony of this nature than is deserved. We can’t know for sure what murder victims might have wanted. Judges realize this, but might be inappropriately swayed to grant more weight to speculation from an AI-generated video figure than to the same speculation from a family member. In this case, Mr. Pelkey’s avatar suggested a possible desire for leniency. But there’s no reason to think that would always be the case were this to become common practice. And perhaps driven by Mr. Pelkey’s avatar’s warmth and personability, the judge ultimately sentenced his killer to 10.5 years in prison, one year more than prosecutors requested.

Further, as broached by the case writers, usage of speculative testimony from a deceased victim opens the door to using the same from deceased, uncooperative, or hard-to-locate witnesses. A prosecutor or defense attorney could imagine what a witness might have said were they able to testify (and probably imagine testimony that would be most helpful for their case), and pass off hearsay as firsthand testimony using a similar AI-avatar technique. The jury could be reminded that such a deepfake witness wasn’t real. But they’d likely still be more emotionally swayed than warranted.

As you continue to think about the stakeholders, benefits, drawbacks and risks of allowing deepfakes in the courtroom, work through coach Michael Andersen’s excellent study guide below, which includes a link to a Court TV interview with Mr. Pelkey’s sister.

2024-2025 NHSEB Regional Case 3 It Tastes Like Dog Food Study Guide with Bonus AI Script Experiments

Here’s a really nice study guide from Coach Michael Andersen with two superb generative AI experiments on the case, as well as a bonus guide on evaluating sources on controversial topics.

Mr. A is going above and beyond per usual! And I think the generative AI engagement stuff is especially cool. Give his strategies a try with other cases and let us know what’s working, what isn’t, etc.

CheatBot or SuperTutor? ChatGPT for Ethics Bowl Zoom Debrief

This past Sunday, a small group of Ethics Bowl organizers, coaches and enthusiasts met for an informal, unofficial discussion on how ChatGPT and other generative AI tools might be used for Ethics Bowl. The purpose wasn’t to settle much of anything, but to inspire further discussion at the upcoming NHSEB regionals and nationals, as well as IEB nationals.

Why? Teams are surely using it. And given that Ethics Bowl participants, coaches, judges, moderators, organizers, their families and fans are among the most thoughtful people in the world, inviting them into a collective discussion on how to properly incorporate this technology seems a no-brainer. It’s an ethics question about Ethics Bowl – doesn’t get much more relevant than that. If you agree, please share this article and/or the accompanying recording, and report back any and all ideas worth sharing. Some upshots:

  • How to Best Leverage AI for Ethics Bowl Prep: Think of it as a conversation partner, tutor, rough draft-generator and/or judge/opposing team simulator. Understand its limitations. Fact check. Reason check. Moral blind spot check. Bias check. It’s a strong supplement to, but not a replacement for, human wisdom and deliberation. And it performs best when guided with insightful follow-ups.
  • On Worries that a Team Might Use AI to Write a Presentation Script: Using ChatGPT for Ethics Bowl prep isn’t analogous to asking it to do your homework because a) teams need to come to a consensus prior to the event (and it’s unlikely an entire team would agree to memorize and regurgitate a chatbot’s script), and b) due to EB’s live, interactive nature, any team overly reliant on an AI script would be embarrassingly exposed during commentary response and judge Q&A. Also, bowlers are a special self-selected subgroup of the population, far less likely to do anything that might constitute cheating than your average student (most of whom are also unlikely to cheat, but we educators are often paranoid about that).
  • Steps Ethics Bowl Leaders Can Take: While a team might get away with memorizing an eloquent opening presentation script written for them by a chatbot (the risk is low, but one could), this can be partially mitigated by adjusting score sheets to increase the relative weighting of the commentary, commentary response and judge Q&A portions. (Rules committees, steering committees, other leaders – please give this additional thought – tweaking rubrics might help as well.)
  • Steps Ethics Bowl Coaches Can Take: The broader community of Ethics Bowl coaches (including Ethics Olympiad, John Stuart Mill Cup, etc. coaches) can and should work together to test, share and recommend AI prompts and techniques that produce the highest quality outputs. They should also remind students of the virtues of democratic deliberation and the risks of intellectual laziness. Consider EthicsBowl.org one place to share such insights.
  • Steps Case Committees Can Take: Since generative AI seems more effective at scripting responses on cases about real world events (with published editorials for the AI to scan), case writing committees should consider using more fictitious scenarios or putting twists on real world cases (focusing on some interpersonal moral tension within the broader context of a real world issue). This may be unnecessary, but definitely deserves additional thought.
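The score-sheet reweighting idea above can be sketched numerically. Below is a minimal illustration, assuming invented section names and weights (these are not actual NHSEB scoring rules): shifting relative weight from the opening presentation toward the live segments lowers the overall score of a team whose only strength is a polished, possibly AI-drafted, opening.

```python
# Hypothetical sketch of the score-sheet reweighting idea: shift weight
# away from the (scriptable) opening presentation toward the live,
# interactive segments. Section names and weights are invented for
# illustration, not actual NHSEB scoring rules.

def weighted_total(scores, weights):
    """Combine raw section scores (0-10 each) using relative weights."""
    total_weight = sum(weights.values())
    return sum(scores[s] * weights[s] for s in scores) / total_weight

# A team with a polished (possibly AI-drafted) presentation but weak
# live interaction:
scores = {"presentation": 10, "commentary": 5, "response": 5, "qa": 5}

presentation_heavy = {"presentation": 2, "commentary": 1, "response": 1, "qa": 1}
interaction_heavy = {"presentation": 1, "commentary": 2, "response": 2, "qa": 2}

print(weighted_total(scores, presentation_heavy))  # 7.0
print(weighted_total(scores, interaction_heavy))   # ~5.71: scripted polish counts for less
```

The point of the sketch is only that rubric weights, not detection, do the work: the same raw performance scores produce a noticeably lower total once live interaction dominates the weighting.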

There was more – please watch the video when you have time. But one thing I argued is that AI can serve as an equalizer, connecting all teams (both advantaged and disadvantaged) with an on-demand tutor with an unmatched knowledge base and inexhaustible stamina. Students with the time and interest can learn pretty much anything, including philosophical ethics, so long as they know how to ask good questions. Background knowledge definitely helps, and learning will be slower when the topic is new. But I’m very optimistic about AI’s potential for education.

Special thanks to Michael Andersen for the idea, the planning and co-hosting, as well as to coaches Dick Lesicko, Angela Vahsholtz-Andersen and Chris Ng (thanks also, Chris, for your notes, which helped with this article), and to organizers Jeanine DeLay and Greg Bock for your preparation, attendance and engagement. And apologies to Gabe Kahn, who gets credit for trying to attend! Next time I’ll more closely monitor the Zoom host notifications…

AI and Ethics Bowl Round 2: ChatGPT Wrote Our Presentation?

Fast, free and virtually undetectable, ChatGPT offers a tempting combination of ease and stealth. While it can be used as an on-demand, universal tutor for the ambitiously inquisitive, it can also serve as a secret substitute thinker for the time-pressed, disillusioned or simply unscrupulous.

The line between learning aid and cheatbot isn’t obvious. But there are clear cases. Ask it to help you understand Parfit’s Repugnant Conclusion? Sure. Direct it to write a paper on Parfit’s Repugnant Conclusion which you plan to submit as your original work? No.

Similar logic would seem to apply to Ethics Bowl. Enthusiastic, dedicated bowlers can expand their thinking after hours, engaging a tireless conversation partner with an unmatchable knowledge base, and they can do it without the fear of asking a stupid question or suggesting something taboo. On the other hand… a team could feed AI the case and discussion questions (ChatGPT now has direct access to the internet – just provide a hyperlink to the case set), subcontract every bit of the analysis with the right prompts (see the experiments at the end of the attached article), memorize and regurgitate a received view, and as a result learn and grow very little. Such a team might score well on their initial presentation, but would risk an embarrassing exposure during judge Q&A. Maybe judge interaction will be our primary weapon for combating chatbot abuse. But rest assured that in this season’s bowls, many, many teams will have used ChatGPT and services like it. It is therefore incumbent upon the Ethics Bowl community to think hard (and fast) about appropriate guidelines, and to share them as a baseline to be refined as soon as possible. Even if imperfect, almost any guidance would be preferable to silence, for silence implies anything goes.

Back in April, we invited ChatGPT itself to write this article on the risks and promise of using it for Ethics Bowl prep. Today, naturally intelligent organic person Michael Andersen adds to the discussion with the below article. Organizers, judges, coaches: if you’re not convinced this is a risk (an AI drawback denier), click one of Michael’s experiments at the end of the article. Still not worried? I actually had second thoughts about publishing the prompts he used to guide the AI to provide a full presentation script. But the community needs to understand the tool’s power. Plus, if Gen X dinosaurs like Michael and me can stumble our way through an AI conversation, the Gen Z tech wizards whom we work so hard to honorably mentor aren’t likely to learn anything new.

Last, if you have thoughts on acceptable use of AI for Ethics Bowl prep, please share them in a comment. We’re also considering some sort of video discussion in the near future – shoot me an email if you’d like to be included, and thanks to everyone in the community who’s taking this topic seriously.