I sometimes use AI to plan my philosophy classes. Should I feel guilty? Should I disclose it to my students? Should I stop?
NHSEB case five is all about educators using AI. One college student catches her professor using it to create presentation slides, which is extra scandalous because the professor had forbidden students from using AI in the class. Another student catches their professor using AI not only to grade their essay, but also to generate the feedback on it.
Like most issues, whether educators may use AI probably depends on the details. We'd need to ask separately: is it OK for teachers to use AI to help prepare their lecture notes? To brainstorm assignments? To draft exam questions? What about grading? Would a teacher's experience and background make a difference (consider a first-year trainee vs. a twenty-year leader in their field)? Would the subject make a difference (algebra vs. English, physics vs. philosophy)? Would the grading method matter? Multiple-choice bubble sheets have made teachers' lives easier via auto-grading since the 1970s. Today, online learning platforms do the same. However, while Scantron machines can score multiple-choice answers, they're incapable of analyzing narrative essays. Brightspace's auto-grading features can't author tailored feedback (not yet, anyway). But modern generative AI can.
One relevant factor concerns consistency, for teachers might have some obligation to practice what they preach. In deciding whether an educator should or shouldn't use AI, and whether they should disclose doing so if they do, their own demands of and expectations for their students would seem to make a difference. For example, below is a note included in my college philosophy class syllabi, followed by a prompt I recently gave ChatGPT in preparing for a class on Aristotle's political philosophy.
Professor Matt’s Syllabus AI Note: You’re welcome and encouraged to use generative AI as a personal tutor on any topic we cover. If you’ve not dabbled with ChatGPT (it’s free), start before the world leaves you behind. However, on all graded assignments, do your own reading, thinking, writing, and test-taking. In other words, ask AI questions you’d ask me, such as, “I read x article and I think the author was arguing y. Is that right?” Then ask follow-ups: “OK. But what about the section where he mentions z? That seems inconsistent with his overall view.” Really, it’s a wonderful on-demand, free personal tutor. Use it for that purpose alone and you’ll speed your learning and amplify your skill. Use it as a CheatBot to do your work for you and you’ll wind up no smarter than when you arrived, and ashamed of rather than proud of your diploma. If you have any questions about legit vs. not legit usage of AI in my class, please ask.
And the prompt I gave ChatGPT, then used to develop an in-class exercise: I’m teaching a class on Aristotle’s thoughts on the role of the state in nurturing flourishing. It’s on a brief selection where he suggests marriage and birthing ages, what women should do while pregnant, and also how kids should be shielded from corrupting images and plays. What might be some good class exercises to complement that?
Interestingly, one of ChatGPT’s suggestions was an Ethics Bowl case! It was on various states’ attempts to age-check internet pornography viewers, which tied beautifully to Aristotle’s strict guidance on what kids should and shouldn’t consume. I decided to use it, and after some initial blushing, discussions went quite well.
However, I didn’t disclose that I got the idea from ChatGPT. My students know I love Ethics Bowl and we use cases often, so no one questioned it. But should I feel sneaky about consulting a chatbot to supplement and improve my teaching? It could be subconscious rationalization, but I don’t think so, because I wouldn’t feel an obligation to disclose other class-prep strategies, either. For example, if I had gotten the pornography age-verification case idea from a philosophy colleague, I wouldn’t have felt compelled to announce, “This wasn’t my idea, but Dr. Bock’s.” If I had come across the idea in an issue of Teaching Ethics, I wouldn’t have, either. Similarly, when I consult with AI, yet use my human judgment to decide and customize what the class does, that feels like a praiseworthy rather than shameful act. It confirms that I want my students to have an enjoyable, worthwhile experience, and that I’m willing to invest extra time to ensure they do. And since I’m balancing obligations to my students with obligations to others (to play with and be a good dad to my kids, to devote time and attention to my wife, to do a little to better the world by promoting Ethics Bowl), becoming a better professor faster by leveraging AI seems OK.
That said, I heavily customized the assignment, had already created my own lecture notes based on my direct reading of the passage, and picked out a brief video for us to watch. It’s not as if this were a new subject to me, or as if I’d had AI generate a full script that I followed word-for-word. That would have been dishonest and irresponsible, since AI tends to “hallucinate” (get things wrong).
Also, as you read above, I encourage my students to use AI as a personal tutor and thought partner. If I enforced a strict student AI ban (or tried to enforce one—preventing and proving AI abuse is very difficult), secretly using it myself would indeed seem hypocritical. But perhaps, since I’m already an expert in my field and they’re still learning, it could be OK for me to use external resources yet insist my students not? Maybe.
Cool, timely topic—thank you, case committee! Below is a study guide from coach Michael Andersen. Between it and the kickstart ideas above, coaches, teams, and judges should have more than enough to make quality progress on this one. And don’t neglect overlap with regional case 10: Calling Dr. Alexa, on the benefits, drawbacks, permissibility, and risks of using AI as a personal therapist. I’m less enthusiastic about current AI’s potential in that area, but you be the judge.
