AI and Ethics Bowl: Enhancing or Undermining Critical Thinking Skills?

The following article, including the title, was written by ChatGPT. Why? Earlier this semester I caught a student using AI to complete their weekly reflection prompt in my Intro to Ethics course. I gave them a zero, but was encouraged by Ethics Olympiad (Australian) students, a retired mentor (thanks, Gary!), and others to try AI for myself and to think about how we might use it symbiotically: as a valuable aid rather than an enemy or competitor. Enjoy, Matt

As artificial intelligence (AI) continues to advance and become more integrated into our daily lives, ethical questions about its use and development are becoming more pressing. One area where these questions are particularly relevant is in the field of applied ethics, where AI is being used to think through complex ethical cases. One example of this is Ethics Bowl, a competition in which teams of students analyze and debate ethical dilemmas. Some teams are now using AI tools to help them prepare for Ethics Bowl, raising important ethical questions about the role of AI in ethical decision-making.

On the one hand, using AI to analyze ethical cases could be seen as a positive development. AI tools can quickly identify relevant ethical principles and concepts, help teams to identify potential consequences of different courses of action, and provide guidance on how to approach difficult ethical decisions. This can be especially helpful for students who are new to the field of applied ethics and may not have as much experience thinking through complex ethical dilemmas.

However, there are also several potential ethical concerns associated with using AI to analyze ethical cases. One concern is that relying too heavily on AI could lead to a reduction in critical thinking skills. If students become overly reliant on AI tools to identify ethical principles and make decisions, they may be less likely to develop the skills needed to think through ethical dilemmas on their own. This could have negative implications for their future careers and their ability to make sound ethical decisions in a wide range of contexts.

Another concern is that AI may not always be able to accurately capture the nuances of ethical decision-making. Ethical decision-making often involves considering a wide range of factors, such as the consequences of different courses of action, the values and beliefs of different stakeholders, and the potential impact of decisions on different communities. AI may not be able to fully capture these nuances, which could lead to oversimplification of complex ethical dilemmas.

Additionally, there are concerns about the potential biases that may be present in AI tools. AI algorithms are only as unbiased as the data they are trained on, and if the data used to train the AI is biased, the AI may produce biased results. This could have negative implications for Ethics Bowl competitions, where unbiased analysis and decision-making are essential.

In conclusion, while using AI to analyze ethical cases may have some benefits, it also raises important ethical questions about the role of AI in ethical decision-making. As AI continues to develop and become more integrated into our lives, it will be important to carefully consider the ethical implications of its use in fields like applied ethics. Ultimately, the goal should be to use AI tools in a way that enhances our ability to make sound ethical decisions, while also preserving our ability to think critically and to recognize the nuances of complex ethical dilemmas.

One Reply to “AI and Ethics Bowl: Enhancing or Undermining Critical Thinking Skills?”

  1. Is it my imagination, Matt, or does that AI-generated image of the thinking woman look a little like the bust of Socrates combined with Rodin’s “The Thinker”? Is that a clever coincidence, or merely predictable?

    The ChatGPT “author” of this piece raises some good points about the danger of AI-assisted Ethics Bowl prep (ironically?). While a bit formulaic in wording and organization, its points about potential student laziness and complacency about the biased algorithms found in AI tools are worthy food for thought.

    Perhaps these concerns are overblown, though? I really don’t worry much about AI systems taking over some repetitive tasks we used to accomplish through rote memory (like remembering phone numbers, which I’m happy to let my phone do for me), or about the automated spell and grammar checkers embedded in our word processing programs. Some applications of automated systems genuinely do free up our minds for more useful and creative pursuits, like the suggestions for news articles, books, or movies that the algorithms of sites like Allsides Media, Amazon, or Netflix offer me.

    However, the biggest danger I see, and one not emphasized strongly enough by the ChatGPT “author” of this essay, is our modern tendency to place too much trust in the recommendations, advice columns, or think pieces generated by AI systems. …And what exactly is the threshold at which we become too trusting? At what point do we stop thinking for ourselves enough that we become dangerously complacent with the formulaic and biased AI-generated content on offer? As AI systems become more sophisticated, fed by broader, deeper, and more fine-tuned datasets, will there come a point when we can no longer reliably judge what counts as “formulaic” or “biased” content? …What kind of persons will we become under such AI tutelage?

    More unnerving, I think, is how quickly many Americans, both adults and youth, seem to stop caring about these “AI worries” altogether. When our minds become so inundated with “great content” catered to our tastes and interests by the untiring (and relentless) AI assistants of our electronic devices, and by the pervasive networks we rely on daily to get by in school or the workplace, will our individual and collective will to marshal the instincts of skepticism and healthy suspicion gradually fade away? After all, it’s becoming so EASY and CONVENIENT just to relent, to relinquish the tiresome and effort-filled practice of thinking for oneself. Why bother, really? Can’t we talk about something more pleasant?

    …Maybe the next-gen AI counselors can console us and help alleviate the anxieties raised by these irksome concerns? Perhaps, like the AI character Samantha in Spike Jonze’s film “Her,” we will be so enthralled, so impressed by our AI assistants, so happy to surrender our agency and burdens to them, that we might just fall in love.
