While AI use is ever-increasing amongst students, for many of us it is accompanied by a sense of fear - or guilt. Whether it's guilt about the environmental implications of AI, or fear that over-reliance on it is weakening our capacity to think critically, many of us feel… uncomfortable.
Conceivably, this discomfort arises from cognitive dissonance: our actions aren’t aligned with our beliefs, ideas or values. Within our AI-saturated environment, this phenomenon has been described as “AI guilt”. As Pratika Katiyar put it: “It feels impossible to succeed without AI - and equally impossible to ignore the consequences of using it”.
Is the solution for students to stop using ChatGPT altogether, or does the guilt arise only when using it in specific ways?
Since its release in 2022, ChatGPT has rapidly become an essential tool for students, with studies suggesting that 86% of students use AI tools weekly, including 71% of those in higher education.
Meanwhile, most students are ethically troubled by the damage that excessive AI use causes, particularly its environmental costs. A 2019 study from the University of Bath revealed that 41% of young people aged 16-25 were hesitant to have children due to environmental fears - and yet it is widely claimed that a single ChatGPT query uses around ten times more energy than a simple web search.
For many of us, AI guilt arises not only from the environmental consequences of AI use, but from the sense of dependency we feel while using the tool. As Olson put it, “something troubling is happening to our brains as artificial intelligence platforms become more popular”. Heavy use erodes our critical thinking skills and motivation, essential qualities in any good student. Fundamentally, AI is creating a “cheating utopia”: a survey of 1,000 university students, conducted just two months after ChatGPT’s launch, found that 90% had used the chatbot to help with homework assignments.
Here, then, we find two guilt-inducing phenomena: the first is over-reliance on AI, and the second is the production of poor-quality work.
The first phenomenon - the cognitive implications of AI use - is receiving increasing academic attention. A recent study, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”, revealed that those who turned to AI for tasks like researching and writing exhibited lower metacognitive awareness and analytical reasoning. In other words, every time we use ChatGPT to provide answers, our critical and analytical skills are slightly weakened.
The second phenomenon can be summarised in what Harvard is calling “workslop”, defined as “AI-generated work content that masquerades as good work, but lacks substance”. In other words, AI is being used to create bad essays and bad research: low-effort, passable-looking work that lacks depth and imagination. We inevitably feel guilty when submitting work that isn’t actually ours, and that guilt is doubled when the work itself isn’t even good.
So, is the solution to stop using ChatGPT altogether? Not necessarily.
To reduce the negative implications of ChatGPT use, we must begin to use AI less often and more selectively.
Let’s not see ChatGPT as an answer-giver, but rather as an occasional aid. This means using it to tidy up grammar, clarify complex topics, organise one’s notes or generate practice questions. “Used wisely, it can be a powerful self-study tool,” says Megan Chin, a master’s student in technology policy at Cambridge.
Ultimately, ChatGPT should never be the source of one’s answers or truths, as this can lead to intellectual complacency. AI must always be approached critically: students must question its assumptions and seek alternative viewpoints. As a UAL spokesperson put it: “approach it with both curiosity and awareness”.
Let’s use AI less often - for the sake of both the environment and our academic fulfilment. Let AI be your intellectual assistant, not your intellectual replacement.