If you’ve ever asked ChatGPT to summarize a reading for class, or used it to “inspire” your research outline, you’re not alone. AI has quietly crept into study habits, classrooms, and laboratories—not just helping students and professionals, but in many cases, doing the actual thinking for them. It’s pitched as a productivity boost, a shortcut, a “study buddy.” But let’s be honest: it’s not studying if a machine does it for you.

Let me make my position clear. I’m not against AI, but I am against how it’s increasingly being used to replace genuine thought, creativity, and labor, especially in education and creative fields, where original work should matter most. It’s frustrating to see people submit AI-written essays, generate AI art for contests, or even use AI to simulate experiments or write scientific papers, and then profit from it. The idea that this is just “efficiency” is a dangerous illusion. What’s really happening is the slow erosion of intellectual and creative integrity.

Studies back this up. A 2024 paper in Computers & Education: Artificial Intelligence found that students who relied on large language models (LLMs) like ChatGPT for academic tasks showed significantly lower comprehension and long-term retention. Another study published in Nature Human Behaviour showed that frequent AI use reduced users’ originality and critical thinking over time. We’re not just outsourcing tasks; we’re outsourcing the mental muscle that learning is supposed to develop.

This isn’t just about personal growth. It’s about fairness. Detection tools meant to identify AI-generated content often fail, especially when the content is lightly edited. A 2023 study in Patterns found that GPT detectors struggle with high false-positive and false-negative rates, making it difficult to flag academic dishonesty accurately. That means students who cheat with AI often get away with it, while others, doing the real work, are graded on an uneven playing field. It’s like submitting an essay for a scholarship, only to lose to someone who copy-pasted theirs and still got the award.

But the damage isn’t limited to classrooms. In the arts, sciences, and even journalism, AI is rapidly displacing human contributors. Generative AI models like GPT-4, Midjourney, and Gemini are trained on massive datasets scraped from books, artwork, articles, and code, much of it without consent. A New York Times investigation in late 2023 revealed that OpenAI used copyrighted material in training ChatGPT, raising serious legal and ethical concerns. This isn’t “learning”; it’s large-scale intellectual theft wrapped in a progress narrative.

There’s also the environmental cost. Training AI models demands enormous energy. According to a 2023 study by Stanford University’s Center for Research on Foundation Models, the carbon footprint of developing a single model like GPT-3 can be equivalent to that of five cars running for their entire lifetimes. We argue over plastic straws while our AI-generated essays and digital paintings burn through energy like there’s no tomorrow.

And no, AI isn’t just automating repetitive labor; it’s coming for the jobs that require human insight: writers, designers, therapists, even scientists. In a system where profit trumps people, companies will always choose the cheaper, faster, synthetic version, no matter how soulless the output.

In response to the growing risks of unchecked AI development, the Philippine government has started exploring regulation. Senator Pia Cayetano recently filed the “Artificial Intelligence Development Regulation Act,” which proposes ethical safeguards, regulatory oversight, and sustainability measures for AI systems. It’s a step forward, but unless it tackles root issues like data consent, environmental impact, and mass job displacement, it risks becoming just another well-meaning policy with no real bite—more patchwork than protection.

Here’s the truth: AI isn’t just a tool. It’s a cultural force, one that, if left unchecked, won’t just help us think; it will replace the need to. And in doing so, it risks flattening the rich, messy, human process of learning, creating, and discovering into a predictable, profit-driven algorithm.

So no, I don’t believe AI should be doing our homework, our art, or our research. I believe it should stay in its lane: supporting, not substituting. Because when machines start to take credit for what makes us most human—our curiosity, our creativity, our struggle to understand the world—what’s left for us to own?

We’re not just automating tasks. We’re automating thought, expression, even identity. And for what? Convenience? Clout? A few more likes on a post that a robot helped generate? If we don’t draw the line now, the future won’t be one we built; it’ll be one we quietly outsourced. Let the machines write code. Let them process numbers. But let us be the ones to imagine, to question, to create. Otherwise, we’re not using AI; we’re letting it replace us.