ChatGPT Overreliance in College Erodes Critical Thinking Skills

Tufts senior Ben Borgers blogged about ChatGPT's disruption of his Engineering Psychology courses, where students overrelied on AI for essays and discussions, eroding critical thinking and raising academic-integrity concerns. Studies confirm AI boosts efficiency but can undermine deep learning. Academia must integrate AI ethically to preserve intellectual rigor.
Written by Mike Johnson

In the hallowed halls of Tufts University, where computer science meets the intricacies of human-technology interaction, senior Ben Borgers found himself grappling with an unexpected disruptor: ChatGPT. As detailed in his personal blog post on benborgers.com, Borgers recounts how the AI tool permeated his Engineering Psychology courses, a field akin to UX design that examines how people engage with technology. What began as subtle integrations escalated into a semester marred by overreliance, raising questions about academic integrity and the true value of learning in an AI-augmented era.

Borgers, who also works as an engineer at Buttondown and has interned at Notion and Locket, observed classmates turning to ChatGPT for everything from essay outlines to full responses in group discussions. In one instance, he describes a peer submitting AI-generated work that lacked the nuanced understanding required for psychological analyses of user interfaces. This wasn’t isolated; professors noted a spike in suspiciously polished submissions, prompting impromptu oral defenses to verify authenticity.

The Rise of AI in Classroom Dynamics

The phenomenon Borgers describes aligns with broader trends documented in higher education. A study published in the Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, accessible via the ACM Digital Library, conducted focus groups with East Coast students who used ChatGPT extensively over a semester. Researchers found that while AI boosted efficiency, it often eroded critical thinking, with participants admitting to bypassing deep research in favor of quick prompts. This echoes Borgers' experience, in which group projects devolved into debates over AI ethics rather than substantive collaboration.

Yet, not all integrations are detrimental. Recent updates from OpenAI, as reported in the OpenAI Help Center, introduce “Study Mode,” a free feature now available in India and expanding globally, designed to foster interactive learning without spoon-feeding answers. According to a Times of India article from last week, this mode encourages step-by-step problem-solving, potentially addressing the cognitive pitfalls highlighted in an MIT study covered by CBS News, which warns of reduced brain activity from AI overreliance.

Professors’ Dilemma and Student Backlash

Faculty responses add another layer of complexity. A May 2025 New York Times piece titled "The Professors Are Using ChatGPT, and Some Students Aren't Happy About It," available at nytimes.com, reveals instructors leveraging AI for lesson planning and grading, only to face accusations of hypocrisy from students, including a Northeastern senior who demanded a tuition refund. Borgers' account subtly nods to this tension, as his professors implemented AI-detection tools mid-semester, sparking unease among those who viewed ChatGPT as a legitimate aid.

Industry insiders in edtech see this as a pivotal moment. Posts on X (formerly Twitter) from educators and tech enthusiasts, such as those promoting Harvard’s CS50 AI workshops, reflect a growing sentiment that structured AI integration—through courses on prompting and ethical use—could transform curricula. For instance, recent X discussions highlight OpenAI’s “Study Together” mode, tested among Plus users as noted in a BGR report from last month, aiming to make AI a collaborative tutor rather than a crutch.

Navigating Ethical Boundaries in AI Education

Borgers' semester underscores the ethical tightrope: AI democratizes access to knowledge but risks commodifying education. The CHI study emphasizes participatory design sessions in which students co-created AI-use guidelines, suggesting universities adopt similar frameworks to mitigate misuse. Meanwhile, adoption statistics cited by the Times of India indicate ChatGPT handles 2.5 billion prompts daily, 330 million of them from the U.S., signaling an irreversible shift.

As Borgers prepares to graduate, his reflections serve as a cautionary tale for academia. Institutions must evolve their policies, perhaps drawing on OpenAI's release notes, which prioritize educational expansions such as Study Mode for Edu plans. Without proactive measures, the line between enhancement and erosion blurs, potentially diminishing the intellectual rigor that defines higher learning. For tech leaders, this narrative from benborgers.com isn't just a student's lament; it's a blueprint for redesigning AI's role in education to preserve human ingenuity.
