March 1, 2026 • 5 min (est.) • Vol. 83 • No. 6

When Are AI Shortcuts Bad for Student Writing?

A trio of recent studies from around the world seems to flash a warning: the very thing generative AI does so well—reducing the cognitive load of curating information and wrangling words onto a page—may be exactly what makes it bad for learning.
The studies were grounded in cognitive load theory—the concept that learning requires working memory to juggle three types of mental load: intrinsic (dealing with the complexity of the material), extraneous (filtering out irrelevant details and distractions), and germane (processing information and building mental schema to anchor learning). For writing tasks, one might think AI tools could reduce intrinsic and extraneous load by curating information and presenting it in a digestible way so students’ brains can focus on the germane load of writing about what they’ve learned.
Yet when German researchers randomly assigned university students to research and write about the potential risks of nanoparticles in sunscreen using either a traditional web search or ChatGPT, students using ChatGPT wrote less accurate and nuanced responses despite finding the task to be much easier (Stadler et al., 2024). Students using traditional web searches, on the other hand, found the task harder and had to think more deeply about the content, but had stronger written responses.
Researchers in China found similar results after examining the impact of university students using AI to support revisions to their writing (Fan et al., 2025). They randomly sorted students into four types of support for revising essays: (1) a ChatGPT bot aligned to a grading rubric, (2) live online chats with a writing expert, (3) a self-evaluation checklist based on the rubric, and (4) no support at all. The essays from the students who used ChatGPT were scored highest of all four groups, yet those students engaged in less metacognitive evaluation; many simply cut and pasted text from ChatGPT into their revisions. Although they got a better grade, that score didn’t reflect any greater knowledge of the content than the other groups.
At issue seems to be what students think (or don’t think) about while writing. Researchers in India measured students’ brain waves as they engaged in a writing task with and without AI and found lower brain wave activity when students relied on AI (Dhawan et al., 2025). Those students could also recall few details of what they had read or written about.
What should educators do with these findings? First, replace writing tasks that AI can readily do (e.g., summarizing historical events) with tasks it cannot (e.g., drawing on personal ethics to develop and defend historical arguments). Second, help students understand that although AI makes researching and writing easier, easier doesn’t mean better. Ultimately, students only learn what they think about, which is why the process of writing matters more than the product: Writing forces us to do the tough mental work of arranging concepts into our own mental schemas and become, in a word, educated.
References

Dhawan, N., Bhasin, S., Gupta, A., & Khalkho, J. T. (2025). Understanding the impact of AI on the cognitive thinking of students. SSRN, 5433395.

Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., et al. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56, 489–530.

Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior, 160, 108386.






