A study released on June 19, 2025, by researchers at the Massachusetts Institute of Technology’s (MIT) Media Lab points to a potential link between sustained use of large language models (LLMs) such as ChatGPT and a measurable decline in cognitive function.
The study used a controlled experiment to assess the impact of AI tools on human cognition. Researchers recruited participants and asked them to write multiple SAT essays, a standardized task that demands complex linguistic and analytical skills. To isolate the effects of AI, participants were divided into three groups: the first had access to OpenAI’s ChatGPT, a prominent large language model, for assistance; the second relied on Google’s search engine, a more traditional digital research tool; and the third, termed the “brain-only” group, completed the essays without any digital aids, relying solely on their own cognitive abilities.
To measure brain activity and engagement during the writing process, the researchers used electroencephalography (EEG), a non-invasive technique that records the brain’s electrical activity and provides insight into neural engagement across multiple brain regions. EEG monitoring allowed the MIT team to track how each group’s brains responded to the cognitive demands of the task over several months.
The results revealed significant disparities in brain engagement and performance among the three groups. Participants who used ChatGPT exhibited the lowest levels of brain engagement and, according to the study’s published findings, “consistently underperformed at neural, linguistic, and behavioral levels.” Researchers also observed a notable shift in this group’s habits: early in the study, participants used the LLM to pose structural questions, seeking guidance on essay organization and thematic development, but as the study progressed they increasingly copied and pasted essay content generated by ChatGPT, indicating a diminishing reliance on their own analytical and writing capacities.
In contrast, the group that used Google’s search engine demonstrated a moderate level of brain engagement. Although they relied on an external tool, a search engine still requires users to sift through information, synthesize it, and formulate their own responses, demanding more active cognitive involvement than direct content generation. The “brain-only” group, as anticipated, showed the “strongest, wide-ranging networks” of brain activity, reinforcing the view that engaging in complex cognitive tasks without external shortcuts stimulates a broader array of neural connections.
These findings suggest that heavy reliance on LLMs such as ChatGPT could harm critical thinking and overall cognitive function over time. The concern is particularly acute for younger users, whose brains are still in crucial developmental stages. The study’s release comes as educators worldwide grapple with the integration of AI tools into academic environments, navigating the fine line between leveraging technology for learning and preventing its misuse for academic dishonesty.
Nataliya Kosmyna, the lead author of the study, voiced her urgent concerns regarding the implications of these findings for educational policy. Speaking to TIME, Kosmyna stated, “What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental.” She emphasized the particular vulnerability of young individuals, adding, “Developing brains are at the highest risk.”
Despite these cautionary findings, the push to incorporate artificial intelligence into educational frameworks is accelerating rather than slowing. As recently as April, President Trump signed an executive order aimed at integrating AI tools into classrooms across the United States. White House staff secretary Will Scharf explained the rationale: “The basic idea of this executive order is to ensure that we properly train the workforce of the future by ensuring that school children, young Americans, are adequately trained in AI tools, so that they can be competitive in the economy years from now into the future, as AI becomes a bigger and bigger deal.” The policy aims to equip the future workforce with skills deemed essential for an economy increasingly shaped by AI, setting up a tension between the perceived benefits of AI literacy and the potential cognitive drawbacks highlighted by the MIT research.
The MIT study is a notable contribution to the ongoing discourse surrounding artificial intelligence and its impact on human cognition. As AI technologies become more sophisticated and more deeply integrated into daily life, these findings underscore the importance of understanding their long-term effects, particularly in educational settings where the minds of future generations are being shaped.