Complex AI Models Generate Significantly More CO₂ Emissions

by Tekmono Editorial Team
02/07/2025
in News

A recent study published in Frontiers in Communication has revealed that the environmental impact of artificial intelligence varies significantly with both the model being queried and the complexity of the prompt, with reasoning-oriented models generating substantially more CO₂ than their concise counterparts.

The study, which evaluated 14 different Large Language Models (LLMs) using a standardized set of 500 questions, found a direct correlation between the number of “thinking tokens” generated by a model per query and its associated CO₂ emissions. Maximilian Dauner, a PhD student at Hochschule München University of Applied Sciences and a lead author of the paper, emphasized that “the environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions.”

The findings indicate that reasoning models, which possess larger training sets and require more processing time, produced substantially higher CO₂ outputs. In some instances, these sophisticated models generated up to 50 times the emissions of concise models. This disparity is further exacerbated by the complexity of the questions posed; open-ended or intricate queries resulted in a larger carbon footprint compared to simpler prompts.

Reasoning models, sometimes referred to as “thinking models,” are optimized for tackling complex tasks that necessitate logic, step-by-step breakdowns, or detailed instructions. These models employ what LLM researchers term “chain-of-thought” processing, allowing them to respond more deliberately and generate more human-like responses, albeit with the trade-off of increased processing time and higher energy consumption.
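
To see why that trade-off shows up directly in output length, consider the purely hypothetical exchange below: a minimal Python sketch in which the question and both answers are illustrative examples of ours, not material from the study, and a simple word count stands in for tokens.

    # Illustrative only: the same question answered concisely vs. with
    # explicit chain-of-thought reasoning. Word count stands in for tokens.

    question = "A train travels 120 km in 1.5 hours. What is its average speed?"
    print("Q:", question)

    concise_answer = "80 km/h."

    chain_of_thought_answer = (
        "Average speed is distance divided by time. "
        "The distance is 120 km and the time is 1.5 hours. "
        "120 / 1.5 = 80, so the average speed is 80 km/h."
    )

    for label, answer in [("concise", concise_answer),
                          ("chain-of-thought", chain_of_thought_answer)]:
        print(f"{label}: ~{len(answer.split())} words")

    # The reasoning-style answer is several times longer, which is what
    # drives the extra processing time and energy described above.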

The researchers conducted their testing in two phases: initially with multiple-choice questions, followed by free-response prompts. On average, reasoning models generated an astonishing 543.5 tokens per question, a stark contrast to the mere 37.7 tokens produced by concise models. For example, “Cogito,” identified as the most accurate reasoning model examined, produced three times as much CO₂ as similarly sized models optimized for concise responses.
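
As a rough sense of scale, dividing those two averages gives the relative output volume directly. This is a back-of-envelope sketch; the assumption that energy use, and hence CO₂, grows roughly in proportion to the number of generated tokens is ours, not a result stated by the study.

    # Average output lengths reported in the article.
    reasoning_tokens_per_question = 543.5   # reasoning ("thinking") models
    concise_tokens_per_question = 37.7      # concise models

    # Assumption (not from the study): energy use, and therefore CO2,
    # scales roughly linearly with the number of tokens generated.
    ratio = reasoning_tokens_per_question / concise_tokens_per_question
    print(f"Reasoning models generate ~{ratio:.1f}x the tokens per question")
    # -> roughly 14x, before accounting for model size or hardware differences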

While the difference in emissions per individual prompt may appear marginal, the cumulative effect at scale is significant. The study projects that asking DeepSeek’s R1 model 600,000 questions would generate approximately the same amount of CO₂ as a round-trip flight from London to New York. In comparison, the non-reasoning Qwen 2.5 model could answer three times as many questions before reaching an equivalent emission level.
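
A short back-of-envelope sketch makes that scaling concrete. The 600,000-question figure and the three-fold comparison come from the article; the figure of roughly 1,000 kg of CO₂ per passenger for a round-trip London–New York flight is a commonly cited outside estimate, not a number from the study, so the per-query values below are purely illustrative.

    # Illustrative scaling of per-query emissions against a familiar benchmark.
    FLIGHT_CO2_KG = 1000.0              # assumed round-trip flight footprint per passenger
    R1_QUESTIONS_PER_FLIGHT = 600_000   # figure reported in the article

    # Implied average emissions per DeepSeek R1 query under that assumption.
    r1_grams_per_query = FLIGHT_CO2_KG * 1000 / R1_QUESTIONS_PER_FLIGHT
    print(f"DeepSeek R1: ~{r1_grams_per_query:.2f} g CO2 per question")

    # Qwen 2.5 reportedly answers ~3x as many questions for the same total,
    # i.e. roughly one third of the per-query cost.
    qwen_grams_per_query = r1_grams_per_query / 3
    print(f"Qwen 2.5:    ~{qwen_grams_per_query:.2f} g CO2 per question")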

These findings emerge amidst a fierce global competition among tech giants to develop increasingly advanced AI models. The escalating demand for AI-driven infrastructure is poised to place considerable strain on existing energy grids. Over the past year, major tech companies have announced significant investments in manufacturing and data centers, with Apple planning to invest $500 billion and Project Stargate pledging an equivalent amount toward AI-focused data centers.

The Electric Power Research Institute (EPRI) estimates that data centers supporting advanced AI models could account for up to 9.1 percent of the United States’ total energy demand by the end of the decade, a significant increase from approximately 4.4 percent today. To meet this burgeoning energy demand, major tech companies are exploring diverse power generation strategies, including partnerships with nuclear power plants and investments in geothermal technology and nuclear fusion.

However, the researchers believe their findings can empower everyday AI users to mitigate their carbon impact. By understanding the significantly higher energy intensity of reasoning models, users could opt to use them more sparingly, relying on concise models for general daily tasks. Dauner emphasized this point, stating, “If users know the exact CO₂ cost of their AI-generated outputs, they might be more selective and thoughtful about when and how they use these technologies.”
