EU AI Act Imposes New Rules on GPAI Models

by Tekmono Editorial Team
02/08/2025
in News

From August 2, 2025, providers of general-purpose artificial intelligence (GPAI) models operating within the European Union will be subject to specific sections of the EU AI Act, mandating the maintenance of up-to-date technical documentation and summaries of training data.

The EU AI Act, published in the EU’s Official Journal on July 12, 2024, and effective as of August 1, 2024, establishes a risk-based regulatory framework to ensure the safe and ethical use of AI across the EU. This framework categorizes AI systems based on their potential risks and impact on individuals.

While the regulatory obligations for GPAI model providers are set to commence on August 2, 2025, a one-year grace period is provided for compliance, deferring the risk of penalties until August 2, 2026.

From August 2, 2025, GPAI model providers must comply with five core sets of rules, covering notified bodies, GPAI models, governance, confidentiality, and penalties.

Providers of high-risk GPAI models are required to engage with notified bodies for conformity assessments, aligning with the regulatory structure supporting these evaluations. High-risk AI systems are defined as those posing significant threats to health, safety, or fundamental rights. These systems are either used as safety components of products governed by EU product safety laws or deployed in sensitive use cases, including biometric identification, critical infrastructure management, education, employment and HR, and law enforcement.

GPAI models, which serve multiple purposes, are considered to pose “systemic risk” if the cumulative compute used to train them exceeds 10^25 floating-point operations (FLOP) and they are designated as such by the EU AI Office. Examples of models fitting these criteria include OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini. All GPAI model providers must maintain technical documentation, training data summaries, copyright compliance policies, guidance for downstream deployers, and transparency measures regarding capabilities, limitations, and intended use. Providers of GPAI models that pose systemic risk must also conduct model evaluations, report incidents, implement risk mitigation strategies and cybersecurity safeguards, disclose energy usage, and carry out post-market monitoring.
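For a sense of scale, cumulative training compute is often estimated with the 6 × parameters × tokens rule of thumb from the scaling-laws literature. The sketch below applies that heuristic to a hypothetical model; both the heuristic and the example figures are illustrative assumptions, not a method or data from the Act itself.

```python
# The Act's systemic-risk trigger is cumulative training compute
# above 10^25 floating-point operations (FLOP).
THRESHOLD_FLOP = 10**25

def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D heuristic.

    This heuristic comes from the scaling-laws literature; the Act
    itself does not prescribe an estimation method.
    """
    return 6 * parameters * tokens

# Hypothetical 70B-parameter model trained on 15T tokens:
compute = estimated_training_flop(70e9, 15e12)  # 6.3e24 FLOP
print(compute >= THRESHOLD_FLOP)  # False -> below the systemic-risk line
```

Under this estimate, a 70B-parameter model trained on 15 trillion tokens would land below the threshold; training runs roughly twice as large would cross it.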

The governance rules define the enforcement architecture at both the EU and national levels. GPAI model providers will need to cooperate with the EU AI Office, the European AI Board, the Scientific Panel, and national competent authorities in fulfilling their compliance obligations, responding to oversight requests, and participating in risk monitoring and incident reporting.

Any data requests made to GPAI model providers by authorities must be legally justified, securely handled, and subject to confidentiality protections, particularly for intellectual property (IP), trade secrets, and source code.

Non-compliance with prohibited AI practices under Article 5, such as manipulating human behavior, social scoring, facial recognition data scraping, or real-time biometric identification in public, can result in penalties of up to €35,000,000 or 7% of the provider’s total worldwide annual turnover, whichever is higher. Other breaches of regulatory obligations, such as those related to transparency, risk management, or deployment responsibilities, may result in fines of up to €15,000,000 or 3% of turnover. Supplying misleading or incomplete information to authorities can lead to fines of up to €7,500,000 or 1% of turnover.

For Small and Medium Enterprises (SMEs) and startups, the lower of the fixed amount or percentage applies. Penalties will take into account the severity of the breach, its impact, the provider’s cooperation level, and whether the violation was intentional or negligent.
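The two-tier cap and the SME carve-out reduce to a simple max/min rule: ordinary providers face whichever figure is higher, SMEs whichever is lower. A minimal sketch, with an illustrative function name and example turnover figures not drawn from the Act's text:

```python
def fine_cap(fixed_cap_eur: int, pct: int, turnover_eur: int, sme: bool = False) -> int:
    """Maximum fine: the fixed amount or pct% of worldwide annual turnover.

    Whichever is HIGHER applies to most providers; for SMEs and
    startups, whichever is LOWER applies.
    """
    pct_based = turnover_eur * pct // 100
    return min(fixed_cap_eur, pct_based) if sme else max(fixed_cap_eur, pct_based)

# Article 5 breach, large provider with EUR 1bn worldwide turnover:
print(fine_cap(35_000_000, 7, 1_000_000_000))          # 70000000
# The same breach by an SME with EUR 10m turnover:
print(fine_cap(35_000_000, 7, 10_000_000, sme=True))   # 700000
```

The same function covers the lower tiers by swapping in the EUR 15m/3% or EUR 7.5m/1% figures.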

To facilitate compliance, the European Commission has published the AI Code of Practice, a voluntary framework that tech companies can adopt to align with the AI Act. Google, OpenAI, and Anthropic have committed to this framework, while Meta has publicly refused to do so. The Commission intends to publish supplementary guidelines to the AI Code of Practice before August 2, 2025, clarifying the criteria for companies qualifying as providers of general-purpose models and general-purpose AI models with systemic risk.

The EU AI Act is being implemented in phases. The ban on certain AI systems deemed to pose unacceptable risk, such as those used for social scoring or real-time biometric surveillance in public, came into effect on February 2, 2025. Companies must also ensure their staff have a sufficient level of AI literacy.

  • August 2, 2026: GPAI models placed on the market after August 2, 2025, must be compliant. Rules for certain listed high-risk AI systems also apply to systems placed on the market after this date, and to systems placed on the market before it that have since undergone substantial modification.
  • August 2, 2027: GPAI models placed on the market before August 2, 2025, must be fully compliant. High-risk systems used as safety components of products governed by EU product safety laws must also meet their stricter obligations by this date.
  • August 2, 2030: all AI systems used by public sector organizations that fall under the high-risk category must be fully compliant.
  • December 31, 2030: AI systems that are components of specific large-scale EU IT systems and were placed on the market before August 2, 2027, must be brought into compliance.

A group representing Apple, Google, Meta, and other companies had requested that regulators postpone the Act’s implementation by at least two years; however, this request was rejected by the EU.
