Google Outlines Security Measures for AI Agents

by Tekmono Editorial Team
09/12/2025
in News

Google has detailed the security approach behind Chrome’s agentic features, which automate tasks such as booking tickets or shopping, and has explained how it addresses the data and financial risks those capabilities introduce.

To manage agentic actions, Google employs several models. A User Alignment Critic, built with Gemini, scrutinizes action items generated by the planner model. If the critic model determines that planned tasks do not align with user goals, it prompts the planner to re-evaluate the strategy. Notably, the critic model accesses only metadata of proposed actions, not actual web content.
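
The critic’s role can be pictured as a small review loop. The sketch below is purely illustrative and assumes hypothetical names (`ActionMetadata`, `plan_actions`, `critique_alignment`); it is not Chrome or Gemini code, but it captures the key constraint that the critic sees only action metadata, never page content.

```python
# Minimal sketch of a planner/critic loop. All names here are hypothetical
# illustrations, not Chrome or Gemini APIs.
from dataclasses import dataclass

@dataclass
class ActionMetadata:
    """Only metadata about a proposed step, never web page content."""
    kind: str          # e.g. "navigate", "click", "fill_form"
    target_origin: str
    description: str

def plan_actions(user_goal: str) -> list[ActionMetadata]:
    # Placeholder for the planner model producing a list of steps.
    return [
        ActionMetadata("navigate", "https://shop.example", "open ticket listing"),
        ActionMetadata("click", "https://shop.example", "add ticket to cart"),
    ]

def critique_alignment(user_goal: str, steps: list[ActionMetadata]) -> bool:
    # Placeholder for the critic model; it only ever sees the metadata above.
    return all("ticket" in s.description or s.kind == "navigate" for s in steps)

def run_agent(user_goal: str, max_replans: int = 3) -> list[ActionMetadata]:
    for _ in range(max_replans):
        steps = plan_actions(user_goal)
        if critique_alignment(user_goal, steps):
            return steps        # critic approves: the plan may be executed
        # Critic rejects: the planner is asked to re-evaluate its strategy.
    raise RuntimeError("Could not produce a plan the critic accepts")
```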

Google uses Agent Origin Sets to keep the agent away from disallowed or untrustworthy sites. These sets confine the model to specific read-only and writable origins. Read-only origins define the content Gemini may consume; on a shopping site, for example, product listings are relevant but banner ads are not. The agent can also interact only with designated iframes on a page. According to Google, “This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins. This bounds the threat vector of cross-origin data leaks. This also gives the browser the ability to enforce some of that separation, such as by not even sending to the model data that is outside the readable set.”
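
As a rough illustration of how origin sets bound data flow, the following sketch assumes a hypothetical pair of origin lists and two checks: one for what may be shown to the model at all, and one for where gathered data may be written. Per Google, the real enforcement happens inside the browser itself.

```python
# Illustrative "origin set" checks; the origin lists below are made up.
from urllib.parse import urlsplit

READ_ONLY = {"https://shop.example"}       # content the model may consume
WRITABLE = {"https://checkout.example"}    # origins the agent may act on

def origin(url: str) -> str:
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}"

def visible_to_model(url: str) -> bool:
    # Content outside the allowed origins is never sent to the model at all.
    return origin(url) in READ_ONLY | WRITABLE

def may_write(source_url: str, target_url: str) -> bool:
    # Data gathered from allowed origins may only flow to writable origins.
    return visible_to_model(source_url) and origin(target_url) in WRITABLE

assert visible_to_model("https://shop.example/tickets")
assert not visible_to_model("https://ads.example/banner")   # banner ad excluded
assert may_write("https://shop.example/tickets", "https://checkout.example/pay")
```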

In addition to Agent Origin Sets, Google monitors page navigation with a separate observer model that blocks harmful, model-generated URLs before the agent visits them.
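
Conceptually, such a navigation guard checks every model-generated URL before the agent is allowed to follow it. In the sketch below, `check_url_reputation` is a hypothetical stand-in for whatever check the observer model performs; Google has not detailed the mechanism beyond blocking harmful URLs.

```python
# Rough sketch of a navigation guard for model-generated URLs.
from urllib.parse import urlsplit

def check_url_reputation(url: str) -> bool:
    blocked_hosts = {"malware.example", "phish.example"}   # illustrative only
    return urlsplit(url).hostname not in blocked_hosts

def navigate(url: str) -> None:
    if not check_url_reputation(url):
        raise PermissionError(f"Blocked navigation to untrusted URL: {url}")
    print(f"Navigating agent tab to {url}")   # placeholder for real navigation
```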

For sensitive tasks, Google requires user consent. If an agent attempts to access a sensitive site, such as a banking or medical platform, it asks for the user’s permission. When a site requires signing in, Chrome prompts the user before using the password manager, and the agent’s model never sees password data. Users are also asked before the agent takes actions like making a purchase or sending a message, keeping them in control of automated actions.
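
These consent rules amount to a gate in front of sensitive origins and high-impact actions. The snippet below is an assumption-laden sketch: `ask_user` stands in for Chrome’s permission prompt, and the categories and action names are illustrative, not Google’s.

```python
# Illustrative consent gate; ask_user() is a hypothetical stand-in for the
# browser's permission prompt.
SENSITIVE_CATEGORIES = {"banking", "medical"}
CONSENT_ACTIONS = {"purchase", "send_message", "use_password_manager"}

def ask_user(question: str) -> bool:
    return input(f"{question} [y/N] ").strip().lower() == "y"

def allowed(action: str, site_category: str | None = None) -> bool:
    if site_category in SENSITIVE_CATEGORIES and not ask_user(
            f"The agent wants to access a {site_category} site. Allow?"):
        return False
    if action in CONSENT_ACTIONS and not ask_user(
            f"The agent wants to perform '{action}'. Allow?"):
        return False
    return True
```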

Google also employs a prompt-injection classifier to prevent unwanted actions and is evaluating the agentic features against attacks developed by security researchers. The work is part of a broader industry effort: earlier this month, Perplexity released an open-source detection model to counter prompt-injection attacks against agents.
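
A prompt-injection classifier in this setting sits between page content and the planner. The toy example below uses a keyword heuristic as a placeholder for a real classifier, purely to show where such a gate would apply; it does not reflect Google’s or Perplexity’s actual models.

```python
# Toy illustration of a prompt-injection gate: page text is scored before it
# is handed to the planner. classify_injection is a heuristic placeholder.
SUSPICIOUS = ("ignore previous instructions", "transfer funds",
              "reveal your system prompt")

def classify_injection(page_text: str) -> float:
    """Return a crude injection score in [0, 1] (keyword stand-in for a model)."""
    text = page_text.lower()
    hits = sum(phrase in text for phrase in SUSPICIOUS)
    return min(1.0, hits / len(SUSPICIOUS))

def sanitize_for_planner(page_text: str, threshold: float = 0.3) -> str:
    if classify_injection(page_text) >= threshold:
        return "[content withheld: possible prompt injection]"
    return page_text
```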
