New guidance aimed at agent developers, architects, standards bodies and enterprises casts doubt on whether current security standards can handle even simple AI agent scenarios, warning that today's frameworks do not allow AI agents to work independently, operate across different companies, or handle complex permission sharing.
In a critical, well-researched whitepaper, “Identity Management for Agentic AI: The new frontier of authorization, authentication, and security for an AI agent world”, the OpenID Foundation’s Artificial Intelligence Identity Management Community Group (AIIM CG) calls into question whether the full capabilities of AI agents can be safely realised. The foundation, a global leader in open identity standards, uses the paper to address some of the challenges of authentication and identity management posed by AI agents.
For now at least, AI agents are outpacing security frameworks that are inadequate to govern them, even as investment flows into an autonomous future, leaving major security gaps for technology developers to address. And while society is assured that AI systems can proficiently take actions and make decisions on their own, those systems can be overloaded; when individual companies respond by creating separate identity systems, the result is fragmented infrastructure rather than interoperable standards, which further complicates secure development.
Another challenge is that AI agents look like regular users when onboarding, making it difficult to distinguish whether a human or an agent made a given decision.
Tobin South, Head of AI Agents at WorkOS, Research Fellow with Stanford’s Loyal Agents Initiative, and Co-Chair of the OpenID Foundation’s AIIM CG, agreed that “AI agents are outpacing our security systems”, and emphasised the need for industry collaboration on achieving common standards.
The onset of AI agents does not remove the need for humans to review and oversee the decisions they make. Yet the sheer volume of permission requests that humans would have to approve manually itself increases security risks.
When agents create other agents or delegate tasks, complex permission chains emerge without clear limits, as sketched below.
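The whitepaper stops short of prescribing a mechanism, but two simple rules illustrate what explicit limits could look like: each hop in the chain may only narrow the permissions it inherits, and the chain has a hard depth cap. The sketch below assumes a hypothetical `Delegation` record; none of the names come from the whitepaper.

```python
from dataclasses import dataclass

MAX_DELEGATION_DEPTH = 3  # hypothetical policy cap on chain length

@dataclass(frozen=True)
class Delegation:
    delegator: str       # who granted the authority
    delegatee: str       # the agent receiving it
    scopes: frozenset    # permissions granted; may only shrink down the chain
    depth: int           # 0 for the root grant from a human user

def delegate(parent: Delegation, child_agent: str, scopes: set) -> Delegation:
    """Derive a child delegation, enforcing attenuation and a depth limit."""
    if parent.depth + 1 > MAX_DELEGATION_DEPTH:
        raise PermissionError("delegation chain exceeds the policy depth limit")
    if not scopes <= parent.scopes:
        raise PermissionError("a child may not hold broader scopes than its parent")
    return Delegation(parent.delegatee, child_agent, frozenset(scopes), parent.depth + 1)

# Example: a human grant is progressively narrowed as agents sub-delegate.
root = Delegation("alice", "planner-agent", frozenset({"calendar:read", "email:send"}), 0)
child = delegate(root, "scheduler-agent", {"calendar:read"})
```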
Current identity models work for individuals, but agents cannot serve multiple users with different permissions in shared spaces.
Automated systems are needed to verify agent actions without constant human supervision.
Agents that control screens and browsers bypass normal security checks, potentially forcing parts of the internet to lock down against automated access.
Agents can switch between acting independently and acting on behalf of users, but current systems cannot handle this dual nature or track which mode an agent is operating in; a token pattern that would make the mode explicit is sketched below.
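One existing building block that could make the mode explicit, offered here as an illustration rather than a whitepaper recommendation, is OAuth 2.0 Token Exchange (RFC 8693), whose “act” claim records an actor working on behalf of a subject. The identifiers below are hypothetical.

```python
# Two token shapes enabled by OAuth 2.0 Token Exchange (RFC 8693); which one
# an agent carries makes its operating mode explicit and auditable.

# Delegated mode: the human stays the subject ("sub") and the agent is
# recorded as the actor in the "act" claim defined by RFC 8693.
delegated_claims = {
    "sub": "alice@example.com",
    "act": {"sub": "agent-7f3a"},
    "scope": "calendar:read",
}

# Autonomous mode: the agent is its own principal, with no actor claim.
autonomous_claims = {
    "sub": "agent-7f3a",
    "scope": "tickets:read",
}
```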
Gail Hodges, Executive Director of the OpenID Foundation, pointed out that while the industry races to unlock agentic AI use cases for humans, people are both interested in those capabilities and understand that safeguards for security, privacy, and interoperability must be incorporated.
As such, the OpenID Foundation’s whitepaper issues a clear call for industry-wide collaboration to securely advance the future of AI.
The paper recommends that AI agents in simpler, single-company scenarios be tackled first, using proven security frameworks.
All organisations should implement robust standards, such as OAuth 2.0, and adopt standard interfaces, like the Model Context Protocol, before expanding AI agents to external tools under the recommended security measures; a sketch of the OAuth pattern follows this paragraph.
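To make the OAuth 2.0 recommendation concrete, here is a minimal sketch of an agent obtaining its own scoped access token via the standard client credentials grant (RFC 6749, section 4.4). The endpoint URL, client ID, and scope are hypothetical placeholders, not values from the whitepaper.

```python
import requests

# Hypothetical endpoint and credentials, issued by your authorisation server.
TOKEN_URL = "https://auth.example.com/oauth2/token"
CLIENT_ID = "agent-7f3a"   # the agent's own identity, not a user's
CLIENT_SECRET = "..."      # kept in a secrets manager, never in code

# Standard OAuth 2.0 client credentials grant: the agent authenticates
# as itself and receives a scoped, short-lived access token.
resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "tools:read"},
    auth=(CLIENT_ID, CLIENT_SECRET),
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The token, not the agent's long-lived secret, accompanies each downstream
# tool call, so access can be scoped, audited, and revoked centrally.
headers = {"Authorization": f"Bearer {access_token}"}
```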
Companies are urged to use dedicated authorisation servers and integrate agents into existing enterprise login and governance systems, ensuring every agent has a clear owner and is subject to rigorous security policies.
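As a hypothetical illustration of what “a clear owner” could look like in practice (the record and field names below are illustrative, not taken from the whitepaper), an enterprise registry entry might tie each agent to an accountable person, a stated purpose, its permitted scopes, and an expiry that forces periodic re-approval.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentRegistration:
    """Hypothetical registry entry tying an agent to an accountable human owner."""
    agent_id: str         # stable identifier known to the authorisation server
    owner: str            # responsible person or team in the corporate directory
    purpose: str          # what the agent does, reviewed under governance policy
    allowed_scopes: list  # the only scopes the agent may ever be granted
    expires_at: datetime  # registration lapses unless re-approved

registry = [
    AgentRegistration(
        agent_id="agent-7f3a",
        owner="alice@example.com",
        purpose="summarise inbound support tickets",
        allowed_scopes=["tickets:read"],
        expires_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
    ),
]
```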













