“People love the idea that an agent can go out, learn how to do something, and just do it,” said Jeffrey Hickman, Head of Customer Engineering at Ory. “But that means we need to rethink authorization from the ground up. It’s not just about who can log in; it’s about who can act, on whose behalf, and under what circumstances.”
In the latest episode of The Security Strategist Podcast, Ory’s Head of Customer Engineering, Jeffrey Hickman, speaks to host Richard Stiennon, the Chief Research Analyst at IT-Harvest. They discuss a pressing challenge for businesses adopting AI: managing permissions and identity as autonomous agents start making their own decisions.
In particular, they explore the implications of AI agents acting autonomously, the need for fine-grained authorization, and the importance of human oversight. The conversation also touches on the skills required to manage AI permissions effectively and the key concerns for CISOs in this rapidly changing environment.
The fear that AI agents can go rogue or exceed their bounds is very real. They are no longer just tools; they can now negotiate data, trigger actions, and process payments. Without the right authorization model, Hickman warns, organizations will face both security gaps and operational chaos.
Human Element Vital to Prevent AI Agents from Going Rogue
Traditional IAM frameworks aren’t designed for agents that think, adapt, and scale quickly. Anticipating a major shift, Hickman says, “It’s not just about role-based access anymore. We’re moving toward relationship-based authorization—models that understand context, identity, and intent among users, agents, and systems.”
Citing Google’s Zanzibar model, Hickman says it is a starting point for this new era. Unlike static roles, it expresses flexible, fine-grained relationships between people, tools, and AI systems. That flexibility will be crucial as organizations deploy millions of autonomous agents operating under varying levels of trust.
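To make that concrete, here is a minimal sketch of Zanzibar-style relationship tuples in Python. The namespaces, identifiers, and in-memory store are illustrative assumptions for this article, not Ory’s actual API; Ory’s Zanzibar-inspired service, Ory Keto, implements this model as a production system.

```python
from dataclasses import dataclass

# A Zanzibar-style relationship tuple: "subject has relation on object".
# All names below are hypothetical examples, not Ory Keto's API.
@dataclass(frozen=True)
class RelationTuple:
    namespace: str  # the kind of resource, e.g. "documents"
    object: str     # the specific resource, e.g. "q3-report"
    relation: str   # e.g. "viewer" or "editor"
    subject: str    # a user, or an agent acting on a user's behalf

# A tiny in-memory tuple store standing in for a real authorization service.
TUPLES = {
    RelationTuple("documents", "q3-report", "viewer", "user:alice"),
    # The agent's access is expressed as its own relationship to the
    # resource, granted when Alice delegated it, not as a static role.
    RelationTuple("documents", "q3-report", "viewer", "agent:alice-assistant"),
}

def check(namespace: str, obj: str, relation: str, subject: str) -> bool:
    """Answer the core Zanzibar question: does subject have relation on object?"""
    return RelationTuple(namespace, obj, relation, subject) in TUPLES

print(check("documents", "q3-report", "viewer", "agent:alice-assistant"))  # True
print(check("documents", "q3-report", "editor", "agent:alice-assistant"))  # False
```

Because each grant is a discrete tuple rather than a role, an agent’s permissions can be created, narrowed, or revoked independently of the user it serves.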
But technology alone won’t solve the issue. Hickman stresses the importance of the human element, saying, “We need humans to define the initial set of permissions. The person who creates an agent should be able to establish the boundaries—in plain language, if possible. The AI should understand those instructions as a core part of its operating model.”
This leads to a multi-pronged identity system where humans, agents, and services all verify authorization on behalf of the user before any action takes place—ensuring accountability even when AI acts autonomously.
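As a rough illustration of that multi-pronged check, the simplified sketch below (the capability names and tables are hypothetical) allows an agent to act only when two independent conditions hold: the user has delegated the capability to the agent, and the user holds that permission themselves.

```python
# A simplified sketch of a pre-action authorization gate. Capability
# names and the in-memory tables are hypothetical illustrations.

# Capabilities each user has explicitly delegated to their agent.
DELEGATIONS = {
    "user:alice": {"read_reports"},
}

# Capabilities each user holds in their own right.
USER_PERMISSIONS = {
    "user:alice": {"read_reports", "approve_invoices"},
}

def agent_may_act(user: str, capability: str) -> bool:
    """Allow the agent to act only if the user delegated the capability
    AND the user is permitted the action themselves."""
    delegated = capability in DELEGATIONS.get(user, set())
    permitted = capability in USER_PERMISSIONS.get(user, set())
    return delegated and permitted

print(agent_may_act("user:alice", "read_reports"))      # True: delegated and permitted
print(agent_may_act("user:alice", "approve_invoices"))  # False: permitted but never delegated
```

The point of the double check is accountability: an agent can never exceed the rights of the human who created it, and every action traces back to an explicit human grant.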
The New Organizational Skill Stack for AI Security
As AI systems grow more sophisticated, the people managing them must also evolve. Hickman outlines a three-part skill structure every organization should develop:
- Identity and Access Architects: To define how agents authenticate, represent and act on behalf of users, and scale securely.
- AI Behaviour Analysts: A new role that bridges technical and business insights, understanding how LLMs make decisions and how to align that behaviour with enterprise goals.
- Business Strategists: To figure out what data and capabilities the organization is willing to expose to agents and how those choices support the company’s broader objectives.
“This is more than IAM,” Hickman tells Stiennon. “It’s about understanding how AI consumes, interprets, and acts on information. We’ll need specialists who can analyse agent behaviour much like data analysts examine purchasing trends.”
Hickman adds that, similar to cybersecurity reputation systems, directories of “known good agents” will help organizations confirm the legitimacy and trustworthiness of the AI systems they interact with.
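Such a directory could be as simple as a lookup keyed by agent identity with a pinned credential fingerprint. The sketch below shows the idea; the registry format and field names are entirely hypothetical, not a real registry or an Ory product.

```python
# A hypothetical "known good agents" directory: agent identifiers mapped
# to a pinned public-key fingerprint and a trust verdict.
AGENT_DIRECTORY = {
    "agent:invoice-bot@acme.example": {
        "key_fingerprint": "sha256:3f1a9c...",  # pinned at registration time
        "status": "trusted",
    },
}

def is_known_good(agent_id: str, presented_fingerprint: str) -> bool:
    """An agent is trusted only if it is listed, marked trusted, and
    presents the exact credential fingerprint pinned in the directory."""
    entry = AGENT_DIRECTORY.get(agent_id)
    return (
        entry is not None
        and entry["status"] == "trusted"
        and entry["key_fingerprint"] == presented_fingerprint
    )

print(is_known_good("agent:invoice-bot@acme.example", "sha256:3f1a9c..."))  # True
print(is_known_good("agent:unknown@nowhere.example", "sha256:deadbe..."))   # False
```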
“The future of AI security,” Hickman concludes, “isn’t just about protecting data; it’s about protecting decisions.”
Explore how Ory helps global businesses build fine-grained, scalable, and future-ready identity systems for humans and machines. Visit Ory.com.
Takeaways
- AI agents are increasingly autonomous and can operate outside defined boundaries.
- Permissions for AI agents must evolve beyond traditional models like OAuth.
- The scale of AI agents will significantly impact identity infrastructure.
- Fine-grained authorization is essential for managing AI agent access.
- Human oversight is crucial in ensuring AI agents operate within acceptable limits.
- Organizations need to define clear guardrails for AI agent behaviour.
- The role of traditional IAM professionals will change with AI integration.
- Understanding AI behaviour patterns will become a necessary skill.
- CISOs should prioritise the prompt identification of risks in AI security.
- A new class of professionals will emerge to manage AI interactions.
Chapters
- 00:00 Introduction to AI Agents and Permissions
- 02:47 Challenges in AI Agent Authorization
- 05:59 The Scale Problem of AI Agents
- 09:01 Fine-Grained Authorization for AI Agents
- 12:07 The Role of Human Oversight in AI
- 14:56 Evolving Responsibilities in AI Permissions
- 17:56 Skills Needed for Effective AI Management
- 20:46 Key Concerns for CISOs Regarding AI Agents
