
Why “identity is the perimeter” breaks for agents, and what runtime context must capture to secure them.

The “identity is the perimeter” narrative is everywhere in agentic security.
Identity is definitely becoming more critical as agentic adoption accelerates, but the perimeter framing is incomplete. It assumes AI behaves like a predictable session. It does not.
Agents need broad permissions to work autonomously. They integrate with Workday, query databases, access internal tools, and make decisions across multiple data sources. Traditional security validates identity at the gate, but it has limited visibility into what agents actually do with those permissions.
Once an agent has access, IAM cannot answer the most important questions: what data did it touch, and where did it send it?
PAM and JIT were built for predictable, human-shaped access patterns. Request access, get approved, perform a known task, revoke access.
Agents do not work this way.
An agent answering “Give me insights on team performance” might pull org data from an HRIS, query internal databases, and join several datasets, depending on how it reasons.
You cannot predict this access path upfront. The agent discovers what it needs as it reasons through the problem. If you lock down permissions with strict JIT, the agent cannot function. If you grant broad standing permissions so it can work, you lose the security boundary.
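To make this concrete, here is a minimal Python sketch of a dynamic tool-calling loop. The tool names and planner functions are hypothetical stand-ins for an LLM planner, but they illustrate why the access path only exists at runtime:

```python
# Sketch: a dynamic tool-calling loop. Tool names and planners are
# hypothetical; real agents delegate planning to an LLM.
from typing import Callable, Optional

TOOLS: dict[str, Callable[[str], str]] = {
    "hris_lookup": lambda q: "org chart data",
    "salary_db": lambda q: "compensation rows",
    "perf_reviews": lambda q: "review summaries",
}

def run_agent(prompt: str,
              plan_next_step: Callable[[str, list], Optional[str]]) -> list:
    """Execute tools until the planner decides it is done.

    The access path (which tools, in what order) is chosen at runtime
    by the planner, so it cannot be enumerated when permissions are
    granted up front.
    """
    trace: list = []
    while (tool := plan_next_step(prompt, trace)) is not None:
        trace.append(tool)
        TOOLS[tool](prompt)
    return trace

# The same prompt with two different planners yields different paths.
path_a = run_agent("team insights",
                   lambda p, t: ["hris_lookup", None][len(t)])
path_b = run_agent("team insights",
                   lambda p, t: ["hris_lookup", "salary_db",
                                 "perf_reviews", None][len(t)])
```

Because `path_a` and `path_b` differ for the same prompt, any permission set computed before the run is either too narrow to let the agent finish or too broad to be a real boundary.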
This is not a failure of identity management. This is the inherent tension of autonomous systems.
Traditional controls can prove who authenticated and which permissions were granted. What they miss is the execution chain:
User prompt: “Give me team performance insights”
↓
Agent reasoning: [calls Workday API and retrieves 500 employee records]
↓
Agent reasoning: [queries salary database and accesses compensation data]
↓
Agent reasoning: [joins with performance reviews and combines sensitive datasets]
↓
Agent response: “Here’s the summary”

From an identity perspective, every step was authorized.
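One hedged way to keep that chain from disappearing is to record every step as structured telemetry tied to the initiating user and prompt. A minimal sketch, with illustrative field and service names:

```python
# Sketch: recording an execution chain so every tool call stays linked
# to the initiating user and original prompt. Field names are
# illustrative, not a prescribed schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ExecutionChain:
    user: str
    prompt: str
    steps: list = field(default_factory=list)

    def record(self, identity: str, action: str, records_touched: int) -> None:
        # Each step carries the identity that acted AND inherits the
        # user/prompt context that IAM logs alone do not preserve.
        self.steps.append({"identity": identity,
                           "action": action,
                           "records_touched": records_touched})

chain = ExecutionChain(user="alice",
                       prompt="Give me team performance insights")
chain.record("svc-agent", "workday.get_employees", 500)
chain.record("svc-agent", "salary_db.query", 500)
chain.record("svc-agent", "perf_reviews.join", 500)
print(json.dumps(asdict(chain), indent=2))
```

With this shape, “agent accessed Workday” becomes “svc-agent pulled 500 records on behalf of alice in response to this prompt,” which is the context the next sections argue identity logs lack.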
Authorization is not the same as appropriateness.
From a runtime perspective, the agent accessed more data than necessary and combined sensitive information in ways that violate governance policies.
Traditional security sees: “Agent accessed Workday, agent queried employee DB”
Runtime visibility sees: “Agent pulled 500 salary records, joined with performance data, exposed compensation information the requester should not have seen”
A CTO at a public company told me their workplace assistant had an HRIS integration with the permissions needed to answer org and people questions.
A user asked an innocuous question about their team.
The assistant pulled compensation fields too broadly and exposed salary information to the requester. The permissions were valid. The access was authorized. But the behavior violated governance expectations.
They only discovered this when someone noticed the exposed data. That is the governance gap: you only learn after exposure.
There was no runtime visibility into what the assistant was actually doing with HRIS permissions. No alert that sensitive salary fields were accessed at scale. No policy enforcement tied to the data flow between the HRIS connector and the user session.
The question security teams are asking is simple: What else is happening that we do not know about?
When people say “the agent’s identity,” they usually mean one thing. In real systems, multiple identities show up across a single user request: the initiating user, the agent’s service principal, and the credentials of each tool or connector it invokes.
This is where context gets lost. IAM logs show a legitimate service principal, but the initiating user, original prompt, and intermediate agent steps are not reliably attached.
Even with allowlisted tools, prompt injection and indirect prompt contamination can drive confused deputy behavior. The tool server faithfully executes a harmful request under a valid identity, and authorization logs stay clean.
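A minimal defense sketch is to authorize each tool call against the initiating user’s entitlements, not just the agent’s service identity. The entitlement table and tool function below are hypothetical:

```python
# Sketch: checking the *initiating user's* entitlement at the tool
# boundary, not only the agent's service identity. The entitlement
# data and tool are hypothetical.
USER_ENTITLEMENTS = {
    "alice": {"org_chart"},
    "hr_admin": {"org_chart", "compensation"},
}

class ConfusedDeputyError(PermissionError):
    pass

def call_tool(on_behalf_of: str, resource: str,
              agent_identity: str = "svc-agent") -> str:
    # The agent's own identity is valid either way; the check is
    # against the human whose request started the chain.
    if resource not in USER_ENTITLEMENTS.get(on_behalf_of, set()):
        raise ConfusedDeputyError(
            f"{on_behalf_of} may not read {resource} via {agent_identity}")
    return f"{resource} data"

call_tool("alice", "org_chart")        # allowed
# call_tool("alice", "compensation")   # would raise ConfusedDeputyError
```

The point of the sketch is that a prompt-injected request for compensation data fails here even though the service principal is fully authorized, which is exactly the gap clean authorization logs would otherwise hide.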
PAM and JIT were built for predictable workflows. Approve access, perform task, revoke access.
Agents are nondeterministic. The same prompt can trigger different execution paths each time:
Run 1: queries HRIS, accesses 50 records
Run 2: same prompt, accesses 500 records including compensation fields
Run 3: same prompt, joins HRIS plus performance data and exports results

You cannot precompute least privilege for a workflow that changes every run.
Lock it down and you add friction; grant broadly and you increase blast radius.

This is also why traditional CASB and DLP need to evolve: they inspect traffic at network and endpoint boundaries, not the data flows inside an agent’s execution chain.
Security for agents is not “more identity controls.” It is visibility into what agents do with the permissions you grant them.

For every action in the execution chain, capture the initiating user, the original prompt, the identity that acted, the data it touched, and where that data went.
Then enforce policies based on behavior: flag bulk access to sensitive fields, block joins that combine datasets the requester could not see individually, and alert on data flows that violate governance expectations.
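A behavior-based policy check might look like the sketch below. The thresholds, field names, and step schema are illustrative assumptions, not a prescribed format:

```python
# Sketch: policies evaluated on observed runtime behavior rather than
# on granted scopes. Thresholds and field names are illustrative.
def evaluate(step: dict) -> list:
    violations = []
    if step.get("records_touched", 0) > 100:
        violations.append(
            "bulk access: more records than the task plausibly needs")
    if ("compensation" in step.get("fields", [])
            and not step.get("requester_is_hr", False)):
        violations.append(
            "sensitive field exposed to a non-HR requester")
    if len(step.get("datasets_joined", [])) > 1:
        violations.append(
            "cross-dataset join combining sensitive sources")
    return violations

step = {"records_touched": 500,
        "fields": ["name", "compensation"],
        "requester_is_hr": False,
        "datasets_joined": ["hris", "performance"]}
print(evaluate(step))
```

Note that every check here runs on what the agent did, not on who it was: the same step would pass an identity-layer review, since the service principal held valid scopes for each source.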
These policies cannot be enforced at the identity layer. They require runtime visibility.
Identity matters in the agentic era, but “identity is the perimeter” is incomplete.
Agents need broad permissions to be useful. The risk is not unauthorized access. It is authorized access used in unauthorized ways.
The organizations getting this right are not adding more identity controls. They are building runtime visibility into agent behavior and data access patterns, so they can answer: what data did each agent touch, on whose behalf, and where did it go?
This requires runtime telemetry, data access monitoring, and contextual policy enforcement. The framing needs to shift from “lock down identities” to “see and govern what agents actually do.”
Disclaimer: I’m CEO of Aurva, an AI security company focused on runtime visibility for agentic systems. These views are based on my experience building AI systems at Meta and working with security teams deploying agents in production.
USA
AURVA INC. 1241 Cortez Drive, Sunnyvale, CA, USA - 94086
India
Aurva, 4th Floor, 2316, 16th Cross, 27th Main Road, HSR Layout, Bengaluru – 560102, Karnataka, India
PLATFORM
Access Monitoring
AI Security
AI Observability
Solutions
Integrations