Human Identity is Governed; Machine Identity is Not. Attackers Already Know.
May 15, 2026
Written by Mike Poulos, Executive Advisor
The rapid adoption of Agentic AI and expanding technology integrations are making Non-Human Identity (NHI) management and security increasingly critical across the enterprise. NHIs are proliferating throughout the hybrid technology stack: service accounts, API authentication credentials, cryptographic certificates, AI agents and autonomous systems, and container and infrastructure identities. Who provisions these NHIs? Who monitors them? Who decommissions them when they are no longer needed? In many organizations, there is no clear answer. Recent breaches involving compromised OAuth tokens and API keys across hundreds of organizations make clear this is not a theoretical risk. This perspective outlines what NHIs are, why the governance gap around them has become a material security risk, and where IT and cybersecurity leaders can start to address the problem.
The Problem: Machine Identities Have Outgrown Existing Identity Governance
To understand the NHI landscape today, it helps to understand the history of this relatively new category within the identity space. Twenty-five years ago, non-human identities in the enterprise were mostly service accounts within Active Directory, as many enterprises ran on isolated on-prem infrastructure. The emergence of cloud computing introduced another layer of abstraction with API keys and OAuth. A few years later, yet another layer arrived with DevOps, containerization, and microservices architectures. DevOps and CI/CD pipelines became identity multiplication engines, spinning up credentials for ephemeral environments faster than legacy governance processes could track or manage. In the mid-2010s, Robotic Process Automation (RPA) added another wave of machine credentials as the ratio of non-human to human identities expanded to roughly 20 to 1. Driven significantly by Agentic AI adoption, that ratio has exploded to 100 to 1 or more in many enterprise organizations today.
Non-human identities encompass credentials or access grants used by machines, scripts, or automated processes to authenticate and interact with enterprise systems: API keys, SSH keys, OAuth apps and tokens, AD service accounts, IAM roles, JWTs, CI/CD secrets, certificate-based credentials, and RPA credentials. Unlike a human user account, these identities are not tied to a person. They have no face, no manager, and typically no offboarding process. They are created across IT, DevOps, cybersecurity, and business teams with no central registry, no consistent ownership model, and, in most organizations, no lifecycle governance.
Contrast this with how identity management and security programs were built in most organizations: human identities as the primary focus, heavy investment in legacy Identity and Access Management (IAM) and Privileged Access Management (PAM) technologies, HR-integrated provisioning, and cradle-to-grave lifecycle processes designed for people, not machines.
The Pain Points: Without Inventory, Ownership, or Lifecycle, Security Suffers
Despite a non-human-to-human credential ratio of 100+ to 1, many organizations lack even a basic inventory of the non-human identities operating in their environment. Ask IT or cybersecurity how many service accounts exist, and the answer is rarely delivered with confidence. Conduct an honest discovery exercise, and organizations that believe they have a given number of service accounts typically identify two to three times that count, many tied to systems and projects decommissioned long ago, and some carrying elevated privileges. The same dynamic applies to OAuth grants. When users authorize third-party SaaS applications to connect to corporate M365 or Google Workspace environments, persistent OAuth tokens are created with broadly scoped access grants that IT and cybersecurity have no visibility into, that security is not monitoring, and that nobody revokes when they are no longer needed. A third-party vendor breach in that scenario does not require a sophisticated attack; it requires finding one of those tokens, which are among the first things a competent attacker looks for.
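What an honest discovery pass looks like is easy to sketch for one slice of the stack. The fragment below is a minimal illustration, not a complete discovery program: it uses boto3 against AWS IAM (one environment among the many that need covering), and the 90-day staleness threshold is an arbitrary assumption for demonstration.

```python
# Minimal sketch: flag active AWS IAM access keys unused for 90+ days.
# Assumes AWS credentials are already configured for boto3.
from datetime import datetime, timezone

import boto3

STALE_AFTER_DAYS = 90  # illustrative threshold, not a recommended standard

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            if key["Status"] != "Active":
                continue
            last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            # Fall back to the creation date if the key has never been used.
            used = last["AccessKeyLastUsed"].get("LastUsedDate", key["CreateDate"])
            idle_days = (now - used).days
            if idle_days > STALE_AFTER_DAYS:
                print(f"{user['UserName']}: {key['AccessKeyId']} idle {idle_days} days")
```

The same question, asked of every IAM platform, OAuth grant store, and secrets manager in the environment, is what a real inventory exercise amounts to.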
The governance gap compounds the visibility problem. When asked who owns OAuth grant management across the SaaS footprint, organizations consistently name IT, cybersecurity, or the business, with each assuming it is another team's responsibility. This ambiguity is common and is arguably the root cause of most NHI security failures. Because NHIs do not belong to any one person or group, accountability for provisioning, periodic access review, and decommissioning tends to fall through the cracks indefinitely. Unlike human identities, which are tied to an employee lifecycle and have a built-in offboarding trigger, NHIs accumulate and are rarely deprovisioned. Credentials are seldom rotated. Access scope is almost never reviewed after initial provisioning. An API key created by a developer for a specific integration three years ago is, in most environments, still active, still carrying the permissions granted at creation, and still usable by anyone who finds it in a code repo or config file.
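How easily such a key is found is worth making concrete. The sketch below is a deliberately crude repo sweep; the patterns are illustrative assumptions, and purpose-built scanners such as gitleaks or trufflehog use far richer rulesets plus entropy analysis.

```python
# Crude sweep of a working tree for hardcoded credentials; patterns illustrative.
import re
from pathlib import Path

PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

for path in Path(".").rglob("*"):
    if not path.is_file() or ".git" in path.parts:
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue  # unreadable file; skip
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            print(f"{path}: possible {label} at offset {match.start()}")
```

Even an approach this naive tends to turn up forgotten credentials in mature codebases; an attacker with repo access runs the same search.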
Agentic AI has materially changed the NHI risk profile in ways prior automation waves did not. RPA bots were rule-followers that executed predefined scripts without deviation; the blast radius of a compromised RPA credential was bounded by what the bot was programmed to do. Agentic AI systems are decision-makers. Give an AI agent an objective and a set of tools (and privileges) and it will determine its own path to accomplishing that objective, including paths not anticipated when the agent was deployed. It can chain actions across multiple enterprise systems, adapt when one approach fails, and pursue alternative access routes when initial paths are restricted. Each AI agent deployment also generates its own NHI footprint: OAuth apps and tokens, service accounts, API keys, and MCP connection grants. In many organizations, those NHIs are being created faster than governance processes can track them. An AI agent operating through an over-privileged service account or a broadly scoped OAuth token does not execute one task and stop. It reasons, it explores, and it acts at machine speed without a human in the loop. Ungoverned NHIs that once represented a manageable legacy risk now represent a growing attack surface through which autonomous reasoning systems operate every time an agent is deployed.
The security consequences of these governance gaps are significant and increasingly well documented. The August 2025 Salesloft/Drift breach involving OAuth tokens illustrates this directly. Attackers compromised OAuth tokens connecting Drift to customer environments, gaining unauthorized access across more than 700 organizations spanning Salesforce, Google Workspace, and Slack integrations. The tokens were valid, the access was permissioned, and the blast radius was expansive. Recovery from NHI-based compromises can also be more operationally complex than traditional human credential incidents. In December 2024, attackers compromised a single API key in BeyondTrust's Remote Support SaaS platform, ultimately gaining access to US Treasury Department workstations and data. The root cause in both scenarios involves over-privileged, long-lived, ungoverned NHIs, which are among the lowest-friction entry points available to attackers. An attacker who obtains a valid service account credential without MFA, or who captures a valid OAuth token, may not trigger the behavioral anomaly alerts that a compromised human account typically generates. There is no impossible-travel alert for a service account and no MFA prompt for an API key. Valid credentials, permissioned access, no monitoring baseline: in many environments, there is nothing to detect that something has gone wrong until well after the fact.
The Blueprint: Visibility, Governance, Monitoring
Addressing NHI security starts with the same foundational principle that applies across cybersecurity governance broadly: you cannot manage what you cannot see. NHI security is not simply an extension of existing IAM and PAM programs. Legacy IAM and PAM tools were built to govern human identities and, in many cases, lack the discovery mechanisms, integration breadth, and automation required to manage NHIs at enterprise scale. Discovery and inventory are the non-negotiable starting points, and they require automated, continuous monitoring rather than point-in-time snapshots. Scope needs to span cloud IAM platforms including AWS IAM, Microsoft Entra ID (formerly Azure Active Directory), and Google Cloud IAM, as well as on-premises Active Directory, SaaS OAuth grants, CI/CD pipeline secrets, code repos, and credentials associated with AI deployments. Manual, spreadsheet-based approaches are not sufficient at the scale and speed most enterprises operate.
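For the SaaS OAuth grant slice of that scope, Microsoft Graph exposes tenant-wide delegated grants directly. A minimal sketch follows, assuming an already-acquired Graph access token with sufficient directory read permission (token acquisition is omitted):

```python
# Sketch: enumerate delegated OAuth grants tenant-wide via Microsoft Graph.
import requests

TOKEN = "<graph-access-token>"  # placeholder; obtain via your OAuth client flow
url = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"

while url:
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    body = resp.json()
    for grant in body["value"]:
        # 'scope' is the space-separated permission list the client app holds,
        # e.g. "Mail.Read Files.ReadWrite.All"; that breadth is what needs review.
        print(grant["clientId"], grant["consentType"], grant["scope"])
    url = body.get("@odata.nextLink")  # follow pagination until exhausted
```

Comparable inventory pulls exist for each platform in scope; the hard part is running them continuously and reconciling the results into one registry.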
Visibility without governance does not solve the problem. Every NHI needs a documented owner: a human or team accountable for its provisioning scope, access classification, and lifecycle. Without ownership there is no accountability, and without accountability governance does not function, regardless of what policy stipulates. Access classification gives that ownership context, specifically what systems a given NHI can reach, what data it can access, and what actions it can take, enabling intelligent risk prioritization rather than treating all NHIs as equally ignorable. NHIs need a defined lifecycle: provisioned with least privilege, reviewed periodically, and decommissioned when the project ends, the vendor relationship concludes, or the AI agent is retired. Decommissioning is where most NHI governance programs break down. Provisioning is a visible, intentional act. Decommissioning requires someone to notice a credential is no longer needed and act on it, and in many organizations that does not happen consistently for human identities, let alone machine ones.
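What that combination of ownership, classification, and lifecycle might minimally look like as a registry record can be sketched as a data structure; the field names here are assumptions for illustration, not a standard schema.

```python
# Sketch of a minimal NHI registry record; fields are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class NHIRecord:
    identifier: str      # e.g., service account name or OAuth app ID
    kind: str            # "service_account", "oauth_grant", "api_key", ...
    owner: str           # accountable human or team; never left empty
    systems: list[str]   # what the identity can reach (access classification)
    last_rotated: date
    last_reviewed: date


def overdue_reviews(registry: list[NHIRecord], max_age_days: int = 180) -> list[NHIRecord]:
    """Return records whose periodic access review is past due."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [r for r in registry if r.last_reviewed < cutoff]
```

The schema matters far less than the discipline behind it: every record has an owner, so queries like overdue_reviews have someone to route their findings to.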
Behavioral monitoring closes the loop, and for organizations deploying AI agents at scale, it is particularly critical. Continuous monitoring for anomalous NHI behavior, including usage from unexpected sources, access outside documented scope, and activity after extended dormant periods, is what separates a check-the-box governance program from a viable security program. For AI agent deployments specifically, NHI governance needs to be embedded in the deployment and approval process before agents go live, with NHIs defined, reviewed, scoped to least privilege, and monitored like every other NHI in the environment. It is worth noting that in-prompt instructions embedded in agent configurations (e.g., 'confirm before acting' or 'do not access sensitive data') are probabilistic controls at best, not governance controls. Documented incidents have already shown AI agents bypassing explicit prompt-level restrictions. External governance controls applied to the NHIs are a requirement, not an optional enhancement.
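The three behaviors named above reduce, at their simplest, to comparing an authentication event against the identity's documented profile. A toy illustration follows; the thresholds and field names are assumptions, and a real program would run these checks against SIEM or CloudTrail-style event streams rather than in-memory dictionaries.

```python
# Toy illustration of three NHI anomaly checks against a documented profile.
from datetime import datetime, timedelta

DORMANCY = timedelta(days=60)  # illustrative dormancy threshold


def flag_anomalies(event: dict, profile: dict) -> list[str]:
    """Compare one auth event for an NHI against its documented profile."""
    findings = []
    if event["source_ip"] not in profile["expected_sources"]:
        findings.append("usage from unexpected source")
    if event["resource"] not in profile["documented_scope"]:
        findings.append("access outside documented scope")
    if event["timestamp"] - profile["last_seen"] > DORMANCY:
        findings.append("activity after extended dormancy")
    return findings


# Example: a long-dormant service account suddenly used from a new network
# against a system outside its documented scope; all three checks fire.
profile = {
    "expected_sources": {"10.0.4.15"},
    "documented_scope": {"billing-db"},
    "last_seen": datetime(2026, 1, 2),
}
event = {
    "source_ip": "203.0.113.7",
    "resource": "hr-db",
    "timestamp": datetime(2026, 5, 1),
}
print(flag_anomalies(event, profile))
```

None of these checks is sophisticated; what makes them work is the documented profile behind them, which is exactly what the ownership and classification steps produce.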
The Path Forward: A Practical Starting Point
A practical starting point for most organizations is an internal discovery exercise scoped to the highest-risk environments first. SaaS platforms including Microsoft 365, Google Workspace, Salesforce, and ServiceNow have native admin consoles where OAuth applications and their permission scopes reside. No third-party tooling is required. Start there. What most organizations find is not a manageable list. It is a sprawling inventory of persistent, broadly scoped access grants, many authorized by employees who have since left, many connected to technology vendors whose relationships have ended, and many that nobody on the IT or cybersecurity team can explain. This exercise tends to produce the first honest data point an organization has ever had on its OAuth exposure, and it typically reframes the NHI conversation at the leadership level faster than any presentation or risk assessment would.
From that baseline, a few foundational governance decisions can be made before additional tooling is evaluated. Which team owns NHI governance operationally? What is the process when an unowned or over-privileged NHI is identified? What are the acceptable rotation standards for secrets and tokens, and who is accountable for enforcing them? These are governance decisions, not technology decisions, and no tool solves them. For organizations with active or planned AI agent deployments, one additional action is worth prioritizing early: before the next AI agent goes live, document what NHIs it will require, confirm those credentials are scoped to least privilege, assign a human or team as owner, and ensure each NHI is added to the existing (ideally automated) inventory. This does not require new tooling; it requires a decision that AI agent deployments are subject to the same identity governance expectations as every other NHI in the environment.
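That pre-deployment step can even be expressed as a simple gate. The sketch below is illustrative only: the manifest shape and the approved-scope allowlist are assumptions, and a real implementation would hook into whatever change approval workflow already exists.

```python
# Sketch of a pre-deployment gate for an AI agent's NHI manifest.
APPROVED_SCOPES = {"Mail.Read", "Files.Read", "Calendars.Read"}  # example allowlist


def approve_agent_nhis(manifest: dict) -> list[str]:
    """Return blocking issues; an empty list means the agent may go live."""
    issues = []
    if not manifest.get("owner"):
        issues.append("no accountable owner assigned")
    for cred in manifest.get("credentials", []):
        excess = set(cred.get("scopes", [])) - APPROVED_SCOPES
        if excess:
            issues.append(f"{cred['name']}: scopes beyond approved set: {sorted(excess)}")
        if not cred.get("in_inventory", False):
            issues.append(f"{cred['name']}: not registered in the NHI inventory")
    return issues


# An agent requesting a write scope, with no owner and no inventory entry,
# produces three blocking issues.
print(approve_agent_nhis({
    "owner": "",
    "credentials": [
        {"name": "crm-agent-token", "scopes": ["Mail.ReadWrite"], "in_inventory": False},
    ],
}))
```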
Conclusion
Identity security has always evolved in step with how enterprise technology has evolved. Network, endpoint, and identity security each emerged as critical defensive layers over time, building on one another. Identity has increasingly become the layer attackers focus on first, and non-human identities are now the fastest-growing and least-governed part of that layer. The next evolution is already underway, driven by the explosion of non-human identities across the enterprise technology stack and accelerated significantly by Agentic AI adoption. Governance programs, tooling, and ownership structures built around human identity management are necessary but not sufficient to address this expanded attack surface. Emerging standards, including SPIFFE for workload identity and Cedar for fine-grained authorization, are worth tracking as they mature within the CNCF and IETF communities. Organizations that move proactively to build NHI security programs grounded in honest visibility, defined ownership, lifecycle governance, and behavioral monitoring will be well positioned to adopt Agentic AI capabilities with confidence. Those that do not will eventually get a clear picture of their NHI exposure through means considerably less controlled. Windval brings deep operational experience across IT, OT, and enterprise environments to help organizations understand and address this critical and growing area of cybersecurity risk. If the exercises and governance questions outlined in this paper surface more questions than answers, that is a reasonable outcome and a good starting point for a conversation.

