According to a CSA survey, 82% of enterprises host unknown AI agents in their environments.


A Cloud Security Alliance study revealed that many enterprises are operating undetected AI agents, with almost two-thirds of surveyed companies reporting security incidents associated with these agents in the past year.

Conducted online by Token Security in January 2026 and based on responses from 418 IT and security professionals across various organization sizes and regions, the report titled “Autonomous but Not Controlled” explores how organizations are handling (or neglecting to handle) the increasing number of autonomous agents in their environments.

Significant Incidents with Business Impact

Of those who reported AI agent incidents, 61% experienced data breaches, 43% faced operational disruptions, and 35% suffered financial losses. All of these respondents said the incidents had substantial business impact, pointing to a broad pattern of harm rather than isolated events.

The study also points to a significant gap between what organizations think they can monitor and their actual visibility. Although 68% expressed confidence in their monitoring capabilities, that same group nonetheless discovered previously unknown AI agents, and 41% reported doing so multiple times. Shadow AI agents were most frequently found in internal automation or scripting environments (51%), language model platforms including custom tools and plugins (47%), software-as-a-service tools with built-in automation (40%), and developer-built workflows (40%).

Lack of Decommissioning Processes and Lifecycle Risk Management

A crucial finding is that only 21% of organizations have formal processes for decommissioning agents when they are no longer needed. The report refers to this as “decommissioning debt,” a situation where agents continue beyond their intended use, retaining permissions and credentials that expose the organization to ongoing risks.

The sample shows varied autonomy models. A majority (53%) uses agents for low-risk tasks with human review for higher-risk actions. A further 24% rely on a human-in-the-loop model for most tasks, and just 13% report fully autonomous deployments. When agents surpass their defined scope, 38% of respondents need human approval, 24% require the action to be logged, and only 11% automatically block such actions.

Furthermore, the survey highlights that action risk and human authorization are key in AI agent governance. Context-aware controls are expected to become increasingly important over the next two years, with 79% of respondents considering them vital or very important. Parallel to this, 66% report having set up guardrails defining agent boundaries.

Consequently, in post-incident follow-up, organizations are prioritizing risk management (29%), monitoring (28%), and permission control (19%), marking a shift toward behavioral governance at scale.

The findings have significant implications for enterprise security teams as autonomous AI deployments escalate. Additionally, the combination of inadequate decommissioning practices, inconsistent oversight models, and widespread shadow deployment suggests that current governance frameworks may not be keeping up with operational adoption rates.
