80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier


Today, Microsoft is releasing the new Cyber Pulse report to provide leaders with straightforward, practical insights and guidance on new cybersecurity risks. One of today’s most pressing concerns is the governance of AI and autonomous agents. AI agents are scaling faster than some companies can see them—and that visibility gap is a business risk.1 Like people, AI agents require protection through strong observability, governance, and security using Zero Trust principles. As the report highlights, organizations that succeed in the next phase of AI adoption will be those that move with speed and bring business, IT, security, and developer teams together to observe, govern, and secure their AI transformation.

Agent building isn’t limited to technical roles; today, employees in many different positions create and use agents in their daily work. More than 80% of Fortune 500 companies today use active AI agents built with low-code/no-code tools.2 AI is ubiquitous across operations, and generative AI-powered agents are embedded in workflows across sales, finance, security, customer service, and product innovation.

With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place. AI agents should be held to the same standards as employees or service accounts. That means applying long‑standing Zero Trust security principles consistently:

  • Least privilege access: Give every user, AI agent, or system only what they need—no more.
  • Explicit verification: Always confirm who or what is requesting access using signals such as identity, device health, location, and risk level.
  • Assume breach: Design systems expecting that cyberattackers will get inside.

These principles are not new, and many security teams have already implemented Zero Trust in their organizations. What’s new is their application to non‑human users operating at scale and speed. Organizations that embed these controls in their AI agent deployments from the beginning will be able to move faster and build trust in AI.
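
To make this concrete, here is a minimal sketch of what enforcing these principles for an agent identity could look like. The types, fields, and policy values are hypothetical illustrations for this post, not any particular product’s API.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only -- not a real product API.
@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                  # the accountable human owner
    allowed_scopes: frozenset   # least privilege: an explicit allow-list
    risk_level: str = "low"     # updated from identity and risk signals

@dataclass
class AccessRequest:
    agent: AgentIdentity
    scope: str                  # for example, "crm:read"
    device_healthy: bool
    location_trusted: bool

def authorize(request: AccessRequest) -> bool:
    """Apply the three Zero Trust principles to a single agent request."""
    # Least privilege: the requested scope must be explicitly granted.
    if request.scope not in request.agent.allowed_scopes:
        return False
    # Explicit verification: confirm device health, location, and risk level.
    if not (request.device_healthy and request.location_trusted):
        return False
    if request.agent.risk_level == "high":
        return False
    # Assume breach: log even approved requests for later investigation.
    print(f"AUDIT agent={request.agent.agent_id} scope={request.scope}")
    return True
```

The key design choice is that the agent is denied by default: if a scope is not in the allow-list, it is not granted, which mirrors how least privilege is applied to human accounts.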

The rise of human-led AI agents

AI agent adoption is growing across regions worldwide, from the Americas to Europe, the Middle East, and Africa (EMEA), and Asia.

Figure: Percentage of AI agent use by region around the world.

According to Cyber Pulse, leading industries such as software and technology (16%), manufacturing (13%), financial institutions (11%), and retail (9%) are using agents to support increasingly complex tasks—drafting proposals, analyzing financial data, triaging security alerts, automating repetitive processes, and surfacing insights at machine speed.3 These agents can operate in assistive modes, responding to user prompts, or autonomously, executing tasks with minimal human intervention.

Figure: Percentage of industries using agents to support complex tasks.
Source: Industry agent metrics were created using Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

And unlike traditional software, agents are dynamic. They act. They decide. They access data. And increasingly, they interact with other agents.

That changes the risk profile fundamentally.

The blind spot: Agent growth without observability, governance, and security

Despite the rapid adoption of AI agents, many organizations struggle to answer some basic questions:

  • How many agents are running across the enterprise?
  • Who owns them?
  • What data do they touch?
  • Which agents are sanctioned—and which are not?

This is not a hypothetical concern. Shadow IT has existed for decades, but shadow AI introduces new dimensions of risk. Agents can inherit permissions, access sensitive information, and generate outputs at scale—sometimes outside the visibility of IT and security teams. Bad actors might exploit agents’ access and privileges, turning them into unintended double agents. Like human employees, an agent with too much access—or the wrong instructions—can become a vulnerability. When leaders lack observability in their AI ecosystem, risk accumulates silently.

According to the Cyber Pulse report, 29% of employees have already turned to unsanctioned AI agents for work tasks.4 This figure is noteworthy: it indicates that many organizations are deploying AI capabilities and agents before establishing appropriate controls for access management, data protection, compliance, and accountability. In regulated sectors such as financial services, healthcare, and the public sector, this gap can have particularly significant consequences.

Why observability comes first

You can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability means having a control plane across all layers of the organization (IT, security, developers, and AI teams) to understand the following (a sketch of the kind of telemetry involved appears after the list):

  • What agents exist 
  • Who owns them 
  • What systems and data they touch 
  • How they behave 
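
The sketch below shows the kind of structured event such a control plane might collect for a single agent action. The schema and field names are assumptions for illustration; real platforms emit far richer signals.

```python
import json
import time
import uuid

# A hypothetical event schema covering the four observability questions
# above: what agent acted, who owns it, what it touched, how it behaved.
def emit_agent_event(agent_id: str, owner: str, action: str,
                     resource: str, outcome: str) -> dict:
    """Record one observable agent action as a structured event."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,    # what agent exists and acted
        "owner": owner,          # who owns it
        "resource": resource,    # what system or data it touched
        "action": action,        # how it behaved
        "outcome": outcome,      # allowed, denied, error
    }
    print(json.dumps(event))     # stand-in for a real telemetry pipeline
    return event

# Example: an assistive agent reading a CRM record on a user's behalf.
emit_agent_event("sales-briefing-agent", "jane@contoso.com",
                 "read", "crm:accounts/1234", "allowed")
```

Events like these are what feed the dashboards, inventories, and detections described next.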

In the Cyber Pulse report, we outline five core capabilities that organizations need to establish for true observability and governance of AI agents:

  • Registry: A centralized registry acts as a single source of truth for all agents across the organization—sanctioned, third‑party, and emerging shadow agents. This inventory helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary (see the sketch after this list).
  • Access control: Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications. Least‑privilege permissions, enforced consistently, help ensure agents can access only the data, systems, and workflows required to fulfill their purpose—no more, no less.
  • Visualization: Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents are operating, understand dependencies, and monitor behavior and impact—supporting faster detection of misuse, drift, or emerging risk.
  • Interoperability: Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Security: Built‑in protections safeguard agents from internal misuse and external cyberthreats. Security signals, policy enforcement, and integrated tooling help organizations detect compromised or misaligned agents early and respond quickly—before issues escalate into business, regulatory, or reputational harm.
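
Here is the sketch referenced in the registry item above: a minimal, hypothetical agent registry. The class names and status values are assumptions meant only to show the shape of the idea, not any product’s schema.

```python
from dataclasses import dataclass
from enum import Enum

class AgentStatus(Enum):
    SANCTIONED = "sanctioned"
    THIRD_PARTY = "third_party"
    SHADOW = "shadow"            # discovered but not yet approved
    QUARANTINED = "quarantined"

@dataclass
class RegistryEntry:
    agent_id: str
    owner: str                   # accountable person or team
    status: AgentStatus
    data_scopes: list[str]       # systems and data the agent touches

class AgentRegistry:
    """Single source of truth for every agent in the organization."""

    def __init__(self) -> None:
        self._entries: dict[str, RegistryEntry] = {}

    def register(self, entry: RegistryEntry) -> None:
        self._entries[entry.agent_id] = entry

    def quarantine(self, agent_id: str) -> None:
        # Restrict an unsanctioned or misbehaving agent without deleting
        # its record, preserving the audit trail.
        self._entries[agent_id].status = AgentStatus.QUARANTINED

    def shadow_agents(self) -> list[RegistryEntry]:
        # Surface discovered-but-unapproved agents for review.
        return [e for e in self._entries.values()
                if e.status is AgentStatus.SHADOW]
```

In practice, a registry like this would be populated by automated discovery rather than self-registration, so shadow agents surface even when no one reports them.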

Governance and security are not the same—and both matter

One important clarification emerging from Cyber Pulse is this: governance and security are related, but not interchangeable.

  • Governance defines ownership, accountability, policy, and oversight.
  • Security enforces controls, protects access, and detects cyberthreats.

Both are required. And neither can succeed in isolation.

AI governance cannot live solely within IT, and AI security cannot be delegated only to chief information security officers (CISOs). This is a cross-functional responsibility, spanning legal, compliance, human resources, data science, business leadership, and the board.

When AI risk is treated as a core enterprise risk—alongside financial, operational, and regulatory risk—organizations are better positioned to move quickly and safely.

Strong security and governance do more than reduce risk—they enable transparency. And transparency is fast becoming a competitive advantage.

From risk management to competitive advantage

This is an exciting time for leading Frontier Firms. Many organizations are already using this moment to modernize governance, reduce overshared data, and establish security controls that allow safe use. They are proving that security and innovation are not opposing forces; they are reinforcing ones. Security is a catalyst for innovation.

According to the Cyber Pulse report, the leaders who act now will mitigate risk, unlock faster innovation, protect customer trust, and build resilience into the very fabric of their AI-powered enterprises. The future belongs to organizations that innovate at machine speed and observe, govern, and secure with the same precision. If we get this right, and I know we will, AI becomes more than a breakthrough in technology—it becomes a breakthrough in human ambition.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Data Security Index 2026: Unifying Data Protection and AI Innovation, Microsoft Security, 2026.

2Based on Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

3Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

4July 2025 multinational survey of more than 1,700 data security professionals commissioned by Microsoft from Hypothesis Group.

Methodology:

Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

2026 Data Security Index: 

A 25-minute multinational online survey was conducted from July 16 to August 11, 2025, among 1,725 data security leaders. 

Questions centered around the data security landscape, data security incidents, securing employee use of generative AI, and the use of generative AI in data security programs to highlight comparisons to 2024. 

One-hour in-depth interviews were conducted with 10 data security leaders in the United States and United Kingdom to garner stories about how they are approaching data security in their organizations. 

Definitions: 

Active Agents are 1) deployed to production and 2) have some “real activity” associated with them in the past 28 days.  

“Real activity” is defined as 1+ engagement with a user (assistive agents) OR 1+ autonomous runs (autonomous agents).  
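
Expressed as code, this definition reduces to a simple predicate. The following is a literal translation of the wording above; the field names are assumptions, not the actual telemetry schema.

```python
from dataclasses import dataclass

@dataclass
class AgentUsage:
    deployed_to_production: bool
    user_engagements_28d: int    # assistive interactions, past 28 days
    autonomous_runs_28d: int     # autonomous runs, past 28 days

def has_real_activity(usage: AgentUsage) -> bool:
    # 1+ engagement with a user (assistive) OR 1+ autonomous runs (autonomous)
    return usage.user_engagements_28d >= 1 or usage.autonomous_runs_28d >= 1

def is_active_agent(usage: AgentUsage) -> bool:
    # Active = deployed to production AND real activity in the past 28 days
    return usage.deployed_to_production and has_real_activity(usage)
```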
