AI agents pose new cybersecurity risks

As adoption accelerates, many companies still aren’t including cybersecurity teams in AI planning. Experts warn that deploying agents without security oversight exposes organisations to breaches, credential misuse, and compliance failures.

As autonomous AI agents begin performing critical tasks across corporate networks, some cybersecurity experts are sounding the alarm about a new class of digital risk. These agents, designed to operate with minimal human oversight, now require the same identity protections and access controls as human users, but with new models tailored to their machine nature.

Why it matters: AI agents, when left unsecured, can inadvertently trigger data breaches, misuse credentials, or compromise regulatory compliance. Unlike traditional software, these agents can reason, adapt, and act continuously, increasing both their utility and their potential for unintended harm.

Industry leaders, including Okta and 1Password, warn that companies must implement identity frameworks specific to AI entities. Treating them like human accounts—relying on conventional multifactor authentication or manual oversight—is no longer sufficient. As Okta’s Chief Security Officer, David Bradbury, notes, these systems require elevated trust protocols that reflect their continuous, automated nature.

Emerging standards and responses: The 2025 RSA Conference marked a turning point in recognising agent identity management as a cybersecurity priority. Several vendors released tools designed to help IT teams provision, monitor, and revoke AI agent access. This reflects a broader shift toward securing ‘nonhuman identities’—a category that includes bots, APIs, and now, generative AI agents.

Deloitte projects that a quarter of firms using generative AI will pilot agentic systems this year, with that figure expected to double by 2027. Yet many organisations remain unaware of the security governance implications. Discussions on AI deployment often exclude cybersecurity teams, delaying the integration of crucial controls such as audit trails, permission scopes, and kill switches.

Governance implications: The proliferation of AI agents calls for a reevaluation of digital identity standards and cybersecurity frameworks. Traditional governance models based on static roles and human behaviour patterns are ill-equipped to manage autonomous software that can duplicate itself, operate 24/7, and evolve based on feedback loops.

Some experts are calling for the following controls (a brief illustrative sketch follows the list):

  • Clear identity assignment for each agent, with unique credentials and traceable activity logs.
  • Policy-based access control systems that define which systems an agent can interact with and under what conditions.
  • Emergency deactivation mechanisms to instantly revoke an agent’s access in case of malfunction or malicious behaviour.
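To make the three controls above concrete, here is a minimal sketch in Python of how they might fit together. The names used (AgentIdentity, PolicyEngine, the “erp:read” scope) are hypothetical and do not come from Okta, 1Password, or any other vendor’s product; this is an illustration of the pattern, not an implementation of any specific framework.

```python
# Illustrative sketch only: per-agent identity, policy-based access scopes,
# traceable activity logging, and an emergency "kill switch".
import uuid
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")


@dataclass
class AgentIdentity:
    """A unique, revocable identity assigned to one AI agent."""
    name: str
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    revoked: bool = False  # emergency deactivation flag


class PolicyEngine:
    """Policy-based access control: which systems an agent may touch."""

    def __init__(self) -> None:
        # agent_id -> set of permitted resource scopes
        self._grants: dict[str, set[str]] = {}

    def grant(self, agent: AgentIdentity, scope: str) -> None:
        self._grants.setdefault(agent.agent_id, set()).add(scope)
        log.info("GRANT %s scope=%s at=%s", agent.name, scope,
                 datetime.now(timezone.utc).isoformat())

    def revoke_all(self, agent: AgentIdentity) -> None:
        """Kill switch: instantly drop every permission for a misbehaving agent."""
        agent.revoked = True
        self._grants.pop(agent.agent_id, None)
        log.warning("REVOKE-ALL %s at=%s", agent.name,
                    datetime.now(timezone.utc).isoformat())

    def is_allowed(self, agent: AgentIdentity, scope: str) -> bool:
        allowed = (not agent.revoked) and scope in self._grants.get(agent.agent_id, set())
        # Every access decision is logged against the agent's unique identity,
        # giving the traceable activity trail described above.
        log.info("CHECK %s scope=%s allowed=%s", agent.name, scope, allowed)
        return allowed


if __name__ == "__main__":
    engine = PolicyEngine()
    agent = AgentIdentity(name="invoice-processing-agent")
    engine.grant(agent, "erp:read")
    assert engine.is_allowed(agent, "erp:read")        # permitted scope
    assert not engine.is_allowed(agent, "erp:write")   # never granted
    engine.revoke_all(agent)                           # emergency deactivation
    assert not engine.is_allowed(agent, "erp:read")    # access fully revoked
```

In practice, such checks would sit behind an identity provider and a centralised policy store rather than in-process objects, but the shape of the controls (unique credentials, scoped grants, audit logs, and instant revocation) is the same.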

The broader picture: These developments intersect with global efforts to define responsible AI practices. As entities like the OECD and the EU explore regulatory frameworks for AI governance, the question of AI identity and accountability is becoming central. Without updated standards, autonomous agents could introduce systemic vulnerabilities into critical infrastructure, financial systems, and government networks.

Cybersecurity must now evolve alongside AI capability. Securing AI agents is no longer a niche technical task—it is a core requirement for maintaining trust, compliance, and operational control in digital environments.
