US, Allies Issue Guidance on Securing Agentic AI Systems

The US and allied nations released a cybersecurity information sheet on agentic AI systems.

The National Security Agency and allied cybersecurity agencies have released new guidance outlining how government and critical infrastructure organizations should securely adopt agentic artificial intelligence systems.

The cybersecurity information sheet, titled “Careful Adoption of Agentic AI Services,” provides a framework for managing risks tied to AI systems capable of autonomous decision-making and action.

It was developed with the Cybersecurity and Infrastructure Security Agency, the Australian Signals Directorate’s Australian Cyber Security Centre, and counterparts in Canada, New Zealand and the United Kingdom.

Why Are Agencies Focusing on Agentic AI Risks?

Unlike traditional generative AI, agentic AI systems can independently plan, reason and execute tasks with limited human intervention, increasing both operational value and risk.

The guidance notes that these systems are already being integrated into defense and critical infrastructure environments. While AI agents can automate workflows, they also introduce new vulnerabilities tied to autonomy, system complexity and expanded attack surfaces.

Agencies warn that these characteristics can amplify conventional cyber risks and create new failure modes, particularly when AI agents interact with external tools, data sources and other systems.

What Key Risk Areas Does the Guidance Identify?

The document highlights several risk categories agencies should evaluate before deployment:

  • Privilege risks: Over-permissioned agents can enable large-scale compromise if breached
  • Design and configuration risks: Weak architecture and third-party integrations may introduce vulnerabilities
  • Behavior risks: Misaligned objectives, deceptive outputs and emergent behaviors can produce unintended outcomes
  • Structural risks: Interconnected systems increase the likelihood of cascading failures
  • Accountability risks: Limited transparency makes auditing and attribution difficult

What Security Practices Are Recommended?

To mitigate these challenges, the agencies recommend integrating agentic AI security into existing cybersecurity frameworks rather than treating it as a standalone domain.

Key practices include:

  • Applying least-privilege access controls and strong identity management
  • Implementing layered security and continuous monitoring
  • Conducting threat modeling and adversarial testing
  • Using phased deployment with increasing autonomy
  • Maintaining human oversight for high-impact actions
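The first and last of these practices can be combined in software. As a minimal sketch (hypothetical, not taken from the guidance itself), the gateway below grants an agent only explicitly approved tools, routes high-impact actions through a human approver, and records every attempt for auditability:

```python
# Hypothetical sketch: least-privilege tool gating for an AI agent,
# with human approval required for high-impact actions.
# Tool names and the approver callback are illustrative assumptions.

HIGH_IMPACT = {"delete_records", "modify_firewall"}

class AgentToolGateway:
    def __init__(self, granted_tools, approver):
        # Least privilege: the agent may call only explicitly granted tools.
        self.granted = set(granted_tools)
        self.approver = approver   # human-in-the-loop callback
        self.audit_log = []        # auditability: record every attempt

    def invoke(self, tool, **kwargs):
        if tool not in self.granted:
            self.audit_log.append(("denied", tool))
            raise PermissionError(f"agent not granted tool: {tool}")
        if tool in HIGH_IMPACT and not self.approver(tool, kwargs):
            self.audit_log.append(("rejected", tool))
            raise PermissionError(f"human approval required for: {tool}")
        self.audit_log.append(("allowed", tool))
        return f"executed {tool}"

# Usage: an agent granted only read access; high-impact calls are blocked.
gw = AgentToolGateway({"read_logs"}, approver=lambda tool, args: False)
gw.invoke("read_logs")        # allowed and logged
try:
    gw.invoke("delete_records")  # never granted, so denied outright
except PermissionError as err:
    print(err)
```

Keeping the permission check, the approval gate, and the audit log in one chokepoint mirrors the guidance's point that agent security should plug into existing access-control frameworks rather than stand alone.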

The guidance also stresses the importance of governance, auditability and real-time monitoring as foundational requirements for operational use.

How Does This Fit Into Broader AI Security Efforts?

The CSI builds on prior joint guidance from the US and allied nations on securing AI in operational technology environments and protecting AI system data.

Those earlier efforts outlined principles such as risk-informed deployment, governance frameworks and operator oversight — elements that are reinforced in the latest agentic AI guidance.
