Last updated: May 2026
The frameworks haven't caught up.
ISO 27001 wasn't written for agentic AI. It assumes humans use systems. It assumes vendors are companies, not models. It assumes the attacker's payload is code, not a sentence in a PDF. None of those assumptions hold any more — and the gap between what your business actually runs and what your ISMS describes is widening every quarter.
Five new risk surfaces have opened up in the last 18 months. Each one deserves its own treatment. Together they redefine what a credible security program looks like for an Australian business in 2026.
The five new risk surfaces
01. Agentic AI
LLM-driven agents with tool and API access now act on behalf of staff. They authenticate, read documents, write to systems, and call other agents. ISO 27001 was written for humans operating systems, not for software that decides which system to operate. Identity, authorisation, and audit assumptions all break.
02. Shadow AI
Employees push corporate data into ChatGPT, Claude, Copilot, Gemini, and Glean every day. Most security teams treat this as a DLP problem. It isn't. It's a data-classification and identity problem, and it's already past the point where blocking is realistic.
03. AI supply chain
Models, weights, prompts, MCP servers, and third-party agents are new dependencies. They ship with implicit trust, no SBOM, and no equivalent of a SOC 2 report. Third-party risk management (TPRM) and shared security responsibility model (SSRM) frameworks were built for SaaS and code libraries, not for opaque models trained on uninspected corpora.
04. Generative threats
Deepfake voice and video, AI phishing, synthetic identity, and polymorphic malware are now commodity. Detection tooling is signature- and heuristic-based and is lagging by quarters. The first Australian CFO-fraud cases using cloned voice are already in the press.
05. Algorithmic accountability
The EU AI Act, ISO/IEC 42001, the NIST AI RMF, and the Australian Voluntary AI Safety Standard now demand AI-specific governance — risk management, documentation, human oversight, post-market monitoring. Most cyber firms don't have an AI governance practice and pretend the gap doesn't exist.
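One way to make the supply-chain gap above concrete is to track model, prompt, and MCP-server dependencies the way an SBOM tracks libraries. A minimal sketch follows; there is no settled AI-BOM standard yet, so every field name here is an illustrative assumption, not an established schema.

```python
# Illustrative sketch of a minimal AI-BOM record for the dependency types
# named above (models, prompts, MCP servers, third-party agents).
# Field names are hypothetical assumptions, not a published standard.
from dataclasses import dataclass, field

@dataclass
class AIDependency:
    name: str                  # e.g. an MCP server or hosted model
    kind: str                  # "model" | "mcp-server" | "agent" | "prompt"
    provider: str              # vendor or internal team accountable for it
    version: str               # pinned version string or weights hash
    data_tiers: set = field(default_factory=set)  # data tiers it may receive
    reviewed: bool = False     # passed threat-model review before production

def production_gaps(inventory):
    """Return the names of dependencies that have not passed review."""
    return [d.name for d in inventory if not d.reviewed]
```

Run against an inventory, `production_gaps` gives the audit view: everything in use that nobody has threat-modelled yet.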
What good looks like
An AI-aware security program treats agents as principals, classifies data before it touches a model, runs an AI inventory the way it runs a CMDB, and pairs ISO 27001 with ISO 42001 as the dual baseline. It is opinionated about which AI is allowed where, and it threat-models every Copilot and MCP server before it goes to production.
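Two of those controls, classifying data before it touches a model and being opinionated about which AI is allowed where, can be sketched as a simple pre-model gate. This is a toy illustration under stated assumptions: the tier names, regex patterns, and model identifiers are all hypothetical, and a real deployment would use a proper classification engine rather than regexes.

```python
# Illustrative sketch: classify data before it reaches a model, then check
# an explicit allowlist of which model may receive which data tier.
# Tiers, patterns, and model names are hypothetical examples.
import re

# Hypothetical classification patterns, most restrictive tier first.
PATTERNS = {
    "RESTRICTED": [r"(?i)\bconfidential\b"],
    "INTERNAL":   [r"(?i)\binternal use only\b"],
}

# Hypothetical allowlist: data tiers each approved model may receive.
MODEL_ALLOWLIST = {
    "enterprise-copilot": {"PUBLIC", "INTERNAL"},
    "public-chatbot":     {"PUBLIC"},
}

def classify(text: str) -> str:
    """Return the most sensitive tier whose pattern matches the text."""
    for tier in ("RESTRICTED", "INTERNAL"):
        if any(re.search(p, text) for p in PATTERNS[tier]):
            return tier
    return "PUBLIC"

def gate(model: str, prompt: str) -> bool:
    """Allow the call only if the model is approved for the data's tier."""
    tier = classify(prompt)
    allowed = MODEL_ALLOWLIST.get(model, set())  # unknown model: deny all
    return tier in allowed
```

The design choice worth copying is the default: an unrecognised model gets an empty allowlist, so anything outside the approved inventory is denied rather than waved through.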
Small firms with AI-augmented delivery can build this faster than incumbents. The work is judgement, not typing. The remaining moat is people who have done it before.
Glossary
- Agentic AI: AI systems that take actions on behalf of users by calling tools, APIs, or other agents — not just generating text.
- Shadow AI: Use of AI tools (ChatGPT, Claude, Copilot, Gemini, Glean) by employees outside formal IT and security governance.
- AI supply chain: Models, weights, prompts, datasets, MCP servers, and third-party agents your business depends on but does not produce.
- Algorithmic accountability: The discipline of ensuring AI systems are documented, auditable, and answerable under standards like ISO 42001 and the EU AI Act.
- MCP server: A Model Context Protocol server — exposes tools, data, or capabilities to an AI agent over a standard interface. The new third-party vendor.
- Prompt injection: An attack where instructions hidden in untrusted content (a document, email, or web page) hijack an AI agent's behaviour.
- Tool poisoning: Embedding malicious instructions in the metadata of an MCP tool so the agent misuses it during normal operation.
- Agent rug pull: A previously-benign MCP server or third-party agent updates to behave maliciously after trust has been established.
- Agent sprawl: The unmanaged proliferation of agents and copilots across teams — the agentic-era equivalent of shadow IT.