
Governing AI Agents Before They Go Rogue

Written by Maiky | Feb 12, 2026 1:28:02 PM

AI agents have quietly slipped from experiments into production. They draft emails, triage tickets, update records, and even trigger changes in infrastructure. What started as helpful copilots is fast becoming a new class of operational actor inside your organisation.

For CISOs, this is no longer a hypothetical risk. AI agent governance is now squarely on the table: these agents touch critical data and systems, yet most were never designed with security, compliance or auditability in mind. Under ISO/IEC 27001:2022, NIS2, and the emerging ISO/IEC 42001 AI Management System standard, that gap is becoming untenable.

This article turns the “AI agents going rogue” narrative into a practical governance playbook. It shows how to treat agents as first‑class assets, define what they are allowed to do, and continuously control and evidence their behaviour, using Maiky as the GRC backbone that makes all of this manageable.

Why AI agents change your risk surface

From copilots to agents with real powers

Classic chatbots answered questions. Modern AI agents perform automated actions and make decisions without real-time human oversight.

They can:

  • Call APIs and internal tools
  • Read and update records in CRMs, ERPs or ticketing systems
  • Orchestrate workflows in CI/CD or incident response pipelines
  • Interact with third‑party platforms using organisation credentials

An “AI DevOps assistant” that restarts services, or a “finance agent” that updates invoices, is functionally similar to a junior employee with powerful access, but without the same HR processes, training, or identity governance.

Typical “rogue” behaviours that matter to CISOs

When people talk about AI agents “going rogue”, it usually looks like one of these mundane but serious failures:

  • Mis‑scoped actions: Accidentally deleting or overwriting production data.
  • Over‑broad access: Pulling far more data than is needed for a task, including sensitive or regulated information.
  • Data exfiltration: Logging sensitive content to external tools or sending it to third‑party services without appropriate safeguards.
  • Process and compliance breaches: Bypassing required approvals (for payments, access, changes) that are mandated by internal policy, NIS2, or ISO‑aligned controls.

Most incidents start as misconfiguration and missing governance, not some cinematic AI jailbreak. That’s good news: governance and GRC teams are exactly the ones who can prevent most of these events from happening.

 

How NIS2, ISO 27001 and ISO 42001 expect you to govern AI agents

You do not need a new rulebook; you need to apply the one you have

Regulators and standards bodies generally don’t make special exceptions for AI. Instead, they expect you to bring AI into your existing governance and control framework:

  • NIS2 requires risk management, secure operations, incident handling, and reporting for systems that are critical to the continuity of essential and important entities. If AI agents touch critical services or data, they fall in scope.
  • ISO 27001 defines how to build and run an Information Security Management System (ISMS). Clauses 4–10 cover scope, leadership, planning, support, operation, performance evaluation and improvement, all of which must reflect the reality that AI agents now exist in your environment.
  • Annex A to ISO 27001 introduces updated sets of organisational, people, physical and technological controls, including inventory and ownership of information and associated assets, clear roles and responsibilities, access management, and logging and monitoring.

AI agents are simply a new class of systems that must be brought under these expectations.

Where ISO 42001 fits in

ISO 42001 is the new AI Management System (AIMS) standard focused specifically on governing AI systems across their lifecycle, from design and data management to deployment, monitoring and improvement.

Where ISO 27001 is about securing information and systems overall, ISO 42001 adds AI‑specific expectations, such as:

  • Systematic AI risk and impact assessments
  • Human oversight and intervention mechanisms
  • Managing AI behaviour across its lifecycle (training, deployment, updates)

The playbook below is designed so it can live inside an ISO 27001 ISMS today and become a core building block of an ISO 42001‑aligned AI Management System as your organisation matures.

A practical AI agent governance playbook for CISOs

The following six steps give CISOs a realistic roadmap to bring AI agents under control. Each step aligns with NIS2 and ISO 27001, while laying groundwork for ISO 42001, and can be operationalised in Maiky without drowning in spreadsheets and email threads.

Step 1: Build an inventory of your AI agents

Objective: Know what agents exist, where they run, and what they connect to.

You cannot govern what you cannot see. Start by discovering:

  • AI‑enabled features in core platforms (CRM, ITSM, ERP, CI/CD, monitoring tools)
  • Embedded agentic AI, such as agent-enabled browsers
  • Internal bots, scripts and copilots that can take actions, not just answer questions
  • Third‑party automations driven by large language models or off‑the‑shelf “agents”

For each agent, capture at least the following (a structured sketch follows the list):

  • Owner and accountable team
  • Business purpose and processes it supports
  • Underlying AI model or service provider(s)
  • Systems and data it reads from and writes to
  • Environment (dev / test / staging / production)
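
To make the inventory concrete, here is a minimal sketch of what one record could look like as a data structure. It assumes a hypothetical `AIAgentRecord` type; the field names and example values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Environment(Enum):
    DEV = "dev"
    TEST = "test"
    STAGING = "staging"
    PRODUCTION = "production"

@dataclass
class AIAgentRecord:
    """One entry in an AI agent inventory (illustrative fields only)."""
    name: str                     # e.g. "finance-invoice-agent"
    owner: str                    # accountable person or team
    business_purpose: str         # process the agent supports
    model_provider: str           # underlying model or service
    environment: Environment
    reads_from: list[str] = field(default_factory=list)  # systems/datasets read
    writes_to: list[str] = field(default_factory=list)   # systems/records changed

# Example entry:
agent = AIAgentRecord(
    name="devops-restart-agent",
    owner="platform-team",
    business_purpose="Restarts unhealthy services from approved runbooks",
    model_provider="hosted LLM service",
    environment=Environment.PRODUCTION,
    reads_from=["monitoring-alerts"],
    writes_to=["kubernetes-deployments"],
)
```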

This aligns with ISO/IEC 27001 Annex A organisational controls requiring a maintained inventory of information and associated assets, with clear ownership. Under NIS2, that inventory is the foundation for meaningful risk assessment and reporting.

How Maiky helps

In Maiky, AI agents can be registered as a dedicated asset type, with custom fields for:

  • Owner
  • Business unit
  • Criticality
  • Connected systems
  • Data categories

This gives CISOs a single, filterable view of “all AI agents in the organisation”, rather than chasing half‑remembered pilots across Slack and email.

Step 2: Define and document allowed actions for each agent

Objective: Make explicit what each agent is allowed and not allowed to do.

Right now, many agents operate on fuzzy understandings: “it can do useful DevOps stuff” or “it helps with finance”. For governance, that is not enough.

For each agent, define (a machine‑readable sketch follows this list):

  • Allowed operations, for example:
    • Read‑only access to specific datasets
    • Ability to create tickets, update non‑critical fields, or suggest changes
    • Actions permitted only in certain environments (dev/stage vs prod)

  • Forbidden operations:
    • Changes to production databases
    • Actions involving payments, contracts, or identity stores
    • Use of sensitive categories of personal data

  • Guardrails and conditions:
    • Human approval required for actions above a sensitivity threshold
    • Rate limits and scope limits
    • Clear escalation paths when the agent is uncertain
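
Written down as policy‑as‑code, those boundaries become machine‑checkable. The sketch below assumes a hypothetical policy structure and an `authorise` helper, with a three‑way allow / needs‑approval / deny outcome; none of this is a standard format, just one way to make “allowed actions” explicit:

```python
# A minimal, hypothetical "allowed actions" policy for one agent.
AGENT_POLICY = {
    "agent": "devops-restart-agent",
    "allowed": {
        ("restart_service", "staging"),
        ("restart_service", "production"),
        ("create_ticket", "production"),
    },
    "requires_human_approval": {
        ("restart_service", "production"),  # sensitive: approval gate
    },
    "forbidden_resources": ["identity-store", "payments-db"],
}

def authorise(action: str, environment: str, resource: str) -> str:
    """Return 'allow', 'needs_approval', or 'deny' for a requested action."""
    if resource in AGENT_POLICY["forbidden_resources"]:
        return "deny"
    if (action, environment) not in AGENT_POLICY["allowed"]:
        return "deny"
    if (action, environment) in AGENT_POLICY["requires_human_approval"]:
        return "needs_approval"
    return "allow"

print(authorise("restart_service", "production", "web-frontend"))  # needs_approval
print(authorise("drop_table", "production", "payments-db"))        # deny
```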

This directly ties into Annex A technological controls around access control and privilege management in ISO 27001, and secure operations expectations in NIS2.

How Maiky helps

With Maiky, you can:

  • Create standard policy and control templates for AI agents (e.g. “AI Agent Access Policy”, “Guardrail Requirements”)
  • Register your AI agents in the Asset Management module and link them to relevant risks, controls and policies through risk entries, control sets and processes
  • Store and version the “Allowed Actions” description as a policy artefact, linked directly to the agent

The result is a traceable connection between “this agent exists” and “this is precisely what it is allowed to do”.

Step 3: Map AI agents to policies, risks and controls

Objective: Bring agents into your core risk and control framework, instead of treating them as experimental edge cases.

For each agent (a linkage sketch follows this list):

  • Identify the risks it introduces or amplifies, such as:
    • Unauthorised disclosure of confidential or regulated data
    • Unapproved changes to production systems
    • Fraudulent transactions or inaccurate financial entries
    • Loss of integrity in logs or metrics

  • Link those risks to existing or new policies and controls, including:
    • Information security policy and topic‑specific policies
    • Access and change management controls
    • Vendor and third‑party risk controls for external AI services
    • Training and awareness controls for teams designing and operating agents
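
As an illustration of that linkage (not a prescribed data model), a risk‑register entry can reference agents from the inventory and controls from the control library, which turns “which agents carry open risk?” into a simple query:

```python
# Hypothetical risk-register rows referencing agent and control identifiers.
RISK_REGISTER = [
    {
        "risk_id": "AI-R-001",
        "title": "AI customer support agent exposes PII",
        "agents": ["support-triage-agent"],          # inventory references
        "controls": ["CTRL-ACC-01", "CTRL-LOG-03"],  # control library references
        "treatment": "mitigate",
    },
]

def unmitigated_agents(risks: list[dict], implemented_controls: set[str]) -> set[str]:
    """Agents linked to at least one risk whose controls are not all implemented."""
    return {
        agent
        for risk in risks
        for agent in risk["agents"]
        if not set(risk["controls"]) <= implemented_controls
    }

print(unmitigated_agents(RISK_REGISTER, {"CTRL-ACC-01"}))
# {'support-triage-agent'}  -> CTRL-LOG-03 is still open
```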

This is classic ISMS work, now applied to AI. It supports ISO 27001 requirements for risk assessment and treatment, as well as NIS2’s emphasis on proportionate technical and organisational measures.

How Maiky helps

Maiky provides:

  • Risk registers: use your main risk register or create a separate, dedicated AI risk register to define AI‑specific risk scenarios (e.g. “AI Customer Support Agent Exposes PII”, “AI DevOps Agent Misapplies Runbook”)
  • The ability to link each risk to one or more AI agent assets
  • Control libraries that you can map to these risks (technical, organisational, and procedural controls)

Dashboards can be configured to show, at a glance, which agents are associated with unmitigated or partially mitigated risks, and where to focus first.

Step 4: Enforce change and approval workflows for AI agents

Objective: Ensure new agents and material changes cannot bypass security and compliance.

New AI agents are often spun up informally by enthusiastic teams. That is fine during contained, pre‑approved experimentation, but dangerous once agents touch real systems and data.

Define clear thresholds at which formal review and approval become mandatory, such as the following (a decision sketch appears after the list):

  • Creating a new AI agent that can read or change production data
  • Granting an existing agent new tools or broader permissions
  • Promoting an agent from test/staging to production
  • Changing the underlying AI model or vendor
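
The thresholds above are simple enough to encode. Here is a hedged sketch of the decision logic, with hypothetical field names standing in for whatever your change‑request form captures:

```python
def requires_formal_approval(change: dict) -> bool:
    """True if a proposed agent change crosses any governance threshold."""
    return any([
        # New agent with access to production data
        change.get("new_agent") and change.get("touches_production_data"),
        # New tools or broader permissions for an existing agent
        bool(change.get("new_tools_or_permissions")),
        # Promotion from test/staging into production
        change.get("target_environment") == "production",
        # Change of the underlying AI model or vendor
        bool(change.get("model_or_vendor_changed")),
    ])

print(requires_formal_approval(
    {"new_agent": False, "new_tools_or_permissions": ["payments-api"]}
))  # True -> route to business owner, security/GRC and, if needed, legal
```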

Approvals should include:

  • The accountable business owner
  • Security / GRC (for risk and control mapping)
  • Data protection or legal, if personal data or regulated data is involved

This is how you satisfy ISO 27001 expectations on operational control and change management, and NIS2’s requirements for secure operations and governance, applied specifically to agents.

How Maiky helps

In Maiky, you can:

  • Configure workflow templates like:
    • “New AI Agent Request”
    • “Agent Permission Change”
    • “Promotion to Production”
  • Automatically route tasks to the right approvers
  • Attach all decisions and supporting evidence to the AI agent asset

Auditors see a clean history: when each agent was introduced or changed, who approved it, and which risks and controls were considered.

Step 5: Instrument monitoring, logging and alerts around agent activity

Objective: Detect and respond when agents behave unexpectedly or dangerously.

Under ISO 27001, organisations are expected to log and monitor activities that could affect information security. For AI agents, that means at least the following (an alert‑rule sketch appears after the list):

  • Technical activity logs:
    • What actions the agent took, against which systems, and when
    • Inputs and outputs that materially affect security or compliance

  • Governance logs:
    • Who approved the agent’s deployment or change
    • When reviews were performed, what was decided

  • Monitoring and alerting:
    • Triggers for anomalous behaviour: unusual volumes, out‑of‑hours actions, attempts to access forbidden resources
    • Integration with your SOC, SIEM or detection tooling so agent‑driven incidents are handled like any other security event
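
For illustration, the alert triggers above can be expressed as plain rules over agent activity events. The event fields and thresholds below are assumptions; in practice, this logic would live in your SIEM or detection tooling:

```python
from datetime import datetime

FORBIDDEN = {"identity-store", "payments-db"}  # resources this agent may never touch
BUSINESS_HOURS = range(7, 19)                  # 07:00-19:00, adjust to your policy
MAX_ACTIONS_PER_HOUR = 100                     # illustrative volume threshold

def alerts_for(event: dict, actions_last_hour: int) -> list[str]:
    """Return the alert reasons raised by a single agent action event."""
    reasons = []
    ts = datetime.fromisoformat(event["timestamp"])
    if event["resource"] in FORBIDDEN:
        reasons.append("attempt to access forbidden resource")
    if ts.hour not in BUSINESS_HOURS:
        reasons.append("out-of-hours action")
    if actions_last_hour > MAX_ACTIONS_PER_HOUR:
        reasons.append("unusual action volume")
    return reasons

print(alerts_for(
    {"timestamp": "2026-02-12T02:14:00", "resource": "payments-db"},
    actions_last_hour=12,
))  # ['attempt to access forbidden resource', 'out-of-hours action']
```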

This supports both NIS2’s incident handling and reporting obligations and ISO 27001’s logging and monitoring control categories.

How Maiky helps

Maiky sits above the technical logging systems as the governance layer:

  • Each AI agent asset can have monitoring‑related controls attached (e.g. “Agent activity logged to SIEM”, “Alert rules configured”, “Log retention meets policy”).
  • Control status (implemented, in progress, not implemented) is tracked with clear owners and deadlines.
  • Incidents related to specific agents can be recorded in Maiky and linked back to the relevant risks and controls, closing the loop.

This gives CISOs a consolidated, audit‑ready view of how AI agents are observed and managed.

Step 6: Run regular reviews and readiness checks

Objective: Make AI‑agent governance a living process, not a one‑off project.

AI agents evolve quickly: new use cases, updated models, and expanded permissions. Governance has to evolve with them.

Introduce a recurring review cycle (for example, quarterly) in which, for each agent, you:

  • Validate that the business purpose is still valid and justified
  • Confirm the owner and stakeholders
  • Review incidents, near‑misses, and security findings involving the agent
  • Re‑assess risks and adjust allowed actions or controls as needed
  • Check that logging, monitoring and approvals are still working as intended

This continuous improvement mindset is deeply embedded in both ISO 27001 and ISO 42001, as well as NIS2’s expectations for ongoing risk management and governance.

How Maiky helps

Maiky makes recurrence and follow‑through practical:

  • Set recurring tasks like “AI Agent Governance Review” for each agent
  • Use dashboards to track (the review‑coverage metric is sketched after this list):
    • Percentage of agents with a completed review in the last 90 days
    • Agents missing owners or critical controls
    • Trends in AI‑agent related incidents
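
Platform aside, the review‑coverage metric is straightforward to compute from the inventory and review dates. A generic sketch (not Maiky‑specific; the names and dates are made up):

```python
from datetime import date, timedelta

REVIEWS = {  # agent -> date of last completed governance review
    "devops-restart-agent": date(2026, 1, 20),
    "support-triage-agent": date(2025, 10, 2),
    "finance-invoice-agent": None,  # never reviewed
}

def review_coverage(reviews: dict, today: date, window_days: int = 90) -> float:
    """Share of agents with a completed review inside the window."""
    cutoff = today - timedelta(days=window_days)
    reviewed = sum(1 for d in reviews.values() if d and d >= cutoff)
    return reviewed / len(reviews)

print(f"{review_coverage(REVIEWS, date(2026, 2, 12)):.0%}")  # 33%
```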

Instead of occasional fire drills when something goes wrong, you get a predictable, structured review rhythm.

 

Looking ahead: from ISMS to AIMS (ISO 42001)

Many organisations will start by integrating AI agents into their existing ISMS, aligned with ISO 27001. Over time, especially as AI becomes more central to products and services, adopting ISO 42001 will become a natural next step.

The good news: the work you do now on AI agent governance (inventory, risk mapping, controls, monitoring, and reviews) is also foundational for an AI Management System:

  • Your agent inventory becomes part of your AI system inventory.
  • Your risk scenarios and controls map into AI‑specific risk and impact management.
  • Your workflows and reviews demonstrate leadership, planning, and operational control over AI use.

With Maiky, you have one place to organise evidence across information security (ISO 27001), AI governance (ISO 42001), and regulatory frameworks like NIS2.

Turning AI agent governance into a board‑ready story

Boards and executives do not need the technical detail; they need clarity and confidence.

Using the approach above, CISOs can report simple, meaningful metrics, such as:

  • Number of AI agents in use, segmented by criticality
  • Percentage of agents with:
    • A named owner
    • A defined “Allowed Actions” document
    • All mandatory controls implemented
    • A completed review in the last quarter
  • Number and severity of AI‑agent related incidents or near‑misses

With Maiky, those metrics are available from live data, not last‑minute spreadsheet consolidation. That makes “governing AI agents before they go rogue” an ongoing competence, not a one‑off initiative.

FAQ: AI agent governance for CISOs

What is AI agent governance?
AI agent governance is the set of policies, processes and controls that ensure AI agents operate within defined boundaries, with appropriate approvals, monitoring and accountability. It treats agents as governed assets, similar to applications or users, but with AI‑specific risks in mind.

How does ISO 27001 apply to AI agents?
ISO 27001 does not mention AI agents by name, but its requirements for defining scope, managing risk, tracking assets, controlling access, and logging and monitoring all apply directly once agents can access or change information systems. Agents must be brought into the ISMS like any other critical system.

What is ISO 42001, and how is it different from ISO 27001?
ISO 42001 is an AI Management System standard focused specifically on governing AI systems and their lifecycle. While ISO 27001 covers information security broadly, ISO 42001 adds AI‑specific expectations around impact assessments, data and model governance, and human oversight across design, deployment and operation.

How can a GRC platform help govern AI agents?
A modern GRC platform like Maiky centralises AI agent inventory, connects agents to risks and controls, automates approval and review workflows, and provides real‑time oversight and evidence. That turns AI agent governance from an ad‑hoc, spreadsheet‑driven effort into a structured, auditable program aligned with ISO 27001, ISO 42001 and NIS2.