
Over the past year, nearly every conversation I have had with fellow chief information security officers (CISOs) has revolved around the same tension: how do we bring agentic AI into the enterprise without increasing risk?
Recently, UiPath co-founder and CEO Daniel Dines wrote “enterprises want the speed and intelligence of AI agents and automation, but never at the expense of security or control.” That observation mirrors what I hear from security leaders across industries.
The question is no longer whether agents will operate inside our enterprises. 2025 answered that. The real question is whether we are governing them with the same rigor we applied when we moved critical workloads to the cloud.
Not long ago, our security models assumed software followed instructions. Now, it makes decisions.
In 2025, agentic AI moved from experimentation to operational deployment. Autonomous systems are planning and executing actions across enterprise environments with delegated authority.
For CISOs, this is not another tooling cycle. It is a structural shift in how digital action is initiated, authorized, and governed.
We are no longer securing static applications. We are securing digital actors.
Governance models are evolving, but deployment is outpacing them. This creates a trust gap between autonomy and oversight.
Closing this gap requires more than policy, periodic reviews, and static controls. It requires governance that operates at the same speed as the systems it oversees, applied with the same rigor and precision we have brought to every major technology shift before it.
Traditional cybersecurity rested on four foundational assumptions:
Deterministic behavior
Fixed roles and permissions
Predictable execution paths
Human accountability at each decision layer
Agentic systems challenge each of these assumptions. They:
Adapt at runtime
Dynamically select tools
Persist memory across interactions
Make intermediate decisions independently
These capabilities create value through workflow compression, cross-system orchestration, and operational acceleration.
They also redefine exposure.
Security teams must now evaluate not only what a system can access, but how it sequences actions, how it forms intent, and how that intent can be influenced over time.
Prompt injection becomes execution manipulation. Memory persistence introduces longitudinal risk. Delegated authority concentrates impact.
This is not an edge case caused by misconfiguration. It is inherent to autonomy itself. The control question shifts from “Was access appropriate?” to “Was the decision pathway governed?”
When an autonomous system can access our customer relationship management (CRM) or other SaaS platforms, modify infrastructure, or initiate payments, it must be governed with the same rigor as any privileged human operator.
Every AI agent must:
Possess a unique governed identity
Operate under enforced least privilege
Be subject to credential lifecycle controls
Generate immutable audit trails
Undergo continuous behavioral monitoring
Many organizations classify agents as non-human identities. That framing is useful but incomplete.
Unlike traditional service accounts, agents reason about how they use their permissions.
In the agentic era, identity is no longer merely an access layer. It is the enforcement layer for autonomy.
Zero trust principles must extend fully to digital actors with the same rigor applied across the broader enterprise control environment. Manual provisioning models designed for human onboarding will fail under autonomous scale. Identity governance must become dynamic, policy-driven, and continuously validated.
When autonomy scales, identity becomes infrastructure. Digital actors do not operate in isolation. They operate within interconnected enterprise workflows that must be governed in a unified manner.
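To make these requirements concrete, here is a minimal, illustrative sketch of per-agent identity enforcement in Python. The agent names, scopes, and helper functions are hypothetical, not any specific vendor's API; a real deployment would sit behind the enterprise identity provider and key management service.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical model: each agent has its own governed identity with narrowly
# scoped permissions and short-lived, automatically rotated credentials.
@dataclass
class AgentIdentity:
    agent_id: str               # unique identity, never shared between agents
    owner: str                  # accountable human or team
    allowed_scopes: set[str]    # enforced least privilege
    credential_expiry: datetime # lifecycle-controlled, never long-lived

def audit(event: str, agent_id: str, scope: str) -> None:
    """Append-only audit trail; in practice this would feed the SIEM."""
    print(f"{datetime.now(timezone.utc).isoformat()} {event} agent={agent_id} scope={scope}")

def authorize(identity: AgentIdentity, requested_scope: str) -> bool:
    """Pre-action check: valid credentials and an explicitly granted scope."""
    if datetime.now(timezone.utc) >= identity.credential_expiry:
        audit("credential_expired", identity.agent_id, requested_scope)
        return False
    if requested_scope not in identity.allowed_scopes:
        audit("scope_denied", identity.agent_id, requested_scope)
        return False
    audit("scope_granted", identity.agent_id, requested_scope)
    return True

# Example: a CRM-reading agent is denied a payment action it was never granted.
crm_agent = AgentIdentity(
    agent_id="agent-crm-summarizer-01",
    owner="sales-ops",
    allowed_scopes={"crm:read"},
    credential_expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
authorize(crm_agent, "crm:read")        # granted
authorize(crm_agent, "payments:write")  # denied: outside least-privilege scope
```

The point of the sketch is that the grant, the denial, and the expiry are all policy decisions evaluated at runtime and recorded, not permissions baked into a long-lived service account.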
Traditional observability answers a retrospective question: what happened?
Agentic systems require us to answer a different one: why did it happen?
Governance now depends on visibility into:
Decision provenance signals (inputs, constraints, and outcomes)
Tool invocation sequences
Policy evaluations
Memory interactions
Decision overrides
If we cannot reconstruct, from observable execution and policy artifacts, how intent was formed and how actions were selected, governance becomes theoretical.
Without explainability and auditability, AI governance and security are fragile. CISOs must demand cognitive telemetry not as forensic evidence after failure, but as a continuous assurance layer operating alongside execution and visible in our security information and event management (SIEM) tooling.
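One way to picture cognitive telemetry is as a structured provenance record emitted for every agent decision and shipped to the SIEM. The sketch below is a hypothetical event shape written in Python; the field names and schema are illustrative assumptions, not a standard.

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

# Hypothetical "cognitive telemetry" event: one record per agent decision,
# capturing why an action was taken, not just that it completed.
def provenance_event(agent_id: str, goal: str, inputs: list[str],
                     policy_checks: list[dict], tool_call: dict,
                     outcome: str) -> str:
    event = {
        "event_id": str(uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "goal": goal,                   # what the agent was trying to achieve
        "inputs": inputs,               # prompts, retrieved memory, upstream data
        "policy_checks": policy_checks, # every policy evaluated and its result
        "tool_call": tool_call,         # which tool was invoked, with what parameters
        "outcome": outcome,             # result, error, or human override
    }
    return json.dumps(event)            # emitted to the SIEM pipeline

# Example record for a single tool invocation.
print(provenance_event(
    agent_id="agent-crm-summarizer-01",
    goal="Summarize open opportunities for the weekly report",
    inputs=["prompt:weekly-summary", "memory:last-run"],
    policy_checks=[{"policy": "least-privilege", "result": "pass"}],
    tool_call={"tool": "crm.search", "params": {"stage": "open"}},
    outcome="completed",
))
```

A record like this lets an analyst reconstruct the decision pathway after the fact and lets automated detections flag anomalous sequences while they are still in flight.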
Autonomy operating at machine speed requires oversight that operates at machine speed.
Human review alone cannot scale to that requirement. Autonomous systems will increasingly participate in supervising other autonomous systems, enforcing policy, validating behavior, and escalating only when predefined risk thresholds are exceeded. Governance becomes a distributed capability, not a manual checkpoint.
Policies written in documentation do not constrain autonomous systems. We must ensure governance operates at runtime.
That requires:
Pre-execution policy enforcement
Continuous conformance monitoring (against approved policies)
Version and model traceability
Explicit human-in-the-loop (HITL) approval thresholds for high-impact actions
Clear human override pathways
This represents a shift from static compliance to dynamic supervision.
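As a rough illustration of runtime enforcement, the sketch below shows a hypothetical pre-execution gate that denies tools outside an agent's approved set and escalates high-impact actions to a human approver. The tool names, allowlists, and thresholds are invented for the example.

```python
# Illustrative allowlist per agent identity (versioned alongside the policy).
ALLOWED_TOOLS = {
    "agent-finance-ops-02": {"payments.initiate", "ledger.read"},
}

# Actions that always require explicit human-in-the-loop approval.
HIGH_IMPACT_TOOLS = {"payments.initiate", "infra.modify", "user.delete"}

def pre_execution_gate(agent_id: str, tool: str, params: dict,
                       approved_by_human: bool = False) -> str:
    """Returns 'execute', 'escalate', or 'deny' before any action is taken."""
    if tool not in ALLOWED_TOOLS.get(agent_id, set()):
        return "deny"       # outside the agent's approved toolset
    if tool in HIGH_IMPACT_TOOLS and not approved_by_human:
        return "escalate"   # human-in-the-loop threshold reached
    return "execute"

print(pre_execution_gate("agent-finance-ops-02", "ledger.read", {}))        # execute
print(pre_execution_gate("agent-finance-ops-02", "payments.initiate", {}))  # escalate
print(pre_execution_gate("agent-finance-ops-02", "infra.modify", {}))       # deny
```

In a mature environment, the same gate could be evaluated by a supervisory agent, with only the escalation path routed to a human, which is how oversight keeps pace with machine speed.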
Boards and regulators are evolving accordingly. The era of aspirational responsible AI statements is closing. Executive leadership increasingly requires demonstrable control environments backed by evidence.
Governance must be embedded into monitoring pipelines, identity systems, and orchestration layers spanning multiple models, external AI services, and the various bring-your-own components usually found in enterprise automation.
Governance cannot assume a single model boundary. It must operate continuously and consistently across heterogeneous model environments, integrating with existing security and governance frameworks rather than replacing them.
Enterprise customers are no longer asking whether AI is secure. They are asking how you prove it.
Independent validation, structured AI management systems, and formalized risk frameworks are maturing rapidly. Procurement expectations are shifting from feature velocity to verifiable governance.
Autonomy without demonstrable assurance will stall at the board level. Across the industry, structured maturity models and formal certification pathways are beginning to emerge, translating governance principles into measurable accountability.
Trust is not implied by innovation. It is earned through evidence.
Organizations that will lead in 2026 are not those deploying the most AI agents. They are those governing them deliberately. Six pillars define secure agentic deployment:
1. Identity first
Every AI agent is a governed identity with enforced least privilege and continuous validation.
2. Tool segmentation
High-impact systems sit behind contextual authorization gateways with explicit approval thresholds.
3. Memory protection
Persistent state is encrypted, integrity validated, access controlled, and auditable (a minimal sketch follows this list).
4. Runtime guardrails
Pre-execution constraints and runtime anomaly monitoring operate continuously. In mature environments, supervisory agents may assist in enforcing these guardrails, enabling continuous validation at scale.
5. Auditability, observability, and decision provenance
Autonomous systems must provide traceable records of inputs, policy evaluations, and resulting actions, not just task completion logs.
6. Human escalation pathways
Clear governance thresholds define when autonomy yields to executive accountability.
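To ground the memory protection pillar, here is a minimal sketch assuming the open source cryptography package; the key handling, ownership model, and audit calls are simplified placeholders rather than a production design.

```python
# Hypothetical memory protection for persistent agent state. Fernet provides
# authenticated encryption, so tampering is detected on read.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # in practice: issued and rotated by a KMS
vault = Fernet(key)

def store_memory(agent_id: str, record: str) -> bytes:
    """Encrypts agent memory at rest and records the write in an audit trail."""
    token = vault.encrypt(f"{agent_id}:{record}".encode())
    print(f"audit: memory_write agent={agent_id} bytes={len(token)}")
    return token

def load_memory(requesting_agent: str, owner_agent: str, token: bytes) -> str | None:
    """Access control plus integrity validation before memory is reused."""
    if requesting_agent != owner_agent:
        print(f"audit: memory_access_denied agent={requesting_agent}")
        return None           # memory is scoped to the agent that wrote it
    try:
        return vault.decrypt(token).decode()
    except InvalidToken:
        print(f"audit: memory_tampering_detected agent={requesting_agent}")
        return None

token = store_memory("agent-crm-summarizer-01", "last_run=2026-01-12")
print(load_memory("agent-crm-summarizer-01", "agent-crm-summarizer-01", token))
print(load_memory("agent-other", "agent-crm-summarizer-01", token))  # denied
```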
Governance does not slow innovation. It unlocks executive trust. Trust accelerates adoption.
The trajectory is clear.
In 2024, most organizations were experimenting
In 2025, autonomy moved into production
2026 will test whether governance has kept pace
Adversaries are leveraging autonomy to scale reconnaissance and exploitation. Boards are demanding demonstrable oversight. Regulators are formalizing expectations around AI risk management and operational control.
The leadership gap is not technological; it is the lack of trust in governance maturity.
As CISOs, our mandate is not to resist or slow down transformation. It is to shape that transformation so autonomous systems become trustworthy components of our enterprise.
We are responsible for engineering the unified control planes that allow digital actors to operate safely at machine speed and for ensuring their autonomy is deployed deliberately, governed transparently, monitored continuously, and validated independently.
Autonomy does not remove accountability; it amplifies it. The next era of enterprise security will not be defined by firewalls or models; it will be defined by how well we govern autonomous action.
The organizations that lead will not be those that adopted agentic AI first. They will be those that secured it, governed it, and proved it.
And that responsibility sits with us.

Chief Information Security Officer (CISO), UiPath