
Inadequate risk controls are cited as one of the main reasons why agentic AI projects fail to deliver on their promise.1 AI agents don’t “break” systems the way bad code or phishing links do. They make decisions. They act autonomously. That means your attack surface is no longer just software; it’s behavior. And that changes everything about how we think about control.
A few months ago, one of our banking customers described their dilemma: “We don’t want to stop people from using AI, but we can’t afford for an AI agent to move money it shouldn’t.”
That’s the core tension in enterprise AI today. You want speed, but not at the cost of control. You want intelligent autonomy, but with accountability. Traditional IT governance was about who can do something. Agentic governance is about what actions an autonomous system is allowed to take, when, and under what policy.
We hear the same questions across industries, from chief information officers (CIOs) in healthcare to IT admins in global manufacturing:
“How can we be sure our data isn’t used to train third-party models?”
“How do we guarantee agents won’t access systems they shouldn’t?”
“Can we stop an agent from exposing PII when it calls an LLM?”
“What if an agent’s decision violates internal approval workflows?”
The first concern is data training risk. Enterprise leaders want assurance that their proprietary data won’t end up improving someone else’s model. UiPath enforces strict separation between customer environments and any AI model, whether managed by us or a partner. All third-party large language model (LLM) providers integrated into the UiPath Platform™ operate under signed agreements that explicitly prohibit data retention or model training using customer content. That contractual assurance is reinforced technically: every model call runs through the UiPath AI Trust Layer, which strips identifying metadata and maintains audit trails of every interaction.
The second issue is control over third-party and regional models. Multinational enterprises often face conflicting mandates: adopt modern AI, but ensure data never leaves a jurisdiction. The UiPath Platform allows IT teams to specify which LLMs can be used, where they are hosted, and under what data residency rules. Customers can bring their own LLM subscriptions or route inference through private gateways that comply with local regulatory frameworks. This means you can innovate globally while remaining compliant locally.
A third challenge is unauthorized access and privilege sprawl. Traditional applications inherit a user’s access permissions; autonomous agents require the same discipline. In UiPath, every agent is identity-bound. It inherits folder-level and role-based access controls from the same governance framework that manages people and robots. If an agent is not explicitly granted permission to use a system or tool, it cannot do so: no exceptions, no shadow credentials. This keeps your zero trust model intact even as agents proliferate.
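The deny-by-default logic behind identity-bound access can be pictured in a few lines. This is a minimal illustrative sketch, not the UiPath access-control implementation; the grant table and function names are assumptions.

```python
# Illustrative identity-bound permission check: an agent inherits
# explicit folder/role grants, just like a user would. The grant
# table and names are assumptions, not the UiPath implementation.

GRANTS: dict[tuple[str, str], set[str]] = {
    ("invoice-agent", "finance-folder"): {"read", "execute"},
}

def can(agent: str, folder: str, action: str) -> bool:
    """Deny by default: no explicit grant means no access."""
    return action in GRANTS.get((agent, folder), set())

assert can("invoice-agent", "finance-folder", "read")
assert not can("invoice-agent", "payments-folder", "read")  # no shadow credentials
```

The key design choice is the empty-set default: an agent that was never mentioned in the grant table is indistinguishable from one that was explicitly denied.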
The fourth concern is personally identifiable information (PII) exposure through AI workflows. Many organizations want to use generative models in HR, customer service, or finance, but they’re concerned about personal data being exposed to LLMs. UiPath mitigates that through in-flight masking: the AI Trust Layer pseudonymizes personally identifiable information before it reaches a model, replaces it with reversible tokens, and rehydrates the data once the model’s output returns, preserving accuracy end to end. It’s the same logic data-loss-prevention systems use for storage, now applied to AI inference in real time.
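The pseudonymize-then-rehydrate round trip can be sketched simply. This is a toy illustration of the pattern, not the AI Trust Layer itself; the token format, regex, and helper names are all assumptions.

```python
import re

# Toy sketch of in-flight PII masking with reversible tokens.
# Token format and function names are illustrative assumptions.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace each email address with a reversible token."""
    mapping: dict[str, str] = {}
    def _swap(match: re.Match) -> str:
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL_RE.sub(_swap, text), mapping

def rehydrate(text: str, mapping: dict[str, str]) -> str:
    """Restore original values after the model responds."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = pseudonymize("Contact jane.doe@example.com about invoice 42.")
# The model only ever sees the token, never the real address;
# rehydrate() restores it in the model's response.
```

A production system would cover far more PII categories and keep the token map in a secured store, but the round-trip shape is the same.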
Finally, auditability remains essential. As one IT director told us, “If I can’t prove what an agent did, it didn’t happen safely.” Our unified governance model captures every agentic action as an auditable event: the decision trigger, the model used, the confidence level, and the policy context. These records integrate into enterprise audit systems and security information and event management (SIEM) tools for continuous oversight. The goal isn’t just visibility after an incident; it’s operational transparency at scale.
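An auditable event like the one described above can be pictured as a single structured record. The JSON field names here are illustrative assumptions, not the UiPath audit schema; they simply mirror the four elements named in the text.

```python
import json

# Illustrative audit event capturing the four elements named above:
# decision trigger, model used, confidence level, and policy context.
# Field names are assumptions, not the UiPath unified audit schema.

def audit_event(agent_id: str, trigger: str, model: str,
                confidence: float, policy: str) -> str:
    """Serialize one agent action as a JSON line for SIEM ingestion."""
    return json.dumps({
        "agent_id": agent_id,
        "decision_trigger": trigger,
        "model": model,
        "confidence": confidence,
        "policy_context": policy,
    })

line = audit_event("invoice-agent-01", "new_invoice_received",
                   "approved-llm", 0.97, "finance-payments-policy")
```

One self-describing JSON line per action is exactly the shape SIEM pipelines expect, which is why schema-based audit records integrate so cleanly with existing observability tooling.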
Together, these measures translate long-standing enterprise security principles like least privilege, separation of duties, and traceability into the world of autonomous software, making governance an operating boundary for AI. In practice, this means your governance framework will adapt as your automation landscape changes, much like continuous configuration monitoring does for cloud environments today.
The UiPath governance model follows a layered approach similar to how enterprises structure their own security and compliance architecture. It combines agentic, IT, and infrastructure governance into a single, enforceable system of controls.

At the top is agentic governance, which governs how AI agents reason, act, and stay within defined policy boundaries. This layer is built around three interlocking elements that together define responsible autonomy:
Controlled agency: each agent’s scope of action is predefined and constrained. Agents operate only within assigned roles and systems, with their behavior orchestrated through UiPath Maestro™. When Autopilot suggests optimizations, human review and approval remain part of the deployment cycle, ensuring accountability for every change.
Centralized policies: enterprise policies apply universally across all automations. Administrators can define rules such as which agent groups can access payment systems or handle confidential data, and enforce them platform-wide. Even if local configurations drift, global policies override inconsistencies to maintain compliance.
Model governance: LLMs are subject to the same rigor as any enterprise service. The UiPath AI Trust Layer filters and masks PII during model interactions, applies region and data-handling restrictions, and ensures every model call is logged and auditable.
Beneath this sits IT governance, which manages access, identity, and collaboration across people, robots, and agents. Role-based controls, folder permissions, and unified audit capabilities ensure that every automation operates under intentional and traceable authorization.
Finally, infrastructure governance provides the foundation. It enforces encryption standards, data-residency requirements, network isolation, and compliance alignment with frameworks such as GDPR, FedRAMP, and ISO 27001.
Together, these layers provide a complete chain of accountability—from how an agent decides to act, to who approved the policy enabling it, to how the data supporting that action is secured.
The 2025.10 release turns governance principles into operational controls that IT teams can configure, test, and demonstrate. Each capability is designed to address a specific governance gap identified by our customers.
Agentic guardrails provide a catalog of prebuilt behavioral constraints that can be tailored to your environment. Administrators can define thresholds for confidence levels, restrict actions involving sensitive data, or block entire task categories when risk conditions are met. For example, a financial operations team can limit agents so that no payment is processed without a 95% confidence score or a secondary approval. Guardrails bring measurable, rule-based consistency to autonomous behavior.
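The payment example above can be expressed as a simple rule check. This is a minimal sketch of the idea under assumed field names; it is not the UiPath guardrail API.

```python
from dataclasses import dataclass

# Illustrative guardrail mirroring the payment example above.
# Field and function names are assumptions, not the UiPath API.

@dataclass
class ProposedAction:
    category: str                 # e.g. "payment"
    confidence: float             # model confidence, 0.0 to 1.0
    has_secondary_approval: bool  # human sign-off obtained?

def allow(action: ProposedAction) -> bool:
    """Block payments below 95% confidence unless a human approved."""
    if action.category == "payment":
        return action.confidence >= 0.95 or action.has_secondary_approval
    return True

assert allow(ProposedAction("payment", 0.97, False))
assert not allow(ProposedAction("payment", 0.80, False))
assert allow(ProposedAction("payment", 0.80, True))   # human override
```

Encoding the guardrail as data rather than agent logic is what makes the behavior measurable and consistent across every agent it applies to.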

Personally identifiable information never needs to leave your control. With this feature, values such as employee names, emails, or account IDs are pseudonymized before being sent to an LLM. The AI Trust Layer automatically rehydrates the original values when the response is received, maintaining accuracy without exposing sensitive data.
This approach meets internal privacy policies and external requirements such as GDPR while allowing teams to safely automate text generation, summarization, or document review.

Customers can now connect their own large language models—or existing cloud subscriptions—while maintaining all of the platform’s governance and auditing safeguards. Whether the model runs on a private endpoint, a regional cloud, or through a preferred provider, the AI Trust Layer validates schemas, logs usage, and applies the same policy rules. This capability gives IT complete flexibility over model selection, cost control, and data residency without fragmenting oversight.

Agent design policies extend UiPath Automation Ops to agent creation. Administrators can enforce standards such as minimum reliability scores, maximum token counts, or temperature thresholds before agents go live. These policies ensure consistent cost, accuracy, and reliability across the organization. They build compliance into the development process rather than inspecting it afterward.
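A pre-deployment check of this kind might look like the sketch below. The thresholds and dictionary keys are illustrative assumptions, not the Automation Ops policy schema.

```python
# Illustrative pre-deployment design-policy check. Thresholds and
# field names are assumptions, not the Automation Ops schema.

POLICY = {
    "min_reliability": 0.90,   # minimum reliability score
    "max_tokens": 4096,        # maximum token budget per call
    "max_temperature": 0.5,    # cap on sampling temperature
}

def violations(agent: dict) -> list[str]:
    """Return the policy rules this agent configuration breaks."""
    issues = []
    if agent["reliability"] < POLICY["min_reliability"]:
        issues.append("reliability below minimum")
    if agent["max_tokens"] > POLICY["max_tokens"]:
        issues.append("token budget exceeds maximum")
    if agent["temperature"] > POLICY["max_temperature"]:
        issues.append("temperature above threshold")
    return issues

# An agent only goes live when violations() returns an empty list.
```

Running this gate at design time, rather than auditing deployed agents afterward, is what “building compliance into the development process” means in practice.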

Unified audit introduces a centralized, schema-based audit system for UiPath Automation Cloud™. It consolidates event data from all services into a single location, making it easier to search, filter, and export for compliance or incident response. For large enterprises, this provides the missing link between automation activity and corporate observability systems like Splunk or Azure Sentinel.

UiPath has achieved certification under ISO/IEC 42001:2023, the world’s first AI management system standard. The certification confirms that the UiPath Platform’s design, development, and governance practices meet rigorous global benchmarks for transparency, risk management, and accountability. For regulated industries, this provides independent assurance that UiPath AI systems meet the highest recognized standards for responsible automation.
Behind the scenes, UiPath has strengthened the platform’s security foundation through collaboration with Noma Security, adopting advanced AI security technologies to ensure our customers benefit from greater resilience, continuous assurance, and trust in their automation platform of choice.
We’re improving AI asset discovery, strengthening AI and ML supply chain risk detection, and embedding continuous monitoring into every stage of platform development. This ensures customers can innovate faster with confidence—on a platform that’s continuously protected against emerging risks and built with the same governance rigor they apply across their own operations.
UiPath Automation Cloud™ is now available in the Switzerland and UAE regions, enabling customers to store and process data locally to meet national data-protection requirements. This expansion supports public-sector organizations and regulated industries that require local data residency without compromising performance or scalability.

These capabilities turn governance from an abstract framework into something administrators can configure, monitor, and demonstrate directly.
As agentic automation ecosystems evolve, static policy enforcement becomes insufficient. The world changes, industries evolve, regulations shift, and organizations reorganize. This is why we are developing a continuous assessment engine that evaluates governance posture in real time. It monitors AI agents and automations, analyzes policy adherence, and assigns dynamic risk scores to agents and processes. It can also translate human-readable policies into machine-enforceable rules, allowing organizations to detect compliance drift before it becomes a violation.
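Translating a human-readable policy into a machine-enforceable rule, then scoring adherence, can be pictured like this. A toy sketch under assumed names; the real engine would cover many rules and far richer telemetry.

```python
# Toy sketch: a human-readable policy rendered as a machine-checkable
# rule, plus a crude risk score. All names are illustrative assumptions.

RULES = {
    "agents handling payments must keep confidence >= 0.95":
        lambda a: a["category"] != "payment" or a["confidence"] >= 0.95,
}

def risk_score(agent: dict) -> float:
    """Fraction of policy rules this agent currently violates."""
    failed = sum(1 for check in RULES.values() if not check(agent))
    return failed / len(RULES)

# A score above 0.0 flags compliance drift before it becomes a violation.
assert risk_score({"category": "payment", "confidence": 0.90}) == 1.0
assert risk_score({"category": "payment", "confidence": 0.96}) == 0.0
```

Continuously re-evaluating such scores against live agent behavior is what turns a static policy document into a posture that moves with the organization.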
In practice, this means governance will keep pace with innovation. It will provide continuous visibility into how agents and processes are performing against enterprise policies, just as modern cloud security platforms continuously assess configuration posture.
Explore the UiPath Trust Center, your hub for agentic automation security, compliance, audit certifications, and more.

Senior Product Marketing Manager - Governance, Security, and Trust, UiPath