In February 2002, Michael Rasmussen coined the term GRC at Forrester Research. He gave compliance a name and a framework. Now, more than two decades later, the discipline he defined is being redefined by autonomous artificial intelligence. The gap between automation and autonomy is not a marketing abstraction. It is a structural shift in how compliance work gets done, who does it, and what evidence means when machines can generate, review, and attest to it.
Michael Rasmussen himself now maps the evolution to GRC 7.0, the orchestration phase, where agentic AI and digital twins converge to create systems that sense, analyze, act, and adapt. His framework traces the progression: GRC 1.0 through 3.0 were manual, fragmented, and document‑driven. GRC 4.0 through 6.0 digitized and integrated compliance into platforms and enterprise architectures. GRC 7.0 is where the system thinks.
This is not theoretical. In 2026, GRC agents are operating in production environments across enterprises, with vendors reporting up to 70 % reductions in manual work, 60 % shorter investigation times, and 99 % accuracy on compliance evidence analysis. The EU AI Act provisions taking full effect on August 2, 2026 are accelerating adoption as companies face the reality of governing AI systems with tools that are themselves powered by AI.
What Is an AI Agent in GRC?
Before unpacking the capabilities, we need to define what we mean by agent, because the term has been stretched to cover everything from a chatbot that answers compliance policy questions to a system that autonomously collects evidence, maps it to control requirements, and escalates only when confidence falls below a defined threshold.
An AI agent in GRC is a system that possesses three properties:
- Observe – continuously pulls data from integrations, scans for changes in control states, regulatory requirements, and organizational risk indicators.
- Decide – works within predefined boundaries, determining whether evidence is sufficient, whether a control is operating effectively, and whether a finding requires escalation or remediation.
- Act – collects evidence, updates control states, drafts responses, and creates audit trails without requiring human initiation at each step.
Automation merely executes a predefined task. Intelligence assists a human with decision‑making. An agent observes, decides, acts, and escalates in a continuous loop. Humans set the parameters and review the outcomes; the agent handles the execution in between.
Complyance describes this distinction clearly in its framework: a co‑pilot suggests actions for approval; an agent executes actions independently and logs them for human review. The difference is who holds the keyboard during routine operations.
The Agent Loop: Reason, Act, Escalate
Every GRC agent operates through variations of the same core loop, which Michael Rasmussen describes in the GRC 7.0 framework as observe → analyze → act → escalate.
Reason. The agent evaluates evidence quality, identifies patterns, correlates data across systems, and draws conclusions that inform risk posture. When a control fails, the reasoning engine does not just flag the failure. It analyzes the failure context, identifies likely root causes, cross‑references similar failures across the organization, and recommends remediation paths ranked by effectiveness and effort.
Act. Within confidence boundaries, the agent executes tasks autonomously. It collects evidence from cloud infrastructure, HR systems, and code repositories; maps that evidence to control requirements across multiple frameworks; drafts policy documents from organizational context; answers customer security questionnaires using mapped controls and historical responses; and generates audit‑ready reports. Every action is logged with complete metadata, timestamps, and chain‑of‑custody documentation that auditors can review and verify.
Escalate. When complexity or uncertainty exceeds the agent’s confidence threshold, it notifies the appropriate human or supervisory system. A vendor‑risk scoring agent that encounters a third party with ambiguous security documentation does not force a score; it flags the ambiguity, provides its best assessment with a confidence level, and routes it to the responsible analyst. Escalation is not failure: it is the agent knowing what it does not know and routing appropriately.
These three properties form a continuous control loop. The agent monitors, detects, evaluates, acts, and escalates 24 × 7 across every integrated system and every mapped control. The human oversight layer reviews escalated items, adjusts confidence thresholds, and refines agent behavior based on audit feedback and organizational risk appetite.
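The loop described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the `Finding` type, the callbacks, and the 0.80 threshold are all assumptions for the example.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # illustrative; set per control domain in practice

@dataclass
class Finding:
    control_id: str
    passed: bool
    confidence: float  # 0.0-1.0, the agent's certainty in its own judgment

def run_loop(findings, act, escalate):
    """One pass of the reason -> act -> escalate loop.

    `findings` are the outputs of the reasoning step; `act` and `escalate`
    are callbacks a platform would supply. Every decision is recorded so the
    loop leaves an auditable trail.
    """
    audit_log = []
    for finding in findings:
        if finding.confidence >= CONFIDENCE_THRESHOLD:
            act(finding)        # autonomous action within boundaries
            outcome = "acted"
        else:
            escalate(finding)   # route to a human reviewer
            outcome = "escalated"
        audit_log.append((finding.control_id, outcome, finding.confidence))
    return audit_log
```

The key design point is that escalation is just another logged branch of the same loop, not an exception path.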
The Agents in Production Today
Complyance: Evidence Review, Vendor Risk, Policy Drafting
Complyance, backed by Series A funding led by GV, operates the most mature set of specialized GRC agents in production. Their platform reports a 70 % reduction in manual GRC work, a 7× average return on investment, and a 4.9/5 customer satisfaction rating.
Key agents include:
- Evidence Review Agent – examines collected evidence before audits and flags gaps proactively.
- Vendor Questionnaire Review Agent – surfaces risky responses in third‑party assessments.
- Vendor Risk Scoring Agent – provides objective, data‑driven risk scores based on evidence rather than self‑reported questionnaires.
- Customer Trust Questionnaire Agent – auto‑drafts responses to client security inquiries using historical data and mapped controls.
Complyance also offers specialized agents for NIST CSF, HIPAA, risk‑treatment planning, policy drafting, and control remediation. The platform supports 100+ frameworks and 100+ enterprise integrations.
A critical architectural choice is full isolation of each client’s AI environment—Complyance AI never trains on client data. This isolation is a procurement requirement for regulated sectors such as financial services, healthcare, and government.
Neal Bridges, CISO: “After years of juggling disparate tools, Complyance centralized our audit assessments in one place, saving weeks of work each quarter.”
Anecdotes: The Agentic Platform with the Data Engine
Anecdotes builds its agent capabilities on a proprietary data foundation that it argues is the differentiator. The company operates 230+ native integrations (AWS, Azure, Okta, GitHub, Jira, etc.) and normalizes all evidence into a GRC‑native data structure that includes complete metadata, item counts, and end‑to‑end audit trails with precise timestamps.
The core premise: AI accuracy in GRC is fundamentally a data problem. Messy, incomplete, or inconsistently structured evidence undermines even the smartest agents. Anecdotes solves the data layer first, then deploys agents on top of clean, audit‑grade data.
- Agent Studio – a no‑code builder that lets compliance teams create custom agents with specific triggers, tasks, and actions. For example, a compliance analyst can design an agent that monitors employee off‑boarding across Okta and GitHub, detects deprovisioning gaps, creates tickets in Jira, and escalates if the gap persists beyond a defined threshold.
- Agent Library – production‑ready agents for common GRC workflows. ChatGRC provides a conversational interface: “Show me all failing controls in our ISO 27017 scope,” or “What is our current risk posture for data‑encryption controls?”
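The off‑boarding agent described for Agent Studio could be sketched as follows. This is a hypothetical illustration, not Anecdotes' actual builder output: the data sources are plain Python collections, and `create_ticket`/`escalate` stand in for Jira and notification integrations.

```python
def check_offboarding(hr_departed, okta_active, github_active,
                      create_ticket, escalate, max_open_days=3):
    """Detect deprovisioning gaps: users who left per HR records but still
    hold Okta or GitHub access.

    `hr_departed` maps username -> days since exit; `okta_active` and
    `github_active` are sets of usernames with live accounts.
    """
    gaps = []
    for user, days_since_exit in hr_departed.items():
        still_active = [system for system, active in
                        (("okta", user in okta_active),
                         ("github", user in github_active)) if active]
        if still_active:
            gaps.append(user)
            create_ticket(user, still_active)      # e.g. open a Jira issue
            if days_since_exit > max_open_days:
                escalate(user, days_since_exit)    # gap persisted too long
    return gaps
```

In a production agent the same trigger/task/action structure would run on a schedule against live integration data.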
Anecdotes’ requirement‑level mapping eliminates duplicate work across frameworks. A single piece of evidence can satisfy multiple overlapping requirements, cutting investigation time by 60 % in a pharma case study and delivering daily audit readiness.
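Requirement‑level mapping is essentially a many‑to‑many relation between evidence items and framework requirements. A minimal sketch, with made‑up evidence IDs and requirement references:

```python
# One evidence item can satisfy overlapping requirements across frameworks.
# All IDs below are illustrative, not a vendor's real mapping.
EVIDENCE_MAP = {
    "mfa-config-export": {
        ("SOC 2", "CC6.1"),
        ("ISO 27001", "A.9.4.2"),
        ("NIST CSF", "PR.AC-7"),
    },
    "encryption-at-rest-report": {
        ("SOC 2", "CC6.7"),
        ("ISO 27001", "A.10.1.1"),
    },
}

def requirements_satisfied(evidence_ids):
    """Union of all requirements covered by the given evidence items."""
    covered = set()
    for eid in evidence_ids:
        covered |= EVIDENCE_MAP.get(eid, set())
    return covered

def coverage_gap(required, evidence_ids):
    """Requirements in scope that still lack supporting evidence."""
    return set(required) - requirements_satisfied(evidence_ids)
```

Because the relation is many‑to‑many, collecting one piece of evidence can close requirements in several frameworks at once, which is where the duplicate‑work savings come from.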
Mario Duarte, VP of Security, Snowflake: “The platform’s ability to bring credible data sets together has been invaluable.”
Drew Gutstein, CISO, Hudson River Trading: “Real‑time risk posture insights enable confident executive reporting.”
Trustero: Multi‑Agent AI with Patented Accuracy
Trustero adopts a multi‑agent architecture where specialized agents collaborate:
- Control Agent – monitors and validates control performance.
- Evidence Agent – collects and organizes audit‑ready data.
- Policy Agent – analyzes policies against frameworks.
- Risk Agent – surfaces and assesses organizational risks.
Trustero claims efficiency gains up to 100‑to‑1 for specific GRC roles and holds US Patent 12,032,908 for its AI‑powered compliance evidence management approach. Their patented AI Trust Graph connects evidence to controls to frameworks, delivering 99 % accuracy in internal testing.
On March 17, 2026, Trustero launched an enhanced Evidence Management system that unifies evidence collection, AI‑powered evidence‑to‑control mapping, and the Trustero Intelligence copilot. The copilot lets teams query, correlate, filter, and analyze evidence using natural language, combine multiple pieces of evidence to generate derived evidence, and perform row‑by‑row analysis on tabular data with automated counting and pass/fail determination.
Key capabilities:
- Receptors – automate evidence collection from dozens of systems.
- Versioning – stores every version of collected evidence and surfaces the version relevant to each audit’s timeframe, eliminating hours of manual reconstruction.
- Natural‑language querying – “Which controls failed last quarter because of missing encryption keys?”
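The row‑by‑row pass/fail analysis mentioned above boils down to applying a control rule to every row of tabular evidence and counting the results. A sketch, assuming CSV input and a caller‑supplied rule (none of this reflects Trustero's internal implementation):

```python
import csv
import io

def analyze_rows(csv_text, predicate):
    """Row-by-row pass/fail over tabular evidence, with automated counting.

    `predicate` encodes the control rule (here: MFA enabled for every user).
    Failing rows are returned so a reviewer can drill down.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    failures = [r for r in rows if not predicate(r)]
    return {
        "total": len(rows),
        "passed": len(rows) - len(failures),
        "failed": len(failures),
        "failing_rows": failures,
    }

# Illustrative evidence export
evidence = """user,mfa_enabled
alice,true
bob,false
carol,true"""

result = analyze_rows(evidence, lambda r: r["mfa_enabled"] == "true")
```

The counts become the attestation ("2 of 3 users pass"), while the failing rows become the remediation queue.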
Justin Dooley, CFO, Chassi: “We see time savings of 10‑to‑1 overall and approaching 100‑to‑1 for my specific role, with a 75 % reduction in internal audit costs.”
Izak Mutlu, former CISO, Salesforce: “I wish I’d had a tool like Trustero providing full context and continuous control insights during my tenure.”
The Enterprise Architecture Shift
What these implementations reveal is not just new features but a new architecture for how compliance programs operate.
- From periodic to continuous – Traditional GRC relies on quarterly or annual assessments. Agentic GRC runs 24 × 7, continuously ingesting data, updating control states, and surfacing risks in real time.
- From siloed tools to integrated data fabric – Agents require a unified evidence layer. Platforms like Anecdotes and Trustero treat data as a product, normalizing it before any AI logic is applied.
- From manual escalation to confidence‑driven routing – Human reviewers are only involved when the agent’s confidence falls below a configurable threshold, dramatically reducing noise and fatigue.
- From static policies to adaptive controls – Reasoning engines can suggest policy refinements based on emerging patterns, enabling a feedback loop that keeps controls aligned with evolving regulations.
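The "integrated data fabric" bullet above hinges on normalizing heterogeneous source records into one evidence schema before any agent logic runs. A minimal sketch, with invented field names for two source shapes:

```python
from datetime import datetime, timezone

def normalize(source, raw):
    """Normalize a raw record from an integration into one evidence schema.

    The extractors and field names are illustrative; a real data fabric
    covers many more source shapes and preserves full provenance.
    """
    extractors = {
        "okta":   lambda r: (r["profile"]["login"], r["status"] == "ACTIVE"),
        "github": lambda r: (r["login"], not r.get("suspended_at")),
    }
    subject, active = extractors[source](raw)
    return {
        "source": source,
        "subject": subject,
        "active": active,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "raw": raw,  # original payload retained for the audit trail
    }
```

Once every record shares this shape, agents can reason across Okta, GitHub, and any other source without per‑integration special cases.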
The shift also forces IT and security teams to rethink governance. AI‑driven agents must be auditable themselves, which is why isolation (Complyance), provenance (Trustero), and transparent confidence scores (all three vendors) are now non‑negotiable requirements.
Practical Steps to Adopt AI Agents in Your GRC Program
If you’re convinced that autonomous agents can accelerate your compliance journey, here’s a pragmatic roadmap:
- Assess your data readiness – Inventory evidence sources, evaluate data quality, and map gaps. Platforms like Anecdotes provide a data‑engine assessment that can be completed in a few weeks.
- Define confidence thresholds – Work with risk owners to set acceptable confidence levels for each control domain. Begin conservatively, so that borderline findings escalate to a human, and adjust the thresholds as you gain trust in the system.
- Pilot a single use case – Choose a high‑impact, low‑complexity workflow (e.g., vendor questionnaire review). Deploy the relevant agent, monitor its performance, and refine thresholds based on real‑world results.
- Expand incrementally – Once the pilot proves its ROI, layer on additional agents—evidence review, policy drafting, continuous control monitoring—while keeping the data fabric synchronized.
- Establish governance for the agents themselves – Document model versioning, isolation boundaries, and audit logs. Treat the AI layer as a regulated component subject to the same oversight as any other critical system.
- Train the people – Provide hands‑on workshops for compliance analysts and auditors so they understand how to interpret confidence scores, override decisions, and feed corrective feedback into the learning loop.
Following this approach lets you reap early efficiency gains without overwhelming your organization with a wholesale technology overhaul.
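The per‑domain confidence thresholds from the roadmap above can be as simple as a table the governance process owns. A sketch, with invented domain names and starting values:

```python
# Per-domain confidence thresholds a governance process would own and tune.
# A finding below its domain's threshold goes to a human; values are examples.
THRESHOLDS = {
    "access_control": 0.80,
    "vendor_risk":    0.80,
    "encryption":     0.85,
}

def needs_review(domain, confidence):
    """True if a finding in this domain must be routed to a human."""
    return confidence < THRESHOLDS[domain]

def adjust(domain, delta, floor=0.50, ceiling=0.95):
    """Move a domain's threshold up (more escalation) or down (more
    autonomy) as audit feedback accumulates."""
    new = min(max(THRESHOLDS[domain] + delta, floor), ceiling)
    THRESHOLDS[domain] = round(new, 2)
    return THRESHOLDS[domain]
```

Keeping thresholds in reviewable configuration, rather than buried in model code, is what makes the human‑in‑the‑loop boundary auditable.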
Key Takeaways
- Data first: Clean, normalized evidence is the foundation of any successful GRC AI deployment.
- Start small, scale fast: A focused pilot demonstrates value quickly and builds confidence for broader roll‑outs.
- Human‑in‑the‑loop matters: Set clear confidence thresholds and keep escalation pathways transparent to avoid blind automation.
- Choose a platform with strong isolation and provenance: Regulatory scrutiny will soon demand proof that the AI itself is auditable.
- Treat AI as a living control: Continuously monitor model performance, update thresholds, and incorporate feedback to keep the system aligned with evolving regulations.
Conclusion
Autonomous GRC agents are no longer a futuristic concept: they are already delivering measurable savings and higher assurance across finance, healthcare, and tech firms. By moving from periodic checklists to a continuously observing, reasoning, and acting ecosystem, organizations can cut manual effort by up to 70 %, shrink investigation cycles by more than half, and approach the 99 % evidence‑analysis accuracy that leading platforms report.
The journey starts with a clear look at your data landscape, a modest pilot, and a governance framework that treats the AI as a regulated asset. When you combine those steps with a platform that isolates client data, provides transparent confidence scores, and offers a rich library of ready‑made agents, the path to GRC 7.0 becomes a series of achievable milestones rather than a leap into the unknown.
Ready to bring autonomous compliance into your organization? Begin with a data‑readiness assessment, pick a high‑impact use case, and let the agents do the heavy lifting while you focus on strategic risk decisions. The future of GRC is here—make sure you’re part of it.