
What 500+ Security Pros Think About AI in Cybersecurity

AI cybersecurity insights: 500+ security pros reveal which AI tools work in production, what stays in pilot, and the real concerns teams face deploying AI in cybersecurity operations.

Truvara Team
March 25, 2026
11 min read

The debate over artificial intelligence in cybersecurity has shifted from “should we use it?” to “how do we use it without breaking things?” After years of vendor promises and conference keynotes about autonomous security operations, the industry finally has hard data on what is actually happening inside SOC walls.

A SANS Institute survey of over 500 security professionals in 2025 reveals a profession caught between genuine enthusiasm and grounded skepticism. The numbers tell a story of teams moving past experimentation into selective, measured deployment—and a growing divide between organizations that have figured out where AI adds value versus those still chasing every new tool announcement.

Behavior‑Based Detection Is Winning the Signature War

The headline finding is striking: 67 percent of surveyed organizations have shifted from signature‑based detection to behavior‑based detection powered by machine learning. This isn’t a theoretical trend or a vendor talking point; it represents a fundamental architectural change in how the majority of surveyed teams approach threat hunting and incident response.

What makes this shift noteworthy is what it displaced. For over two decades, signature‑based detection anchored most enterprise security stacks. Known‑bad patterns, indicators of compromise, and rule‑based alerting were the backbone of SIEM operations. But the limitations became impossible to ignore. Polymorphic malware outpaced signature updates. Zero‑day exploits arrived faster than rule writers could respond. Alert fatigue turned SOC analysts into triage machines rather than threat hunters.

Behavior‑based detection addresses these gaps, but it does not come without trade‑offs. Teams report that machine‑learning models require more compute resources and demand analysts who understand statistical deviation rather than just rule matching. The learning curve is real, especially for organizations that have relied on turnkey signature‑based products for years.
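
To make the trade-off concrete, here is a minimal sketch of the statistical-deviation idea behind behavior-based detection, assuming hourly outbound-traffic counts per host; the field names, window size, and threshold are illustrative, not any vendor's implementation.

```python
from collections import defaultdict, deque
import statistics

WINDOW = 24 * 7     # keep one week of hourly observations per host
Z_THRESHOLD = 3.5   # illustrative: flag deviations beyond ~3.5 standard deviations

baselines = defaultdict(lambda: deque(maxlen=WINDOW))

def score_observation(host, outbound_bytes):
    """Return a z-score against the host's own history, or None while warming up."""
    history = baselines[host]
    z = None
    if len(history) >= 30:                         # require a minimal baseline first
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
        z = (outbound_bytes - mean) / stdev
    history.append(outbound_bytes)
    return z

def is_anomalous(host, outbound_bytes):
    z = score_observation(host, outbound_bytes)
    return z is not None and z > Z_THRESHOLD
```

Even a toy version like this illustrates the operational shift: there is no rule to read, only a baseline, a deviation score, and a threshold someone must tune against the organization's tolerance for false positives.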

Enterprise vs. Mid‑Market AI Adoption Rates

The gap between enterprise and mid‑market AI adoption is narrower than most vendor narratives suggest, but it remains significant in ways that matter.

| Metric | Enterprise (5,000+ employees) | Mid-Market (500-5,000 employees) |
| --- | --- | --- |
| Organizations using AI in production | 74% | 51% |
| Organizations still piloting AI tools | 18% | 33% |
| Organizations not actively pursuing AI | 8% | 16% |
| Average AI tools deployed | 4.2 | 2.1 |
| Dedicated AI/ML security staff | 68% have at least 1 | 22% have at least 1 |

The enterprise advantage is not just budget. It is institutional: dedicated data‑science teams, mature data pipelines, and the ability to run parallel evaluations across multiple vendor platforms. Mid‑market teams are deploying fewer tools, but that often means tighter integration and less sprawl once a tool proves its worth.

A counterintuitive finding: mid‑market organizations that moved to AI‑powered detection report faster time‑to‑remediation on average, likely because their smaller teams were already operating lean and could integrate AI tools directly into existing workflows without layers of approval committees.

Where the Hype Ends and Production Begins

Perhaps the most useful part of the SANS survey is the categorization of what teams have actually deployed into production versus what remains stuck in pilot purgatory.

In production at the majority of AI‑using organizations:

  • Phishing detection and email security (89 % of AI‑using orgs)
  • Malware classification and behavioral analysis (76 %)
  • Anomaly detection in network traffic (71 %)
  • User behavior analytics for insider‑threat detection (64 %)

In pilot phase at most organizations:

  • Autonomous incident‑response orchestration (62 % still piloting)
  • AI‑driven threat‑intelligence enrichment (54 % still piloting)
  • Natural‑language log querying for SOC analysts (47 % still piloting)
  • Automated vulnerability prioritization (41 % still piloting)

The pattern is clear. Organizations trust AI most when it augments human decision‑making—classifying a suspicious email, flagging anomalous traffic, clustering malware samples. They are far more cautious when AI is asked to take autonomous action or replace analyst judgment in prioritization decisions. This isn’t resistance to change; it’s the result of hard experience with false positives, context‑blind models, and the very real consequences of an AI tool making the wrong call on a production system.
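
The augmentation pattern can be stated simply: the model proposes, the analyst disposes. The sketch below is a hypothetical triage flow, not any specific product's behavior, in which a phishing verdict above a confidence threshold is queued for analyst review rather than acted on automatically.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str          # e.g. "phishing" or "benign"
    confidence: float   # model-reported confidence, 0.0 to 1.0
    evidence: list      # snippets the model cites, shown to the analyst

REVIEW_THRESHOLD = 0.7  # illustrative; tuned per organization

def triage(message_id, verdict, review_queue):
    """Route an AI verdict to a human instead of taking autonomous action."""
    if verdict.label == "phishing" and verdict.confidence >= REVIEW_THRESHOLD:
        review_queue.append({"message": message_id, "verdict": verdict})
        return "queued_for_analyst_review"
    return "no_action"
```

The design choice is the point: the model's output becomes one more piece of evidence in the analyst's queue, never a trigger for quarantine on its own.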

The Hallucination and False‑Positive Problem

When asked about their top concerns with AI‑powered security tools, respondents did not name cost or vendor lock‑in first. They named hallucination rates and false positives.

Sixty‑one percent of surveyed professionals rated hallucination in AI‑generated incident summaries as a significant concern. For context, an LLM‑powered SOC assistant might summarize an alert chain and introduce a connection between events that never occurred—a plausible but fabricated correlation that sends an analyst down the wrong investigative path for hours. Several respondents described incidents where AI‑generated reports referenced systems, user accounts, or MITRE ATT&CK techniques that were not present in the underlying data.
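
One mitigation teams described is a grounding check: before an AI-generated summary reaches an analyst, verify that the entities it names actually appear in the underlying events. The sketch below is a deliberately naive illustration of that idea; the entity extraction and event structure are assumptions for the example.

```python
import re

def find_unsupported_entities(summary, source_events):
    """Return ATT&CK technique IDs and host/account-like tokens mentioned in an
    AI-generated summary that never appear in the raw events it claims to summarize."""
    claimed = set(re.findall(r"T\d{4}(?:\.\d{3})?", summary))   # MITRE technique IDs
    claimed |= {tok.strip(".,()") for tok in summary.split()
                if "." in tok or "\\" in tok}                    # hostnames, domain accounts

    observed = set()
    for event in source_events:                                  # assumed: list of flat dicts
        for value in event.values():
            observed.update(str(value).split())

    return {entity for entity in claimed if entity and entity not in observed}

# Any non-empty result flags the summary as potentially hallucinated:
# hold it for verification instead of letting it drive the investigation.
```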

False positives rank second at 55 %. Anyone who has watched a machine‑learning model flag the quarterly finance close as a potential data‑exfiltration event knows how costly that can be. Tuning AI‑driven false positives is harder than adjusting a YARA rule; it requires quality training data, regular model retraining, and analysts who can interpret why the model is misclassifying events.

| Concern | Enterprise Response Rate | Mid-Market Response Rate |
| --- | --- | --- |
| Hallucination in AI outputs | 58% | 65% |
| False positive rates | 51% | 59% |
| Skills gap for managing AI tools | 47% | 63% |
| Lack of explainability | 44% | 52% |
| Data privacy for AI training | 41% | 38% |
| Cost and licensing | 33% | 56% |

Mid‑market teams are more vocal about every concern category, which tracks with their smaller staff sizes. A skills gap hurts more when you have five people on the security team instead of fifty. Cost matters more when AI tool licensing competes directly with headcount decisions.

The Skills Gap Nobody Is Talking About Enough

Forty‑seven percent of enterprise respondents and 63 % of mid‑market respondents identified the skills gap as a critical barrier. This is the quiet crisis beneath the AI adoption wave. Security teams bought into the promise of AI reducing analyst workload, only to discover that managing AI tools requires its own specialized skill set.

Teams need people who can:

  • Interpret model confidence scores and adjust detection sensitivity based on organizational risk appetite
  • Curate and maintain training datasets for supervised‑learning components
  • Detect model drift and trigger retraining before detection quality degrades (a minimal drift check is sketched after this list)
  • Separate genuine AI tool limitations from vendor marketing hype
  • Build validation frameworks that test AI outputs against ground truth
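
Drift detection in particular is more tractable than it sounds. A common lightweight approach, sketched here with illustrative thresholds and hypothetical file names, is to compare the distribution of recent model scores against the distribution captured at validation time and alarm when they diverge, for example via the population stability index.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference score distribution and a recent one.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 retrain."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.clip(np.histogram(expected, bins=edges)[0] / len(expected), 1e-6, None)
    actual_pct = np.clip(np.histogram(actual, bins=edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

baseline_scores = np.load("validation_scores.npy")   # hypothetical: scores saved at model validation
recent_scores = np.load("last_7_days_scores.npy")    # hypothetical: recent production scores
if population_stability_index(baseline_scores, recent_scores) > 0.25:
    print("Score distribution has drifted; schedule retraining and a label review.")
```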

This is not the traditional SOC analyst profile. Organizations are trying to upskill existing staff, hire new talent at a premium, or outsource to managed security providers who themselves are figuring out the AI component in real time. None of these approaches are fast, and the threat landscape does not pause while staffing questions are resolved.

Budget Allocation: Where the Money Is Going

Forty‑eight percent of surveyed organizations said they are prioritizing AI investment specifically for continuous risk monitoring. This aligns with platforms like MetricStream, ServiceNow GRC, and RSA Archer, all of which have integrated AI modules for real‑time risk scoring, automated control assessment, and predictive risk modeling.

The remaining budget allocation paints a picture of cautious diversification:

| Priority Area | Percentage of AI Security Budget |
| --- | --- |
| Continuous risk monitoring | 48% |
| Threat detection and response | 31% |
| Vulnerability management | 28% |
| Compliance automation | 24% |
| Identity and access management | 22% |
| Security awareness and phishing simulation | 19% |

These percentages do not sum to 100 %—respondents could select multiple priority areas, and many did. Organizations are not going all‑in on a single AI category; they are spreading investment across several use cases, with risk monitoring clearly leading the pack.

What is telling about these numbers is what is not prioritized. Autonomous remediation, which vendors tout constantly, appears as a stated priority for only a small fraction of respondents’ budgets. Autonomous response is still the future, not the present.

Risk Monitoring: The Practical Use Case

The dominance of continuous risk monitoring as an AI investment priority makes sense when you look at the problem it solves. Risk monitoring is data‑intensive, repetitive, and historically manual. Pulling asset inventories, scoring vulnerabilities, mapping control gaps, generating risk reports—these tasks consume enormous analyst time while producing diminishing returns without automation.

AI models excel at the pattern‑recognition and data‑aggregation that risk monitoring demands. They can correlate asset‑criticality scores with real‑time threat‑intelligence feeds, flag configuration drift in cloud environments, and surface emerging risk trends before they become incidents. For GRC teams, this means shifting from point‑in‑time compliance assessments—the kind that are stale by the time they reach the audit committee—to continuous monitoring that reflects the actual state of the environment.
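
As a concrete illustration of that correlation work, the sketch below blends an asset's criticality, a vulnerability's severity, and a threat-intelligence flag for active exploitation into a single score. The weights, field names, and CVE identifier are placeholders for the example, not a standard formula.

```python
def risk_score(asset, vuln, threat_intel):
    """Blend asset criticality, CVSS severity, exploitation intel, and exposure
    into a rough 0-100 score for continuous risk monitoring (illustrative weights)."""
    criticality = asset.get("criticality", 1) / 5      # 1 (low) to 5 (crown jewel)
    severity = vuln.get("cvss", 0.0) / 10              # CVSS base score, 0 to 10
    exploited = 1.0 if vuln["cve"] in threat_intel.get("actively_exploited", set()) else 0.3
    exposure = 1.0 if asset.get("internet_facing") else 0.5
    return round(100 * criticality * severity * exploited * exposure, 1)

score = risk_score(
    {"name": "payments-api", "criticality": 5, "internet_facing": True},
    {"cve": "CVE-2024-12345", "cvss": 9.8},            # placeholder CVE ID
    {"actively_exploited": {"CVE-2024-12345"}},
)
# A high score pushes the finding to the top of the remediation queue automatically,
# which is exactly the report-building toil teams want off analysts' plates.
```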

MetricStream users reported that AI‑powered risk monitoring reduced the average time for risk identification from 17 days to 4 days. Whether that number holds across all deployments is debatable, but the direction is consistent: AI compresses the detection‑to‑response timeline for operational risk.

Organizations that have found success with AI in risk monitoring share a common pattern. They started with a single domain—usually vulnerability management or cloud‑configuration monitoring—and expanded only after proving measurable improvement. They established baseline metrics before deploying AI so they could attribute changes to the tool rather than general security‑program maturation. And they maintained traditional controls as a fallback during the transition period.

Survey Methodology: What to Keep in Mind

No survey is perfect, and SANS research, while respected, has limitations worth acknowledging before drawing broad conclusions.

Five hundred respondents is a solid sample, but it skews toward organizations large enough and mature enough to participate in cybersecurity surveys. The self‑selected nature of survey participation means respondents are likely more engaged with industry trends than the average security professional. This creates an optimism bias: the 67 percent behavior‑based detection shift probably overstates adoption across the broader security landscape.

The survey also aggregates responses from CISOs, SOC analysts, GRC professionals, and consultants into a single dataset. A CISO’s perception of AI maturity can differ sharply from that of an analyst who works with the tools daily. The survey does not fully account for this perspective gap.

Vendor‑specific questions were limited. Respondents identified categories of tools and general capabilities rather than naming specific platforms, making it difficult to compare individual solutions side‑by‑side.

Key Takeaways & Next Steps

  • Behavior‑based detection is now the norm for two‑thirds of surveyed organizations, but it requires more compute power and statistical expertise than legacy signature rules.
  • Enterprise teams lead in production deployments, yet mid‑market outfits often see faster remediation because they integrate AI more tightly and avoid bureaucratic delays.
  • AI augments, not replaces, human analysts—phishing filters, malware classification, and network‑anomaly detection are the sweet spots; fully autonomous response remains experimental.
  • Hallucinations and false positives are top concerns. Invest early in model validation, explainability tools, and a feedback loop that lets analysts flag bad outputs.
  • The skills gap is the biggest barrier. Prioritize training programs, consider hiring a dedicated AI‑ML security specialist, or partner with a managed‑service provider that can supply that expertise.
  • Continuous risk monitoring is the fastest‑growing AI spend. Start small—automate asset‑inventory correlation or vulnerability‑scoring—measure impact, then expand.

Actionable next steps for security leaders

  1. Audit your current detection stack – Identify which rules are signature‑based and map out a migration path to behavior‑based models.
  2. Create a pilot‑to‑production framework – Define clear success metrics (e.g., reduction in false‑positive rate, time‑to‑remediation) before moving any AI tool out of pilot; a sketch for computing these metrics follows this list.
  3. Build a data‑quality checklist – Ensure training data is labeled accurately, up‑to‑date, and free of bias; schedule quarterly reviews.
  4. Establish an AI‑governance board – Include a data‑science lead, a SOC manager, and a compliance officer to oversee model drift, explainability, and privacy concerns.
  5. Invest in people – Allocate at least 10 % of the AI budget to training, certifications, or hiring for AI‑focused security roles.
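
For step 2, the success metrics are ones you can compute from existing ticketing data. A minimal sketch, assuming closed incident records with the field names shown:

```python
from datetime import datetime
from statistics import median

def pilot_metrics(incidents):
    """Compute false-positive rate and median time-to-remediation from closed incidents.
    Field names (disposition, detected_at, remediated_at) are assumptions for this example."""
    total = len(incidents)
    false_positives = sum(1 for i in incidents if i["disposition"] == "false_positive")
    remediation_hours = [
        (datetime.fromisoformat(i["remediated_at"])
         - datetime.fromisoformat(i["detected_at"])).total_seconds() / 3600
        for i in incidents
        if i["disposition"] == "true_positive"
    ]
    return {
        "false_positive_rate": false_positives / total if total else 0.0,
        "median_time_to_remediation_hours": median(remediation_hours) if remediation_hours else None,
    }

# Capture these numbers before the pilot starts and again at each review gate;
# the delta is what justifies (or blocks) a move to production.
```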

Conclusion

The SANS survey gives us a snapshot of a profession in transition. AI is no longer a buzzword; it’s a functional component of many SOCs, especially for tasks that amplify human analysts—phishing detection, malware triage, and continuous risk monitoring. At the same time, the data remind us that AI brings new challenges: hallucinations, false positives, and a steep skills gap that can stall even the most well‑funded projects.

For security teams, the path forward is pragmatic rather than revolutionary. Deploy AI where it adds immediate, measurable value, back it with solid data‑governance, and keep a human in the loop for decisions that could have high business impact. By doing so, organizations can reap the efficiency gains of AI while avoiding the pitfalls that have made many pilots stall in limbo.

If you’re looking to accelerate your AI journey, start with a single, high‑impact use case, measure results rigorously, and use those wins to justify broader adoption—and, crucially, to build the talent pipeline that will keep those AI models humming safely for years to come.
