Research — April 23, 2026
The 2026 RSAC Conference saw its highest-ever attendance, with nearly 44,000 participants, 700 speakers and 600 exhibitors. Attendees came to San Francisco from more than 100 countries for the largest annual gathering of the cybersecurity industry, but the celebratory mood was tempered yet again by the disruptive impact of AI. This year, agentic functionality was front and center, as the world comes to grips with the reality of what agentic AI entails for organizations and their cybersecurity product and service providers.

Security for GenAI was one of the top pain points for cybersecurity practitioners in 2025, but the impact of agentic AI was most keenly felt in the weeks before RSAC, when thousands not only tested the extent of what OpenClaw open-source agents could do, but often did so with few, if any, security controls. Days before the conference, users were discovering that agents could seemingly do many things that were supposed to be the exclusive domain not only of enterprise software (including security products) but also of software developers and security testers. The potential agentic AI offers security for handling overwhelming volumes of data and taming myriad detailed exposures is undeniable — if it can be made reliable. During RSAC week, Anthropic PBC inadvertently introduced the market to its Mythos model, with more advanced capabilities for vulnerability assessment — and exploitation. In a separate incident, Anthropic sustained a revealing source-code leak. Although both cases were ascribed to "human error," all these factors together effectively posed an existential question for RSAC attendees and exhibitors alike: Is agentic AI a blessing or a curse?

Analyst observations
Scott Crawford, Research Director: If there was a microcosm of the messages prevalent at RSAC 2026, it was one of the conference's enduring drawing cards, the Innovation Sandbox, in which young cybersecurity companies compete to be named the most innovative startup of the year by a panel of experienced judges. This year, it had a nearly wholesale focus on agentic AI. Only two of the finalists (Fig, focused on change management in the increasingly complex security operations center, and Humanix) had nothing explicit to say about "agentic" functionality — although Humanix did reference "conversational AI" in its techniques for combating social engineering attacks. Three focused on agentic AI governance (Realm Labs, Token Security and the competition winner, Geordie AI), while another two (Clearly AI Inc., ZeroPath) centered on secure agentic software development and leveraging agentic automation of application security. Glide Identity, the runner-up, focused on "digital identity for the AI era," including nonhuman identity central to agent authentication and authorization, while Charm leveraged agents to combat fraud.
Daniel Kennedy, Principal Research Analyst: If a prevailing theme at RSAC 2025 was the role of co-pilots to leverage large language model capabilities in security tooling via a chat interface, RSAC 2026 considered agentic AI in situations where the price of hallucinations may be a wrong or destructive autonomous action in a live production environment. Much of "security for AI" today consists of guardrails in a chat context; similarly, the first stages of AI in code creation consisted of auto-complete. The application security issues associated with greater speed, larger pull requests and a different mix of vulnerabilities have not necessarily been solved, yet we are quickly entering a phase where applications can be created via prompt and agentic interplay. Questions are emerging just as quickly about how reasonable controls can provide security visibility into agentic workflows and assure that code entering production is secure.
A number of potential answers are emerging from various corners of application security, including how to use AI efficiently to find and triage vulnerabilities without it spinning out on excessive token consumption. There is ongoing exploration into guiding developers' prompting to ensure that security requirements are accurately captured. Secure supply chain vendors are exploring how to ensure that only safe models, vetted open-source components and known-good Model Context Protocol servers enter the pipeline. This effort aligns with software bill of materials (SBOM) initiatives, which now incorporate AI bills of materials (AI BOMs) to enhance transparency and security. There are also questions about how "human in the loop" can actually work: given that, in a liability-based culture, AI agents cannot be held responsible, what tooling will be needed to ensure accurate review? The "proximate human most likely to be blamed" needs some arming to actually evaluate AI outputs and reasoning. Other considerations include providing a "paved road" of components to support prompt-created micro-applications. On the offensive security side, AI is facilitating machine-speed vulnerability chaining, and "runtime context" increasingly means examining what autonomous agents are accessing.
Garrett Bekker, Principal Research Analyst: As noted above, agentic AI can be both a blessing and a curse, perhaps especially for the identity and access management industry. IAM is rife with both complex workflows and manual processes, such as user access reviews and provisioning, that should, in theory, benefit disproportionately from agentic AI. At the same time, prevailing IAM messaging at RSAC underscored the central role the space is positioning itself to play in controlling and managing AI agents as the new "control plane" for agentic AI. However, this "identity perimeter" will need to be less porous than the one that exists for human identities.
To be fair, we are very early in the agentic AI security journey. As we have seen with other security innovations, such as the cloud access security broker (CASB) and SaaS security posture management (SSPM) segments, the most common starting point is discovery and observability to deal with "shadow AI."
Discovery is just a starting point, however, and as we have also seen with CASB and SSPM, customers will quickly want to do something about their newfound problem. Other agent-centric approaches presented at RSAC include frameworks and maturity models, orchestration, agent permission and entitlement management, runtime authorization, and maintaining zero standing privileges.
Mark Ehr, Principal Research Analyst: Cloud security remains among the top 10 pain points, strategic objectives and spending-increase categories in 451 Research's Voice of the Enterprise: Budgets & Outlook 2025 survey, driven by increasing use of cloud-native platforms for AI and the increasing focus on security operations center technologies as the proving ground for agentic AI in the enterprise. Further driving this trend are SOC teams that are unable to address over 40% of security alerts in a typical day — a recurring issue since 2020. This is driven in part by security's "dark data" problem: the perpetual growth in the volume of low-fidelity security and observability data combined with legacy security analytics technologies that can no longer keep up. Organizations are looking to AI to address these issues, and early signals indicate this could be the case, with AI chatbots improving analyst productivity and AI agents driving the next generation of security automation. At RSAC 2026, "agentic AI in the SOC" messaging was abundant, but organizations are not ready for full SOC automation. "Human in the loop" will likely be the prevailing operational mode for some time to come, although full automation of low-risk, repetitive actions will likely become commonplace as these systems gain trust.
Justin Lam, Senior Research Analyst: "Hill climbing" — the iterative algorithmic improvement of results from processes such as search — is one of the latest phrases characterizing incremental and iterative agentic progress. This concept could well be applied to the industry's progression in agentic autonomy from human-led to human-assisted agents — and with it, the desire to improve productivity. However, without good alignment with user intent, such iterative processes could derail long-term progress.
Various controls in data, email, agentic, SaaS and identity platforms are converging. Understanding all the components of user intent evident in telemetry — such as existing collaboration patterns, data classification and monitoring, and other data protection mechanisms — will be essential for emerging tools to correlate, identify and mitigate potential risk with less human user intervention. Understanding user intent to better support user purpose could better unify the disparate value propositions for various data, collaboration and human/technology-interaction controls to help reduce security operations fatigue and improve user productivity. Such mutual understanding among human users, their agents (insofar as they emulate reasoning) and security teams might enable more effective implementation of shared responsibility and provide more proactive secure-by-design principles in the long run. Despite the agentic push evident at RSAC 2026, however, many enterprises still need the security basics. "Do nothing" remains the strongest incumbent competitor among the 650 vendors.
Paige Bartley, Senior Research Analyst: Agentic AI was again the focus of the bustling RSAC 2026 Conference, yet there was another buzzword echoing in the halls. The focus on sovereignty was undeniable, particularly among governance, privacy and compliance specialists. Amid geopolitical instabilities, many jurisdictions are turning inward with their data and AI strategies, placing emphasis on independence and control over digital assets. This evolving "data nationalism" has driven demand for all manner of sovereignty across the stack, including compute sovereignty, cloud sovereignty, data sovereignty and AI sovereignty.
Demand is driven by motivation, and enterprise cloud sovereignty efforts provide a constructive example. In our recent Voice of the Enterprise: Data & Analytics, Data Governance & Privacy 2026 survey, a predominantly North America-based sample indicates that protecting sensitive data from economic espionage was a top motivation for pursuing sovereign cloud architecture — a trend that was especially pronounced among multinational organization respondents. Other common motivations include adherence to specific data residency requirements and compliance with industry-agnostic regulations such as GDPR.
Yet sovereignty isn't pursued simply to avoid the pain of data theft or regulatory enforcement. Sovereignty efforts are also poised to grease the wheels of business opportunity. In the race to adopt AI-supported techniques and tooling, cloud sovereignty is commonly cited as a means to help safely implement AI systems. With data pumping through the "circulatory system" of AI, businesses are pressured more than ever to protect IP and sensitive data sources.
451 Research from S&P Global Energy Horizons provides technology industry research, data, and advisory solutions. For more information or to contact us, please visit 451 Research.
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.