Meanwhile, the rapid rise of generative artificial intelligence (AI) agents has created daunting new challenges for securing digital identity. Recent industry events, along with perspectives shared by leading experts, underscore the gravity of the situation: the market for generative AI agents is primed for massive expansion, yet organizations' infrastructure and plans for managing and protecting these entities lag far behind the threat.
Fueled by advancements in AI and machine learning, AI agents are becoming increasingly integrated into various sectors, driving efficiency and automation. This growth presents a complex challenge: securing these non-human identities and ensuring they operate within defined parameters. The RSA Conference 2025 amplified the urgency of transforming these security gaps into opportunities for a more secure future.
Market Growth and Emerging Challenges
The global AI agent market is growing quickly. According to a MarketsandMarkets forecast, it will surge from $5.25 billion in 2024 to $52.62 billion by 2030, a compound annual growth rate (CAGR) of 46.3 percent. Frontegg recently pointed to this trend when announcing the launch of its identity layer.
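For readers who want to check the arithmetic, a quick back-of-the-envelope calculation (ours, not from the report) shows the implied growth rate lines up closely with the figure MarketsandMarkets cites:

```python
# Back-of-the-envelope check (ours, not from the report): the implied compound
# annual growth rate from $5.25B in 2024 to $52.62B in 2030 comes out close to
# the 46.3 percent CAGR MarketsandMarkets reports; small gaps are likely due to
# rounding and base-year conventions.
start, end, years = 5.25, 52.62, 2030 - 2024
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 47%
```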
Even with this tremendous growth, organizations are struggling to stay ahead of the security risks. According to Okta Senior Emerging Tech Researcher Fei Liu, many organizations lack a comprehensive approach to overseeing AI agents. Left uncorrected, this shortcoming creates a serious security blind spot.
"Unlike traditional identities, AI agents need access to user-specific data and workflows to make decisions." - Fei Liu, Okta Senior Emerging Tech Researcher
Addressing the Security Gap
A growing number of companies are building practical solutions to the distinct security challenges AI agents introduce. Anetac, a Silicon Valley-based company, recently released its Human Link Pro software, which provides real-time visibility into administrative and public-facing identities, both human and non-human, across a variety of networks. Anetac demonstrated the software's capabilities at RSA Conference 2025.
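To make the idea of identity visibility concrete, here is a toy inventory pass that separates human from non-human identities and flags stale ones for review. It is a simplified illustration of the general concept, not a representation of how Human Link Pro actually works:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Identity:
    name: str
    kind: str            # "human" or "non-human" (service account, bot, agent)
    last_seen: datetime

def flag_stale(identities: list[Identity], max_idle_days: int = 90) -> list[Identity]:
    """Return identities that have not been seen within the idle window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    return [i for i in identities if i.last_seen < cutoff]

inventory = [
    Identity("alice", "human", datetime.now(timezone.utc)),
    Identity("ci-deploy-bot", "non-human", datetime(2024, 1, 1, tzinfo=timezone.utc)),
]
for idle in flag_stale(inventory):
    print(f"Review {idle.kind} identity: {idle.name}")
```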
Frontegg’s experience building Dorian, an autonomous identity security agent, further underscores the challenges of securing these entities. Dorian was designed to identify and neutralize threats across third-party digital identity providers.
"Without proper identity infrastructure, you can build an interesting AI agent — but you can’t productize it, scale it, or sell it." - Aviad Mizrachi, co-founder and CTO of Frontegg
These hurdles highlight the need for strong identity infrastructure to support the deployment and management of AI agents.
The Need for Trust and Security
As AI agents grow in power and autonomy, the stakes of their misuse, whether through malicious intent or competitive sabotage, rise accordingly. Alfred Chan, CEO of ZeroBiometrics, emphasized the importance of establishing trust anchors to prevent malicious actors from exploiting these systems.
"AI agents are becoming more powerful, but without trust anchors, they can be hijacked or abused." - Alfred Chan, CEO of ZeroBiometrics
AI agents require strong preventive safeguards. Without them, they are exposed to hijacking, abuse, unauthorized access, and data privacy breaches. Building trust and security will be key to realizing AI agents’ promise while minimizing the pitfalls, as the sketch below illustrates.
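To illustrate the trust-anchor idea in the simplest terms, the sketch below only honors agent requests that verify against key material provisioned at enrollment. Real deployments would typically use asymmetric signatures anchored in a PKI or hardware root of trust; the shared-secret check and names here are assumptions made to keep the example self-contained:

```python
import hashlib
import hmac
import json

# Hypothetical root-derived key provisioned when the agent is enrolled.
TRUST_ANCHOR_KEY = b"provisioned-at-enrollment"

def sign_request(payload: dict) -> str:
    """Produce a MAC over the canonicalized request using the anchor-derived key."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(TRUST_ANCHOR_KEY, msg, hashlib.sha256).hexdigest()

def verify_request(payload: dict, tag: str) -> bool:
    """Only requests that verify against the trust anchor are honored."""
    return hmac.compare_digest(sign_request(payload), tag)

request = {"agent": "agent-scheduler-01", "action": "calendar:read"}
tag = sign_request(request)
assert verify_request(request, tag)                                      # legitimate request
assert not verify_request({**request, "action": "admin:delete"}, tag)    # tampered request rejected
```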