Cybersecurity expert Diana Kelley is warning about the dangers of blindly rushing to use Artificial Intelligence (AI) for identity management, cautioning that if we don’t get the foundational pieces right, we could deepen vulnerabilities that already exist. At a recent cybersecurity conference in Kentucky, Kelley reiterated her call for transparency in AI systems as an urgent matter and urged thorough training and ongoing supervision to head off potential hazards. While she recognized AI’s potential to improve security, she cautioned that deploying it on top of a broken identity-first paradigm would only compound the problem.

Kelley’s presentation highlighted the value of building a strong identity foundation before layering on AI-enabled solutions. She urged organizations to take the lead in tackling root causes rather than rushing to adopt AI without first understanding the problem.

The Double-Edged Sword of AI in Identity Management

Kelley acknowledged AI’s great potential to fill security gaps, but emphasized that its success depends on rigorous training and strict oversight. AI systems, she stressed, demand the same level of scrutiny and discipline that defenders apply to code, infrastructure, and policy. She cautioned the audience against viewing AI as a silver bullet, making clear that it is grounded in math rather than magic, and that identity management is far too critical to be approached haphazardly.

Kelley also noted the rapid expansion of machine identities, which she expects to multiply exponentially as AI systems begin autonomously executing tasks for users across multiple services and environments. She estimated that there are already roughly 50 machine identities for every human one. This explosion adds to an already complex landscape and makes properly managing those identities even more critical.

Kelley contrasted the challenges of building a 150-person startup today with those of 20 years ago to illustrate just how complicated identity management has become in the digital world, warning that most companies are playing catch-up with today’s threat landscape.

Transparency and Trust in the Age of AI

Transparency was a recurring theme in Kelley’s talk, and she encouraged organizations to be upfront with their users about AI’s role within their systems.

"Tell your users when you’re using AI," - Diana Kelley

She called for disclosing the use of AI in responsible disclosure practices and privacy policies, stating that trust starts with transparency.

"Include it in your responsible disclosure. Bake it into your privacy policies. Trust starts with honesty." - Diana Kelley

On Tuesday, Kelley also warned about the seductive quality of AI, including its capacity to hallucinate and lie convincingly.

"AI is confident. It will lie to you with charm," - Diana Kelley

Addressing the Core Issues

Kelley warned that attackers are already adept at using AI, even as defenders grapple with governance frameworks and disclosure standards. Rolling out AI on top of already weak identity infrastructure, she cautioned, would only compound the existing dangers.

"You can’t automate a broken system. Fix it before you scale it.” - Diana Kelley

Kelley also stressed that monitoring AI systems in production is just as important as testing them before deployment.

"Monitoring AI is just as critical as testing it," - Diana Kelley

Kelley stated that once a synthetic identity gains access to a system, it becomes significantly more challenging to detect.

"Once a synthetic identity is accepted into the system, it becomes much harder to distinguish," - Diana Kelley