These deepfake-as-a-service operations are making it easier for fraudsters to stitch together synthetic identities by combining stolen data from breaches with AI-generated media. This concerning trend has already cost businesses and financial institutions around the world hundreds of millions of dollars.

Earlier this year, an employee at the Hong Kong office of a British multinational became the latest victim: they were duped into transferring $25.6 million to fraudsters who used deepfake technology to impersonate the firm's senior executives on a video call.

Widespread deepfake operations have targeted businesses, particularly financial institutions, across the Asia-Pacific, North America, and EMEA regions, areas already struggling to cope with the resulting losses. A recent report from BioCatch paints an alarming picture: in 2023, over half of all surveyed firms experienced losses of between $5 million and $25 million from AI-powered fraud.

A United Nations report documented a 600 percent increase in deepfake-related criminal activity in Southeast Asia during the first half of 2024, highlighting how rapidly this threat is escalating. Existing cybersecurity detection systems offer little defense against these digital injection attacks fueled by artificial intelligence.

"These aren't opportunistic attacks; they represent highly coordinated, specialized operations that pose an existential threat to the digital transformation of banking" - Dr. Andrew Newell, chief scientific officer at iProov.

The rise of deepfake scams underscores the urgent need for more sophisticated security protocols. Financial institutions must remain agile to stay one step ahead of these ever-evolving threats.