iProov Predicts 70% of Organizations to Face Significant Impact from AI-Generated Deepfakes

August 14, 2024



In the gripping and critically acclaimed HBO series “Game of Thrones,” a chilling warning often echoed: “the White Walkers are coming,” alluding to a race of icy beings that posed a grave threat to mankind.

Ajay Amlani, president and head of the Americas at biometric authentication firm iProov, argues that we should view deepfakes in a similar light. 

“Deepfakes have been a growing concern over the past few years,” he shared with VentureBeat. “Now, it seems the winter has arrived.”

Indeed, a recent poll by iProov revealed that nearly half of organizations (47%) have encountered a deepfake. The company’s new survey also found that 70% of organizations believe AI-generated deepfakes will significantly impact them, yet only 62% say their company is taking the threat seriously.

“This is turning into a genuine concern,” Amlani expressed. “You can now create a completely fictitious person, make them look and sound as you wish, and even react in real time.”

Deepfakes: A Threat on Par with Social Engineering, Ransomware, Password Breaches

In a relatively short span, deepfakes — fabricated avatars, images, voices, and other media delivered via photos, videos, phone and Zoom calls, typically with malicious intent — have become incredibly advanced and often undetectable. 

This poses a significant threat to organizations and governments. For example, a finance employee at a multinational firm was tricked into paying out $25 million by a deepfake video call impersonating their company’s “chief financial officer.” In another shocking case, cybersecurity firm KnowBe4 discovered that a new hire was actually a North Korean hacker who had used deepfake technology to get through the hiring process. 

“We can now create fictionalized worlds that are completely undetectable,” said Amlani, adding that the findings of iProov’s research were “quite staggering.” 

Interestingly, there are regional differences when it comes to deepfakes. For instance, organizations in Asia Pacific (51%), Europe (53%), and Latin America (53%) are significantly more likely than those in North America (34%) to have encountered a deepfake. 

Amlani highlighted that many malicious actors are based internationally and tend to target local areas first. “That’s growing globally, especially because the internet is not geographically bound,” he said.

The survey also found that deepfakes are now tied for third place as the greatest security concern. Password breaches ranked the highest (64%), followed closely by ransomware (63%) and phishing/social engineering attacks and deepfakes (61%). 

“It’s becoming increasingly difficult to trust anything digital,” Amlani stated. “We need to question everything we see online. The call to action here is that people really need to start building defenses to prove that the person is the right person.”

Could Biometric Tools Be the Solution?

Threat actors are becoming increasingly adept at creating deepfakes, thanks to faster processing speeds and bandwidth, the ability to share information and code quickly via social media and other channels, and of course, generative AI, Amlani pointed out.

While there are some basic measures in place to address threats — such as embedded software on video-sharing platforms that attempts to flag AI-altered content — “that’s only going one step into a very deep pond,” Amlani said. On the other hand, there are “crazy systems” like captchas that keep getting more and more challenging.

“The concept is a randomized challenge to prove that you’re a live human being,” he explained. But these challenges are becoming increasingly difficult even for humans to pass, especially the elderly and people with cognitive, vision or other impairments (or people who just can’t identify, say, a seaplane when challenged because they’ve never seen one).

Instead, “biometrics are easy ways to be able to solve for those,” Amlani suggested. 

In fact, iProov found that three-quarters of organizations are turning to facial biometrics as a primary defense against deepfakes. This is followed by multifactor authentication and device-based biometric tools (67%). Enterprises are also educating employees on how to spot deepfakes and the potential risks (63%) associated with them. Additionally, they are conducting regular audits of security measures (57%) and regularly updating systems (54%) to address threats from deepfakes.

iProov also assessed the effectiveness of different biometric methods in fighting deepfakes. Their ranking: 

  • Fingerprint 81%
  • Iris 68%
  • Facial 67%
  • Advanced behavioral 65%
  • Palm 63%
  • Basic behavioral 50%
  • Voice 48%

But not all authentication tools are equal, Amlani noted. Some are cumbersome and not that comprehensive — requiring users to move their heads left and right, for instance, or raise and lower their eyebrows. But threat actors using deepfakes can easily get around this, he pointed out. 

iProov’s AI-powered tool, by contrast, uses the light from the device screen that reflects 10 randomized colors on the human face. This scientific approach analyzes skin, lips, eyes, nose, pores, sweat glands, follicles and other details of true humanness. If the result doesn’t come back as expected, Amlani explained, it could be a threat actor holding up a physical photo or an image on a cell phone, or they could be wearing a mask, which can’t reflect light the way human skin does. 
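iProov has not published the internals of its system, so purely for illustration, the challenge-response idea behind a screen-flash liveness check can be sketched as follows. Everything here — the function names, the color palette, and the match threshold — is a hypothetical simplification: a real system would analyze how each flashed color actually reflects off skin in the camera feed, not compare color labels.

```python
import random

# Hypothetical palette of screen colors the device could flash.
PALETTE = ["red", "green", "blue", "yellow", "cyan",
           "magenta", "white", "orange", "purple", "pink"]

def generate_challenge(n_colors=10, seed=None):
    """Pick a random, unpredictable sequence of colors to flash.

    Randomizing the sequence is what defeats replay attacks: an
    attacker cannot pre-record a video with the right reflections.
    """
    rng = random.Random(seed)
    return [rng.choice(PALETTE) for _ in range(n_colors)]

def verify_reflections(challenge, observed, min_matches=9):
    """Toy stand-in for the verification step.

    A production system would measure how skin, eyes, and pores
    scatter each flashed color; here we just require that the
    reflection sequence inferred from the camera closely matches
    the challenge that was actually flashed.
    """
    matches = sum(1 for c, o in zip(challenge, observed) if c == o)
    return matches >= min_matches

# Example: a live face reflects the challenge; a photo or mask
# held up to the camera would not reproduce the random sequence.
challenge = generate_challenge(seed=1)
print(verify_reflections(challenge, list(challenge)))  # genuine capture
```

The design point is that the secret is not the colors themselves but their unpredictability: because the sequence is chosen at verification time, a static photo, a screen replay, or a mask (which scatters light differently from skin) cannot produce the expected response.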

The company is deploying its tool across commercial and government sectors, he noted, calling it easy and quick yet still “highly secured.” It has what he called an “extremely high pass rate” (north of 98%). 

All in all, “there is a global realization that this is a massive problem,” Amlani concluded. “There needs to be a global effort to fight against deepfakes, because the bad actors are global. It’s time to arm ourselves and fight against this threat.”


Anika Patel

Anika holds a Ph.D. in Anthropology from the University of Michigan and specializes in subcultures and fandom communities. She explores the intersection of technology and culture in her pieces for Hypernova.
