AI vs AI: Fighting Deepfakes With Biometric Authentication
Experts Recommend Multimodal Biometrics as Mitigation Strategy for AI-Based Attacks

Scammers are increasingly turning to deepfake photos, videos and audio recordings to fool victims. To fight these emerging tactics, fraud investigators are interested in adopting multifactor authentication systems built on multimodal biometrics.
While artificial intelligence has spurred the growth of authentication controls such as anomaly detection and predictive modeling, it has also made voice cloning and video deepfakes far more convincing. Traditional phishing emails and social engineering techniques are more scalable, but deepfakes can have a "massive and disproportionate impact on business," Andrew Shikiar, executive director and CEO of FIDO Alliance, told Information Security Media Group.
Experts believe multimodal biometrics - which examine both physical and behavioral patterns, such as keystrokes, navigation habits and fingerprints - are one of the better bets for detecting anomalies and preventing account takeover incidents that use deepfakes.
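As a rough illustration of how such a system might weigh modalities against one another, here is a minimal score-fusion sketch in TypeScript. The modality names, weights and threshold are illustrative assumptions, not any vendor's production logic.

```typescript
// Minimal sketch of score-level fusion across biometric modalities.
// All names, weights and thresholds are assumptions for illustration.

type ModalityScore = {
  modality: "keystroke" | "navigation" | "fingerprint";
  score: number;  // similarity to the enrolled profile, normalized to [0, 1]
  weight: number; // how much this modality contributes to the decision
};

function fuseScores(scores: ModalityScore[]): number {
  const totalWeight = scores.reduce((sum, s) => sum + s.weight, 0);
  return scores.reduce((sum, s) => sum + s.score * s.weight, 0) / totalWeight;
}

function isLikelyAccountTakeover(scores: ModalityScore[], threshold = 0.7): boolean {
  // A deepfake may defeat one modality while the others - typing rhythm,
  // navigation habits - still diverge from the enrolled user, dragging
  // the fused score below the threshold.
  return fuseScores(scores) < threshold;
}

// Example: the fingerprint matches, but the behavior looks wrong.
const session: ModalityScore[] = [
  { modality: "fingerprint", score: 0.95, weight: 0.4 },
  { modality: "keystroke", score: 0.30, weight: 0.3 },
  { modality: "navigation", score: 0.25, weight: 0.3 },
];
console.log(isLikelyAccountTakeover(session)); // true - flag for step-up auth
```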
Behavior is challenging for machines to emulate, and the attempts BioCatch has seen so far are "clumsy and lazy at best at being a human," Seth Ruden, the company's director, told ISMG. But attackers will ultimately evolve, and defenders must evolve with them, he said.
This is especially true in the cybersecurity-cybercrime standoff. As defenders find something that works reliably and increases the level of security assurance, threat actors find ways to circumvent it, said Melissa Carvalho, vice president of global cybersecurity strategic services at the Royal Bank of Canada.
User behavior and heuristics are also susceptible to being captured and replayed by AI. For instance, AI can learn and mimic a user's typing rhythm to bypass behavioral biometrics. In one case, hackers used AI to replicate a CEO's voice and authorize a fraudulent bank transfer (see: Fraudsters Deepfake Entire Meeting, Swindle $25.5M).
While no technology can ever offer complete protection, the best thing organizations can do for their employees and clients is minimize risk.
Using more than one modality simultaneously is a realistic way to achieve this goal, Carvalho said, adding that there has been promising research on multimodal biometrics in which all fingers are used, and the user is prompted at random for two or more impressions.
"Finger, palm, vein print, retinal scan and ECG biometrics are not as easy to compromise since that data needs to be captured by the threat actor for them to replay it," she said.
But adoption of multimodal biometrics will not be even across organizations. Some institutions will be softer targets with weaker security, and that is where bots will identify exploitation opportunities and readily bypass authentication, Ruden said.
Liveness detection can be an important component of biometric authentication, Shikiar said. Organizations must implement liveness detection using sensors, accelerometers and challenge-and-response interactions to confirm that a user is a real person and not a photo, video or deepfake.
A few years ago, active liveness checks asked the user to look up and blink, but these methods have since been thwarted by replay attacks and generative AI, the Biometrics Institute told ISMG. Now, the best defense is passive liveness checks, which use machine learning to determine what live images look like. These checks can determine whether a selfie shows a live person and whether the identity document presented was captured in a live snapshot.
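A minimal sketch of how a passive liveness gate might sit in a verification flow. The `scoreLiveness` function here is a toy stand-in for a trained model - an assumption, since real deployments use vendor or in-house networks trained on live versus spoofed captures.

```typescript
// Sketch of a passive liveness gate. `scoreLiveness` is a toy stand-in
// for a trained ML model - an assumption, not a real library call.

type LivenessResult = { liveProbability: number; signals: string[] };

async function scoreLiveness(selfie: Uint8Array): Promise<LivenessResult> {
  // A real model would inspect texture, lighting, moiré patterns and
  // screen-replay artifacts in the frame; this stub returns a fixed
  // score so the gate below is runnable.
  return { liveProbability: 0.99, signals: ["stub-model"] };
}

async function passiveLivenessGate(
  selfie: Uint8Array,
  minLiveProbability = 0.98
): Promise<boolean> {
  // No blink or look-up prompt is issued, so there is nothing for an
  // attacker to record and replay.
  const { liveProbability } = await scoreLiveness(selfie);
  return liveProbability >= minLiveProbability;
}
```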
Implementing Multimodal Biometrics
The Biometrics Institute recommends following the three laws of biometrics when implementing multimodal biometrics: Know your algorithm, know your data and know your operating environment. These laws help organizations better prepare for new and evolving threats while managing existing ones, the organization told ISMG.
The neural network architectures used to create deepfakes can also be turned around to detect them. But detection is never more than one step ahead in this arms race, so organizations need additional strategies. For example, AI-generated images could carry digital watermarks and other image-provenance markers that label and protect the originals, the Biometrics Institute said.
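The following toy sketch shows only the label-and-verify idea behind provenance tagging, under the assumption of a shared publisher key. Real schemes such as C2PA use signed manifests and robust watermarks designed to survive re-encoding; this is not one of them.

```typescript
// Toy illustration of provenance tagging, not a production watermarking
// scheme: the publisher tags original media with an HMAC over its bytes,
// and verifiers recompute the tag. A shared key is assumed for brevity.

import { createHmac, timingSafeEqual } from "node:crypto";

function tagImage(imageBytes: Buffer, publisherKey: Buffer): string {
  return createHmac("sha256", publisherKey).update(imageBytes).digest("hex");
}

function verifyImageTag(
  imageBytes: Buffer,
  tag: string,
  publisherKey: Buffer
): boolean {
  const expected = Buffer.from(tagImage(imageBytes, publisherKey), "hex");
  const actual = Buffer.from(tag, "hex");
  // Constant-time comparison; any edit to the image bytes breaks the tag.
  return actual.length === expected.length && timingSafeEqual(actual, expected);
}
```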
Defenders can use machine learning models to monitor networks, devices and background signals, and they can layer in local compromise indicators, biometric checks and liveness checks as additional security measures, Ruden said. "This would take us from the binary model to a more advanced dynamic model that won't be overcome by the weakest link: the user," he said.
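A sketch of what moving from a binary to a dynamic model could look like in code: several independent signals feed a running risk score that selects a response. The signal names, weights and thresholds are invented for illustration.

```typescript
// Sketch of a dynamic, layered risk model: no single check decides the
// outcome. All signal names, weights and cutoffs are assumptions.

type Signal = { name: string; risky: boolean; weight: number };

type Response = "allow" | "step-up-auth" | "block";

function decide(signals: Signal[]): Response {
  const risk = signals
    .filter((s) => s.risky)
    .reduce((sum, s) => sum + s.weight, 0);
  if (risk >= 0.6) return "block";
  if (risk >= 0.3) return "step-up-auth"; // e.g., random multi-finger prompt
  return "allow";
}

const response = decide([
  { name: "new-device", risky: true, weight: 0.2 },
  { name: "behavioral-biometrics-mismatch", risky: true, weight: 0.3 },
  { name: "liveness-check-passed", risky: false, weight: 0.4 },
  { name: "known-compromise-indicator", risky: false, weight: 0.5 },
]);
console.log(response); // "step-up-auth" - risky but not conclusively hostile
```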
Biometrics, multimodal or otherwise, are just one part of the equation, Shikiar said. By combining biometrics with passkeys, organizations can fortify their authentication with a simpler and more secure login process, he said.
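On the passkey side, here is a minimal browser sketch using the standard WebAuthn API, which underpins FIDO passkeys. The relying-party domain is a placeholder, and the server-side challenge issuance and assertion verification are assumed.

```typescript
// Minimal browser-side passkey login via the WebAuthn API. The challenge
// must be issued by the server; "bank.example" is a placeholder domain.

async function signInWithPasskey(serverChallenge: Uint8Array) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: serverChallenge,    // random bytes issued by the server
      rpId: "bank.example",          // placeholder relying-party domain
      userVerification: "required",  // forces the local biometric/PIN check
      timeout: 60_000,
    },
  });
  // Send `assertion` to the server for signature verification; the
  // biometric itself never leaves the user's device.
  return assertion;
}
```

The design point is that the biometric only unlocks a device-bound private key, so there is no reusable biometric template or password for a deepfake to replay against the server.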
Ruden said those who cannot plug the hole will come under sustained attack. "My take is that AI will be capable of finding these institutions and their exploitable elements more readily, so the world will be in the nuclear escalation model relative to AI," he said. "The time to escalate commitment to a tech stack that is dynamic is now."
According to the Biometrics Institute, organizations looking to balance risks and outcomes must spend money to add protections and should expect more friction in the customer experience.