Fraud perpetrated using this technology is on the rise. In some cases, deepfakes are used to create videos with convincingly accurate movements and audio. Recent cases and research have shown that facial recognition technologies carry critical vulnerabilities that can be exploited to carry out scams.
Facial recognition has become one of the most widely used artificial intelligence systems in the digital market, employed to log in to various accounts or make online payments with greater security. Even today, one-to-one identification remains more precise than mass identification.
Face Recognition Scams
Facial recognition technologies work by mapping the features of a face to create a facial imprint.
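To make the idea of a "facial imprint" concrete, here is a minimal sketch, assuming (as many face recognition pipelines do) that a face is reduced to a numeric vector, and that two faces match when their vectors are close. The vectors, function names, and threshold below are illustrative only; real systems use embeddings with hundreds of dimensions produced by a neural network.

```python
import math

def euclidean_distance(a, b):
    """Distance between two facial-imprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_same_face(imprint_a, imprint_b, threshold=0.6):
    """Return True when two imprints fall within the match threshold.
    The threshold value is a hypothetical placeholder."""
    return euclidean_distance(imprint_a, imprint_b) < threshold

# Toy three-dimensional vectors standing in for real embeddings.
enrolled = [0.11, 0.52, 0.33]
probe = [0.12, 0.50, 0.35]
print(is_same_face(enrolled, probe))  # close vectors -> True
```

The key point for fraud: anyone who can synthesize a face whose imprint lands inside the threshold of an enrolled identity can impersonate that person.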
Experts from Experian, a company that markets data and customer relationship management tools and offers fraud prevention services, predict a growing share of fake face and identity creation, a trend particularly relevant to financial crime.
To create a new face, hackers combine data from different facial points to form a synthetic identity. Forged or fabricated digital identities are then used to access applications on smartphones or to pass security checks at hotels, business centers, or hospitals. Fraudsters also take advantage of the growing use of deepfakes, creating seemingly real videos with realistic motion and audio.
Deepfakes are built on artificial intelligence trained over deep databases of faces and voices, mimicking people so accurately that it is nearly impossible to tell them apart from reality. Thus, Alex Polyakov, CEO of Adversa.ai, an AI security systems company, believes that any facility that has replaced security personnel with artificial intelligence is potentially at risk.
Lemonade’s facial recognition system was almost duped by a man wearing a blonde wig and lipstick. The disguised claimant was trying to obtain 5,000 euros in compensation for a camera using a false identity. In this case, Lemonade’s AI spotted the deception in the video footage: the video was flagged as suspicious, and the man was unmasked. A company statement noted that the impostor had already filed a claim under his real identity.
Scams Through Facial Recognition: The Counter-Moves Of Companies
Faced with these challenges, companies like Google, Facebook and Apple are working to find ways to prevent scams that exploit facial recognition systems.
Among the hardest systems to fool is the one Apple launched with the iPhone X. At its WWDC conference, the company rightly put data protection in the spotlight. By identifying more than 30,000 points on a face, the system first performs a mapping that accounts for depth. It also captures an infrared image of the face, which is converted into a mathematical representation and compared against the facial data enrolled on the device.
This data is stored only on individual devices. To keep facial recognition systems secure, Alex Polyakov, cited above, believes that the artificial intelligence should be updated regularly to recognize the latest scam techniques in circulation. In addition, facial recognition tools need to be trained on more data to make them more accurate.
According to Blake Hall, founder and CEO of ID.me, a company that uses facial recognition software to verify people’s identities on behalf of 26 US states, his company has been able to stop almost all fraudulent selfie attempts on government sites since last February.
The company says it has gotten better at detecting certain masks by labeling images as fraudulent, and by tracking the devices, IP addresses, and phone numbers that repeat scammers reuse across multiple fake accounts. It now also checks how light from a smartphone reflects off and interacts with a person’s skin or other materials.
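The cross-account tracking described above can be sketched as a simple signal check: a new submission is compared against identifiers already seen in confirmed fraud. Everything here is a hypothetical illustration, not ID.me's actual system; the field names, data, and rule are invented for the example.

```python
# Identifiers previously associated with confirmed fraud (toy data).
known_fraud = {
    "device_ids": {"dev-181"},
    "ips": {"203.0.113.9"},
    "phones": {"+15550100"},
}

def fraud_signals(claim):
    """Return which identifiers a claim shares with known fraud."""
    hits = []
    if claim.get("device_id") in known_fraud["device_ids"]:
        hits.append("device")
    if claim.get("ip") in known_fraud["ips"]:
        hits.append("ip")
    if claim.get("phone") in known_fraud["phones"]:
        hits.append("phone")
    return hits

claim = {"device_id": "dev-181", "ip": "198.51.100.7", "phone": "+15550123"}
print(fraud_signals(claim))  # a shared device ID is enough to flag review
```

Even when each fake account uses a fresh name and face, reused infrastructure (device, IP, phone) links the accounts together, which is why this kind of signal complements the facial check itself.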
Finally, in June, in collaboration with Michigan State University (MSU), Facebook unveiled a research method for detecting and attributing deepfakes that relies on reverse engineering: starting from a single AI-generated image, it works back to the generative model used to produce it.
This new method should make it easier to detect and track deepfakes in real-world settings, where the deepfake image itself is often the only information detectors have to work with. The collaboration has produced artificial intelligence that can better detect deepfakes and trace them back to their source to find out who created them.