Deepfakes mimicking family, friends: Latest tools for identity fraud in cyber criminal's arsenal

Hyderabad: Deepfakes are the talk of the town these days, but not for the right reasons. Deepfakes are computer-generated images, videos and audio clips of people that appear deceptively real. They are created by manipulating digital media to produce a virtual replica of someone's video or audio. The result? A convincing, life-like digital portrayal of a real person through technology.

The ability to make technology mimic real life comes from software that employs deep learning, which forms a large part of many Artificial Intelligence (AI) products and services.

Deepfakes have been around for some time now. In 2017, researchers at the University of Washington released a video of Barack Obama made using AI. It required only 14 hours of Obama’s video footage to create the synthetic video.[1]

As it turns out, India is a prime target in the Asia-Pacific region for cybercrimes involving deepfakes.[2] Bad actors are putting AI to nefarious use, and deepfakes in particular are increasingly being used to commit several kinds of cybercrime.

Deepfakes of friends, prominent personalities most common traps

Bad actors are using deepfake technology to exploit people's sensitive data, such as biometrics (iris scans, fingerprints, etc.). Once this biometric data has been manipulated, fraudsters can easily bypass authentication mechanisms and commit digital fraud.

A Kerala native lost Rs 40,000 after receiving an innocuous video call from a ‘friend’. After gaining his trust by mentioning mutual friends, the caller asked the victim to urgently transfer money for his relative who was hospitalised. The victim transferred the money. Later, on learning that he had been duped, the victim filed an FIR with the police.

Another incident involved the use of a deepfake of an IPS officer to extort money from a senior citizen in Uttar Pradesh. Fraudsters duped the victim of almost Rs 1 lakh using an AI-generated deepfake video of former IPS officer, ADG Prem Prakash.

Voice clones of friends, family create urgency

At times, engineered voice calls are also used to trap victims. The con involves a deepfake of the voice of the victim's friend, family member or relative, who then urgently asks for money to treat a fabricated injury. Under such pressure, the victim accedes to the demand more easily.

Using deep learning, the perpetrators map a person's face or voice and substitute it with another's, making the difference almost imperceptible. AI allows such 'swaps' to happen very convincingly.
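
To illustrate the idea at a very high level, here is a minimal conceptual sketch in Python of how a voice-swap pipeline is typically structured. Every function below is a hypothetical placeholder standing in for a trained neural network; real tools implement these stages with deep learning models, not the stubs shown here.

```python
import numpy as np

def extract_speaker_embedding(audio: np.ndarray) -> np.ndarray:
    """Placeholder for an encoder network that distils a voice's
    characteristics (pitch, timbre, accent) into a fixed-size vector."""
    return np.random.default_rng(0).standard_normal(256)

def extract_content(audio: np.ndarray) -> np.ndarray:
    """Placeholder for a network that keeps only *what* is said,
    discarding *who* said it."""
    return audio

def synthesise(content: np.ndarray, voice: np.ndarray) -> np.ndarray:
    """Placeholder for a decoder that would re-render the spoken
    content in the target speaker's voice."""
    return content

# A short clip of the target's voice (e.g. scraped from social media)
target_sample = np.zeros(16_000)       # 1 second of dummy audio at 16 kHz
scam_script = np.zeros(16_000 * 5)     # the fraudster's own recording

voice = extract_speaker_embedding(target_sample)
fake_call_audio = synthesise(extract_content(scam_script), voice)
print(fake_call_audio.shape)  # in a real system: the script in the target's voice
```

The point the sketch makes is that only a small sample of the target's voice is needed; the words themselves come from the fraudster, which is why a few seconds of audio posted online can be enough to impersonate someone on a call.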

Scammers use personal data in public domain against victim

AI-enabled scams work by exploiting the victim's trust: the fraudster impersonates a close friend, relative or family member in the middle of an emergency, and solidifies that trust by mentioning incidents, facts or personal information relating to the victim that can be found online. In most cases, the victim is unaware that such information is in the public domain. Many such incidents have been reported over the past year across various states of India.

Does the availability of tools have an impact?

Deepfake tools that manipulate voice can easily be found on several websites. These websites offer a high degree of accuracy across more than 25 languages.

More importantly, they are free to use. Deepfakes that mimic people's voices thus pose novel challenges for detecting phishing attempts, and a growing number of digital identity frauds, biometric data thefts, financial frauds and phishing attempts involve such tools.

These tools are highly sophisticated: they preserve the emotional tone, inflection and background audio of a voice, making suspicious calls harder to detect.

Always question, fact-check information found on internet

With AI entering the skillset of scammers, it becomes even more important to be cautious. Verifying or double-checking goes a long way in ensuring online safety. Such cybercrimes must be reported immediately to the national cybercrime helpline, 1930.

The Ministry of Electronics and Information Technology (MeitY) has released an advisory on deepfakes, with rules and regulations expected soon.[3] It is important to stay vigilant and question the source and reliability of content viewed online. Increasing awareness and education about safe digital practices among citizens has therefore become essential in today's hyperconnected online age.

[1] Jennifer Langston, “Lip-syncing Obama: New tools turn audio clips into realistic video”, UW News, 11th July 2017.

[2] Sumsub, Identity Fraud Report 2023, UK.

[3] Press Information Bureau, “Advisory to all Intermediaries to comply with existing IT rules”, Ministry of Electronics and Information Technology, 26th December 2023.
