Picture this: after a long and hectic day, you reach home in the wee hours. You have barely grabbed a glass of water when your phone starts ringing. The call is from an unknown number. Usually, you would not take a call from an unknown number at this hour, but you still pick it up. The person on the other side sounds panicked.
"Someone has stolen my purse, Beta, which had my phone too. I saw the thief and was trying to catch him when a car hit me. I fell unconscious, and this man brought me to the hospital. I don't have any money right now, and I have to pay him. Please transfer Rs 10,000 to this number," says what sounds like one of your close family members. Immediately, you transfer the money and head to the garage to drive to the hospital.
Your phone beeps again; this time, it is a message from your bank. It takes you a few minutes to realise that the call was fraudulent and that someone has made off with your hard-earned money. You have nothing left in your bank account. You are in a state of shock; your mind is still unable to accept it.
Hurriedly, you call your relative, only to find that they are perfectly fine. The voice on the phone had sounded so genuine that it did not raise a doubt, even for a fraction of a second. You try calling the fraudster's number, but to no avail. This technique is called voice cloning.
A report published last year by McAfee shows that around 47 per cent of the Indians surveyed had experienced, or knew someone who had experienced, some form of Artificial Intelligence-generated voice scam. A staggering 40 per cent of the victims lost over Rs 50,000. The proportion of Indians targeted by AI voice scams is almost twice the global average of 25 per cent; in fact, India topped the list with the maximum number of victims of such scams. As AI grows in popularity and adoption, such scams threaten to overshadow its advantages.
The report, titled "The Artificial Imposter", found Indians especially vulnerable to such scams. It says that 66 per cent of respondents in India admitted they would respond to a call allegedly from a friend or family member seeking urgent monetary help, especially if the voice was that of a parent, spouse, or child.
Most of the incidents in which respondents lost money involved messages whose senders claimed to have been in a car accident or a robbery, to need financial aid while travelling abroad, or to have lost their phone or wallet. The report also showed that 86 per cent of respondents shared their voice data online or through voice notes at least once a week. Most of the time, this data is poorly secured and can easily be harvested.
Worse, hackers need only three seconds of audio from a cold call to clone your voice. Most of us receive spam calls almost daily, purportedly from banks offering loans, credit cards, or insurance schemes. In such a scenario, how can one protect oneself from such fraud? It is difficult even to imagine the extent to which technology can be (mis)used to perpetrate crime.
A few years ago, Google's AI division DeepMind and the University of Oxford created lip-reading software. It, of course, raised concerns about user privacy. Such companies periodically update their privacy policies, but is our data safe? No. Several reports show that even data held by banks has been put up for sale on various sites.
Last week, the legendary cricketer Sachin Tendulkar expressed shock and said it was disturbing to see such "rampant misuse of technology." In a viral video, Sachin is purportedly seen promoting an online gaming app named "Skyward Aviator Quest." The video claims his daughter Sara earns around Rs 2 lakh daily using the app.
A few months back, Sachin appeared in an interview with the lifestyle YouTube channel Curly Tales on his 50th birthday. He could never have imagined that the same video would be morphed to promote a game. In the past few months, many such videos of celebrities have gone viral.
Morphed images and videos can be created using the simplest of applications, and for a layperson it is difficult to tell whether the end product is fake or real. Sachin's is not the first deepfake video ("deepfake" is a portmanteau of "deep learning" and "fake").
Many celebrities, especially women, have been targeted in the past with videos depicting them obscenely; one report shows that 96 per cent of such videos are pornographic in nature. Such content is often used to spread misinformation, malign reputations, harass, intimidate, or spread fear. In other words, the technology is far more often weaponised than put to positive use.
This leaves us with the larger question: how equipped are our regulatory systems to curb the misuse of technology or to provide justice to those who have suffered? The Indian legal system still operates archaically.
When the Information Technology Act was promulgated in 2000, no one could have imagined that something like voice cloning, lip-reading software or deepfake videos would emerge. The Act has no provisions to tackle cases involving AI technology, and even the Copyright Act is not equipped to deal with such cases.
The trouble with the current legislation lies in its procedural rules, which were devised for the investigation and trial of conventional crimes. Cybercrime evolves so rapidly that lawmakers struggle to keep pace; hence, most cases of cybercrime go unreported and unresolved.
Sachin has now filed an FIR for the second time, the latest relating to the recent deepfake video. The earlier FIR was registered in May last year, when a fake website was created in Sachin's name to sell "belly fat burner" products. The fraudster also used his name to advertise various herbal products on social media, targeting people with skin, body, or hair problems.
Even in this high-profile case, the Mumbai Police have been unable to arrest the culprit as of this writing. Our existing governance and legal structures are not equipped to resolve cases involving complicated algorithms. Cybercrime has multiplied since the pandemic: the latest National Crime Records Bureau (NCRB) report shows a 24.4 per cent increase over the previous year. It is high time the government took this seriously.
NASSCOM estimates that investments in AI applications will grow at a compound annual growth rate of 30.8 per cent. India has considerable potential to develop and lead the AI ecosystem. However, it is equally important to understand the threats that accompany AI and to devise appropriate regulations and infrastructure to control such crime.
The writer, a company secretary, can be reached at email@example.com.