'AI Doctors', deepfakes, audio clones fuel misinformation brigade through fake health claims

Since late 2023, we’ve witnessed a surge in fake AI videos featuring an AI-generated ‘doctor’, known to some as ‘AI Doctor ji.’

Hyderabad: In a world where misinformation travels faster than ever, its impact on cornerstones of society such as public health is enormous. Healthcare is one of the most important and fragile sectors, heavily dependent on public channels of communication, where vast and constantly evolving information plays a prominent role in people’s lives.

AI (Artificial Intelligence) may soon be used in public health to make decisions that directly affect life and death. The same qualities that let AI compile and disseminate medical data to researchers, medical professionals and the general public far faster than traditional means have, for the same reason, made it a threat.

How social media consumes info about AI in healthcare


These videos use various AI tools to create a human-looking doctor, complete with a white coat and a stethoscope around the neck, who dispenses home remedies for various ailments in a regional language. These health hacks are not only useless but in many cases can cause severe harm.

Examples include drinking onion and jaggery juice to increase height and eating pani puri to control blood pressure.

The home remedies in these AI doctor videos are unscientific and illogical. The videos themselves can be called ‘cheapfakes’, as they are low-quality imitations of a real person speaking.

‘AI humans’ getting more convincing

Artificial Intelligence promises major advances for the healthcare industry, but at the same time the industry fears AI’s potential to spread health misinformation convincingly.

AI experts have been warning that the technology used to create fake images, cloned audio and fabricated videos, collectively known as deepfakes, is getting better by the day.

For now, many people accustomed to the technology can spot AI videos of humans in motion as fake at first glance. But the day is not far when we will fail to distinguish fact from fiction; the alarming trend of increasingly convincing deepfakes is already here.

Covid accelerated spread of fake health information online

Health misinformation peaked during the Covid-19 pandemic and the subsequent lockdown in 2020, and with it people realised the urgency of fact-checking and verifying information, especially about healthcare, before sharing it.

Experts believe that in the coming years, deepfakes will become a major challenge and concern for the healthcare industry. False information backed by fake audio and visuals that appear to come from trusted sources will make it hard for people to differentiate between what is true and what is false.

It can also compromise the credibility of legitimate, trusted sources by creating trust issues among the public.

Trusted news sources are also at risk

Deepfakes are not the only threat in the disinformation trend; audio cloning poses a major one too. Voice cloning can be done with various phone apps and software that either change a user’s voice or allow them to mimic the voice of a well-known figure.

Here is an example: imagine the trusted news anchor and journalist Ravish Kumar advising people to take a fake diabetes supplement that supposedly cures the condition in one month. Such a video is not only false but also puts the anchor’s credibility at risk.

Before mid-2023, it was widely assumed that AI struggled with non-English languages, but a video featuring Ravish Kumar’s perfectly mimicked voice in Hindi showed how wrong that assumption was. Audio cloning technology has put other legitimate news sources at risk as well.

Here is an example of news anchor Arnab Goswami’s voice being used to sell diabetes medication.


Videos usually command a high degree of trust. These videos could disrupt someone’s existing treatment or mislead someone searching for one. Deepfake videos also create chaos during health emergencies such as pandemics; remember how fake videos and manipulated images convinced many people not to get vaccinated?

How to know if a video is fake?

The answer, simple but rarely practised, is to observe and analyse until the claim is confirmed. Don’t take whatever information you see online at face value. Always crosscheck it against a trusted health website, ideally a government one, and consult your healthcare provider.

Any information about ‘miracle’ home remedies can almost always be disregarded. Be vigilant and think twice before sharing such information without verifying it.

The World Health Organization has also warned about using AI in treatment, including diagnosis and surgical intervention. While WHO sees AI’s potential to expand access to health information and improve diagnostics, it is concerned that the data used to train AI could be biased, leading to misleading or inaccurate inferences.

South Check
southcheck.in