Despite the world of difference artificial intelligence, or AI, is making in cyberspace, it is equally becoming destructive. From spreading misinformation to impersonating people and sending unpleasant messages to others, the impact of AI extends beyond the virtual sphere. In this story, we look at how an AI-generated deepfake video of a Bhutanese woman has affected her mental health, and at the legal framework surrounding the prosecution of such cybercrimes.
A deepfake is a video of a person, digitally altered using artificial intelligence to make them appear as someone else, and is usually used to spread false information.
In a recent deepfake case concerning a Bhutanese woman living in New Zealand, a photo of the woman was superimposed on pornographic material using AI and uploaded to the Internet.
BBS got in touch with the woman, who shared her experience as a deepfake victim.
The deepfake victim said, “When I first learned about the situation, I could not approach anyone, and as a girl, it was very embarrassing and hard for me to open up about it to anyone. When things started getting out of hand, I contacted the RBP. Unfortunately, they could not help me. However, they suggested I talk with the New Zealand police, and I did. They gave me a website called stopncii.org, which can be used by anyone.”
She said she used the website to stop the further spread of the video.
“As of today, there is no sign of my video; it has been completely erased from the Internet, which is such a big relief for me, and I am ready to let go. As I have learned recently, I am not the only victim of deepfakes; there are lots of people going through the same thing,” said the deepfake victim.
According to the GovTech Agency, the Bhutan Computer Incident Response Team, or BtCIRT, is mandated to enhance cyber security in Bhutan.
The BtCIRT said it usually handles cases of unauthorised access to websites, commonly known as hacking, rather than content-related cases.
However, the team said it addresses cases that are reported to it directly, adding that the New Zealand deepfake case was not reported to it.
Meanwhile, the Bhutan InfoComm and Media Authority, BICMA, is responsible for online content cases.
However, BICMA said one of its mandates is to hear complaints and settle disputes relating to online content offences that do not amount to criminal offences.
Since deepfakes and impersonation are criminal in nature, the authority said the case could be pursued directly by law enforcement agencies.
BBS talked to the Bhutan Centre for Media and Democracy or BCMD, a civil society organisation that works on issues concerning youth, governance, and education, among others, using media as its mode of operation.
The organisation shared the various measures it is taking to ensure cyber security.
“In the BCMD media literacy programme, we have a statement that there is no absolute anonymity on social media; a digital footprint is left everywhere. To tackle this, we usually take measures to manage online privacy, such as setting strong passwords and encouraging the use of VPNs and end-to-end encrypted services like WhatsApp and Telegram,” said Kinley C Tenzin, the Programme Officer at the Bhutan Centre for Media and Democracy.
He added that to educate the public on AI, the organisation has taken several approaches, such as formal education on AI combined with community engagement and media collaborations.
Kinley C Tenzin said, “BCMD has been actively conducting media literacy programmes across the country. But now we are exploring potential AI literacy programmes by engaging youths who are already in that field.”
Although the Royal Bhutan Police has no separate record of deepfake cases, it received six cases related to computer pornography in the last three years.
Singye Dema/ Interns (Chundu Wangchuk, Choying Dema, Chimi Dorji, Deepika Pradhan)
Edited by Kipchu