Deepfakes and manipulative technology cause jitters online
As they grow in popularity, manipulated photos and videos are creating concern in the industry, especially at a time of fake news and misinformation.
Adalla Allan @adalla_allan
With the surge in digital technology, techniques for manipulating images and visual content have caused concern in the industry, especially with the rise of deepfakes: the use of manipulative technology to create strikingly realistic content intended to come across as legitimate and real.
Deepfakes are so named because they use deep learning, a branch of machine learning that applies neural networks to massive data sets, to create a fake.
The AI effectively learns what a source face looks like from different angles, then transposes that face onto a target as if it were a mask.
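A common face-swap recipe trains one shared encoder with a separate decoder per identity, then pairs the source's encoding with the target's decoder. The toy model below illustrates only that wiring: random vectors stand in for faces, and single linear layers stand in for the deep convolutional networks real systems use, so every name and number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale images for identities A and B.
face_a = rng.random(64)
face_b = rng.random(64)

# One shared encoder and one decoder per identity, each a single
# linear layer (real deepfake models use deep convolutional nets).
W_enc = rng.normal(size=(16, 64)) * 0.1
W_dec_a = rng.normal(size=(64, 16)) * 0.1
W_dec_b = rng.normal(size=(64, 16)) * 0.1

lr = 0.005
for _ in range(3000):
    # Each decoder learns to reconstruct its own identity, but both
    # train the same shared encoder -- that sharing is what makes the
    # swap possible later.
    for face, W_dec in ((face_a, W_dec_a), (face_b, W_dec_b)):
        z = W_enc @ face                 # shared encoding
        err = W_dec @ z - face           # reconstruction error
        W_dec -= lr * np.outer(err, z)               # decoder step
        W_enc -= lr * np.outer(W_dec.T @ err, face)  # encoder step

# The swap: encode face A, but decode with B's decoder. The output is
# rendered "as B" while carrying A's encoded pose/expression.
swapped = W_dec_b @ (W_enc @ face_a)
recon_a = W_dec_a @ (W_enc @ face_a)
print(np.mean((recon_a - face_a) ** 2))  # small after training
```

The design choice to watch is the shared encoder: because both identities are compressed through the same bottleneck, an encoding produced from one face remains meaningful to the other identity's decoder.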
Their development traces back to academic research in the early 1990s, and the techniques were later fine-tuned by developers in online communities.
Recently, deepfakes have attracted a lot of attention for their use in politics, financial fraud, hoaxes, and fake news.
A TikTok video of a top-flight Tom Cruise impersonator, Miles Fisher, recently went viral on social media platforms, causing an uproar as netizens debated whether it showed the real Tom Cruise or a deepfake.
It was hard to discern the truth since the voice, looks and physique were all identical, until the video creator, Chris Ume, came out to say it was a fake.
Although Ume took it down, a spokesperson from TikTok said the account was well within its rules for parody uses of deepfakes.
Restecutor Nyawira, a content curator at ShehacksKE, a community of women in cybersecurity in Kenya, says identifying a deepfake is hard but possible.
Slow motion
“Deepfakes are created by identifying and analysing a person’s behavioural pattern when they talk or laugh or even get angry. It is hard for one to create a flawless deepfake.
It is not easy to identify one either. It would be easier if there were open-source tools one could use to identify them, but for now, you just have to be keen,” she says.
One thing that can help, she says, is playing the video in slow motion.
“This helps you monitor the movement of all the facial features such as lips, nose, eyes, ears and even neck movement.
If you are keen enough, you can spot shortcomings such as blurred lips, or a jawbone and neck that do not connect naturally,” she says.
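That kind of frame-by-frame inspection comes down to holding each frame on screen long enough to study it. The sketch below computes the per-frame delay for a chosen slowdown and shows a minimal playback loop; the OpenCV dependency and the file path are illustrative assumptions, not a tool the article describes.

```python
def slow_motion_delay_ms(fps: float, slowdown: float = 4.0) -> int:
    """Milliseconds to hold each frame so an fps-rate video
    plays `slowdown` times slower than real time."""
    return int(round(1000.0 / fps * slowdown))

# At quarter speed, a 25 fps clip needs each frame held for 160 ms.
print(slow_motion_delay_ms(25, 4.0))  # 160

def play_slow(path: str, slowdown: float = 4.0) -> None:
    """Play a clip in slow motion for manual inspection of lips,
    jawline and neck (hypothetical usage: play_slow("clip.mp4"))."""
    import cv2  # external dependency: pip install opencv-python
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if fps unknown
    delay = slow_motion_delay_ms(fps, slowdown)
    while True:
        ok, frame = cap.read()
        if not ok:            # end of clip or unreadable file
            break
        cv2.imshow("slow motion", frame)
        if cv2.waitKey(delay) & 0xFF == ord("q"):  # press q to stop
            break
    cap.release()
    cv2.destroyAllWindows()
```

Scrubbing at a quarter of real-time speed makes single-frame glitches, which last only tens of milliseconds at normal playback, easy to catch by eye.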
Deepfakes can be used for good or ill. On the positive side, they can support research and power personal avatars that let people virtually try on clothes, hairstyles and makeup products in apps. They also come in handy in the film industry, where a line of dialogue can be changed without reshooting a scene. On the negative side, they can damage people’s lives and digital existence in various ways, as Nyawira points out.
Necessary strategies
“In revenge porn, a person inserts your face into a video so that it appears to be you. Deep nude can also be the case: software that takes photos of dressed women and ‘undresses’ them, making them look realistically naked. From my research on deep nudity, I was surprised to learn that it only works on women,” she says.
Monika Bickert, Vice President of Global Policy Management at Facebook, describes how the company addresses deepfakes on the world’s most popular social media platform.
“Our approach has several components, from investigating AI-generated content and deceptive behaviours such as fake accounts, to partnering with academia, government and industry to expose the people behind these efforts.
Collaboration is key. Across the world, we’ve been driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform our policy development and improve the science of detecting manipulated media.
As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes,” she says. The platform follows strict criteria when removing such manipulated content.
“We will remove misleading manipulated media if it has been edited or synthesised – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words they did not say.
And it is the product of AI or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic,” she adds.
This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.
“Videos that don’t meet the standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages.
If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad.
And critically, people who see it, try to share it, or have already shared it will see warnings alerting them that it’s false. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem.
By leaving them up and labelling them as false, we’re providing people with important information and context,” she adds.
Facebook has also partnered with Reuters to help newsrooms worldwide identify deepfakes and manipulated media through a free online training course.
In 2019, Google released an open-source database containing 3,000 manipulated videos as part of its effort to accelerate the development of deepfake detection tools.
It worked with 28 actors to record videos of them speaking, making common expressions, and doing mundane tasks. It then used publicly available deepfake algorithms to alter their faces.