A deepfake of Ukrainian President Volodymyr Zelenskyy circulated on social media last month. It depicted a stern Zelenskyy calling on his soldiers to lay down their arms and surrender to Russian forces. The video, which hackers also uploaded to a Ukrainian news website, has since been removed from major platforms for violating policies on misleading manipulated media.
Deepfakes are synthetic media created with artificial neural networks, typically paired deep-learning models known as autoencoders, often combined with facial recognition algorithms that locate and align faces. Deep learning lets these networks discover the features of a face on their own from thousands of sample images. In the deepfake process, one person’s face supplants another’s, making the result difficult to distinguish from authentic footage.
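The shared-encoder idea behind many face-swap systems can be sketched in a few lines. The matrices and 64-pixel "faces" below are hypothetical stand-ins for trained convolutional networks and real images; this is a minimal illustration of the architecture, not a working deepfake.

```python
# Toy sketch of the shared-encoder face-swap idea (illustrative only;
# real systems use deep convolutional networks trained on many images).
import numpy as np

rng = np.random.default_rng(0)

# A shared "encoder" compresses any face into a small latent vector;
# each identity gets its own "decoder" that reconstructs a face.
ENCODER = rng.standard_normal((8, 64))     # 64-pixel face -> 8-dim latent
DECODER_A = rng.standard_normal((64, 8))   # latent -> face of person A
DECODER_B = rng.standard_normal((64, 8))   # latent -> face of person B

def encode(face):
    return ENCODER @ face                  # shared compression step

def decode(latent, decoder):
    return decoder @ latent                # identity-specific reconstruction

face_a = rng.standard_normal(64)           # stand-in for a real image

# Normal round trip: encode A, decode with A's decoder.
reconstruction = decode(encode(face_a), DECODER_A)

# The face-swap step: encode A's expression and pose, but decode with
# B's decoder, so B's face appears wearing A's expression.
swapped = decode(encode(face_a), DECODER_B)

print(reconstruction.shape, swapped.shape)  # both are 64-pixel "faces"
```

Because the encoder is shared between identities while the decoders are not, decoding with the "wrong" decoder is exactly what produces the swap.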
The fabricated video is the latest incident in the ongoing debate over the ethics and dangers of bringing deepfake technology into mainstream media. Notably, deepfakes tend to surface at historically tense moments, aiming to mislead or confuse viewers. As artificial intelligence advances, there needs to be tighter technological regulation of these deceitful creations.
Though the underlying technology has been around since the 1990s, deepfakes have surged in popularity in recent years. They gained notoriety in 2017, after an online community on Reddit used the technique to doctor and share pornographic videos depicting the faces of female celebrities.
According to Sensity, a startup that offers deepfake-detection tools, at least 15,000 deepfake videos were posted online in 2019. The company revealed that 94% of the fabricated videos and images featured the faces of female celebrities transplanted onto the bodies of pornographic actors.
Although the videos were fake, the ability to face-swap celebrities such as Taylor Swift and Scarlett Johansson onto porn stars and exploit them for sexual entertainment has opened the door to a technology with dangerous possibilities for misinformation and defamation.
According to a 2021 study on political deepfakes by researchers from Harvard University, Pennsylvania State University and Washington University, deepfakes depicting fabricated political scandals can deceive nearly 50% of Americans.
Using deepfakes to spread political misinformation makes them a frightening weapon. These opportunistic strides underscore deepfakes’ ultimate purpose: to divide and deceive. As the Zelenskyy video demonstrated, the potential to sway public opinion through digital manipulation can have alarming and perilous consequences.
However, some may argue that deepfakes aren’t as menacing as they may seem. At times, deepfakes are fabricated to provide light-hearted jokes and thrilling entertainment.
Social media platforms such as Instagram and TikTok have demonstrated this technology through face-swap filters that impersonate celebrities. Whether it’s Tom Cruise doing magic tricks or Donald Trump facetiming Greta Thunberg, these videos provide entertainment value in the form of digital manipulation, which becomes more accessible through commercial applications such as FaceApp, Reface and MyHeritage.
Additionally, Hollywood has shown what visual effects built on this technology can achieve in movies and television. Disney and Lucasfilm have pioneered its use in recent years, de-aging actors or bringing a select few back to life. In 2016’s “Rogue One: A Star Wars Story,” Lucasfilm’s Industrial Light & Magic brought the late actor Peter Cushing back to the screen as Grand Moff Tarkin by digitally transplanting his face onto actor Guy Henry.
The big questions surrounding deepfake technology concern the moral implications of its use and where to draw the line. De-aging living actors to help tell a story is one thing; weaponizing the technology to push political agendas, misinform the public and exploit people for commercial gain raises serious ethical concerns.
Safeguards need to be established to prevent people from misusing this technology for nefarious purposes. In March 2021, the FBI issued a warning about the rise of synthetic content released by cyber actors who create highly believable fraudulent messages. These threats include audio deepfakes that mimic a trusted voice, delivered through voice messages or email attachments and sometimes paired with malware, to steal credentials or accomplish other objectives.
In 2021, Facebook partnered with Michigan State University to develop methods for detecting and attributing deepfakes. By reverse-engineering the generation process, researchers can uncover the unique fingerprint patterns the AI model leaves behind in even a single deepfake image.
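One intuition behind such fingerprinting is that generative models leave subtle noise patterns that survive after the smooth image content is removed. The sketch below is a toy illustration of that residual-extraction step, using a simple box blur as a hypothetical stand-in for a real denoiser; actual detection systems use far more sophisticated filters and learned classifiers.

```python
# Toy illustration of extracting a noise "fingerprint" from an image
# (assumes a crude box-blur denoiser; real systems are far more advanced).
import numpy as np

def box_blur(img, k=3):
    # crude denoiser: average each pixel over a k x k neighborhood
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    # the "fingerprint": what remains after removing smooth image content
    return img - box_blur(img)

rng = np.random.default_rng(1)
image = rng.random((16, 16))        # stand-in for a suspect image
fingerprint = noise_residual(image)
print(fingerprint.shape)
```

In this toy form, a perfectly smooth image leaves no residual at all, while any high-frequency pattern, including the artifacts a generator introduces, shows up in the fingerprint.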
Microsoft’s Video Authenticator can determine whether an image or video has been digitally manipulated by analyzing and deconstructing the content in real time. The tool is currently available exclusively through the AI Foundation’s Reality Defender 2020 initiative. Eventually, tools like these could become more accessible to the general public, lending a hand in cyber protection.
Allowing an individual’s voice and likeness to be used without consent normalizes identity fraud and loosens our culture’s moral standards.
In a society dependent on digital media, it’s important that we have the knowledge and tools to filter out and detect manipulated and fraudulent content such as deepfakes.
As deepfakes evolve, the line between what’s real and what’s fabricated becomes dangerously thin.