Having a young Luke Skywalker in a new Star Wars project once seemed impossible given Mark Hamill’s age and the costliness of de-aging visual effects. However, in the fifth episode of The Book of Boba Fett, deepfakes turned this fantasy into a reality.
A new and accessible form of video manipulation, deepfakes use deep learning to superimpose one person's face onto another's body, with applications ranging from face-swapping mobile apps to big-budget motion pictures. Their accessibility, however, also leaves them open to abuse by those with malicious intent, raising legal and ethical concerns about their usage.
Behind the masks
Imagine studying a photo album of someone's face under specific conditions for a week, then closing one's eyes and imagining them saying or doing things. This is how visual effects supervisor Kevin Baillie describes the deepfaking process. Artificial intelligence (AI) software built on neural networks "[learns] what somebody's face looks like using hundreds and thousands of images of that person," he explains.
Deepfakes also adapt to the environmental conditions present during filming. "It's not just swapping an image, it actually understands the face and [hallucinates it] with the lighting conditions and everything of the subject onto [another]," Baillie adds.
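Baillie's description maps onto the architecture commonly used for face swapping: a single shared encoder that learns a compact representation of any face, paired with one decoder per subject. The sketch below is a toy illustration only; random arrays stand in for real face images, and simple linear layers replace the deep convolutional networks used in practice. The swap happens when a frame of one subject is encoded and then decoded with the other subject's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale frames for two hypothetical subjects.
# Real systems train on hundreds of thousands of images, as Baillie notes.
faces_a = rng.random((200, 64))
faces_b = rng.random((200, 64))

class LinearFaceSwapper:
    """One shared encoder, one decoder per subject (linear stand-ins)."""

    def __init__(self, dim=64, latent=16):
        self.enc = rng.normal(scale=0.1, size=(dim, latent))    # shared encoder
        self.dec_a = rng.normal(scale=0.1, size=(latent, dim))  # subject A decoder
        self.dec_b = rng.normal(scale=0.1, size=(latent, dim))  # subject B decoder

    def train_step(self, x, dec, lr=0.01):
        z = x @ self.enc                # encode frames to latent codes
        recon = z @ dec                 # decode codes back into faces
        err = recon - x                 # reconstruction error
        dec -= lr * z.T @ err / len(x)                 # update this decoder
        self.enc -= lr * x.T @ (err @ dec.T) / len(x)  # update shared encoder
        return float(np.mean(err ** 2))

model = LinearFaceSwapper()
for _ in range(500):
    model.train_step(faces_a, model.dec_a)  # A's decoder learns A's faces
    model.train_step(faces_b, model.dec_b)  # B's decoder learns B's faces

# The swap: encode subject A's frames, but decode with subject B's decoder.
# In a real deepfake this yields B's face carrying A's pose and lighting.
swapped = faces_a @ model.enc @ model.dec_b
print(swapped.shape)  # (200, 64): one synthesized frame per input frame
```

Because the encoder is shared between both subjects, it is forced to capture properties common to any face, such as pose, expression, and lighting, while each decoder specializes in rendering one identity; this is why the swap preserves the underlying performance, as Baillie describes.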
Deepfaking is said to be less tedious than traditional facial reconstruction effects, which can take months of work. "[Training] the AI [takes] probably a couple of weeks for a skilled team or individual and then doing the actual process of deepfaking is a matter of hours or days and not months," Baillie estimates.
While deepfakes promise greater efficiency, they currently cannot reproduce in post certain facial nuances such as pore stretching and subsurface scattering, the passage of light through flesh. "One thing with deepfakes to be really aware of is that the deepfake is only [going to] do exactly what the underlying actor is doing," Baillie explains.
Apart from faces, deepfakes can also synthesize voices. The process behind these audio deepfakes mirrors that of facial deepfakes: a neural network extensively analyzes recordings of a person's voice in order to mimic it.
Advances like these will allow filmmakers to efficiently tell stories they previously could not due to technological constraints. But while deepfakes have proven themselves an asset to the entertainment industry, they have also figured as the modus operandi in recent cases of criminal activity.
Under legal scrutiny
The malicious use of deepfakes has repercussions under current copyright law. Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory, says that under United States copyright law, one must first identify the perpetrator in order to sue them for infringing on one's work.
However, Pfefferkorn points out that it may be difficult to determine how a deepfake was generated, since deepfakes draw on already available images and videos. "It might be hard to tell just from watching a video whether a particular copyrighted image or video was used to generate the deepfake," she clarifies.
Baillie notes that in Hollywood, future contracts will spell out the extent of what can be done with a performer's likeness.
On the misuse of the technology, he explains that malicious actors can use deepfakes to impersonate political figures, while even their use in entertainment brings complications. In April 2018, for instance, a manipulated video of Barack Obama was released in which he appeared to use profane language against Donald Trump. The clip caused agitation among netizens, some of whom described deepfakes as a "menace on the horizon."
Getting out of hand
Another concerning way deepfakes can be abused is the superimposition of actresses' faces onto women in pornographic videos without their consent.
Senate Resolution No. 188, which seeks a congressional investigation of deepfakes, aims to address this and other forms of deepfake misuse. Its objectives include monitoring digital content manipulation, a problem that cannot be fixed by technological solutionism alone, a term technology critic Evgeny Morozov coined for the belief that every problem can be solved through technological innovation.
The Philippine National Police Anti-Cybercrime Group advises both its personnel and the public to thoroughly understand deepfake technology: be media literate, rely on quality sources, and back up data against ransomware. Following these practices helps one avoid falling victim to cybercrime.
Misinformation often ensues when deepfakes are used to depict scenarios that never happened. In 2019, an estimated 15,000 deepfake videos circulated on the internet, contributing to the USD78-billion worth of damages caused by disinformation and fake news.
The danger of deepfakes intersecting with disinformation peaks during political campaigns. For instance, former US President Donald Trump posted a deepfaked video of himself acknowledging his defeat to Joe Biden in the presidential elections. A doctored video of Ukrainian President Volodymyr Zelensky likewise spread in March 2022, in which he appeared to tell his countrymen to concede to Russia.
Both incidents demonstrate the perils deepfakes pose in stirring confusion among the public. To prevent this from escalating, one must verify the authenticity of the information presented to them before delving deeper into its context.
Since deepfakes are easily exploited, Pfefferkorn holds that governments should monitor their creation. In Hollywood, the Screen Actors Guild and the American Federation of Television and Radio Artists are already urging lawmakers to outlaw the non-consensual exploitation of actors in deepfakes, especially in pornography. With the aid of such government mandates, industries should be able to use deepfakes responsibly.
When deepfakes cross the fine line from satire to misinformation, they can wreak havoc. Given their deceptiveness, netizens may fall victim to a technology that has yet to be fully regulated.