Deep Fake technology is an emerging type of AI that can generate unique content for a wide range of purposes. However, public discussion around Deep Fakes has focused largely on the risks they pose to society. Still, there are a number of legitimate use cases for Deep Fakes, which we will discuss in more detail below.
There is rising concern about the legal and ethical implications of Deep Fake technology among government agencies such as the US Department of Homeland Security. However, Deep Fakes remain generally legal, and there is little that law enforcement can do to prevent or prosecute them unless they violate other existing laws, such as those against hate speech or defamation.
Because this is still a relatively new technology, future legislation targeting Deep Fakes may emerge as lawmakers catch up with its capabilities. In the meantime, there are efforts underway to develop tools and techniques that can detect Deep Fakes and prevent their misuse.
What are Deep Fakes?
Deep Fakes are a type of artificial intelligence (AI) program that uses existing source content, such as video or audio, to create fabricated yet convincing new content. The term Deep Fake can refer to the resulting content as well as the technology itself.
In other words, the main effect of Deep Fakes is to make people appear to say or do things that they never said or did. Deep Fake technology works by superimposing the features of one person onto another person’s face and body, which we will dive deeper into below.
How do Deep Fakes Work?
There are a number of ways that Deep Fakes can be created, including techniques like face-swapping, lip-syncing, and puppeteering. Some models can even create 3D renderings of someone’s face, which tend to be even more convincing and realistic when done correctly.
With any of the above techniques, there are typically two main steps involved in creating a Deep Fake:
The training phase involves feeding a deep learning algorithm large amounts of data, typically photos and videos of the target person as well as of the person whose face will be used to create the Deep Fake. The algorithm maps the target person’s face and learns how to transpose the source’s facial expressions and movements onto the target person to create false video content.
After the deep learning algorithm is trained, it can be used to generate Deep Fakes that are typically hyper-realistic, showing the target person doing or saying something they never actually did.
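The two phases above can be sketched in miniature. The toy “network” below is purely illustrative (linear maps on small random vectors, with hypothetical names throughout), not a real Deep Fake system, which would use deep convolutional autoencoders or GANs trained on face images. It shows only the shape of the pipeline: a shared encoder with one decoder per identity is trained to reconstruct each person’s “face,” and a swap is then generated by decoding the source’s encoding with the target’s decoder.

```python
# Toy sketch of the two-phase face-swap pipeline (illustrative only).
# "Faces" here are small random feature vectors, and the encoder/decoders
# are plain linear maps; real systems use deep networks on image data.
import random

random.seed(42)

FEATURES, LATENT = 6, 3  # toy sizes for "face" and latent vectors


def make_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]


def matvec(w, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]


def mse(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)


# Shared encoder; a separate decoder per identity.
encoder = make_matrix(LATENT, FEATURES)
decoders = {"source": make_matrix(FEATURES, LATENT),
            "target": make_matrix(FEATURES, LATENT)}

# Stand-in "training data": feature vectors instead of face images.
data = {name: [[random.uniform(-1, 1) for _ in range(FEATURES)] for _ in range(20)]
        for name in decoders}


def train_decoder(name, epochs=200, lr=0.05):
    """Phase 1 (training): fit this identity's decoder so that
    decode(encode(face)) reconstructs the face. The encoder is held
    fixed here purely to keep the sketch short."""
    dec = decoders[name]
    for _ in range(epochs):
        for face in data[name]:
            latent = matvec(encoder, face)
            recon = matvec(dec, latent)
            for i in range(FEATURES):  # SGD step on squared error
                err = recon[i] - face[i]
                for j in range(LATENT):
                    dec[i][j] -= lr * err * latent[j]


def swap(face):
    """Phase 2 (generation): the source's expression, re-rendered
    through the target's decoder."""
    return matvec(decoders["target"], matvec(encoder, face))


def avg_error(name):
    dec = decoders[name]
    errs = [mse(matvec(dec, matvec(encoder, f)), f) for f in data[name]]
    return sum(errs) / len(errs)


before = avg_error("target")
for name in decoders:
    train_decoder(name)
after = avg_error("target")          # reconstruction improves with training
fake = swap(data["source"][0])       # the "Deep Fake": source via target decoder
print(f"reconstruction error before={before:.3f} after={after:.3f}")
```

The key design idea this mirrors is that the shared encoder learns identity-independent structure (expression, pose), while each decoder learns to render one specific identity, so feeding one person’s encoding into the other’s decoder produces the swap.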
With many of the Deep Fake applications that have become popular today, a Deep Fake can be created in a matter of seconds.
Risks of Deep Fakes
While Deep Fakes are not inherently created for malicious reasons, the technology carries many risks. Since many people are still unaware that this type of technology even exists, a fabricated Deep Fake can do a great deal of harm when an unknowing audience sees it.
Let’s take a look at some of the main risks to be aware of as this type of technology moves beyond harmless or fun purposes and becomes more commonly used.
- Spread of False Information
The primary risk of Deep Fakes is that they can perpetuate the spread of false information very convincingly. When done well, Deep Fakes are hard to spot, meaning they can be used to create very convincing propaganda or falsified claims that can do serious harm to the public.
One of the potential uses in this vein is for election or national security purposes, as someone could create a Deep Fake of government leaders or politicians making false claims that aim to sway public opinion or interfere with an election.
In some cases, Deep Fakes may be used in legal settings, where fabricated videos or photos can wrongly imply the guilt or innocence of the opposing party.
- Identity Theft and Fraud
Deep Fakes can also help criminals wrongfully obtain personally identifiable information (PII) and gain access to sensitive information, systems, or accounts such as bank accounts or an employer’s network.
In this case, Deep Fakes pose a major cybersecurity threat, as bad actors could wrongfully gain access by using the target person’s voice or face for identity verification purposes.
- Blackmail and Extortion
Another major risk of Deep Fakes is that they can be used for blackmail. This occurs when a Deep Fake is created to show a fabricated scenario in which the target person appears in an illegal or compromising situation.
Such Deep Fakes would be used to extort the individual in the video, preying on the person’s fear that the content would be released to the public and seriously damage their reputation. In this case, Deep Fakes can also be used as a sort of personal revenge against the target person in the content.
Deep Fake Use Cases
There are a number of legitimate uses for Deep Fakes; however, these often get overshadowed by the risks that they pose. Let’s take a deeper look at some of the ways that Deep Fakes can be used for good:
- Film/Entertainment: Deep Fakes can be used in post-production to alter an actor’s voice when the actor is no longer available and the director wants different phrasing than what was recorded; the technology can also be used to create satire or parody content so absurd that all parties know it is a false video
- Video Games: voice actors can have their voices cloned for future uses in the game as it expands and the creators add new versions or storylines for their characters
- Receptionist Services: Deep Fakes can replace automated voice recordings and forwarding systems that sound robotic with personalized responses that sound like the person you are calling
- Customer Support: it can also be used for customer service purposes where robotic-sounding voice automation is already in place (e.g. checking your bank account balance)