DeepFake is a term you may have heard lately. It is a portmanteau of “deep learning” and “fake”: deep learning is a class of machine learning algorithms that excel at image and audio processing, and the “fake” is deliberate misinformation of the kind spread through news outlets or social media. Essentially, DeepFake is a process by which anyone can create audio and/or video of real people saying and doing things they never said or did. One can immediately imagine why this is a cause for concern from a security perspective.
DeepFake technology is still in its infancy and can often be detected even by the untrained eye. Glitches in the software, current technical limitations, and the need for a large collection of shots of a person’s likeness from multiple angles to build a convincing facial model make this a difficult space for hackers to master. But while today’s manipulations are easy to spot, flawless DeepFakes are on the horizon, and they carry implications far more insidious than any conventional hack or breach.
The power to distort content in this way creates a huge trust problem across multiple channels and among many kinds of individuals, communities, and organizations: politicians, media outlets, brands, and consumers, to name a few. While the cyber industry focuses on unauthorized data access as the “problem,” hackers are shifting their attacks to modify data in place rather than holding it hostage or “stealing” it. One study from Sonatype, a provider of DevOps-native tools, predicts that by 2020, 50% of organizations will have suffered damage caused by fraudulent data and software, while a report by DeepTrace B.V., an Amsterdam-based company building technologies for fake-video detection and analysis, states, “Expert opinion generally agrees that Deepfakes are likely to have a high profile, potentially catastrophic impact on key events or individuals in the period 2019-2020.”
What do hackers have to gain from manipulated data?
- Political motivation – From propaganda spread by foreign governments to reports from an event being altered before they reach their destination, there are many ways this technology can shape public perception and politics across the globe. In fact, Katja Bego, Senior Researcher at Nesta, says, “2019 will be the year that a malicious ‘deepfake’ video sparks a geopolitical incident. We predict that within the next 12 months, the world will see the release of a highly authentic looking malicious fake video which could cause substantial damage to diplomatic relations between countries.” Bego was right that DeepFakes would reach the market this year, so we will see how they develop in the near future.
- Individual impacts – It’s frightening to think that someone with enough understanding of this technology could make a person appear to do or say almost anything. These kinds of videos, if persuasive enough, have far-reaching impacts on individuals: relationships, jobs, even personal finances. If anyone can essentially “be you” through audio or video, the possibilities of what a hacker could do are nearly limitless.
- Business tampering – While fraud and data breaches are by no means a new threat in the business and financial sectors, DeepFakes will provide an unprecedented means of impersonating individuals. This will enable fraud in traditionally “secure” contexts, such as video conferencing and phone calls. From a synthesized voice of a CEO requesting fund transfers, to a fake client video requesting sensitive details on a project, these kinds of video and audio clips open a whole new realm of fraud that businesses need to watch out for.
While the ramifications of these kinds of audio and video clips seem disturbing, DeepFake technology can be used for good. New forms of communication are cropping up, like smart speakers that can talk like our favorite artists, or virtual versions of ourselves that represent us when we’re out of office. Most recently, the Dalí Museum in Florida leveraged this technology to create a lifelike version of the Spanish artist himself with whom visitors could interact. These instances show us that DeepFake is a crucial building block in creating humanlike AI characters, advancing robotics, and widening communication channels around the world.
In order to see the benefits and stay safe from the threats, it is no longer enough to keep your security software up to date or to create strong passwords. Companies must be able to continuously validate the authenticity of their data, and software developers must look more deeply into the systems and processes that store and exchange data. Humans remain the first and last lines of defense in the cyber landscape, and while hackers create DeepFakes, the human element of cybersecurity reminds us that just as easily as we can use this technology for wrongdoing, we have the power to use it to create wonderful things as well.
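The call to continuously validate data authenticity can be made concrete. Here is a minimal sketch in Python, assuming a secret key held in proper key management (the key, record contents, and helper names below are illustrative assumptions, not any specific product’s API): each record is tagged with an HMAC when written, and the tag is re-verified when the record is read, so data that has been silently modified in place no longer verifies.

```python
import hashlib
import hmac

# Assumption: in practice this key would live in a key-management system,
# not in source code. It is hard-coded here only for illustration.
SECRET_KEY = b"replace-with-a-managed-secret"

def sign_record(record: bytes) -> str:
    """Return a hex HMAC-SHA256 tag for a data record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, tag: str) -> bool:
    """Re-compute the tag and compare in constant time."""
    return hmac.compare_digest(sign_record(record), tag)

# Hypothetical record: the kind of in-place tampering described above
# (altering data rather than stealing it) breaks verification.
record = b"wire $10,000 to account 12345"
tag = sign_record(record)

assert verify_record(record, tag)          # untouched data verifies
assert not verify_record(b"wire $99,000 to account 12345", tag)  # tampering caught
```

A design note: HMAC detects modification only by parties who lack the key; it does not prevent tampering by an insider who holds it, which is why the sketch assumes the key is managed separately from the data store.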