The marriage of deep learning and “fake news” makes it possible to create audio and video of real people saying words they never spoke and doing things they never did.
It’s fair to say that “fake news” is one of the most widely used phrases of our time.
Never has there been such a focus on the importance of being able to trust and validate the authenticity of shared information. But its lesser-understood counterpart, the “deepfake,” poses a far more ominous threat to the cybersecurity landscape than a simple hack or data breach.
Deepfake activity was mostly limited to the artificial intelligence (AI) research community until late 2017, when a Reddit user who went by “Deepfakes” — a portmanteau of “deep learning” and “fake” — started posting digitally altered pornographic videos.
This machine learning technique makes it possible to create audio and video of real people saying and doing things they never said or did. BuzzFeed brought wider visibility to deepfakes, and to the ease of digitally manipulating content, when it created a video that supposedly showed former President Barack Obama mocking Donald Trump. In reality, deepfake technology had been used to map a performance by Jordan Peele, the Hollywood filmmaker, onto authentic footage of President Obama.
This is just one example of a new and fast-growing wave of attacks. They have the potential to cause significant harm to society at large and to organizations in both the private and public sectors because they are hard to detect and equally hard to disprove.
The ability to manipulate content in such unprecedented ways creates a fundamental trust problem for consumers and brands, for decision makers and politicians, and for the media as society’s basic providers of information.
The emerging era of AI and deep learning technologies will make deepfakes easier to create and more “realistic,” to the point where a new perceived reality takes hold. As a result, the potential to undermine trust and spread misinformation grows as never before.
To date, the industry has focused on unauthorized access to data. However, both the motivation behind an attack and its anatomy have changed. Instead of stealing information or holding it for ransom, a new breed of hackers attempts to modify data while leaving it in place.
One study from Sonatype, a provider of DevOps-native tools, predicts that by 2020, 50% of organizations will have suffered damage caused by fraudulent data and software. Companies today must safeguard the chain of custody for every virtual asset in order to detect and deter data tampering.
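One well-known way to make such a chain of custody tamper-evident is hash chaining: each entry is hashed together with the hash of the entry before it, so altering any historical record breaks every hash that follows it. The Python sketch below illustrates the idea; the event names are invented for illustration.

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash this record together with the previous entry's hash."""
    payload = prev_hash.encode() + json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

GENESIS = "0" * 64  # fixed starting value for the chain

log = [
    {"event": "asset created"},
    {"event": "asset transferred"},
    {"event": "asset archived"},
]

# Build the chain: each entry's hash depends on everything before it.
hashes, prev = [], GENESIS
for record in log:
    prev = chain_hash(prev, record)
    hashes.append(prev)

# Verify the chain: recompute and compare. Any edit to an earlier
# record changes every hash downstream of it.
prev = GENESIS
for record, expected in zip(log, hashes):
    prev = chain_hash(prev, record)
    assert prev == expected, "chain of custody broken: a record was altered"
```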
The True Cost of Data Manipulation
There are plenty of scenarios in which altered data can serve cybercriminals better than stolen information.
One is financial gain: a competitor could tamper with financial databases, using a simple attack to multiply all of a company’s accounts receivable by a small random number. While that small variability could go unnoticed by a casual observer, it could completely sabotage earnings reporting and ruin the company’s relationships with its customers, partners, and, above all, investors.
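To make the subtlety concrete, here is a minimal Python sketch of that kind of multiplicative tamper. The account figures and the 1-3% range are invented for illustration; the point is that each altered record still looks plausible on its own, while the aggregate quietly drifts.

```python
import random

# Hypothetical receivables ledger; the values are invented for illustration.
receivables = [12500.00, 8300.50, 45200.75, 19999.99, 7420.10]

# The attack: quietly scale every entry by a small random factor (1-3%).
# Any single tampered value still looks plausible to a casual observer.
tampered = [round(value * random.uniform(1.01, 1.03), 2) for value in receivables]

print(f"original total: {sum(receivables):>12,.2f}")
print(f"tampered total: {sum(tampered):>12,.2f}")
```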
Another motivation is changing perception. Nation-states could intercept news reports coming out of an event and alter them before they reach their destination. Intrusions that undercut data integrity could become a powerful arm of propaganda and misinformation for foreign governments.
Data tampering can also have a very real effect on the lives of individuals, especially within the healthcare and pharmaceutical industries. Attackers could alter information about the medications that patients are prescribed, instructions on how and when to take them, or records detailing allergies.
What do organizations need to consider to ensure that their virtual assets remain free from tampering? First, software developers must focus on building trust into every product, process, and transaction by looking more deeply into the enterprise systems and processes that store and exchange data.
In the same way that data is backed up, mirrored, or encrypted, it must be continually validated to ensure its authenticity. This is especially critical when that data is used by AI or machine learning applications to run simulations, to interact with consumers or partners, or to drive mission-critical decisions and business operations.
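As a sketch of what that continuous validation can look like, the Python example below stores an HMAC alongside each record, computed under a key the attacker is assumed not to hold. The record fields, key handling, and function names here are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical key; in practice it would come from a secrets manager,
# not from source code.
SECRET_KEY = b"example-key-from-a-secrets-manager"

def sign_record(record: dict) -> str:
    """Compute an HMAC over a canonical serialization of the record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, stored_mac: str) -> bool:
    """Re-derive the HMAC and compare in constant time.
    A mismatch means the record changed after it was signed."""
    return hmac.compare_digest(sign_record(record), stored_mac)

record = {"account": "ACME-001", "receivable": 45200.75}
mac = sign_record(record)

record["receivable"] *= 1.02           # simulated in-place tampering
print(verify_record(record, mac))      # False: the alteration is detected
```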
The consequences of deepfake attacks are too large to ignore. It is no longer enough to install and maintain security systems that reveal when virtual assets have been hacked and potentially stolen; the recent breaches at Quora and Marriott are only the latest additions to a growing list of companies that have had consumer data exposed. Now, companies also need to be able to validate the authenticity of their data, processes, and transactions.
If they can’t, the fallout will be toxic.