
The Cost of Deepfakes: The Emerging Threat of High-Tech Imposters


Summary:

If there’s one fact to remember when protecting against cyber threats, it’s that attackers are willing to try any new technology to advance their criminal efforts. A recent alert from the Federal Bureau of Investigation (FBI) illustrates this reality, warning businesses and organizations to be wary of deepfakes as the latest threat to their digital security measures.

Let’s look at what this technology is, how cybercriminals are using it and what you need to know to defend your organization against this new threat.

What is a Deepfake?

At its most basic level, a deepfake is essentially a high-tech mask. Deepfake technology takes existing images and video footage and manipulates them using artificial intelligence (AI) and machine learning. It constructs a synthetic media file (typically video or audio) depicting a seemingly real person who can talk, move and interact.
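
For the technically curious, the classic face-swap approach behind many video deepfakes trains an autoencoder with one shared encoder and a separate decoder per identity; at generation time, one person's expression code is routed through the other person's decoder. The Python sketch below illustrates only that wiring, with made-up layer sizes and untrained weights; it is not any particular tool's implementation:

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea
# behind classic face-swap deepfakes. Shapes and sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent "expression" code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()   # trained to reconstruct person A's face
decoder_b = Decoder()   # trained to reconstruct person B's face

# After training each decoder on its own identity with a shared encoder,
# the "swap" is simply routing A's latent code through B's decoder:
frame_of_a = torch.rand(1, 3, 64, 64)      # stand-in for a video frame
fake_b = decoder_b(encoder(frame_of_a))    # B's face with A's expression
print(fake_b.shape)                        # torch.Size([1, 3, 64, 64])
```

In practice, both decoders must be trained on many aligned face images of each person before the swap looks convincing.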

You may have seen examples of deepfakes in the media where the face of a celebrity or political figure is digitally mapped onto an actor’s face. The actor talks and gestures, but it looks and sounds exactly like the public figure being impersonated. Whether video or audio, the deepfake is very difficult to distinguish from the real person.

Why are Deepfakes a Cyber Threat?

The ability to pretend to be someone else or to create a whole new persona creates a tremendous opportunity for criminal misuse. Cybercriminals have used deepfakes and stolen personally identifiable information (PII) to apply for remote work and even to impersonate corporate executives.

The most infamous deepfake attack targeted a British energy company in 2019. A fraudster cloned the voice of the parent company’s CEO and demanded a $243,000 wire transfer into a Hungarian supplier’s account. The synthetic audio was urgent and insistent that the transfer be made within the hour. It wasn’t until after the money was sent that the company realized it had been defrauded. By then, the funds had been moved on to an account in Mexico, making the criminals much harder to identify.

In 2020, a similar incident occurred at a Hong Kong bank when the voice of one of the bank’s directors was cloned and used to request a $35 million transfer for a new business acquisition. The call raised no suspicion, and the transfers were set in motion. In fact, the call was a fraud made with voice-cloning (“deep voice”) technology. Authorities believe it was part of an elaborate scheme involving multiple cybercriminals attempting to con banks around the globe.

Experts warn this is likely just the beginning. Law enforcement agencies such as the FBI and Europol have sounded the alarm about the growing threat of deepfakes, and one industry survey reported that two out of three respondents had encountered a deepfake as part of an attack.

Deepfakes Threaten Remote Work Environments

Remote and hybrid work arrangements mean fewer in-person interactions; a typical workday can now be conducted entirely through online communication and videoconferencing. Criminals with deepfake technology are taking advantage of virtual workforces scattered across the country and, often, around the world.

In one type of attack, a deepfake attacker gets hired virtually by a company, gaining access to sensitive company information, customer PII, financial files and IT databases. The FBI warns that fully remote technology jobs are the primary targets for these attacks: candidates increasingly prefer fully virtual tech roles, and companies are keeping their work-from-home formats.

How Deepfakes Threaten Individual Security

Criminals could potentially use deepfakes to bypass the security measures that protect an individual’s accounts, credit and identity.

The adoption of biometric security measures has accelerated in recent years as platforms and devices use facial or voice recognition for authentication. Unless voice-recognition systems stay ahead of deepfake technology, simulating a person’s voice could eventually grant access to their online banking, healthcare records or their employer’s systems.
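
To see why a convincing clone is dangerous, it helps to know that voice authentication usually reduces to one comparison: a speaker embedding of the incoming audio is matched against an enrolled voiceprint and accepted if the similarity clears a threshold. The Python sketch below shows only that decision logic; real systems use trained speaker-embedding models, whereas the vectors here are random stand-ins and the threshold is illustrative:

```python
# Sketch of the accept/reject decision in a typical voice-biometric
# system. The vectors below are random stand-ins for real speaker
# embeddings; only the comparison logic is shown.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled_voiceprint: np.ndarray,
                   incoming_embedding: np.ndarray,
                   threshold: float = 0.85) -> bool:
    """Accept the caller if their voice embedding is close enough to the
    enrolled voiceprint. A high-quality voice clone attacks this exact
    comparison, which is why liveness checks (challenge phrases,
    artifact detection) are layered on top in practice."""
    return cosine_similarity(enrolled_voiceprint, incoming_embedding) >= threshold

# Illustrative usage with random vectors standing in for real embeddings:
rng = np.random.default_rng(0)
enrolled = rng.normal(size=192)
genuine = enrolled + rng.normal(scale=0.1, size=192)  # same speaker, new audio
print(verify_speaker(enrolled, genuine))              # likely True
```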

While advances in deepfake technology are turning such hypotheticals into a realistic threat, at this stage it is still possible to distinguish a fake from the real person.

Spotting a Deepfake is Tough

Even as companies become more aware of the dangers of deepfakes and synthetic voice technology, the fakes may not be as easy to spot as one might think. Signs that can separate a criminal from a genuine candidate in an online interview include the following; a small code sketch after the list shows how the first two signs amount to a sync check:

  • Gestures and mouth movements that are not fully coordinated with the speech
  • Sounds such as coughing, sneezing or laughing that do not line up with the person on screen
  • Background-check results and PII that belong to someone else
  • Visual inconsistencies within the video, such as distortions or a lack of lower-body movement
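
The first two signs boil down to audio-visual synchronization, which can even be approximated programmatically: a genuine speaker’s mouth movement and speech loudness rise and fall together, while a poorly synced fake breaks that correlation. Below is a toy Python sketch of the idea; in a real pipeline the per-frame mouth-openness and audio-energy values would come from face tracking and audio analysis, and random arrays stand in for them here:

```python
# Toy audio-visual sync check: a genuine speaker's mouth openness and
# speech loudness tend to be correlated frame by frame; a badly synced
# deepfake breaks that correlation. The per-frame values below are
# random stand-ins, not output from a real face tracker.
import numpy as np

def sync_score(mouth_openness: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between per-frame mouth openness and
    per-frame audio energy; near 1.0 suggests good sync."""
    return float(np.corrcoef(mouth_openness, audio_energy)[0, 1])

# Illustrative data: 100 video frames of a "real" speaker...
rng = np.random.default_rng(1)
energy = np.abs(rng.normal(size=100))
real_mouth = energy + rng.normal(scale=0.1, size=100)  # tracks the audio
# ...versus a badly dubbed fake whose mouth ignores the audio:
fake_mouth = np.abs(rng.normal(size=100))

print(f"real: {sync_score(real_mouth, energy):.2f}")   # close to 1.0
print(f"fake: {sync_score(fake_mouth, energy):.2f}")   # close to 0.0
```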

Preparing for the Inevitable Deepfake Attack

Deepfakes and similar complex attacks are on the rise. While cybersecurity measures are getting stronger and more technologically advanced, criminal use of advanced technologies will evolve as well.

Digital security and protection solutions only solve half the problem. Employees can be the weakest link in your cybersecurity defenses if they are not trained to recognize ever-evolving threats. As with other cybersecurity issues, educating employees on the risks is key.

Recent cybersecurity studies rank deepfake technology among the more serious and more common threats businesses are likely to face. Educating and empowering employees to be cautious of threats like deepfakes can prepare them for a potential attack and heighten their awareness of other high-tech threats that may appear in the future.
