This is the first article in a series focused on AI's impact on the field of cybersecurity.

There is little doubt that the proliferation of AI is poised to significantly impact the field of cybersecurity. It's easy to imagine a dystopian future of "AI versus AI" in which bad actors leverage AI to automate cyberattacks while organizations invest in AI to defend their networks and data. In this scenario, humans are relegated to the role of AI security facilitators.

In 2016, the DARPA Cyber Grand Challenge gave us a glimpse of what may lie ahead when autonomous machines successfully defended and patched a network while exploiting vulnerabilities in competitor environments. And though a world dominated by AI cyberwar still seems relatively far off, you can bet that bad actors are already training AI models to hasten its arrival.

In the meantime, AI is already impacting security teams. In fact, we believe the scenario described above represents the third phase of AI's likely impact on cybersecurity:

  • Phase 1: AI-powered social engineering and deepfakes
  • Phase 2: AI-powered polymorphic and metamorphic malware code development
  • Phase 3: AI versus AI cyber warfare

Today, we're fully immersed in Phase 1. And though Phase 2 has kicked off, the widespread impact of AI-driven malware is still in its infancy. We'll dig deeper into Phase 2 in a subsequent article. For now, let's focus on Phase 1.

Phase 1: AI-powered social engineering and deepfakes

Organizations must be vigilant against both social engineering and deepfake attacks.

Social engineering is a technique used by malicious actors to manipulate people into taking actions or providing information that compromises security or privacy. These attacks tend to work not because they exploit technical vulnerabilities, but because they exploit tendencies in human psychology, emotion and habit.

Deepfakes are a powerful tool social engineers can leverage to target human vulnerabilities. A deepfake is an image, video or audio recording that's been convincingly altered to misrepresent someone as doing or saying something they never actually did or said. These alterations are typically made possible by deep learning algorithms.

The first wave of AI-driven cyberattacks began years ago. One of the first major events occurred in 2021 when members of the European Parliament were fooled by individuals using deepfake filters during video calls to imitate Russian opposition figures.

Deepfake attacks have since expanded in scope and sophistication. Recent examples include deepfaking someone's voice to extort money from financial institutions or family members; faking the voice of the President of the United States to ask citizens not to vote; and even faking multiple voices and personas in a virtual meeting to convince a corporate target to transfer $25M in illegitimate funds.

The rise of "deepfake as a service"

Like water and electrical currents, cybercriminals tend to follow the path of least resistance. That path has introduced the world to social engineering techniques like phishing, baiting, tailgating, whaling, pretexting, smishing and more. Thanks to innovations in AI, today's bad actors are increasingly finding deepfakes to be an easy and effective path, especially as they become more skilled at generating and deploying deepfake scams.

In fact, some organized threat-actor groups have fully committed to Phase 1, operating under a "deepfake as a service" model where they generate sophisticated deepfakes for anyone willing to pay their fee.

Europol has reported that some of these criminal organizations employ Generative Adversarial Networks (GANs) to create their deepfakes. This approach pairs a generative AI model with a discriminating AI model: the generative model creates the deepfake content, while the discriminating model estimates how likely that content is to be synthetically generated. The discriminator's feedback is then used to iteratively train the generator until the discriminating model can no longer reliably determine whether the generated content is authentic or altered.
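
To make the mechanics concrete, here is a minimal sketch of the adversarial training loop behind a GAN, written in PyTorch. The toy one-dimensional data stands in for real media features; the network sizes, learning rates and step count are illustrative assumptions, not a recipe used by any particular threat actor.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 64

# Generator: maps random noise to synthetic samples ("deepfake" stand-ins).
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs a logit for "this sample is real".
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0      # stand-in for authentic samples
    fake = generator(torch.randn(batch, latent_dim))     # generated samples

    # 1) Discriminator learns to separate real content from generated content.
    d_loss = loss_fn(discriminator(real), torch.ones(batch, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Generator learns to produce samples the discriminator scores as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each pass through this loop makes the discriminator a slightly better detector and the generator a slightly better forger, which is exactly the fast feedback loop described above.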

Not only does this GAN-based approach give bad actors a fast feedback loop to churn out convincing deepfakes, but it also poses a more existential security question: If the best AI detection models are unable to accurately authenticate a piece of content, what chance do average humans have of separating truth from fiction?

Ways to defend against deepfakes

The good news is that there are steps CISOs, CIOs and other leaders can take to prepare for and protect against the risks posed by AI-powered deepfakes.

Authentication

Authentication is the process of verifying the identity of users, systems or devices that are attempting to gain access to an organization's network, applications or data. It helps ensure that only the right individuals gain access to an organization's digital resources. Authentication is one of the four fundamental pillars of Identity and Access Management (IAM), and organizations should strive to implement strong authentication mechanisms across their business.

Basic authentication efforts can help detect and prevent deepfakes. For example, organizations can baseline the authenticity of an asset or communication by using a secret passphrase, a word of the day, or rotating watermarks (see the sketch below). Biometrics and multi-factor authentication (MFA) can also be effective in some circumstances.
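
The sketch below shows one way a rotating "word of the day" could be derived from a shared secret, loosely modeled on how TOTP codes work. The secret, wordlist and daily rotation interval are illustrative assumptions; any real deployment would choose its own challenge scheme and distribute the secret out of band.

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"replace-with-a-per-team-secret"   # assumed secret, distributed out of band
WORDLIST = ["granite", "harbor", "lantern", "meadow", "orchid", "quartz", "thistle", "willow"]

def word_of_the_day(secret: bytes = SHARED_SECRET) -> str:
    """Derive today's challenge word from the shared secret and the current UTC day."""
    day_counter = int(time.time() // 86400)          # rotates once per day
    digest = hmac.new(secret, str(day_counter).encode(), hashlib.sha256).digest()
    return WORDLIST[int.from_bytes(digest[:4], "big") % len(WORDLIST)]

# Both the requester and the recipient compute the same word and compare it
# before acting on a sensitive request (e.g., a wire transfer asked for on a video call).
print(word_of_the_day())
```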

While these types of authentication techniques may seem relatively unsophisticated, they can still be effective if organizations commit to their use. However, because some forms of authentication are difficult to scale, they may only be appropriate as a means to secure some internal communications.

Detection

At the enterprise level, detection is the preferred means to combat deepfakes and other synthetic content. Deepfake detection methods, which generally involve monitoring and analyzing corporate data sources to flag patterns, anomalies or other indicators of compromise, are continuously evolving as detection technologies improve.

Deepfake detection methods include:

  • Visually inspecting content for signs of manipulation
  • Analyzing metadata for signs of tampering
  • Performing forensic analysis on digital artifacts left behind by deepfake creation tools
  • Training machine learning algorithms to detect deepfakes by pattern analysis
  • Audio analysis (e.g., voice recognition, audio forensics)
  • Assessing and verifying the authenticity and credibility of source content

Combining multiple deepfake detection methodologies and staying current with the latest research and advances in the field will prove critical for detecting and preventing deepfake attacks. 

Current detection tools typically strive to uncover evidence of content alteration or manipulation, and then alert an analyst to manually review the flagged content. This alert is often accompanied by a machine-learning-generated probability score estimating the likelihood that the content has been altered.
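
As a rough illustration, the sketch below fuses hypothetical per-detector scores into a single manipulation probability and flags high-scoring content for analyst review. The detector names, weights and 0.8 review threshold are assumptions for illustration, not the API of any specific detection product.

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    content_id: str
    score: float          # estimated probability the content was manipulated
    needs_review: bool    # True when the score crosses the analyst-review threshold

# Hypothetical per-detector scores (visual, metadata, audio forensics), each in [0, 1].
DETECTOR_WEIGHTS = {"visual": 0.4, "metadata": 0.2, "audio": 0.4}
REVIEW_THRESHOLD = 0.8

def triage(content_id: str, detector_scores: dict[str, float]) -> DetectionResult:
    """Fuse individual detector scores into one probability and flag for manual review."""
    score = sum(DETECTOR_WEIGHTS[name] * detector_scores.get(name, 0.0)
                for name in DETECTOR_WEIGHTS)
    return DetectionResult(content_id, round(score, 3), score >= REVIEW_THRESHOLD)

print(triage("video-1093", {"visual": 0.92, "metadata": 0.75, "audio": 0.88}))
```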

One major flaw with today's detection tools is that they operate within a game of cat and mouse. As soon as cybersecurity operators introduce a new detection capability, malicious actors leverage their AI-powered feedback loops to find ways around it. This limits the window of efficacy for any given deepfake detection tool in the push and pull to repel cybercriminals.

The need for new technology standards

The shortcomings of modern authentication and detection methods indicate we need new standards and supporting technologies to combat the proliferation of undetectable synthetic content. Without the global evolution of industry standards, organizations and the general public alike will face a world increasingly plagued by misinformation, disinformation, reputation damage, fraud, political manipulation, privacy violations, complex legal and ethical questions, and sophisticated social engineering threats.

Some progress has been made on this front. For example, the Content Authenticity Initiative (CAI) is "a community of media and tech companies, NGOs, academics and others working to promote the adoption of an open industry standard for content authenticity and provenance." CAI creates open-source tools that allow users and organizations to integrate secure provenance signals into their content. Provenance signals simply refer to evidence or information that can help validate the origin, authenticity or integrity of a piece of digital content. Examples include metadata, watermarks, digital signatures, blockchains and chain of custody tracking. Such signals can provide insights into the content creation process and applied manipulation techniques.
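
To illustrate what one provenance signal looks like in practice, here is a minimal sketch of a digital signature over a media file's bytes, verified before the content is trusted. It uses the Python cryptography library's Ed25519 primitives purely for illustration; it is not the CAI/C2PA specification itself, and the key handling shown is deliberately simplified.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The publisher signs the content at creation time with a private key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"original video bytes..."        # stand-in for the published asset
signature = private_key.sign(content)       # shipped alongside the content as provenance metadata

def is_authentic(content_bytes: bytes, sig: bytes) -> bool:
    """Check whether the content still matches the publisher's signature."""
    try:
        public_key.verify(sig, content_bytes)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                  # True: untouched content
print(is_authentic(content + b" tampered", signature))   # False: altered content
```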

Other organizations with similar missions are popping up around the world. This is promising, as the collective investment in and adoption of new authenticity validations will only become more critical as deepfake schemes metastasize and we graduate to Phase 2 of AI's impact on cybersecurity.

Strategic planning and workforce training

In addition to cultivating new ways to detect manipulated assets and authenticate original content, organizations must ensure their cybersecurity strategy is robust enough to address current and future evolutions in social engineering and deepfake threats. To that end, cybersecurity teams should develop and rehearse deepfake-focused prevention and response exercises just as they would for other incidents.

An organization that experiences an AI-based social engineering or deepfake attack must be able to respond effectively. This responsibility extends to the entire workforce. To properly prepare, workforce training programs should be established or expanded to educate employees about the risks of AI-driven content manipulation and what they can do to protect the organization.

In the event of a deepfake attack, incident details should be shared with the proper stakeholders and communities, including federal partners, so the data can be consolidated, analyzed and distributed to protect others.

Conclusion

The range and variety of AI-powered cyber threats are only going to grow. Social engineering and deepfakes, which intersect in their reliance on human psychology and manipulation, have already grown more sophisticated and difficult to combat in a short amount of time. As AI's impact on the field of cybersecurity evolves beyond Phase 1, understanding and defending against this first phase of threats should remain a top focus for security teams.

Organizations can keep up with this fast-changing landscape by embracing innovations in content authentication and deepfake detection, supporting new technology standards, ensuring their cyber strategy and workforce training programs are updated, and preparing for deepfake attacks in the same way they prepare for other cyber threats.
