As generative AI (GenAI) continues to transform the cybersecurity industry, many organizations have been hard at work investing in AI readiness: aligning AI and data strategies, modernizing legacy infrastructure, updating data governance policies, establishing AI and cloud centers of excellence, identifying practical use cases that deliver measurable value, upskilling talent and more.

Meanwhile, the internet's underbelly has been equally hard at work, just in its own way.

The rise of deepfake fraud

We're talking about AI-powered deepfakes: fraudulent images, recordings, text, social media posts or even "live" videos that have been altered to convincingly misrepresent or distort what someone says or does.

Deepfakes have become a weapon of choice for cybercriminals, thanks in part to advances in GenAI technology that have made it easy to synthesize massive amounts of online audio and visual content; convert text to speech at nominal cost; and quickly create and disseminate false or manipulative messaging in service of a fraudulent objective.

At WWT, we like to think of AI security as a two-sided coin. On one side, organizations adopting AI solutions need to implement security measures to manage AI usage across the enterprise. On the flip side, to detect and combat the spread of AI-driven cyberattacks, organizations will inevitably need to leverage AI security tools to harden their security posture. The issue of combatting deepfakes falls into the latter bucket.

The stakes surrounding deepfake attacks are incredibly high, especially for sensitive industries and arenas like financial services, insurance, healthcare, politics and public infrastructure. The ability to distinguish fact from fiction in these verticals is crucial. For example, we know that financial institutions are keenly aware of the risks posed by deepfake audio and video attacks.

The cutting edge of AI security: Deepfake detection tools

In classic cat-and-mouse fashion, the tools and techniques for detecting deepfakes have matured in response to advances in nefarious GenAI usage. At this bleeding edge of AI security, an emerging class of AI SaaS vendors is working alongside established tech behemoths to develop solutions able to detect deepfakes and authenticate digital content with high accuracy.

One way to defend against this novel type of fraud is to filter all incoming media through a software program designed to detect deepfakes. Depending on a number of variables, a deepfake detection solution will typically employ an advanced AI algorithm to analyze the authentication markers of incoming media. However, each emerging deepfake tool tends to offer a unique spin on its detection techniques, capabilities and specializations.
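To make this filtering pattern concrete, here is a minimal sketch of how incoming media might be routed through a detection step. The `detect_deepfake` function, its scoring scheme and the `DetectionResult` fields are all hypothetical stand-ins; a real SaaS detector exposes its own API and analyzes genuine authentication markers (e.g., spectral artifacts in audio or frame-blending in video) rather than the placeholder logic shown here.

```python
# Hypothetical sketch: filtering incoming media through a deepfake detector.
# The detector below is a placeholder, not a real vendor's API.
from dataclasses import dataclass


@dataclass
class DetectionResult:
    score: float   # 0.0 = likely authentic, 1.0 = likely synthetic
    verdict: str   # "authentic" or "deepfake"


def detect_deepfake(media_bytes: bytes, threshold: float = 0.8) -> DetectionResult:
    """Stand-in for a vendor's analysis of authentication markers.

    A real detector would run an AI model over the media; here we just
    return a dummy score so the routing logic can be demonstrated.
    """
    score = 0.0 if media_bytes else 1.0  # placeholder for a model's output
    verdict = "deepfake" if score >= threshold else "authentic"
    return DetectionResult(score, verdict)


def filter_incoming(media_items: list[bytes]) -> list[bytes]:
    """Admit only media the detector judges authentic."""
    return [m for m in media_items if detect_deepfake(m).verdict == "authentic"]
```

In practice the threshold, the score semantics and the set of markers analyzed vary by vendor, which is exactly why the comparative testing described below matters.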

Given the complexity of the overall AI solution landscape, including the incredible pace of change, it's understandable that IT, security and procurement departments can feel at a loss when it comes to identifying and validating the maturity of new deepfake detection tools. Without investing in multiple products, how can such organizations determine which deepfake detection tools are proven, viable, trustworthy and effective at meeting their organization's specific security needs?

Thankfully, WWT has developed a process for organizations to quickly and cost-effectively assess the maturity of different AI deepfake SaaS solutions.

Fighting deepfakes in the AI Proving Ground

In 2023, we announced our commitment to building a state-of-the-art AI lab environment to help our clients speed the adoption of AI technology and high-performance architecture (HPA).

Powered by WWT's Advanced Technology Center (ATC), the AI Proving Ground today features 14 dedicated AI labs, providing an unrivaled playground for validating, experimenting with, and innovating with the world's leading AI technologies. That makes the AI Proving Ground the ideal setting to assess the maturity of deepfake detection tools.

In fact, we recently helped one of the largest financial organizations in the world assess the maturity of several different emerging deepfake solutions designed to assess the authenticity of incoming audio calls. Given the sensitive nature of this work, we cannot go into detail about the successful proof of concept (POC) we completed for this client. But we can share how a similar engagement might look for your organization.

Proof of concept: Deepfake audio detection

Our POCs leverage existing ATC data center infrastructure, including the AI Proving Ground's massive array of high-performance architecture plus any relevant hardware and software from our strategic partners and OEMs. Our solution experts then use the ecosystem's advanced test automation capabilities to simulate a solution in record time, mimicking the client's own IT environment to ensure seamless downstream integration.

At a high level, a POC to test deepfake audio detection tools in the AI Proving Ground might look like this:

  • Client creates and shares with WWT a collection of both real and deepfake audio recordings (this can be thousands upon thousands of calls). Only the Client knows which calls are authentic and which have been manipulated.
  • WWT imports the audio files into the AI Proving Ground and ATC's call automation system.
  • Using the supplied files, WWT engineers leverage this system to place calls as directed by the Client.
  • At the same time, the audio recordings and associated metadata (e.g., the number of the caller and the called number) are sent to the Client's chosen AI deepfake SaaS vendors for real-time analysis to determine if the call is a real person or a deepfake recording.
  • WWT creates a daily report analyzing the test summary results for each AI SaaS solution being tested, which is shared with the Client for final analysis.
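The scoring step behind such a daily report can be sketched as follows. This is an illustrative simplification under stated assumptions: the vendor names, the `analyze` callables and the "real"/"deepfake" label format are all hypothetical, and in a real engagement each SaaS vendor exposes its own API and response schema.

```python
# Illustrative sketch of scoring multiple deepfake SaaS vendors against
# the Client's ground-truth labels (known only to the Client).
from collections import defaultdict


def daily_report(calls, vendors):
    """Compare each vendor's verdicts against the Client's ground truth.

    calls:   list of (call_id, metadata, true_label) tuples, where
             true_label is "real" or "deepfake".
    vendors: dict mapping vendor name -> analyze(call_id, metadata),
             a callable returning that vendor's verdict for the call.
    Returns: dict mapping vendor name -> accuracy over all calls.
    """
    correct = defaultdict(int)
    for call_id, metadata, true_label in calls:
        for name, analyze in vendors.items():
            if analyze(call_id, metadata) == true_label:
                correct[name] += 1
    total = len(calls)
    return {name: correct[name] / total for name in vendors}


# Example run with two calls and one hypothetical vendor that always
# answers "real" (so it is right on one of the two calls).
calls = [
    ("c1", {"caller": "555-0100", "called": "555-0200"}, "real"),
    ("c2", {"caller": "555-0101", "called": "555-0200"}, "deepfake"),
]
vendors = {"vendorA": lambda call_id, metadata: "real"}
report = daily_report(calls, vendors)
```

A production report would of course break results down further (false positives vs. false negatives, latency, confidence scores), but per-vendor accuracy against the Client's held-back labels is the core comparison.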

Some clients may down-select particular AI SaaS solutions for additional evaluation and testing at this time; others may go to RFP with multiple SaaS vendors, armed with a wealth of new technical requirements and understanding; or they may feel confident enough in their results to procure a deepfake detection solution to integrate into their IT environment. WWT experts, spanning the entirety of the AI and traditional data center stack, are available to consult, facilitate and troubleshoot for the Client at every stage of the testing process.

Below is a sample high-level design for a POC engagement that assesses several deepfake audio detection tools:

Sample HLD of a generic deepfake proof of concept in the AI Proving Ground.

WWT can complete this type of deepfake POC in a matter of weeks. This turnaround is incredibly fast, efficient and affordable when compared to the alternatives (e.g., performing similar POC testing in-house). 

Organizations that have leveraged our AI Proving Ground have been able to incorporate lessons learned and validated solutions directly into their suite of always-on cyber defenses. Importantly, they leave feeling more confident in their ability to protect customers and employees from the growing threat of deepfake attacks.

What's next for deepfake detection at WWT?

Back inside the ATC, WWT plans to keep augmenting our deepfake-detection testing capabilities. First, we plan to add a dedicated deepfake detection instance to the AI Proving Ground's growing lab environment. We are committed to working with a number of manufacturer and software partners who have expressed interest in integrating their products into our deepfake analysis capabilities and workflows. Given the industry's pace of change, we're committed to ensuring the latest enhancements, tools and techniques are made available to help clients keep pace with new AI regulations, requirements and breakthroughs.

Moreover, based on client feedback, we plan to extend our deepfake detection assessment capabilities to video, chat, meetings and messaging applications in the near future.

Though deepfake detection tools and techniques are still in their infancy, they will soon play a critical role in protecting organizations across industries. Security experts must continue to operate at this bleeding edge of AI security, engaging in the ongoing back-and-forth between offensive and defensive uses of AI.

Contact your WWT representative today to learn more about best practices for detecting and protecting against deepfake fraud.

Explore the AI Proving Ground's unique lab environment.
Get started