This article was written and contributed by our partner, Deep Instinct.

Preventing Cyber Threats Requires Deep Learning, Not Just AI

The rapid advancement of artificial intelligence (AI) is fundamentally reshaping the cybersecurity landscape. On one hand, AI-powered tools hold tremendous potential for automating threat detection and making security teams more efficient. On the other hand, these same technologies empower threat actors to create increasingly sophisticated, hard-to-detect attacks. Organizations now face a critical challenge: leveraging "good" AI while defending against this new breed of "bad" AI.

On the positive side, AI can automate threat monitoring, reduce human error, and give time back to security operations center (SOC) teams who are underwater chasing alerts, making prevention and detection more effective and efficient. Recent reports found that false positives from antiquated cybersecurity tools significantly strain these teams, accounting for over two working days of lost productivity per week.

Conversely, AI makes it much easier for bad actors to create and manipulate polymorphic malware – malware that continuously changes its form to evade detection by traditional, signature-based tools like legacy Anti-Virus (AV) – and steal sensitive data. According to the Verizon 2024 Data Breach Investigations Report, the success rate of attacks has escalated to more than 30% due to threat actors' use of AI. These advancements in attack methods and the harsh realities of AI are rendering existing security measures obsolete.
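
To see why signature-based detection struggles here, consider a minimal sketch (hypothetical payloads and hashes, written in Python purely for illustration): a scanner that matches exact file hashes catches only samples it has already seen, so even a one-byte mutation slips past – and polymorphic malware automates exactly that kind of mutation at scale.

```python
import hashlib

# Toy illustration with hypothetical "payloads": a scanner that matches
# exact SHA-256 hashes catches only samples it has seen before.
KNOWN_BAD_HASHES = {hashlib.sha256(b"malicious payload v1").hexdigest()}

def signature_scan(sample: bytes) -> bool:
    """Return True if the sample's hash matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

print(signature_scan(b"malicious payload v1"))  # True: known sample is caught
print(signature_scan(b"malicious payload v2"))  # False: one-byte change evades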

The potential for unlocking competitive advantage led 69% of organizations last year to adopt generative AI tools. However, with so much on the line, organizations need to understand the risks and limitations of AI and what's required to get ahead of adversaries and predict advanced cyber threats.

Evaluating the Challenges and Limitations of AI

With the widespread availability of generative AI, new cyber threats are emerging daily, far faster than in previous years. Recent examples include hyper-realistic deepfakes and phishing attacks. Threat actors are using tools like ChatGPT to craft well-written, grammatically correct emails, videos, and images that are difficult for spam filters and readers/viewers to catch. We've seen widely publicized deepfakes of high-profile figures such as Kelly Clarkson, Mark Zuckerberg, and Taylor Swift. We also saw a hacker posing as a CFO trick a finance worker at a multinational firm into transferring $25 million.

Concerns with the use of large language models extend beyond phishing and social engineering techniques. There are widespread concerns about the unintended consequences of companies using generative AI tools in typical business scenarios. Samsung, for example, banned ChatGPT and other AI-based tools among its employees after an engineer accidentally uploaded sensitive internal source code to ChatGPT – a crackdown intended to close the security gaps that open when sensitive data shared on these platforms falls into the wrong hands.

We're seeing similar complications as threat actors use tools like FraudGPT, WormGPT, and EvilGPT, which open advanced attack techniques up to a broader audience of less sophisticated users. With these tools, a cybercriminal can write new malware code, reverse engineer existing code, and identify vulnerabilities to generate novel attack vectors.

As threat actors use AI to make their attacks smarter, stealthier, and quicker, more and more organizations are deploying new solutions that tout AI capabilities. However, not all AI is created equal, and many solutions that claim to be powered by AI have serious limitations. Advanced adversarial AI can't be defeated with basic machine learning (ML), the type of technology most vendors are leveraging and promoting (and generically calling AI). The cybersecurity community needs to shift away from a reactive, "assume breach" approach and toward one focused on prevention.

Furthermore, we've reached the end of the Endpoint Detection and Response (EDR) honeymoon period. EDR is outdated, reactive, and ineffective against today's sophisticated, adversarial AI-driven cyber threats. Most cybersecurity tools leverage ML models with several shortcomings: they are trained on limited subsets of the available data (typically 2-5%), offer just 50-70% accuracy against unknown threats, and introduce an avalanche of false positives. ML solutions also require heavy human intervention, and their small training sets expose them to human bias and error.
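
A quick back-of-the-envelope calculation shows why those numbers translate into an avalanche of alerts. The figures below are illustrative assumptions (the event volume, threat prevalence, and 1% false-positive rate are hypothetical; the 60% detection rate reflects the 50-70% range cited above):

```python
# Hypothetical numbers: how a modest false-positive rate floods a SOC
# with alerts at enterprise event volumes.
daily_events = 1_000_000       # events scanned per day (assumed)
malicious_rate = 0.0001        # 1 in 10,000 events is truly malicious (assumed)
detection_rate = 0.60          # within the 50-70% accuracy range cited above
false_positive_rate = 0.01     # 1% of benign events flagged (assumed)

malicious = daily_events * malicious_rate      # 100 real threats
benign = daily_events - malicious              # 999,900 benign events

true_alerts = malicious * detection_rate       # 60 real alerts
false_alerts = benign * false_positive_rate    # ~10,000 false alerts

precision = true_alerts / (true_alerts + false_alerts)
print(f"True alerts/day:  {true_alerts:,.0f}")
print(f"False alerts/day: {false_alerts:,.0f}")
print(f"Share of alerts that are real: {precision:.1%}")
```

Under these assumptions, fewer than 1% of the roughly 10,000 daily alerts are real – which is how SOC teams end up losing days each week to triage.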

We've entered a pivotal time in the evolution of cybersecurity, one that can only be resolved by fighting AI with better, more advanced AI.

The Power of Deep Learning

Due to AI advancements, we can now prevent ransomware and other cyber attacks rather than detect and respond to them after the fact. Deep Learning (DL) is the most advanced form of AI and the only way to deliver true prevention. When applied to cybersecurity, DL trains neural networks to autonomously predict threats, preventing known and unknown malware, ransomware, and zero-day attacks.

Traditional AI can't keep pace with the sheer volume of data entering an organization; it leverages only a portion of that data to make high-level decisions. With DL, no data is discarded, making the model far more accurate and enabling it to predict and prevent known and unknown threats quickly.

There are fewer than a dozen major DL frameworks in the world – and only one is dedicated fully to cybersecurity: the Deep Instinct Prevention Platform. It continuously trains itself on millions of raw data points, becoming more accurate and intelligent over time. Because DL models understand the building blocks of malicious files, it becomes possible to implement and deploy a predictive, prevention-based security solution. This allows savvy cybersecurity teams to shift from an outdated "assume breach" mentality to a predictive prevention approach.
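
To make the "raw data points" idea concrete, here is a minimal, illustrative sketch of a raw-byte neural classifier in the spirit of published research models such as MalConv. It is an assumed toy architecture written in Python/PyTorch, not Deep Instinct's proprietary platform; the point is simply that the network consumes whole files as byte sequences rather than a hand-picked feature subset, so no input data is discarded.

```python
# Minimal, illustrative sketch of a raw-byte malware classifier in the
# style of published research models (e.g., MalConv). This is an assumed
# toy architecture, NOT Deep Instinct's proprietary model.
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    def __init__(self, max_len=1_000_000, embed_dim=8):
        super().__init__()
        # 256 byte values plus one padding index, embedded as small vectors
        self.embed = nn.Embedding(257, embed_dim, padding_idx=256)
        # Wide, strided 1-D convolutions scan the entire byte stream
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=512, stride=512)
        self.gate = nn.Conv1d(embed_dim, 128, kernel_size=512, stride=512)
        self.fc = nn.Linear(128, 1)  # single benign-vs-malicious logit

    def forward(self, x):
        # x: (batch, max_len) integer byte indices in [0, 256]
        e = self.embed(x).transpose(1, 2)               # (batch, embed, len)
        h = torch.sigmoid(self.gate(e)) * self.conv(e)  # gated convolution
        h = torch.max(h, dim=2).values                  # pool over whole file
        return self.fc(h)                               # raw logit

# Usage: pad or truncate a file's bytes to max_len (padding index = 256).
model = RawByteClassifier()
fake_file = torch.randint(0, 256, (1, 1_000_000))  # stand-in for file bytes
prob_malicious = torch.sigmoid(model(fake_file))
```

Because the convolutions scan every byte and the pooling step summarizes the whole file, a model like this can learn structural "building blocks" of malicious files directly from data rather than relying on human-engineered signatures.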

Only deep learning has the power to match the speed and agility of this next-gen adversarial AI.

Learn more about AI Security and Deep Instinct: contact an expert.
