Last month, we were lucky enough to travel to the scorching heat of Las Vegas with other members of the Cyber Range team to attend the Black Hat and DEF CON 32 technical conferences. Throughout our time, we attended many different briefings and heard important messages from the keynote speakers. Here are some of the key takeaways from both conferences.

Secure elections

To kick off the keynote, Black Hat founder Jeff Moss brought along three leaders in the election security space, each with a unique background and approach to maintaining secure elections. One of the speakers was Hans de Vries, Chief Cybersecurity and Operations Officer at the European Union (EU) Agency for Cybersecurity (ENISA), where he helps strengthen the current state of cybersecurity across the EU Member States. His position is unique in that he works with numerous countries, each with its own concerns, issues, and population. Each of the keynote speakers was confident in their respective agency's ability to provide a secure election process.

Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA) and another of the keynote speakers, reported that her agency has been monitoring "Volt Typhoon," a hacking group tied to China's central government that has been using "living off the land" techniques to breach critical infrastructure on American soil and in other countries. These techniques abuse legitimate tools already present on a system, helping the hackers hide inside networks and bypass detection. Although the group has been active, there is no evidence that it has interfered with our election infrastructure.
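To make "living off the land" concrete: because these attacks reuse built-in administrative tools rather than dropping custom malware, defenders often look for legitimate binaries invoked with unusual arguments. Here is a minimal, purely illustrative sketch of that idea; the binary names and argument patterns below are common examples from public reporting, not a complete or authoritative detection list.

```python
# Illustrative sketch: flagging possible "living off the land" activity by
# spotting built-in Windows tools invoked with arguments attackers often abuse.
# Patterns are examples only; real detections are far more nuanced.

SUSPICIOUS_PATTERNS = {
    "certutil.exe": ["-urlcache", "-decode"],          # abused to download/decode payloads
    "powershell.exe": ["-enc", "-encodedcommand"],     # encoded commands hide intent
    "wmic.exe": ["process call create"],               # remote process creation
}

def flag_lotl(command_line: str) -> bool:
    """Return True if a process command line matches a known-abuse pattern."""
    lowered = command_line.lower()
    for binary, patterns in SUSPICIOUS_PATTERNS.items():
        if binary in lowered and any(p in lowered for p in patterns):
            return True
    return False
```

The point of the sketch is that nothing here is malware per se; the signal comes entirely from context, which is exactly why these techniques are so hard to detect.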

We've all seen the reports of foreign adversaries attempting to interfere with our secure election process, and the companies that build these machines have taken notice. Over the weekend of DEF CON 32, machines that are intended to be used in the upcoming 2024 presidential election were on-site so hackers attending the event could attempt to exploit any vulnerabilities, essentially providing vulnerability testing so the companies know what they need to remediate.

The future of election security is a critical area of focus as technology evolves and the threat landscape changes. With increasing concerns about cyberattacks, disinformation and foreign interference, safeguarding the integrity of elections is more important than ever. Additionally, there is a growing emphasis on auditing processes and ensuring the accountability of election infrastructure. As we look to the future, we are eager to see the innovations and collaborative efforts that will bolster election security, ensuring the democratic process remains resilient and trustworthy.

AI security

I recently had the opportunity to join the World Wide Technology AI Proving Ground team, focusing on AI security. With that in mind, going into Black Hat 2024, I switched my focus to briefings related to AI security. With the meteoric rise in popularity of all things AI, it should come as a surprise to no one that there were quite a few briefings on AI security at Black Hat.

After the opening keynote, I attended my first briefing, titled "Practical LLM Security: Takeaways From a Year in the Trenches." The talk started by level-setting on what an LLM is and which issues found in LLMs are actually security issues. The briefing separated the issues into three categories: plugin issues, which are the most serious; indirect prompt injection; and incorrect or undocumented trust boundaries.
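Indirect prompt injection is worth a quick illustration: instructions hidden inside retrieved content (a web page, a document, an email) end up in the prompt the LLM sees and can steer its behavior. The sketch below, with hypothetical function names and a deliberately crude keyword heuristic, shows the shape of the problem and one naive quarantine step; real defenses are considerably more involved.

```python
# Illustrative sketch of indirect prompt injection handling. The marker list
# and function names are hypothetical; keyword filters alone are easy to evade.
import re

INJECTION_MARKERS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the |your )?system prompt",
]

def looks_injected(document_text: str) -> bool:
    """Crude heuristic: flag retrieved text containing instruction-like phrases."""
    lowered = document_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_MARKERS)

def build_prompt(user_question: str, retrieved: str) -> str:
    """Assemble a prompt, quarantining documents that look like injections."""
    if looks_injected(retrieved):
        retrieved = "[document withheld: possible prompt injection]"
    return (
        "Answer using only the context below.\n"
        f"Context: {retrieved}\n"
        f"Question: {user_question}"
    )
```

The deeper issue, as the briefing's categories suggest, is the trust boundary: retrieved content is untrusted input, but it lands in the same channel as trusted instructions.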

When discussing mitigations for these issues, the recurring message was, "Unfortunately, this is just how [X] works..." This comes from the fact that ML models often don't work the way we wish they did, which means security engineers need to design around these limitations rather than hoping the system will behave as intended. The other key takeaway was to not forget the basics: while LLM security has brought new tools like guardrails, it's important not to forget basic tactics and techniques like user access controls when securing LLM systems.
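The "don't forget the basics" point can be sketched in a few lines: enforce the human user's permissions before executing any tool the LLM requests, rather than trusting the model's output to stay in bounds. The roles, tools, and function names below are hypothetical, and a real system would use a proper identity and authorization layer.

```python
# Minimal sketch, assuming hypothetical roles and tools: the authorization
# check keys off the authenticated user, never off what the model asked for.

USER_ROLES = {"alice": {"read"}, "bob": {"read", "admin"}}
TOOL_REQUIRED_ROLE = {"search_docs": "read", "delete_records": "admin"}

def execute_tool(username: str, tool_name: str) -> str:
    """Run an LLM-requested tool only if the human user is authorized for it."""
    required = TOOL_REQUIRED_ROLE.get(tool_name)
    if required is None:
        return "error: unknown tool"
    if required not in USER_ROLES.get(username, set()):
        return "error: permission denied"
    return f"ok: {tool_name} executed for {username}"
```

Even if a prompt injection convinces the model to request `delete_records`, the call fails unless the actual user already holds that permission, which is exactly the kind of basic control the briefing warned against skipping.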

If you have ventured into the world of AI security before, you may have heard of its two different aspects: security of AI, which focuses on securing AI systems, and security with AI, which focuses on utilizing AI to secure an organization and its systems. After attending a briefing on the first, I also wanted to attend one covering the latter, so I found a briefing on threat hunting with LLMs.

During this briefing, the presenters discussed an APT they discovered through a suspicious file, after which they tried to find more related samples. At first, they did this manually by searching other files for keywords. They then decided to leverage an LLM to search these files for them and found that it was very good at identifying other malicious files. If you know LLMs, you know that analyzing large amounts of text is exactly what they are designed to do, so it is no surprise that the approach succeeded here. While this may seem simple, the briefing had me walking away wondering how AI will impact other aspects of cybersecurity, including more complex threat-hunting scenarios. Maybe in a year's time, AI will do all of our threat hunting under the supervision of security engineers; who knows?
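The workflow described above can be sketched as a two-stage pipeline: a cheap keyword pass to narrow the candidate set, then an LLM judging each remaining file. This is my own rough reconstruction, not the presenters' code; `llm_classify` is a stand-in callable, where a real system would call an actual model API, and the keyword list is purely illustrative.

```python
# Rough sketch of keyword-first, LLM-second triage. Keywords and the
# classifier stub are hypothetical placeholders.
from typing import Callable

KEYWORDS = ["createremotethread", "virtualallocex", "beacon"]  # illustrative indicators

def keyword_pass(files: dict[str, str]) -> list[str]:
    """Manual-style first pass: keep files containing any suspicious keyword."""
    return [name for name, text in files.items()
            if any(k in text.lower() for k in KEYWORDS)]

def triage(files: dict[str, str],
           llm_classify: Callable[[str], bool]) -> list[str]:
    """Narrow with keywords, then let the pluggable LLM judge each candidate."""
    return [name for name in keyword_pass(files) if llm_classify(files[name])]
```

The keyword stage keeps the expensive model calls down to a handful of candidates, while the LLM stage catches the context a raw string match misses, which mirrors the manual-then-LLM progression the presenters described.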

While AI security is still very much in its infancy, there was still a ton of information at Black Hat about the different challenges and advancements being made in the world of securing AI systems. With how much AI is still exploding, I look forward to seeing even more briefings around AI security at Black Hat 2025!