Nothing pumps me up more than seeing blue teams get such a boon to their security arsenal; it makes me absolutely giddy. I remember when endpoint detection and response (EDR) first started rolling out circa 2013. You mean to tell me that I no longer need to maintain Sysmon on all my machines? I don't have to ship that Sysmon to a security information and event management (SIEM) system and build out and tune all these correlations? And I get to replace classical AV as well, furthering my preventative capabilities? One word: stoked. Fast forward to today, EDR is a mainstay on any security operations team, providing rich telemetry, high-fidelity alerting and a strong preventative capability to boot. 

With the advances in artificial intelligence (AI) and large language models (LLMs), I believe security operations is on the brink of yet another huge boon to its security arsenal. Advances in these technologies are enabling our incident responders in ways that have never been possible before. Detections are getting smarter, and AI can assist in triage, recommend containment actions, and translate natural human language into otherwise cumbersome queries. These examples all enable the security operation center's (SOC's) core mission: to detect and to respond. 

Traditional pain points in security operations

Here at WWT, we get the opportunity to talk with clients every day about their pain points around security operations. These pain points are all too familiar, as they are the same ones I have faced working in and leading SOCs over the last ten years. Heck, they even align with every industry report you may read. 

  • Cost. Traditional SIEMs get beat up here the most. Their pricing models essentially penalize you for pulling in more data.
  • Too many alerts. Alerts aren't high fidelity. Teams spend too much time chasing things that aren't important.
  • Custom content creation and maintenance. From detection engineering to traditional SOAR, maintaining these has proven to be expensive in both labor costs and technology costs.
  • Lack of cross correlation and so many portals. You get a tab! You get a tab! EVERY TOOL GETS A TAB!!! Responders need to open twelve different tabs to resolve a single alert.

Don't expect the SOC of the future to fix all these concerns overnight. However, it can be implemented in stages that make sense for your business and your pre-existing technology stack.  

The SOC of the future 

*Please note that this is not an exhaustive list of OEMs

The architecture overview of the SOC of the future heavily focuses on technology. However, all security teams must always remember it is the people and the processes that enable the technology. Where we currently stand, there isn't a tool or platform that is "set it and forget it." Let us first discuss the uses of AI within security operations. 

AI + SOC = Profit? 

I believe it is time we do some level setting, as there is a bit of confusion around AI within a SOC. Security operations is NOT at a point where the AI SOC is functional. The market just isn't there yet (despite what your targeted LinkedIn ads say), nor am I even sure of the demand for such a thing. AI within the SOC can best be categorized as an AI-assisted SOC analyst or augmented AI analyst. Currently, security chatbots and co-pilots provide much of this capability. 

If trends continue, I also fully expect machine learning (ML) to assist in what used to be called user and entity behavior analytics (UEBA), continuing to expand novel alerting or automatically enriching datasets for incident responders. In the last 10 years, there has been a lot of growth in this area. Traditional SIEMs used standard deviation to find anomalies within various datasets, including behavioral profiles, data exfil, etc. Then, vendors like Exabeam and Gurucul started incorporating their own ML models to address the same anomalies. Finally, many modern EDR and identity detection and response (IDR) tools have incorporated these "deviation" alerts into their platforms. These models will continue to get faster and more accurate.  

Finally, we are starting to see AI and LLMs being used even further down the technology stack in places like SOAR and hyperautomation. AI-assisted playbook creation is becoming standard across the SOAR industry. Additionally, the growth of hyperautomation players like Torq and Tines shows great promise with assisted incident triage capabilities.  

These are the areas where engineers and responders spend a lot of their time. I believe most teams could benefit almost immediately from AI assistance, but to be clear, this is still not an autonomous or fully AI-powered SOC. We must keep these limitations in mind as we adopt the SOC of the future. 

SOC of the future: Modularity and adaptability 

This little soliloquy acts as my get-out-of-jail-free card. I cannot possibly cover every single circumstance across every single client and piece of technology. The aim of WWT's SOC of the future is to outline the major pieces of the puzzle, highlighting where or why we make certain decisions to inform the various possibilities of a client's end goal. In a fantasy world with an unlimited budget, the talent to maintain data pipelines, and the ability to invest in my own AI pipelines, create detections and maintain infrastructure, I have a very good idea of what that looks like. However, that has never been an option in my 15 years of experience. WWT is here to talk about the art of the possible, your specific mission-critical needs and how we can come up with a solution together. 

Enough backpedaling, let's start outlining the architecture. 

Data pipeline 

Let us start with data. Data is the fundamental component of security operations. Every event, every log, every alert is a piece of the data pipeline. Generally, the hard part is wrangling all the data to help us make informed decisions (this statement applies to many things outside of security).   

The data pipeline acts as a highly flexible processing platform. It should allow users to collect, filter, enrich, transform and route data from a multitude of sources to various destinations.  

Storage 

Things start getting fun here, as there are countless possibilities and rabbit holes to travel down. All this data needs to go somewhere so we can enrich, process and correlate. The main use case we are seeing from WWT clients is to begin transitioning away from traditional SIEMs. There are surely other possibilities, but these are the two biggest options we see in the industry: cloud data lakes and next-generation (NG) SIEMs. Regardless, this storage must be highly categorized and able to address various schemas to feed our detection engine. 

  • Cloud data lakes: These data lakes are agnostic about the types of data you can feed them. They provide near-unlimited capacity and scalability for storage. Cloud data lakes excel at handling diverse types of datasets, making sense of the data for various uses in reporting and analytics.  
  • NG-SIEMs: These are closed data lakes offered by many of the leading security OEMs. Much like a traditional SIEM, you ship the data over, use the native OEM detection logic, and build out any traditional correlation searches/detections as deemed necessary. I used the term "closed" here intentionally. These next-gen SIEMs allow for little configuration when it comes to creating your own AI or ML pipelines. You rely on the OEM's intellectual property here. This is positive in many cases, as the security OEMs have decades of experience in creating these pipelines, keeping up with current threat TTPs, etc. However, you will lose some of the customization and flexibility.   

We do a deep dive down the data pipeline and storage rabbit hole here: SIEM Overload to Smart Security: The Power of Data Pipeline and Modern Storage

Analytics engine 

I like to call the analytics engine the heart of the SOC. This is what informs responders of a possible intrusion. We need higher fidelity alerts, and we need rich enrichments for those alerts. I don't believe we will ever leave the "classic correlation search" behind, but I do believe the days of "Encoded PowerShell Detected" (with zero other context or enrichment) should be long gone. The fact is, for the foreseeable future, we will still need to tune, create custom detections to fit our environment and mission, and always be on the lookout to provide more enrichments and context to the alerts that our incident responders come across.  

The analytics engine will largely depend on your storage or data platform. Those sticking with NG-SIEMs (the closed security data lakes described above) will have a large OEM dependency. XSIAM, CrowdStrike's NG-SIEM and SentinelOne's AI-SIEM will have rich alerting for their native data sources (EDR, IDR, cloud, etc.).  

For those going with a traditional data lake storage medium, Anvilogic or similar technologies may make sense. Here, the detection logic will sit on top of your storage solution. Splunk, Snowflake and Azure are supported within Anvilogic. Each vendor may have various support for the many possible storage mediums available. This is something to keep in mind when evaluating options in transitioning to the SOC of the future. 

Each analytics engine will be different for each security team. Based on environment, some detections (such as those highly tuned EDR/IDR alerts using the OEM's own analytic engine) may go straight to the queue to be triaged. Others may not be real-time and require data processing before being sent to the queue. Regardless of the architecture, the engine must be responsible for handling various types of detections. 

The analytics engine will also be responsible for matching current threat intelligence to identifiers within your environment. Analytics engines do this by normalizing a multitude of data sources into a common data model. This allows us to easily search across disparate data sources and eventually build out models for detecting anomalies.   
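The mechanics above can be sketched in a few lines of Python. The two source schemas, the shared field names and the tiny intel feed are all hypothetical stand-ins; real engines use a published data model, but the normalize-then-match shape is the same.

```python
# Sketch of threat-intel matching over a common data model. The field names
# and the two-entry "intel feed" below are illustrative assumptions only.

INTEL_IOCS = {"198.51.100.7", "evil.example.com"}  # hypothetical threat feed


def to_common_model(source: str, raw: dict) -> dict:
    """Normalize one event from a source-specific schema into shared fields."""
    if source == "firewall":
        return {"src": raw["src_ip"], "dest": raw["dst_ip"], "indicator": raw["dst_ip"]}
    if source == "dns":
        return {"src": raw["client"], "dest": raw["query"], "indicator": raw["query"]}
    raise ValueError(f"unknown source: {source}")


def match_intel(events: list[dict]) -> list[dict]:
    """Return every normalized event whose indicator appears in current intel."""
    return [e for e in events if e["indicator"] in INTEL_IOCS]
```

Because firewall and DNS events now share an `indicator` field, one search covers both sources, which is exactly what makes cross-source correlation and later anomaly modeling tractable.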

If you're interested in reading more, we do a deep dive into alerting methodology and notable OEMs for your analytics engine here: A Practitioner's Guide: Detections within Security Operations

AI in the SOC 

We have already discussed AI briefly, but now let's do a deeper dive. First, let's knock out some definitions, as AI and automation are often used as synonyms, and they really shouldn't be.  

  • Automation executes a predefined task automatically. Typically, Boolean logic will be used to deliver fast and reliable outcomes. However, automations cannot adapt to new scenarios automatically; a new workflow must be created, or a previous workflow must be modified for new functionality. 
  • AI is designed to perform non-deterministic tasks autonomously. It's highly adaptive to new scenarios; however, it is less reliable and may produce undesired or unpredictable outcomes. 

Let's look at a typical security operations scenario: Blocking an IP on the firewall. 

  • Automation: Tipper provided (typically some type of alert). Enrichment performed: What is the reputation of the IP address? Is it on other well-known published block lists? This is where the automation may branch based on predefined variables. If the IP address has a negative reputation and is already on various block lists, then we may feel comfortable blocking the IP automatically using an API. If only one of the two conditions is met, we may only block if the reputation meets a certain threshold, say 90/100 maliciousness. Finally, if neither condition is met, no automatic blocking takes place; kick it to a human analyst.
  • AI: Tipper provided or a model creates its own tipper (using LLMs to discover anomalies or abnormal traffic patterns). Now, the AI will take action based on its model. Generally, this model will use similar steps to determine the maliciousness of the IP in question (threat intel, WHOIS, etc.). Based on the model's determination, it will take actions according to its operating configuration, which can range anywhere from completely autonomous (taking actions without any intervention) to kicking to a human for decision-making.

This is where I typically get nervous. How well do you understand the underlying AI models making maliciousness determinations? How well do you understand the models that take remediation actions? Things get fuzzy here, and they get fuzzy fast, especially if you intend to allow the AI to be autonomous. The security vendors providing AI augmentation will have a fun liability conversation if something should ever go awry. I predict, as AI is used more and more, we as security practitioners will continue to get more and more comfortable. Humans, after all, make mistakes as well. In the end, it will all come down to a balance of risk appetite and efficiencies gained. 

We already see AI being incorporated into various logic used in detections, chatbots/security co-pilots, incident summarization and analysis, and assisted playbook creation. The truth is most major OEMs have been incorporating AI/ML into many of their products already. Often, this is black box technology with little ability to configure. Hopefully, OEMs in the coming years open up some of their models so they can be tweaked as a team sees fit. 

Currently, if you wish to use advances in ML for the various security applications above, you have two choices: completely leverage an OEM's ability or build your own. Building your own is absolutely feasible, and WWT has vast experience in doing so, but there is a technical uplift there that many organizations may not (yet) have the technical expertise to execute.  

As of March 2025, the biggest things I am seeing in this space are incident summarization, suggested remediation actions and query translation, turning natural human language into any of the many structured query languages available to a responder. These are all wonderful things. However, most of this can be categorized into the classical "Tier One" workflows. Admittedly, this is where cyber sees the most churn and burnout, thus I would expect immediate relief for most security teams. 

Automation in the SOC 

Finally, let's talk about automation within security operations. SOAR has held a presence in both the market and in the toolbox for SOCs for the last few years with security automations dating back further. However, the OEM landscape is changing quickly within the security automation space. First, let us discuss what we should expect from any automation product:

  • Scalable automation and orchestration
  • Intuitive case management
  • Simple and exhaustive API integrations for security tools

Most SOARs also double as case management solutions for security teams, which brings some unique use cases into the requirement list. Many teams will use SOAR for internal security case management and route external tickets through the rest of IT's case management, such as ServiceNow or Jira. One place where both SOAR and hyperautomation lack is case management and custom reporting, though major OEMs are making strides in this area. 

To add more buzzwords to the security operations industry, now the term hyperautomation is coming up more and more. The hyperautomation players distinguish themselves from legacy SOAR with: 

  • Cloud-native architecture
  • AI assistance for triage and playbook creation
  • The ability to handle unprocessed events to automatically identify real threats (something legacy SOAR has struggled with)

The SOAR/hyperautomation market landscape is evolving fast. Legacy SOARs are already closing the gap with hyperautomation players. However, these hyperautomation players are lacking in case management maturity, where many legacy SOAR products shine. There are many strong vendors in the SOAR/hyperautomation industry right now. The best thing to do is to: 

  • Decide how much AI augmentation is worth to your team and enterprise
  • Capture how important case management is within a SOAR
  • Always check available out-of-the-box integration support for your current technology stack to understand the time-to-value in selecting a specific OEM.   

We share more content around automation/hyperautomation, where we discuss building your first automation playbook and the genesis of hyperautomation, here: A Practitioner's Guide: Automation within Security Operations

Putting it all together 

The SOC of the future is a fusion of platform and methodology. It is an entire security operations methodology for improving detection and response for incident responders and analysts responsible for continuous monitoring. Is there a tooling perspective? Yes. But constant tuning and refinement are human processes. You may change out any of the critical technical components, but the end goal should always be the same:

  • Give your security operations team better alerts, faster.
  • Use tribal knowledge already in place, hyperautomation and numerous APIs to enrich any detection within your alert library.
  • Utilize AI/LLMs where it makes sense for your team and organization, starting with automatic incident triage and query translation for your Tier 1 team.
  • Utilize AI and automation to take immediate remediation efforts in specific situations, dictated by your risk acceptance.
  • Maintain a streamlined platform with less "query pivot - query pivot" for responders. 

The tangible benefits of doing so include: 

  • Cost savings from traditional SIEM "backbone"
  • Improved mean time to detect (MTTD) and mean time to respond (MTTR) 
  • Higher quality alerts resulting in less burnout for responders
  • Increased response visibility and easier triage with the assistance of security co-pilots
  • Increased quality of life for your SOC

Securing the future: Let our experts guide you 

In today's threat landscape, we must architect an entire platform and methodology that provides centralized visibility, advanced threat detection and compliance, AI-augmented triage, and reliable security automations to equip the defenders facing these threats. We are happy to assist in your cyber journey. 

If you are concerned about protecting against cyber threats or want to learn more about WWT's SOC of the future and the market around it, contact us at GSASecOps@wwt.com. 