*Guest contribution from F5's Chase Abbott, Principal Technical Marketing Manager*


There are enough inflated expectations and stories around artificial intelligence (AI) to make everyone feel like they'll never be able to keep pace. And it makes sense! There are many great stories about how AI is improving customer-facing products, streamlining internal operations, and automating tasks that give us back crucial time.

However, enterprises cannot and should not adopt every use case they read about. Strategically identifying where to implement AI will sharply focus resources, help rationalize spending decisions, and reduce what could otherwise be a massive expansion of complexity and associated security risk within your IT environment.

Enterprises working with well-versed technology partners should ask three primary questions:

  1. Is AI an integral part of our competitive differentiators for the business?
  2. Is AI an operational business enabler for internal workflows and processes?
  3. What is our enterprise's internal AI maturity level?

Answering these and other seemingly basic questions can help accelerate AI initiatives and provide laser-like operational focus.  

Along the way, you shouldn't be surprised to find that existing technology partners may already have invested a significant amount of time and research into making AI adoption easier. Breaking down the overall AI lifecycle framework into manageable components can help enterprises and their technology partners achieve their AI goals.

Hidden complexities of the AI lifecycle

Let's start with a high-level view of an AI lifecycle (see Figure 1 below), keeping those previous three questions in mind. This gives us an idea of where to target initial and future resources. 

Fig. 1:  Full lifecycle of AI

Each box above deliberately oversimplifies what can be a moderate-to-significant investment in infrastructure and operations, applied regionally or globally depending on how AI maps to your business goals. Each of these service areas deserves a detailed discussion on building high-performing, resilient AI infrastructure while reducing the unique risks AI adds to an already complex, distributed modern application topology.

All of this should be preceded by a series of big technology discussions, and not enough people are having them before making investments. AI requires levels of computing power, infrastructure performance, and resiliency previously reserved for governments, research institutions, and the largest enterprises working in highly computational fields (e.g., fluid dynamics, nuclear energy, molecular biology). AI is flooding data centers with power and cooling demands that seem foreign to traditional operational teams.

The good news is that much of how these applications are built relies on current application practices and design:

  • APIs: AI services are primarily consumed through APIs. These APIs have more stringent performance thresholds, but this is a known quantity.
  • Containers and microservices: Modern, container-based microservices design allows services to be placed close to the computing resources that make AI performant, a benefit to your customer-facing applications.
  • Existing infrastructure: Your enterprise might already be using technology in existing applications and infrastructure that can provide the performance and services AI demands (and that your operational teams desperately need).

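To make the first point concrete, here is a minimal sketch of consuming an AI service through an API with a client-side latency budget. The endpoint URL and model name are hypothetical placeholders, and the OpenAI-style payload shape is an assumption; substitute your provider's actual API.

```python
import json
import time
import urllib.request

# Hypothetical endpoint and model name for illustration only;
# substitute your provider's actual values.
API_URL = "https://ai.example.internal/v1/chat/completions"
LATENCY_BUDGET_S = 2.0  # AI APIs often carry stricter latency targets

def build_payload(prompt: str, model: str = "example-model") -> dict:
    """Build an OpenAI-style chat payload; many AI services accept a
    similar JSON-over-HTTPS request shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def call_ai_service(prompt: str) -> str:
    """Send the request and enforce the latency budget client-side."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    start = time.monotonic()
    with urllib.request.urlopen(req, timeout=LATENCY_BUDGET_S) as resp:
        body = json.load(resp)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        raise TimeoutError(f"AI call took {elapsed:.2f}s, over budget")
    return body["choices"][0]["message"]["content"]
```

The point is less the specific call and more that the contract is familiar: JSON over HTTPS, with timeouts and budgets your teams already know how to operate.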
Complexity will increase, but planning for and managing that complexity is something mature operational teams, working with the right technology partners, can handle at any scale.

Granular security and infrastructure in the AI lifecycle

OWASP's Top 10 for Large Language Models (LLMs) provides a great starting point for identifying risks and mitigation practices for each of the services within the AI lifecycle. These risks do not exist at a single integration endpoint but are distributed and repeated across several service stages of an AI application. While each requires thoughtful identification and remediation, the security industry is doing a good job of keeping pace with AI's evolution. In fact, the industry is mostly aligned on where risk exists and how to mitigate it at a conceptual level.

The same cannot be said for architecture, where every deployment is unique across a widening list of variables. Identifying how AI will impact your existing infrastructure should be done with your technology partner and the vendors you rely on for critical infrastructure decisions. Going back to our three initial questions, you can start identifying your top focus areas with targeted questions like:

  • Are my GPUs being utilized efficiently?
  • Are my control and integration interfaces bottlenecking performance?
  • How many independent tools did I need to onboard to build this AI application, and what are they costing me?
  • Will our data ingestion model cause any compliance issues?
  • Are our customers close enough to our inference points to meet their expectations?
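The first question above is one you can start answering today with tooling you likely already have. Here is a minimal sketch that parses per-GPU utilization from `nvidia-smi` and flags underused devices; it assumes NVIDIA hardware and drivers, and the 30% idle threshold is an arbitrary placeholder to tune for your workloads.

```python
import subprocess

def parse_gpu_utilization(csv_output: str) -> list[int]:
    """Parse output of `nvidia-smi --query-gpu=utilization.gpu
    --format=csv,noheader` (one "NN %" entry per line) into
    per-GPU utilization percentages."""
    utils = []
    for line in csv_output.strip().splitlines():
        utils.append(int(line.strip().rstrip("%").strip()))
    return utils

def flag_idle_gpus(utils: list[int], threshold: int = 30) -> list[int]:
    """Return indices of GPUs running below the utilization threshold;
    the default threshold is illustrative, not a recommendation."""
    return [i for i, u in enumerate(utils) if u < threshold]

def sample_utilization() -> list[int]:
    """Query live utilization; requires NVIDIA drivers and nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_utilization(out)
```

A snapshot like this won't tell you *why* a GPU is idle, but trending it over time is often the cheapest first signal that expensive accelerators are waiting on data or control-plane bottlenecks.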

Every component of the AI lifecycle framework deserves thoughtful technology partnerships to help investments deliver quick returns. API security might be top of mind for security teams, but today, data speed and time to compute resources may deserve equal attention. How we build and store the data behind enterprise AI applications is only going to get faster and more complicated.

Leverage existing expertise

Knowledgeable technology providers will have experience architecting a complete AI lifecycle framework. Large enterprises might require heavy investment in LLM training in centralized locations. More nimble, distributed applications might rely only on distributed inference compute resources and a combination of third-party models with regional retrieval-augmented generation (RAG) services.

Providing the right resources for AI can be intimidating, but it's a good problem to have. Enterprises can start to manage the scale by dividing and addressing the AI lifecycle framework components individually. By addressing discrete security and architectural goals with the right technology partner, the formula for duplicating success will become apparent. 

Some companies will always be at the forefront of idea generation. But for many companies, it's smarter to focus on integrating the practical lessons learned from modern AI adoption, with security and customer goals as the measures of success. Your partners are there to help with these goals.

Learn more about AI solutions and F5.

Connect with a WWT Expert.
