Today, IT/ICT departments are under pressure to deliver infrastructure that can handle the computational demands of modern AI workflows for their business unit customers while meeting enterprise requirements for manageability, reliability, and cost-effectiveness. Meeting this challenge requires a powerful yet practical approach to infrastructure modernization at scale.

The HPE Cray XD product suite represents the evolution of supercomputing technology to enterprise-ready AI infrastructure, combining Cray's heritage of extreme performance with HPE's enterprise computing expertise.

Central to HPE Cray XD is what WWT architects call its "framemain" approach—a play on "mainframe" that reflects a fundamental shift in computing philosophy. This innovative architecture combines supercomputing performance with enterprise infrastructure flexibility, providing the foundation for advanced AI workflows while maintaining enterprise-grade manageability, agility, and scalability. 

Unlike traditional mainframes that lock organizations into proprietary ecosystems, the "framemain" concept delivers mainframe-like integration and performance while maintaining openness and interoperability. This allows businesses to start small and grow their AI capabilities according to their unique needs, budgets, and timelines, effectively bridging the gap between traditional enterprise computing and specialized AI infrastructure requirements.

The Foundation: HPC Powers Enterprise AI

Before a more in-depth discussion of how HPE Cray XD enables this flexible approach to enterprise AI, it's essential to understand the fundamental role that high-performance computing (HPC) plays in powering modern AI capabilities at scale. 

HPC serves as the essential backbone of artificial intelligence in enterprise, academic and government environments. It enables organizations to process massive data volumes and execute advanced algorithms that drive AI innovation across industries.

This relationship works both ways: as AI capabilities have expanded, they have become a predominant driver of enterprise HPC investment, creating both opportunities and challenges for IT organizations.

With AI capabilities rapidly moving from theoretical to practical applications, organizations across industries are racing to implement strategies that deliver measurable business impact. Yet the infrastructure requirements for these AI workflows are fundamentally different from traditional enterprise computing needs.

The Enterprise AI Infrastructure: An "Hourglass" Challenge

Enterprise AI has moved beyond the adoption tipping point into strategic-imperative territory. The scale of investments reflects this shift: IDC's January 2025 Worldwide AI and Generative AI Spending Guide forecasts global AI spending will more than double by 2028, reaching $632 billion.

This growth is creating significant infrastructure and data center challenges. Traditional enterprise infrastructure, after all, wasn't designed for the unique demands of AI workflows, particularly during training phases. AI models require massive parallel processing capabilities, specialized accelerators, and high-bandwidth storage and interconnects that strain conventional IT/ICT skills and data center facilities. 

This balancing act creates what WWT sees as an hourglass challenge: business requirements at the top, technical requirements at the bottom, and a critical narrow middle through which use cases and performance needs must flow. This visualization helps illustrate why many organizations struggle to translate ambitious AI business initiatives into appropriate technical infrastructure investments.

Industry research corroborates the bottleneck WWT has observed in the field with customers. For example, Gartner's 2024 AI Infrastructure Market Guide notes that organizations building AI capabilities frequently underestimate their infrastructure requirements, leading to performance bottlenecks, cost overruns, and failed initiatives. The guide recommends that organizations adopt purpose-built AI infrastructure platforms that scale incrementally while maintaining enterprise management characteristics.

The HPE Cray XD Advantage: Enterprise-Class High-Performance Computing

At this critical intersection sits the HPE Cray XD product line – an evolution of supercomputing technology designed specifically for enterprise AI environments. Drawing on Cray's heritage of building the world's most powerful supercomputers while incorporating HPE's decades of enterprise computing expertise, the Cray XD line represents a novel approach to AI infrastructure that balances raw performance with practical enterprise considerations. 

The Cray XD line achieves this balance through its innovative "framemain" architecture, which delivers supercomputing performance within a framework that integrates seamlessly with existing enterprise management systems and operational processes.

The HPE Cray Story: Leveraging Supercomputing Prowess

The Cray name has been synonymous with supercomputing for decades. Founded by Seymour Cray in 1972, the company pioneered vector processing and other technologies that pushed the boundaries of computational performance. Cray supercomputers have powered scientific breakthroughs, national security applications, and other workflows that demanded the absolute pinnacle of computing capability.

When HPE acquired Cray in 2019, it brought together two complementary technological traditions: Cray's expertise in extreme-performance computing and HPE's long history of building enterprise-class systems. The merger positioned HPE to address the growing enterprise need for AI-capable infrastructure that didn't sacrifice enterprise characteristics.

Today, HPE offers two fundamental Cray product lines: the EX and the XD. The Cray EX line continues the tradition of building true supercomputers, the massively parallel systems that power national laboratories and can cost hundreds of millions of dollars. The Cray XD line, however, represents something new: the adaptation of supercomputing principles to enterprise environments, with a focus on manageability, scalability, and integration with existing IT/ICT investments.

Industry analysts concur with WWT's assessments. IDC states in its 2024 MarketScape for Enterprise AI Infrastructure report that "HPE's acquisition of Cray has allowed it to transfer decades of supercomputing expertise into enterprise-grade systems that maintain performance while addressing practical operational considerations that were previously secondary in HPC environments."

The Cray XD product line brings supercomputing performance capabilities into an enterprise-ready form factor. At its core, the XD line leverages the same high-performance processing technologies found in world-class supercomputers but integrates them within a framework designed for enterprise IT/ICT environments.

The XD670 model, in particular, offers an exceptional balance of performance and practicality. Featuring eight high-performance GPU accelerators, the XD670 delivers the computational power needed for the most demanding AI workflows while maintaining compatibility with standard enterprise management frameworks. It combines that performance with the reliability, availability, and serviceability expected of enterprise computing.

This enterprise integration is a crucial differentiator. Unlike solutions designed primarily for performance at the expense of operational considerations, Cray XD systems use the same management tools IT teams already use for their HPE ProLiant environments. Organizations can leverage existing skills and processes rather than building siloed management systems for their AI infrastructure: for a customer with an existing HPE footprint, a Cray XD system is managed much like any other HPE server, using the skill sets their people already have.

The 2024 Gartner Critical Capabilities for Enterprise AI Infrastructure report recognizes this advantage, giving HPE high marks for operational efficiency. It highlights how HPE's integration of Cray technology with its established iLO (Integrated Lights-Out) management framework provides a seamless operational experience, distinguishing it from competitors whose AI systems require specialized management tools.
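To make this "same tools" point concrete: HPE iLO exposes the industry-standard DMTF Redfish REST API, so a monitoring script an IT team already runs against ProLiant servers can poll a Cray XD node through the same endpoints. The sketch below builds a standard Redfish chassis-power URL and parses a Redfish-shaped response; the hostname and the sample wattage value are hypothetical, not taken from a real system.

```python
import json

# DMTF Redfish service root, as exposed by HPE iLO (and most BMCs).
REDFISH_ROOT = "/redfish/v1"

def chassis_power_url(base_url, chassis_id="1"):
    """Build the standard Redfish URL for chassis power telemetry."""
    return f"{base_url}{REDFISH_ROOT}/Chassis/{chassis_id}/Power"

def parse_power_reading(payload):
    """Extract the consumed-watts reading from a Redfish Power resource."""
    return payload["PowerControl"][0]["PowerConsumedWatts"]

# Redfish-shaped sample response (illustrative value, not a measurement).
sample = json.loads('{"PowerControl": [{"PowerConsumedWatts": 5400}]}')

# Hypothetical iLO hostname for an XD node; in practice this request would
# be made with an authenticated HTTPS client against the real BMC address.
url = chassis_power_url("https://ilo-xd670.example.com")
watts = parse_power_reading(sample)
```

Because the schema is the Redfish standard rather than a vendor-specific API, the same parsing code works whether the chassis behind the URL is a ProLiant or a Cray XD node.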

The "Framemain" Approach: Open Integration vs. Rigid Ecosystems

Diving deeper into the "framemain" concept helps explain why this approach represents such a significant departure from traditional models. While we've established that this concept combines supercomputing performance with enterprise flexibility, its most transformative aspect may be its commitment to openness instead of ecosystem lock-in.

HPE Cray XD provides the integration benefits typically associated with vertically integrated systems but without forcing customers into closed ecosystems that limit future agility at scale.

Cray XD was designed with future-proofing as a foundational principle, enabling organizations to adapt and evolve their AI infrastructure as technologies advance and new requirements emerge.

With Cray XD, organizations can maintain the freedom to choose the networking technologies, storage solutions, and software frameworks that meet their specific needs. This approach allows enterprise organizations to leverage the OEM/ODM technologies and ecosystems they're already comfortable with rather than being forced to adopt unfamiliar tools and develop new skill sets. IT teams can continue using their preferred management systems and operational workflows, significantly reducing the learning curve and implementation friction. 

This flexibility also extends to the business model, with options for traditional capital (CapEx) purchases or consumption-based operating-expense (OpEx) approaches through HPE GreenLake.

Scalability and Business Model Flexibility Benefits

AI initiatives rarely remain static. As organizations move from pilot projects to production deployments, their infrastructure requirements grow and evolve. The Cray XD line was built with this evolution in mind, offering scalability in multiple dimensions.

For example, the Cray XD670 is the heavy-duty tool for the most demanding computational challenges, like reaching for the right hammer when a powerful impact is required. However, organizations rarely need maximum computing power for every task. They benefit from a scalable, dynamic solution that can adapt to the full spectrum of requirements, from the most computationally intensive training workloads to simpler inferencing use cases that require a lighter touch. This flexibility ensures resources are matched to each specific need.

This scalability extends in multiple directions. Organizations can scale horizontally by adding more systems, vertically by upgrading components within systems, or across different compute resources based on evolving workload characteristics.

The business model flexibility provided by HPE GreenLake adds another dimension to this scalability. Organizations can adopt consumption-based pricing for their Cray XD systems, paying for what they use rather than making large upfront capital investments. This approach aligns costs with value creation and helps organizations manage the financial risks of AI investments.
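The trade-off between the two models can be illustrated with simple arithmetic. The sketch below compares a straight-line amortized purchase against consumption-based pricing at different utilization levels; every dollar figure, rate, and utilization value is a hypothetical assumption for illustration, not HPE or GreenLake pricing.

```python
# Illustrative CapEx vs. consumption-based (OpEx) comparison.
# All figures below are hypothetical assumptions, not vendor pricing.
CAPEX_PURCHASE = 1_000_000      # assumed upfront purchase price ($)
AMORTIZATION_MONTHS = 36        # assumed straight-line amortization period
OPEX_RATE_PER_GPU_HOUR = 2.50   # assumed consumption price per GPU-hour ($)
GPUS = 32                       # assumed cluster size
HOURS_PER_MONTH = 730           # average hours in a month

def opex_monthly_cost(utilization):
    """Monthly consumption cost at a given average GPU utilization (0-1)."""
    return GPUS * HOURS_PER_MONTH * utilization * OPEX_RATE_PER_GPU_HOUR

amortized_monthly = CAPEX_PURCHASE / AMORTIZATION_MONTHS

# At low utilization (pilot phase), paying per use costs less per month
# than the amortized purchase; at sustained high utilization, ownership wins.
low_util_cost = opex_monthly_cost(0.25)
high_util_cost = opex_monthly_cost(0.90)
```

Under these assumed numbers, consumption pricing is cheaper during low-utilization pilot phases, while sustained high utilization favors ownership, which is exactly why aligning the model to where an organization sits on its AI journey matters.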

This flexibility also supports the growing trend of repatriation—the movement of workflows from public cloud environments back to on-premises or co-located infrastructure. WWT sees many customers repatriating from the cloud back into their data centers for reasons such as data sovereignty, data gravity, and governance concerns.

HPE's Power and Cooling Advantages

As AI workflows drive higher compute densities, power and cooling become critical concerns. The computational demands of modern AI systems generate significant heat that must be effectively managed to ensure reliability and performance.

HPE's enterprise heritage gives it a distinct advantage in addressing these challenges. The company offers integrated cooling solutions designed specifically for high-density AI environments, including direct liquid cooling options for the most demanding deployments.

Today's AI infrastructure deployments also present unprecedented challenges for data center facilities, with power densities often exceeding 30 kW per rack, far beyond traditional enterprise computing requirements.
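A back-of-envelope estimate shows how quickly dense GPU servers push past that figure. The per-GPU wattage, host overhead, and servers-per-rack values below are illustrative assumptions typical of current high-end accelerators, not HPE specifications.

```python
# Back-of-envelope rack power estimate for dense 8-GPU servers.
# All figures are illustrative assumptions, not HPE specifications.
GPU_WATTS = 700              # assumed draw per high-end accelerator
GPUS_PER_SERVER = 8          # e.g., an 8-GPU node in the XD670 class
HOST_OVERHEAD_WATTS = 2000   # assumed CPUs, memory, NICs, fans per server
SERVERS_PER_RACK = 4         # assumed density permitted by facility limits

server_watts = GPU_WATTS * GPUS_PER_SERVER + HOST_OVERHEAD_WATTS
rack_kw = server_watts * SERVERS_PER_RACK / 1000

print(f"Per-server load: {server_watts} W")
print(f"Estimated rack load: {rack_kw:.1f} kW")
```

Even at a modest four servers per rack, these assumptions land above 30 kW, which is why traditional air-cooled enterprise rows (historically provisioned for single-digit kW per rack) need liquid cooling or facility upgrades for AI deployments.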

HPE addresses these facility integration challenges with comprehensive planning tools and modular deployment options that accommodate varying power grids, floor loading capacities, and cooling infrastructures. Unlike some vendors that leave facility requirements to the customer, HPE's approach integrates rack-level power distribution, structured cabling systems, and cooling delivery into a cohesive design that minimizes retrofit requirements and accelerates deployment timelines.

This holistic view of infrastructure means organizations can avoid costly data center upgrades or the construction of specialized facilities, allowing AI initiatives to move forward without being stalled by physical limitations. HPE delivers cooling technology alongside computing systems, and having a single partner that provides complete solutions for high-performance computing and AI initiatives through either CapEx or OpEx models is invaluable.

HPE's integrated strategy also addresses sustainability requirements while delivering certified cooling solutions, creating a seamless experience that addresses the full spectrum of infrastructure needs. As organizations face increasing pressure to reduce their environmental impact, efficient cooling technologies are critical to responsible AI infrastructure strategies.

The Implementation Journey with WWT: From Traditional Computing to AI Excellence

"We know we need AI capabilities, but where do we begin with the infrastructure? Our existing systems weren't designed for these workflows, and we can't afford downtime or disruption to current operations."

This common sentiment captures the implementation challenge many organizations face. As companies transition from traditional computing environments to AI-powered solutions, they encounter a complex journey beyond hardware selection. WWT guides organizations through this transformation, helping them navigate unfamiliar technologies, bridge skill gaps, and integrate new capabilities without disrupting existing operations.

Organizations typically struggle with challenges such as:

  • How do we right-size our infrastructure investment to match our AI goals?
  • How can we integrate AI systems with our existing environment?
  • What skills do our teams need to develop to support these new workflows?
  • How do we validate performance before full-scale deployment?

The WWT Approach to AI Implementation

WWT's comprehensive implementation methodology begins with thoroughly assessing your current computing resources and AI objectives. This assessment considers immediate needs and future growth to ensure your selected infrastructure can adapt as requirements evolve.

Integration planning is particularly critical, as AI systems rarely operate in isolation. WWT's expertise ensures these systems connect seamlessly with data sources, interact with existing applications, and fit within established management frameworks. The enterprise design of HPE Cray XD systems, part of WWT's solutions portfolio, facilitates this integration across diverse IT environments.

The integration is especially streamlined for organizations already invested in HPE ecosystems, with familiar management interfaces that reduce learning curves and accelerate deployment. For those with different technology stacks, WWT's platform-agnostic approach ensures smooth integration with existing investments and processes, regardless of your current infrastructure vendor. This flexibility allows all organizations to leverage their existing investments while gaining the performance advantages of advanced AI infrastructure.

Organizational readiness represents another critical success factor. WWT helps teams develop the appropriate skills to deploy, manage, and optimize AI infrastructure effectively. For those in HPE environments, the familiar management interface of the Cray XD systems dramatically reduces this learning curve.

From Assessment to Optimization with WWT

Organizations transitioning to AI-powered infrastructure face challenges beyond hardware selection. WWT guides this transformation through a comprehensive methodology that addresses both technical integration and organizational readiness.

WWT's approach is built on five key principles:

  1. Clear definition of AI use cases and performance requirements - Identifying and prioritizing the most valuable AI applications with measurable success metrics
  2. Integration planning for existing systems - Ensuring new AI systems work seamlessly with current environments without disruption
  3. Skills assessment and development - Providing targeted training to prepare IT teams for managing new infrastructure
  4. Incremental implementation with validation - Using proof-of-concept testing rather than risky "big bang" deployments
  5. Continuous optimization - Fine-tuning infrastructure as AI workloads evolve to maximize performance

De-Risking Deployment Through Validation

WWT leverages its Advanced Technology Center (ATC) and North American Integration Center (NAIC) to validate performance and deliver pre-configured systems, eliminating on-site assembly challenges and reducing deployment timelines.

Future-Proofing Your AI Investment

As AI technologies rapidly evolve, HPE Cray XD's modular design, clear upgrade paths, and commitment to open standards ensure your infrastructure investments remain valuable for years. Combining supercomputing performance with enterprise flexibility provides a foundation that grows with your AI ambitions.

Learn more about High-Performance Architecture and HPE
Connect with a WWT Expert
