A Brief History: The Advent and Divergence of HPC and AI 

The origins of high-performance computing (HPC) can be traced back to the 1940s, starting with the development of ENIAC (Electronic Numerical Integrator and Computer), one of the first electronic general-purpose computers, at the University of Pennsylvania.  

In the 1950s, the University of Illinois advanced the field further, notably by creating the ILLIAC series (a name derived from Illinois Automatic Computer), which marked significant progress toward modern supercomputing. 

During this time, John W. Tukey, a pioneering American mathematician, developed innovative algorithms and statistical methods that enhanced data analysis (he would later co-develop the fast Fourier transform), revolutionizing how we interpret and manage data.  

Supercomputers were introduced in the 1960s, and for decades the fastest machines were designed by Seymour Cray at Control Data Corporation (CDC). Cray eventually left CDC to found Cray Research, which became the premier supercomputing company on the planet. HPC also emerged in the 1960s as part of an effort to support academic and government research. 

In the 1970s, HPC expanded into industries such as automotive, aerospace, financial services, pharmaceuticals, and oil and gas. This expansion was driven by the increasing complexity of problems that required advanced computational power to model and solve.  

Hewlett Packard Enterprise (HPE) acquired SGI (2016) and Cray (2019), combining them with its ProLiant and Apollo product lines to dominate the high-performance computing and supercomputing industry. 

Supercomputing and HPC can process data and execute calculations at a rate far exceeding that of other computers. These capabilities enabled science, business, and engineering organizations to solve large problems that would otherwise be unapproachable.  

As a direct extension of these capabilities, high-performance data analytics (HPDA) leverages the same robust HPC infrastructure to manage and analyze extreme volumes of data at unprecedented speeds. This specialized application of HPC allows organizations to tackle challenges in data-driven domains, providing insights that were previously inaccessible due to technological limitations.  

As we know, AI is the acronym for artificial intelligence. I believe it would more accurately be called augmented intelligence, but I digress, and no one asked me when they named it. The term AI was coined in the mid-1950s, but it came into vogue in the early 2000s, when Google started using it. HPC and AI were artificially split (is there a pun here?) after the age of expert systems in the 90s.  

Expert systems, also known as rule-based systems, were supposed to be the smart decision-support systems of their day. They were eventually overshadowed by machine learning, marking a new era in which AI gained widespread recognition. 

As the field progressed, machine learning evolved into what is now known as deep learning. Though we don't hear as much about deep learning today, it remains a vital technology underpinning modern AI applications. Deep learning is primarily carried out through neural networks, and architectures such as transformers have been fundamental in developing generative AI (GenAI) and its transformative applications.  

GenAI, a small yet evolving subset of AI with big market potential, emerged more recently. GenAI runs on high-performance, scalable computing architecture, the same foundation that underpinned the science behind it. We call this high-performance architecture, or HPA. WWT's partnership with HPE and HP Labs continues to advance AI and HPC leadership, driving the evolution of technology and shaping the future of computational capabilities. 

HPC and AI Today 

In a way, HPC is an unsung hero. We just don't realize how much HPC goes into the things that make our world safer, cleaner, and healthier. HPC has been used to run large-scale AI models in fields such as cosmology, astrophysics, and high-energy physics, and to manage unstructured data sets. It is also used in the design of everyday things such as cars, the iPhone, Siri, medicines, and so much more. 

The diverse applications of HPC highlight not only its power but also its intrinsic value in creating practical, effective solutions across a broad spectrum of industries. From enhancing product design to advancing scientific research, HPC's contributions are both foundational and transformative. 

So, when we consider the construction of any advanced tool or technology, utility is paramount—and much of this utility is powered by the vast computational capabilities of supercomputing. These high-performance computing resources not only power the processes behind the scenes but also ensure that the end products meet the essential needs of safety, efficiency, and innovation. 

In partnership with HPE and its supercomputing prowess, we build solutions that deliver utility: computing power and supercomputing enable the creation of sophisticated tools and technologies that improve everyday life.  

Interestingly, the foundation for these advancements, HPC, was established well before the emergence of modern AI technologies. AI and HPC use highly specialized software; however, they fundamentally operate on the same high-performance architecture (HPA), which underpins the infrastructure for handling extensive computations. This shared foundation allows for the seamless integration of HPC capabilities in AI development, further enhancing their utility and effectiveness in various applications. 

With that said, two major underpinnings of AI are changing.  

First, there is a greater influx and flow of data, along with a diversification in data types. This exponential growth of data presents a classic HPC problem: how to deal effectively with the data deluge. The challenge is not new to HPC, where managing vast amounts of data efficiently has always been a core capability. 

Second, the need to build big, scalable AI systems is more pressing than ever. This requirement echoes familiar scenarios faced in the HPC world, where such scaling has been achieved repeatedly. Leveraging what we know from HPC, we can apply best practices to AI development, ensuring that the AI systems of tomorrow are both robust and adaptable. The HPC community is well-equipped to absorb this volume of data, running it on platforms that can scale up and out to meet growing demands. Furthermore, the HPC business model, which can be delivered as a service, on-premises, or in the private cloud, provides flexible and scalable solutions that can adapt to the dynamic needs of AI. 

The Fun Part: HPC and AI Are Two Sides of the Same Coin 

If I want smart HPC, I use AI. Likewise, if I want fast AI, I leverage HPC. This symbiotic relationship underscores the interdependence of AI and HPC in modern computing environments. A tremendous amount of computing power, for instance, went into the design, fabrication, and building of the GPUs that run AI. That intricate electronic design automation (EDA) work is done on ultra-scale HPC clusters, highlighting HPC's critical role in AI development.  

Moreover, while AI can be integrated at the front or back end of workloads, it is essential to avoid creating false demarcations—such as positioning AI over here and HPC over there. Some of the greatest performance gains can be found when these technologies are optimized across multiple workloads in a unified workflow.  

To my point, HPC and AI are complementary: HPC helps you run performance-intensive workloads with high computation power and scalability, while AI drives the efficiency of processing these workloads. Together, they make a powerful duo that drives technological advancements. 

Therefore, HPC is the underpinning of AI, serving as a critical foundation for its development. The top reasons HPC helped build the AI capabilities of today are:  

  • They have complementary strengths in parallelism, performance, and infrastructure at scale. HPC provides high computational power and scalability, which are critical for running performance-intensive workloads, while AI enables more efficient and intelligent processing of those workloads (a minimal sketch of this scale-out pattern follows this list).
  • HPC handles the world's most massive data volumes: data lies at the heart of both HPC and AI. By applying AI data engineering techniques to existing data sources, organizations can generate deeper, data-driven insights for faster solutions, better and safer products, and more effective resource usage.
  • HPC systems support the execution of advanced algorithms and architectures that are computationally demanding. This capability allows organizations to explore more complex models and frameworks, pushing the boundaries of what AI can achieve.

Indeed, HPC is the backbone of the AI revolution, providing the necessary infrastructure to support the computational and data demands of modern AI research, models, and applications. The coupling of HPC and AI is paving the way for faster business realizations that drive innovations at scale. As we continue to explore this convergence, we can expect greater advancements in AI capabilities with HPC as their underpinning.
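
To make the parallelism point in the first bullet concrete, here is a minimal sketch in Python of the scale-out pattern that both HPC and AI data pipelines rely on: a large data set is split into chunks that parallel workers process at the same time. It runs across local CPU cores rather than cluster nodes, and the data set and the compute_features step are hypothetical placeholders rather than any specific WWT or HPE workload.

```python
# Minimal scale-out sketch: split a large data set into chunks and
# process them in parallel worker processes. HPC clusters apply the
# same pattern across many nodes; here it runs across local CPU cores.
from concurrent.futures import ProcessPoolExecutor
import math

def compute_features(chunk):
    # Hypothetical compute-heavy step, e.g., feature extraction for an AI model.
    return [math.sqrt(x) * math.log1p(x) for x in chunk]

def split_into_chunks(data, n_chunks):
    # Divide the work evenly so each worker gets a similar load.
    size = math.ceil(len(data) / n_chunks)
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))            # stand-in for a large data volume
    chunks = split_into_chunks(data, 8)      # "scale out" across 8 workers
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = pool.map(compute_features, chunks)
    total = sum(len(part) for part in results)
    print(f"processed {total:,} records in parallel")
```

The same pattern scales up (bigger nodes, more memory, accelerators) or out (more nodes), which is why the HPC playbook transfers so directly to AI data engineering.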

The Trouble With Looking at HPC & AI As Different Categories 

Looking at HPC and AI as two different categories or business units, instead of understanding that HPC is the underpinning of AI, means they get treated differently. Organizations that do so miss the added precision that HPC makes possible. You do not have to choose between the lower precision of AI and the high precision of HPC; you do, however, need to balance the speed of the result with precision in quality and repeatability (a short sketch of this trade-off follows the list below). It's essential to understand the age-old statements of computing:  

  • Garbage in, garbage out: If you sweep the internet without checks and balances on realism and data quality, you're sweeping a cesspool.
  • Stupid is as stupid does: Just because you get more data doesn't mean it's better or you're even asking the right questions.
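
As a small illustration of the speed-versus-precision balance described above, the sketch below (assuming NumPy is available) accumulates the same running sum in the low precision common to AI workloads and in the high precision that HPC codes typically demand. The increment and step count are arbitrary placeholders; the point is only that low-precision arithmetic can drift far from the true answer if it is not managed.

```python
# Precision trade-off sketch: the same running sum accumulated in
# float16 (low precision, common in AI) and float64 (high precision,
# the norm in HPC). The float16 sum stalls once each increment is too
# small relative to the running total to register at that precision.
import numpy as np

increment = 0.0001
steps = 100_000                          # true total: 10.0

low = np.float16(0.0)                    # AI-style low-precision accumulator
high = np.float64(0.0)                   # HPC-style high-precision accumulator
for _ in range(steps):
    low = np.float16(low + np.float16(increment))
    high = high + increment

print(f"float16 accumulation: {float(low):.4f}")   # stalls far below 10.0
print(f"float64 accumulation: {float(high):.4f}")  # ~10.0000
```

Neither answer is "wrong" by itself; the right choice depends on whether the workload values speed, as many AI training loops do, or strict numerical repeatability, as many HPC simulations must.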

In light of these considerations, I predict that there will be a move from the biggest models to smaller, more specific ones, with many of them working in tandem. This shift not only addresses the issues of data quality and relevancy but also optimizes the balance between speed and precision, reflecting a more strategic, efficient approach to harnessing AI capabilities. 

Bridging the Divide: Integrating HPC and AI for Enhanced Collaboration and Innovation 

There's an illogical divide between HPC and AI that must be resolved! Universities contribute to this divide by placing data science and computer science in one department, and electrical, electronics, and other engineering domains in another, fostering a digital divide. There's a prevailing sentiment: "To collaborate, I have to talk to those other people across campus."  

Moreover, the skills gap for both HPC and AI is growing. Complex problems require the combined efforts of the sciences, medicine, engineering, and computing capabilities. What HPC does is called interdisciplinary, a benefit that AI can also leverage—now there is a Scrabble word for you! 

HPC and AI are coming together, though, and you can help by organizing teams and business units accordingly. At the end of the day, what has not changed is that supercomputing/HPC is the underpinning of AI. Standing on the shoulders of giants like HPE and HP Labs, WWT is at the forefront of groundbreaking research and innovation in applying advanced AI to supercomputing, high-performance computing (HPC), and big data, shaping the future of technology and new application capabilities. 

How do we fix the illogical divide between HPC and AI? 

  • Understand the differences and correlations between them. Learn the history of both.
  • Cross-pollinate. You don't have to learn 'all the other guy's skills,' or the math required—just cross-pollinate. Interdisciplinary mentors and mentees allow multiple groups to work together and learn together. So, give in to the dark side and figure it out!
  • Get on the same language—we call this vocabulary leveling.
  • Engage the stakeholders as soon as possible. To succeed in an HPC or AI project, make connections within the other group by ensuring you have a business champion in the organization.
  • Know where you need to go. Make sure you have a roadmap that accounts for the obstacles, the hills, and the valleys.
  • Get some illumination from 'the other group.' Call in an expert, such as the experts at WWT and HPE, to help you succeed. 
