Partner POV | Efficiently and Sustainably Transform the Enterprise with AI
Article written by Tiffany Osias, Managing Director, Xscale, Equinix & Andria Zou, Senior Director, Global AI Data Center Strategies, NVIDIA.
For modern enterprises, implementing AI can be revolutionary. For instance, one major pharmaceutical company is using AI to accelerate drug development and ultimately get safe, effective drugs to patients more quickly. But, like any other enterprise pursuing the transformative power of AI, it first had to consider how AI might impact its sustainability strategy. AI workloads are often compute intensive, so companies must plan how to secure the compute capacity they need as efficiently as possible.
Implementing AI and meeting sustainability goals can sometimes be at odds with each other. Many enterprises are learning that their legacy data centers aren't able to support modern AI workloads. Not only are they missing high-performance AI hardware, but they also lack the latest efficiency improvements and access to renewable energy.
Deploying inside a high-performance data center operated by a leading colocation provider like Equinix can help enterprises address both concerns. From inside these data centers, they can easily connect with partners and service providers that share their commitment to energy-efficient operations. By deploying inside an Equinix data center, the pharmaceutical company mentioned above was able to take advantage of both:
- The energy-efficient NVIDIA accelerated computing platform
- Sustainability innovations from Equinix
Accelerated computing is sustainable computing with NVIDIA
While AI and other advanced technologies are driving increased demand for compute capacity, the performance of traditional CPU hardware isn't keeping up with that demand. Fortunately, CPUs aren't the only option.
While CPUs perform operations serially, GPUs perform parallel processing, working on many tasks simultaneously. This makes them fundamentally faster and better suited to processing massive AI datasets. And because they complete jobs faster, GPUs are also fundamentally more energy efficient.
A GPU system may draw more power at any given moment, but the better way to think about efficiency is in terms of work completed per unit of energy consumed. Because CPUs take far longer to complete the same jobs, they consume much more energy doing so.

GPUs consume far less energy than CPUs to complete the same job. (Source: NVIDIA)
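The point above comes down to simple arithmetic: energy is power multiplied by runtime, so a system that draws more power but finishes much sooner can consume far less energy per job. Here's a minimal sketch; all power and runtime figures are hypothetical, chosen only to illustrate the calculation, not taken from any benchmark.

```python
# Toy comparison of energy-to-completion for the same job.
# All figures below are hypothetical, for illustration only.

def energy_kwh(power_kw: float, hours: float) -> float:
    """Energy consumed = average power draw x runtime."""
    return power_kw * hours

# Assume the GPU server draws more power but finishes far sooner.
gpu_energy = energy_kwh(power_kw=10.0, hours=1.0)   # 10 kWh
cpu_energy = energy_kwh(power_kw=2.0, hours=25.0)   # 50 kWh

print(f"GPU: {gpu_energy} kWh, CPU: {cpu_energy} kWh")
print(f"CPU uses {cpu_energy / gpu_energy:.0f}x more energy for the same job")
```

Under these assumed numbers, the CPU system draws one-fifth the power yet consumes five times the energy, because it runs 25 times longer.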
Also, the efficiency of GPUs has improved significantly with each new generation. In fact, NVIDIA GPUs are now 100,000 times more energy efficient than they were a decade ago. If fuel efficiency in cars had improved at the same rate, we'd be able to drive for almost 200 years on a single gallon of gasoline.

GPUs also benefit from regular software updates, so you won't have to wait for the next generation of GPUs to experience efficiency improvements. Even a new GPU purchased today will grow more efficient over time as additional software updates roll out.
AI workloads demand density: packing compute tightly together improves efficiency and supports a more sustainable approach to generative AI. NVIDIA minimizes energy consumption through several design choices:
- Reducing energy lost to data movement between processors
- Maximizing bandwidth by eliminating the all-to-all collective bottleneck
- Leveraging the NVIDIA NVLink networking domain for optimized performance across tightly coupled workloads
- Enabling 2x greater GPU density per rack
At the rack level, NVIDIA uses copper for GPU-to-GPU communication to reduce wasted energy and improve reliability. The future of AI infrastructure will require key breakthroughs in high-density power distribution, advanced liquid cooling solutions and AI-optimized rack architecture to enable efficient deployments of models with more than a trillion parameters.
If you want to run AI workloads efficiently, deploying GPUs with NVIDIA accelerated computing is a great first step. However, where you run your GPUs also matters. Deploying in the right data centers can help optimize your GPUs while also running them efficiently. The collaboration between Equinix and NVIDIA helps our joint customers deploy their GPUs in a way that simultaneously supports their AI roll-out and their sustainability strategies.
A sustainable approach to AI starts with data center efficiency
The data center industry measures efficiency using a metric called power usage effectiveness (PUE). This metric compares the total energy consumed by a data center with the amount used specifically for powering compute equipment. To improve PUE, a data center operator would need to reduce energy used for overhead tasks like cooling. The closer a data center's PUE is to 1, the more efficient it is.
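The PUE definition above reduces to a simple ratio, which the following sketch computes. The facility numbers are hypothetical, chosen to match the kind of PUE values discussed in this article.

```python
# Power usage effectiveness (PUE): total facility energy divided by
# the energy delivered to IT (compute) equipment. The ideal value is 1.0,
# meaning zero overhead for cooling, power distribution, lighting, etc.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility: 1,420 MWh consumed in total,
# of which 1,000 MWh went to IT equipment.
ratio = pue(1_420_000.0, 1_000_000.0)
print(f"PUE = {ratio:.2f}")  # 42% of IT energy again is spent on overhead
```

A PUE of 1.42 means that for every kilowatt-hour delivered to compute equipment, another 0.42 kWh is spent on overhead, which is why reducing cooling energy moves the ratio toward 1.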
At Equinix, we systematically measure PUE as we pursue our target average annual threshold of 1.30 or better across our global operations. As of 2023, our annualized average global PUE was 1.42. This represents an efficiency improvement of 22% over our 2019 benchmark, despite our data center portfolio growing considerably during that timeframe. We also invested $77.5 million pursuing further efficiency improvements during 2023 alone.
In 2022, we became the first colocation data center operator to commit to expanding operating temperature ranges in alignment with ASHRAE A1 Allowable (A1A) standards. These standards state that enterprise-class equipment can safely operate at temperatures as high as 80°F (27°C). We've begun transitioning our facilities to operating temperatures closer to this standard, which is significantly warmer than the typical industry average of 72°F (22°C). Expanding our range of operating temperatures allows us to expend less energy on cooling, and therefore help customers run AI workloads more efficiently. For sites that use evaporative cooling, warmer operating temperatures will also reduce water consumption for cooling purposes.
Equinix has also begun rolling out the advanced cooling technology that AI hardware demands. We've deployed liquid cooling capabilities at 100 of our Equinix IBX® colocation data centers globally. Liquid cooling provides dual benefits:
- It enables greater power density than air cooling, thus supporting next-generation hardware such as GPUs.
- It helps reduce the total amount of power dedicated to cooling across the data center, thus driving efficiency improvements.
Not just efficient, but also sustainable
Equinix is dedicated to helping our customers pursue both efficiency and sustainability. We believe these are two separate but related initiatives. For instance, we not only want our data centers to use energy efficiently, but we also want the energy they do use to come from clean, renewable sources.

To achieve this, we've pursued a multifaceted strategy to increase the renewable energy coverage of our data center portfolio. We've prioritized power purchase agreements (PPAs) in this strategy. Signing PPAs allows us to support projects such as wind and solar farms and thus add new renewable energy capacity to local grids.
By using PPAs and other initiatives, we've achieved 96% renewable energy coverage globally. This includes the 235+ Equinix IBX data centers that already have 100% coverage. Customers who deploy in these facilities will have zero market-based emissions for their data center energy consumption.
Enabling sustainability on a global scale
As you consider where to host your AI infrastructure, sustainability and efficiency must be part of the discussion. However, there are other important factors as well. Putting all your high-density AI infrastructure in a colder region just to save energy on cooling wouldn't be feasible: AI often requires low-latency connectivity to data sources, particularly for inference workloads. This means you'll need to deploy AI infrastructure in locations near those data sources.
A global data center platform can help you balance the tradeoffs of latency and sustainability. For instance, Equinix offers a global portfolio that includes both hyperscale data centers for very large training workloads and traditional colocation data centers for smaller, more latency-sensitive AI workloads. Our Equinix IBX colocation data centers are available in 70+ strategic markets worldwide. You can deploy your distributed AI workloads wherever they need to be, while also taking advantage of our sustainability and efficiency investments.
We've learned that many of our customers are looking for an "easy button" to help them embed sustainable practices into their AI strategies. With Equinix Private AI with NVIDIA DGX, our turnkey, ready-to-run AI infrastructure platform, that's exactly what we offer them. When customers deploy it, they instantly benefit from all the hard work and resources that Equinix and NVIDIA have invested in developing solutions that enable efficiency and sustainability. They can leave the AI infrastructure to us, freeing themselves to focus on AI innovation instead.
In addition, a private approach to AI can help our customers control and protect their AI datasets. This can be particularly helpful for organizations in highly regulated industries such as healthcare, financial services and legal. Read our e-book Unleash new possibilities with private AI to learn more about how Equinix and NVIDIA are delivering groundbreaking results for our customers.