SmartNICs, Distributed Service Processing Cards and Functional Accelerator Cards: The Future of Networking
About ten years ago, I was in a room in North Carolina with a small group, and we had a special guest, a CEO of a major technology company out of Silicon Valley. An attendee asked him a question about the competition. His response was, "I don't worry about competition. I worry about industry transitions and staying in front of them."
This statement changed my career path. At that point, I stopped being stuck editing ACLs at the command line and started researching and staying on top of industry transitions. I dug into stateless computing, converged and hyper-converged systems, software-defined networking (SDN), and automation and orchestration, in that order. During my first week at WWT, I heard someone say, "skate where the puck is going." Those words from the famous Wayne Gretzky, "I skate to where the puck is going, not where it has been," are all about seeing the transitions and staying ahead of them.
One transition that I believe will become a strong trend is SmartNICs, also known as Data Processing Units (DPUs) and Functional Accelerator Cards (FACs).
A SmartNIC is a Network Interface Card (NIC): a PCIe card that plugs into servers or storage within a data center. What makes it "smart" is that it has one or more Data Processing Units (DPUs) onboard. SmartNICs have been around since 2012, but the time is now ripe to implement them.
Increased demand on network and compute resources is the result of industry transitions such as SDN, Network Function Virtualization (NFV), artificial intelligence (AI), machine learning (ML), cybersecurity risk abatement, hyperscale architecture and the surge in Internet usage during the pandemic. The additional bandwidth and compute complexity these transitions bring have placed increasing strain on data center networks and servers.
As SmartNICs continue to evolve, additional CPU-intensive processes such as cryptography, stateful filtering and container acceleration can be offloaded to the card, freeing up the CPU for applications.
The SmartNIC contains a processing architecture that can be used to offload workloads and packet processing from the host CPU. These cards boost server performance and minimize hairpinning, in which packets travel from the host to a switch for policy enforcement and then back to the original host for delivery. Applying policy at the host or server is much more efficient for workloads that communicate between virtual machines on the same physical server. Virtual switching may consume up to 90 percent of a server's available CPU capacity, according to eweek.com.
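To make the offload concept concrete, here is a minimal sketch of how virtual switch processing can be pushed down to SmartNIC hardware on a Linux host running Open vSwitch. It assumes a card whose driver supports TC flower hardware offload; the interface name eth0 and the service name are placeholders, and the exact steps vary by vendor and operating system.

    # Enable TC flower hardware offload on the SmartNIC port (eth0 is a placeholder name)
    ethtool -K eth0 hw-tc-offload on

    # Tell Open vSwitch to program flows into the NIC hardware instead of
    # processing every packet in the host kernel datapath
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

    # Restart the daemon so the setting takes effect (service name varies by distribution)
    systemctl restart openvswitch-switch

    # Verify which datapath flows were actually offloaded to the hardware
    ovs-appctl dpctl/dump-flows type=offloaded

In this model, the host CPU typically handles only the first packet of a flow; once the flow is programmed into the NIC, subsequent packets are switched on the card, which is where the CPU savings come from.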
SmartNIC vendors
The vendors we see rising to the top of the SmartNIC space are NVIDIA, which has several offerings; Pensando, which differentiates itself with a controller for simple out-of-the-box configuration and an extensible, programmable ASIC; and Intel, which is showing a lot of promise in the enterprise and service provider space.
All three vendors will participate in VMware's Project Monterey, which allows the VMware ESXi hypervisor, NSX and vSAN to be moved into the SmartNIC. The SmartNIC can then assist with the performance of all the data center tiers: the compute, network and storage virtualization layers that typically run on top of the x86 CPU.
Gartner predicts that by 2023, one in three network interface cards shipped will be a FAC or SmartNIC.
Whether we come to call them SmartNICs, Distributed Service Processing Cards or Functional Accelerator Cards remains to be seen. Either way, it is time to begin testing, implementing subsets of features initially (such as tap-as-a-service or load balancing) and becoming familiar with this next industry transition. The best place to be in this industry is where the puck is going.