High Performance AI/ML Networking
Today, network engineers, especially in the data center space, must acquire AI/ML infrastructure skills and be able to discuss required infrastructure upgrades, and the reasoning behind them, with upper management. At WWT, we are committing $500 million to help our customers with AI/ML, and we have launched a new series of Learning Paths to help readers navigate complex AI topics. By mastering these areas, data center network engineers can contribute effectively to implementing and managing advanced AI and HPC infrastructure, aligning technological capabilities with business objectives while maintaining a robust and secure network environment.
Learning Path
Cisco ACI
Cisco ACI is a policy-driven Clos (spine/leaf) switching fabric that uses Layer 3 ECMP routing in the underlay and VXLAN encapsulation in the overlay to transport Layer 2 and Layer 3 traffic East/West across the fabric and North/South in and out of it. ACI is managed by the Application Policy Infrastructure Controller (APIC), a centralized controller that manages all aspects of the fabric. The leaf switches are ToR switches that provide connectivity to servers and external networks, and the spine switches are Layer 3 switches that provide ECMP high-bandwidth connectivity between leafs. An ACI fabric can be expanded East/West by adding leafs, cabling them to the spines, and registering them. ACI was designed to operate as "One Big Switch" (like a chassis-based Nexus 7K), with the controllers acting as the supervisors, the spines as fabric modules, and the leafs as blades. This approach decouples those elements from the chassis: the leafs (blades) can be placed anywhere in the data center, so you are not limited to a single chassis.
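Because the APIC is the single point of management, most ACI automation begins by authenticating to its REST API. Here is a minimal Python sketch using the requests library; the APIC address and credentials are placeholders, and verify=False is appropriate only in a lab:

```python
import requests

# Hypothetical APIC address and lab credentials -- replace with your own.
APIC = "https://apic.example.com"
LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "lab-password"}}}

session = requests.Session()
# POST to the aaaLogin endpoint; the APIC answers with a session token
# and sets a cookie that authenticates subsequent requests on this session.
resp = session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN, verify=False)
resp.raise_for_status()
token = resp.json()["imdata"][0]["aaaLogin"]["attributes"]["token"]
print("authenticated; token length:", len(token))
```

Once the session is authenticated, the same session object can be used to read or push fabric policy, as later examples in this listing sketch out.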
Learning Path
Border Gateway Protocol (BGP) Foundations
This learning path on foundational Border Gateway Protocol (BGP) provides a comprehensive introduction and deep dive into the core aspects of BGP, the backbone of the internet's routing architecture. It starts by explaining the basics of BGP, including its purpose, operation, and fundamental concepts such as Autonomous Systems (AS), BGP sessions, and the BGP routing table. The series progresses to cover more advanced topics, such as BGP path selection, route advertisement, and the use of BGP attributes such as Local Preference, AS Path, and MED. Through practical examples, configurations, and troubleshooting scenarios, learners gain a thorough understanding of how BGP facilitates global data exchange and the techniques network engineers use to optimize and secure BGP networks. The series aims to equip network professionals with the knowledge needed to manage and optimize BGP in real-world environments, emphasizing best practices and common pitfalls.
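To give a flavor of the path-selection logic the series covers, here is a simplified Python sketch comparing candidate routes on a few of the standard attributes (highest weight, then highest Local Preference, then shortest AS path, then lowest MED). Real BGP implementations evaluate a longer tie-break list; this is an illustration, not a full decision process.

```python
from dataclasses import dataclass, field

@dataclass
class BgpPath:
    """A candidate route with a few of the attributes used in best-path selection."""
    next_hop: str
    weight: int = 0        # Cisco-specific, local to the router; higher wins
    local_pref: int = 100  # shared within the AS; higher wins
    as_path: list = field(default_factory=list)  # fewer AS hops wins
    med: int = 0           # lower wins (real BGP compares MED only between paths
                           # from the same neighboring AS; simplified here)

def best_path(paths):
    """Simplified best-path decision: weight, Local Preference, AS-path length, MED."""
    # Negate the attributes where higher is better so that min() picks the winner.
    return min(paths, key=lambda p: (-p.weight, -p.local_pref, len(p.as_path), p.med))

routes = [
    BgpPath("10.0.0.1", local_pref=200, as_path=[65010, 65020]),
    BgpPath("10.0.0.2", local_pref=100, as_path=[65030]),
]
print(best_path(routes).next_hop)  # -> 10.0.0.1: higher Local Preference wins first
```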
Learning Path
Open Shortest Path First (OSPF) Foundations
This learning path on foundational Open Shortest Path First (OSPF) provides a comprehensive introduction and deep dive into the core aspects of OSPF. It starts by explaining the basics of OSPF, including its purpose, operation, and fundamental concepts such as OSPF neighbors, route advertisement, authentication, and OSPF network types, among others. This learning path aims to equip network professionals with the knowledge needed to manage and optimize OSPF in real-world environments, emphasizing best practices and common pitfalls.
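To make the "shortest path first" part concrete, here is a small Python sketch: OSPF derives each interface's cost from a reference bandwidth (100 Mbps by default on Cisco gear) and then runs Dijkstra's algorithm over those costs. The three-router topology below is invented for illustration.

```python
import heapq

def ospf_cost(bandwidth_bps, reference_bps=100_000_000):
    """Default OSPF interface cost: reference bandwidth / interface bandwidth (min 1)."""
    return max(1, reference_bps // bandwidth_bps)

def shortest_paths(graph, source):
    """Dijkstra's algorithm -- the SPF computation at the heart of OSPF."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Invented topology: links weighted by their default OSPF cost.
gig = ospf_cost(1_000_000_000)   # clamps to 1
fast = ospf_cost(100_000_000)    # 1
graph = {
    "R1": [("R2", fast), ("R3", gig)],
    "R2": [("R1", fast), ("R3", fast)],
    "R3": [("R1", gig), ("R2", fast)],
}
print(shortest_paths(graph, "R1"))  # cheapest total cost from R1 to every router
```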
Learning Path
Cisco NEXUS Dashboard
Previously, the Cisco Day 2 operations suite of MSO, NAE, NIR, and NIA ran separately, either as .ova virtual appliances in vSphere or as applications on the APIC. That architecture never allowed data to be shared between the apps, or errors to be correlated with telemetry views of packet loss. The roadmap moved toward a shared data lake that all applications draw from, making it possible to correlate application errors, policy changes, and deep flow telemetry, all visualized per epoch.
Cisco NEXUS Dashboard (ND) was created so that all of these applications could run together with sharable databases. The ND platform allows all the Cisco Day 2 apps, as well as third-party applications, to run on a single appliance. Because the apps are CPU- and storage-intensive, the platform also has to be expandable: today it can scale to 3 master nodes and 4 worker nodes, with the apps and their data residing on the ND cluster. As ND matures, more ND servers can join the cluster, and they can be separated regionally, within round-trip latency requirements, to distribute applications and provide DR strategies.
Learning Path
Cisco DCNM and NDFC
Data Center Network Manager (DCNM), now superseded by Nexus Dashboard Fabric Controller (NDFC), is Cisco's EVPN fabric controller. DCNM, or NDFC, has three primary components: spine switches, which act as physical aggregation points for all of the leafs; leaf switches, which provide endpoint aggregation and uplinks to the spines; and the controller itself, which for DCNM is a single device or HA pair and for NDFC is a Nexus Dashboard cluster running the NDFC application.
In most cases, DCNM or NDFC will utilize OSPF for the underlay and VXLAN EVPN with MP-BGP as the overlay on Nexus 9000 switches. These fabric controllers are easier to use than similar fabric technologies on small to medium-sized networks, as most variables work out of the box without needing to be changed; this is what is known in the industry as "point and click your way to happiness."
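One of those out-of-the-box variables is the mapping of VLANs into VXLAN network identifiers (VNIs) for the overlay. A common convention, sketched below in Python, derives the Layer 2 VNI from the VLAN ID plus a fixed base; the base value here is an illustrative assumption, not a documented DCNM/NDFC default.

```python
# Hypothetical convention: L2 VNI = base + VLAN ID. The base below is an
# illustrative assumption, not a documented DCNM/NDFC default.
L2_VNI_BASE = 30_000

def vlan_to_vni(vlan_id: int, base: int = L2_VNI_BASE) -> int:
    """Map an 802.1Q VLAN ID (1-4094) into the 24-bit VXLAN VNI space."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError(f"invalid VLAN ID: {vlan_id}")
    vni = base + vlan_id
    if vni >= 2**24:  # VNIs are 24 bits wide
        raise ValueError(f"VNI {vni} exceeds the 24-bit VXLAN range")
    return vni

for vlan in (10, 20, 30):
    print(f"vlan {vlan} -> vn-segment {vlan_to_vni(vlan)}")
```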
Learning Path
Exploring Cisco NEXUS Dashboard
Cisco has been at the forefront of developing a suite of standalone tools for data center networking, collectively known as the Day 2 Operations Suite. Recently, Cisco has begun integrating these tools into a unified interface called NEXUS Dashboard, providing a consolidated view and shared data repositories for enhanced application correlation. This learning path is designed specifically for those new to networking or new to the NEXUS Dashboard product. Future learning paths will go into more detail about operating and implementing the NEXUS Dashboard platform.
Learning Path
Arista Universal Cloud
There are several universal architectures in the Arista design portfolio. The Enterprise Universal Cloud Network can be used in the data center and in the Cognitive Campus, and the Enterprise AnyCloud Network expands beyond the data center. All of these designs deliver the Arista Unified AnyCloud Architecture, offering unified orchestration, management, and telemetry with CloudVision.
Arista's guiding principles are Universal (common architectures from small to huge), Simple (a single operating system for all platforms and hardware), Open (standards-based features and functions), Programmable (easy-to-use APIs and automation), and Visible (full-state telemetry and flow information).
The key to these guiding design principles is the use of the Arista EOS architecture across its hardware- and software-based devices. EOS is based on a Linux kernel that is standard and fully open. EOS uses agent processes that follow a publish/subscribe model to populate NetDB on each device, which contains all device state. NetDB can then be used by Arista's CloudVision products to offer full-state telemetry and flow information for the entire Universal Architecture. Because Arista uses a single binary image for all hardware- and software-based devices, it can implement hardware abstraction. Unlike some other manufacturers, Arista uses the latest merchant silicon, which, coupled with the standard open EOS kernel, allows very fast development cycles, fewer bugs, and faster adoption by customers, since it is the same EOS everywhere.
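The publish/subscribe pattern described above can be illustrated with a deliberately simplified Python sketch. This is a conceptual model of agents sharing state through a central database, not Arista's actual NetDB implementation; the paths and agents are invented.

```python
from collections import defaultdict

class StateDb:
    """Toy central state database: agents publish state, others subscribe to paths."""
    def __init__(self):
        self.state = {}
        self.subscribers = defaultdict(list)

    def subscribe(self, path, callback):
        """Register a callback to fire whenever `path` changes."""
        self.subscribers[path].append(callback)

    def publish(self, path, value):
        """Write state and notify every subscriber of that path."""
        self.state[path] = value
        for callback in self.subscribers[path]:
            callback(path, value)

db = StateDb()
# A telemetry exporter subscribes to interface state...
db.subscribe("interfaces/Ethernet1/status", lambda p, v: print(f"stream {p}={v}"))
# ...and an interface agent publishes a change; the subscriber sees it immediately.
db.publish("interfaces/Ethernet1/status", "down")
```

The benefit of this design is that agents never talk to each other directly: each one reads and writes only the shared database, which is what makes full-state streaming to an external collector straightforward.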
Learning Path
Cisco ACI: Tenant & Fabric Connectivity
The Cisco ACI: Tenant & Fabric Connectivity learning path is the second part of our ACI fundamentals training, following the "Cisco ACI Fabric Initialization and Hypervisor Connectivity" path.
This path covers two key areas:
1) Fabric infrastructure configurations, which involve physical fabric setup, including vPCs, VLANs, loop prevention, the underlay BGP protocol, etc.
2) Tenant configurations, which define logical constructs like application profiles, bridge domains, and EPGs.
In this learning path, students will learn to create Tenants, Application Profiles, Bridge Domains, and EPGs, using these objects to provide connectivity within a physical ACI fabric. They will also discover how to connect to external Layer 2 and Layer 3 networks and explore methods of segmentation using contracts and filters to support both inter-EPG and intra-EPG segmentation.
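These tenant constructs form a nested object tree that can be pushed to the APIC as a single REST payload. Here is a minimal Python sketch reusing the login pattern from the earlier ACI example; the APIC address, credentials, and object names (DemoTenant, WebApp, and so on) are illustrative placeholders, not values from the course.

```python
import requests

# Placeholder lab APIC and credentials, as in the earlier login sketch.
APIC = "https://apic.example.com"
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json", verify=False,
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "lab-password"}}})

# Tenant -> Application Profile -> EPG, plus a Bridge Domain the EPG binds to.
tenant = {
    "fvTenant": {
        "attributes": {"name": "DemoTenant"},
        "children": [
            {"fvBD": {"attributes": {"name": "WebBD"}}},
            {"fvAp": {
                "attributes": {"name": "WebApp"},
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "WebEPG"},
                        "children": [
                            # Bind the EPG to its Bridge Domain.
                            {"fvRsBd": {"attributes": {"tnFvBDName": "WebBD"}}}
                        ],
                    }}
                ],
            }},
        ],
    }
}
# POST the whole tree to the policy universe ("uni") in one call.
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)
resp.raise_for_status()
```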
Learning Path
Cisco ACI: Fabric & Hypervisor Setup
This is part 1 of our ACI fundamentals training, with Part 2, "Cisco ACI: Tenant & Fabric Connectivity," as the next step in the series. Cisco's APIC functions like a switch supervisor, configuring the fabric as if it were a single switch. The spine/leaf fabric discovery and self-assembly occur seamlessly, much like plugging in the supervisor (APIC), fabric modules (spines), and blades (leafs). This learning path delves into ACI's discovery process, fabric construction, and its extension to hypervisors via VMM integration, thereby expanding ACI control to the hypervisors' vSwitch. Part 2 will dive deeper into the configurations applied to designate ports as access, trunk, or L3 ports.
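Once discovery and registration have run, each switch appears as a fabricNode object that can be read back from the APIC. A short Python sketch, again with placeholder lab credentials:

```python
import requests

# Placeholder lab APIC and credentials, as in the earlier login sketch.
APIC = "https://apic.example.com"
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json", verify=False,
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "lab-password"}}})

# Query the fabricNode class to list every discovered, registered switch.
resp = session.get(f"{APIC}/api/node/class/fabricNode.json", verify=False)
resp.raise_for_status()
for item in resp.json()["imdata"]:
    node = item["fabricNode"]["attributes"]
    # role is "spine", "leaf", or "controller"; id is assigned at registration.
    print(node["id"], node["role"], node["name"], node["serial"])
```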
Learning Path
Juniper Apstra
Apstra is an orchestrator that behaves much like Cisco DCNM/NDFC or Arista CVP. The key differentiator is that Apstra works well with all the major switch manufacturers. In many cases, the Apstra on-box agent enhances the switch's native error and misconfiguration reporting through deduplication and through deeper, more granular telemetry. In addition, the on-box agent aids in scaling complex, expansive networks.
With Intent-Based Networking (IBN), Apstra maintains its user's blueprint throughout a network's design, deployment, and operational phases, converting it into the vendor-specific instructions each switch needs without requiring the user to have low-level knowledge of any particular vendor's switch. The Apstra flow across these phases ensures a closed-loop, single-source-of-truth system.
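The rendering step, from vendor-neutral intent to per-switch instructions, can be sketched conceptually in Python. This illustrates the IBN idea only, not Apstra's actual rendering engine; the intent fields and CLI templates below are invented.

```python
# Conceptual IBN rendering: one vendor-neutral intent, many vendor dialects.
# The intent schema and CLI templates are invented for illustration.
INTENT = {"interface": "swp1", "description": "server-uplink", "mtu": 9216}

TEMPLATES = {
    "vendor_a": ("interface {interface}\n"
                 "  description {description}\n"
                 "  mtu {mtu}"),
    "vendor_b": ("set interfaces {interface} description {description}\n"
                 "set interfaces {interface} mtu {mtu}"),
}

def render(intent: dict, vendor: str) -> str:
    """Translate the blueprint's intent into one vendor's configuration dialect."""
    return TEMPLATES[vendor].format(**intent)

for vendor in TEMPLATES:
    print(f"--- {vendor} ---")
    print(render(INTENT, vendor))
```

Because the blueprint, not the rendered output, is the source of truth, the same intent can be re-rendered whenever a device is swapped for another vendor's hardware, which is the closed-loop property the paragraph above describes.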
Learning Path
Cisco ACI Multisite
Cisco ACI is a policy-driven Clos (spine/leaf) switching fabric that uses Layer 3 ECMP routing in the underlay and VXLAN encapsulation in the overlay to transport Layer 2 and Layer 3 traffic East/West across the fabric and North/South in and out of it. ACI is managed by the Application Policy Infrastructure Controller (APIC), a centralized controller that manages all aspects of the fabric. The leaf switches are ToR switches that provide connectivity to servers and external networks, and the spine switches are Layer 3 switches that provide ECMP high-bandwidth connectivity between leafs. An ACI fabric can be expanded East/West by adding leafs, cabling them to the spines, and registering them. ACI was designed to operate as "One Big Switch" (like a chassis-based Nexus 7K), with the controllers acting as the supervisors, the spines as fabric modules, and the leafs as blades. This approach decouples those elements from the chassis: the leafs (blades) can be placed anywhere in the data center, so you are not limited to a single chassis.
We can take this decoupling one step further and place a leaf in a remote data center (remote leaf), extend a spine-and-leaf pod into a second data center (multi-pod), or stand up a new spine/leaf fabric in another data center (multi-site). Using the NEXUS Dashboard Orchestrator (NDO), we can treat multiple fabrics as one entity from a policy standpoint and manage and perform Day 2 operations from a single pane of glass. The power of ACI allows us to stretch Layer 2 and Layer 3 across multiple fabrics and use a single policy for forwarding traffic in the data center.
This learning path will guide you through the basics and the implementation skills.
Learning Path