300-170, 300-175, 300-180, 300-206, 300-208, Cisco Exams

UCS Identity Pools – Cisco Unified Computing Systems Overview

The Cisco UCS Manager can classify servers into resource pools based on criteria including physical attributes (such as processor, memory, and disk capacity) and location (for example, blade chassis slot). Server pools can help automate configuration by identifying servers that can be configured to assume a particular role (such as web server or database server) and automatically configuring them when they are added to a pool.

Resource pools are collections of logical resources that can be accessed when configuring a server. These resources include universally unique IDs (UUIDs), MAC addresses, and WWNs.

The Cisco UCS platform uses dynamic identities instead of hardware burned-in identities. A unique identity is assigned to each server from identity and resource pools. Servers and peripherals derive these identities from service profiles. A service profile contains all the server identities, including UUIDs, MAC addresses, WWNNs, firmware versions, BIOS settings, policies, and other server settings. When a service profile is associated with a physical server, all the settings in the profile are applied to that server.

In case of server failure, the failed server is removed and the replacement server is associated with the existing service profile of the failed server. During this service profile association, the new server automatically picks up all the identities of the failed server, so the operating system and applications that depend on these identities do not observe any change in the hardware. In case of peripheral failure, the replacement peripheral likewise acquires the identities of the failed component. This significantly improves system recovery time after a failure. Service profiles draw on several identity pools:

UUID suffix pools

MAC pools

IP pools

Server pools
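The recovery workflow described above can be sketched conceptually. This is an illustrative model only, not a Cisco API; all names and identity values are made up:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceProfile:
    # Logical identities drawn from pools (all values illustrative).
    name: str
    uuid: str
    macs: List[str]
    wwnns: List[str]
    server: Optional[str] = None  # currently associated physical server

    def associate(self, server: str) -> None:
        # Associating the profile applies every identity to that server.
        self.server = server

# The profile, not the hardware, owns the identities.
profile = ServiceProfile("web-01", "20CF6867-0C22-49EF-0000-000000000001",
                         ["00:25:B5:00:00:01"], ["20:00:00:25:B5:00:00:01"])
profile.associate("chassis-1/blade-3")

# Blade fails: re-associate the same profile with a replacement blade.
profile.associate("chassis-1/blade-4")
# The OS sees the same UUID/MAC/WWNN, so the hardware swap is invisible to it.
```

The key design point is that identities live in the profile object and move with it, which is exactly why a replacement server inherits them on association.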

Universally Unique Identifier Suffix Pools

A universally unique identifier suffix pool is a collection of System Management BIOS (SMBIOS) UUIDs that are available to be assigned to servers. The first set of digits, the UUID prefix, is fixed. The remaining digits, the UUID suffix, are variable. A UUID suffix pool ensures that these variable values are unique for each server associated with a service profile that uses the pool, avoiding conflicts.

If you use UUID suffix pools in service profiles, you do not have to manually configure the UUID of the server associated with the service profile.
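The prefix/suffix split can be illustrated with a short sketch. The prefix value and block range below are made-up examples, not values from any real pool:

```python
# Illustrative only: build full SMBIOS UUIDs from a fixed prefix plus a
# block of pool-assigned suffixes, as a UUID suffix pool does.
PREFIX = "20CF6867-0C22-49EF"          # fixed portion (assumed value)

def suffix_block(first: int, count: int):
    # Yield 16-hex-digit suffixes in UCS style: XXXX-XXXXXXXXXXXX
    for n in range(first, first + count):
        raw = f"{n:016X}"
        yield f"{raw[:4]}-{raw[4:]}"

uuids = [f"{PREFIX}-{s}" for s in suffix_block(1, 8)]
# Every server drawing from the block gets a distinct UUID.
assert len(set(uuids)) == len(uuids)
```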

An example of creating UUID pools is as follows:

Step 1. In the Navigation pane, click Servers.

Step 2. Expand Servers > Pools.

Step 3. Expand the node for the organization where you want to create the pool. If the system does not include multitenancy, expand the root node.

Step 4. Right-click UUID Suffix Pools and select Create UUID Suffix Pool.

Step 5. In the Define Name and Description page of the Create UUID Suffix Pool wizard, complete the following fields (see Figure 12-46):

Figure 12-46 Creating UUID Suffix Pool

Step 6. Click Next.

Step 7. In the Add UUID Blocks page of the Create UUID Suffix Pool wizard, click Add.

Step 8. In the Create a Block of UUID Suffixes dialog box, complete the following fields:

Step 9. Click OK.

Step 10. Click Finish to complete the wizard.

You need to assign the UUID suffix pool to a service profile and/or template.
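The same pool can also be created from the Cisco UCS Manager CLI. The following is a hedged sketch of that session; the pool name and suffix block are examples, and prompts are abbreviated:

```
UCS-A# scope org /
UCS-A /org # create uuid-suffix-pool WebPool
UCS-A /org/uuid-suffix-pool* # create block 0000-000000000001 0000-000000000010
UCS-A /org/uuid-suffix-pool/block* # commit-buffer
```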


Cisco UCS-X System

The Cisco UCS X-Series Modular System, shown in Figure 12-22, is the latest generation of the Cisco UCS. It is a modular system managed from the Cisco Intersight cloud. Here are the major new features:

The system operates in Intersight Managed Mode (IMM) because it is managed from the Cisco Intersight cloud.

The new Cisco UCS X9508 chassis has a midplane-free design. The I/O connectivity for the X9508 chassis is accomplished via frontloading, with vertically oriented compute nodes intersecting with horizontally oriented I/O connectivity modules in the rear of the chassis.

Cisco UCS 9108 Intelligent Fabric modules provide connectivity to the upstream Cisco UCS 6400/6500 Fabric Interconnects.

Figure 12-22 Cisco UCS X-System with Cisco Intersight

The new Cisco UCS X9508 chassis provides an adaptable successor to the first generation of the UCS chassis. It is designed to be expandable in the future; as proof of this, the X-Fabric slots are intended for future use. It has optimized cooling flows to support reliable operation over long periods. The major features are as follows:

A seven-rack-unit (7RU) chassis has 8 front-facing flexible slots. These can house a combination of compute nodes and a pool of future I/O resources, which may include GPU accelerators, disk storage, and nonvolatile memory.

2x Cisco UCS 9108 Intelligent Fabric Modules (IFMs) at the top of the chassis that connect the chassis to upstream Cisco UCS 6400/6500 Series Fabric Interconnects. Each IFM has the following features:

• Up to 200Gbps of unified fabric connectivity per compute node.

• The Cisco UCS 9108 25G IFM supports 8x 25Gbps SFP28 uplink ports, while the 100G option supports 8x 100-Gbps QSFP uplink ports. The unified fabric carries management traffic to the Cisco Intersight cloud-operations platform, Fibre Channel over Ethernet (FCoE) traffic, and production Ethernet traffic to the fabric interconnects.

At the bottom are slots ready to house future I/O modules that can flexibly connect the compute modules with I/O devices. This connectivity is called Cisco UCS X-Fabric technology because X is a variable that can evolve with new technology developments.

6x 2800W power supply units (PSUs) provide 54V power to the chassis with N, N+1, and N+N redundancy. A higher voltage allows efficient power delivery with less copper and reduced power loss.
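As a back-of-the-envelope sketch of how those redundancy modes affect usable capacity (a simplification for illustration; real sizing depends on actual load and Cisco's power policies):

```python
def usable_watts(psus: int, watts_per_psu: int, mode: str) -> int:
    """Simplified usable-capacity estimate under a PSU redundancy mode."""
    if mode == "N":          # no redundancy: every PSU carries load
        active = psus
    elif mode == "N+1":      # one PSU held in reserve
        active = psus - 1
    elif mode == "N+N":      # half the PSUs mirror the other half
        active = psus // 2
    else:
        raise ValueError(mode)
    return active * watts_per_psu

# Six 2800 W PSUs, as in the UCS X9508 chassis:
print(usable_watts(6, 2800, "N+1"))   # 14000
print(usable_watts(6, 2800, "N+N"))   # 8400
```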

Efficient, 4x 100mm, dual counter-rotating fans deliver industry-leading airflow and power efficiency. Optimized thermal algorithms enable different cooling modes to best support the network environment. Cooling is modular so that future enhancements can potentially handle open- or closed-loop liquid cooling to support even higher-power processors.

The X-Fabric Technology supports 32 lanes of PCIe Gen 4 connectivity to each compute node. Using the Cisco UCS X9416 X-Fabric Modules, each blade can access PCIe devices including up to four GPUs in a Cisco UCS X440p PCIe Node. Combined with two onboard cards, the compute nodes can accelerate workloads with up to six GPUs per node.

The available compute nodes for the Cisco UCS X-System are

Cisco UCS X210c M6 Compute Node:

• CPU: Up to 2x 3rd Gen Intel® Xeon® Scalable Processors with up to 40 cores per processor and 1.5 MB Level 3 cache per core

• Memory: Up to 32x 256 GB DDR4-3200 DIMMs for up to 8 TB of main memory. Configuring up to 16x 512-GB Intel Optane persistent memory DIMMs can yield up to 12 TB of memory.

Cisco UCS X210c M7 Compute Node:

• CPU: Up to 2x 4th Gen Intel Xeon Scalable Processors with up to 60 cores per processor, up to 1.875 MB Level 3 cache per core, and up to 112.5 MB per CPU

• Memory: Up to 32x 256 GB DDR5-4800 DIMMs for up to 8 TB of main memory

Cisco UCS X410c M7 Compute Node:

• CPU: 4x 4th Gen Intel Xeon Scalable Processors with up to 60 cores per processor, up to 1.875 MB Level 3 cache per core, and up to 112.5 MB per CPU

• Memory: Up to 64x 256 GB DDR5-4800 DIMMs for up to 16 TB of main memory
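The memory maximums above follow directly from DIMM count × DIMM capacity; a quick check:

```python
def max_memory_tb(dimm_slots: int, dimm_gb: int) -> float:
    # 1 TB = 1024 GB here, matching the DIMM-multiple figures above.
    return dimm_slots * dimm_gb / 1024

print(max_memory_tb(32, 256))  # 8.0  -> two-socket compute nodes
print(max_memory_tb(64, 256))  # 16.0 -> four-socket compute node
```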


Fabric Interconnect Connectivity and Configurations – Cisco Unified Computing Systems Overview

A fully redundant Cisco Unified Computing System consists of two independent fabric planes: Fabric A and Fabric B. Each plane consists of a central fabric interconnect connected to an I/O module (Fabric Extender) in each blade chassis. The two fabric interconnects are completely independent from the perspective of the data plane; the Cisco UCS can function with a single fabric interconnect if the other fabric is offline or not provisioned (see Figure 12-24).

Figure 12-24 UCS Fabric Interconnect (FI) Status

The following steps show how to determine the primary fabric interconnect:

Step 1. In the Navigation pane, click Equipment.

Step 2. Expand Equipment > Fabric Interconnects.

Step 3. Click the fabric interconnect for which you want to identify the role.

Step 4. In the Work pane, click the General tab.

Step 5. In the General tab, click the down arrows on the High Availability Details bar to expand that area.

Step 6. View the Leadership field to determine whether the fabric interconnect is primary or subordinate.
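The cluster roles can also be checked from the fabric interconnect CLI with the show cluster state command. The output shape below is approximate, and the cluster ID is elided:

```
UCS-A# show cluster state
Cluster Id: <cluster-id>
A: UP, PRIMARY
B: UP, SUBORDINATE
HA READY
```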

Note

If the admin password is lost, you can determine the primary and subordinate roles of the fabric interconnects in a cluster by opening the Cisco UCS Manager GUI from the IP addresses of both fabric interconnects. The attempt on the subordinate fabric interconnect fails with the following message: “UCSM GUI is not available on secondary node.”

The fabric interconnect is the core component of the Cisco UCS. Cisco UCS Fabric Interconnects provide uplink access to LAN, SAN, and out-of-band management segments, as shown in Figure 12-25. Cisco UCS infrastructure management is handled through the embedded management software, the Cisco UCS Manager, for both hardware and software management. The Cisco UCS Fabric Interconnects are top-of-rack devices and provide unified access to the Cisco UCS domain.

Figure 12-25 Cisco UCS Components Logical Connectivity

All network endpoints, such as host bus adapters (HBAs) and management entities such as Cisco Integrated Management Controllers (CIMCs), are dual-connected to both fabric planes and thus can work in an active-active configuration.

Virtual port channels (vPCs) are not supported on the fabric interconnects, although the upstream LAN switches to which they connect can be vPC or Virtual Switching System (VSS) peers.

Cisco UCS Fabric Interconnects provide network connectivity and management for the connected servers. They run the Cisco UCS Manager control software and support expansion modules for additional connectivity.


Cisco UCS Virtualization Infrastructure

The Cisco UCS is a single integrated system with switches, cables, adapters, and servers all tied together and managed by unified management software. Thus, you are able to virtualize every component of the system at every level: switch ports, cables, adapters, and servers can all be virtualized.

Because of the virtualization capabilities at every component of the system, you have the unique ability to provide rapid provisioning of any service on any server on any blade through a system that is wired once. Figure 12-20 illustrates these virtualization capabilities.

The Cisco UCS Virtual Interface Card 1400/14000 Series (Figure 12-20) extends the network fabric directly to both servers and virtual machines so that a single connectivity mechanism can be used to connect both physical and virtual servers with the same level of visibility and control. Cisco VICs provide complete programmability of the Cisco UCS I/O infrastructure, with the number and type of I/O interfaces configurable on demand with a zero-touch model.

Cisco VICs support Cisco Single Connect technology, which provides an easy, intelligent, and efficient way to connect and manage computing in the data center. Cisco Single Connect unifies LAN, SAN, and systems management into one simplified link for rack servers, blade servers, and virtual machines. This technology reduces the number of network adapters, cables, and switches needed and radically simplifies the network, reducing complexity. Cisco VICs can support 256 PCI Express (PCIe) virtual devices, either virtual network interface cards (vNICs) or virtual host bus adapters (vHBAs), with a high rate of I/O operations per second (IOPS), support for lossless Ethernet, and 10/25/40/100-Gbps connection to servers. The PCIe Generation 3 x16 interface helps ensure optimal bandwidth to the host for network-intensive applications, with a redundant path to the fabric interconnect. Cisco VICs support NIC teaming with fabric failover for increased reliability and availability. In addition, they provide a policy-based, stateless, agile server infrastructure for your data center.

Figure 12-20 UCS Virtualization Infrastructure

The VIC 1400/14000 Series is designed exclusively for the M5 generation of UCS B-Series blade servers, C-Series rack servers, and S-Series storage servers. The adapters are capable of supporting 10/25/40/100-Gigabit Ethernet and Fibre Channel over Ethernet. They incorporate Cisco’s next-generation converged network adapter (CNA) technology and offer a comprehensive feature set. In addition, the VICs support Cisco’s Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS Fabric Interconnect ports to virtual machines, simplifying server virtualization deployment.

The Cisco UCS VIC 1400/14000 Series provides the following features and benefits (see Figure 12-21):

Stateless and agile platform: The personality of the card is determined dynamically at boot time using the service profile associated with the server. The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, bandwidth, and quality of service (QoS) policies of the PCIe interfaces are all determined using the service profile. The capability to define, create, and use interfaces on demand provides a stateless and agile server infrastructure.

Network interface virtualization: Each PCIe interface created on the VIC is associated with an interface on the Cisco UCS Fabric Interconnect, providing complete network separation for each virtual cable between a PCIe device on the VIC and the interface on the fabric interconnect.
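The boot-time "personality" idea from the list above can be modeled conceptually. The names below are illustrative, not the Cisco API; the point is that the set of PCIe devices is derived from the profile, not fixed in hardware:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class VirtualDevice:
    kind: str        # "vNIC" or "vHBA"
    identity: str    # MAC address or WWN, drawn from a pool
    fabric: str      # "A" or "B" side of the fabric interconnect

def instantiate_pcie_devices(profile: List[Tuple[str, str, str]]) -> List[VirtualDevice]:
    """At boot, the VIC presents exactly the PCIe devices the
    service profile defines: their number, type, and identity."""
    return [VirtualDevice(kind, ident, fabric)
            for kind, ident, fabric in profile]

devices = instantiate_pcie_devices([
    ("vNIC", "00:25:B5:00:00:0A", "A"),
    ("vNIC", "00:25:B5:00:00:0B", "B"),
    ("vHBA", "20:00:00:25:B5:00:00:0A", "A"),
])
print(len(devices))  # 3
```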

Figure 12-21 Cisco UCS 1400 Virtual Interface Cards (VICs)

UCS M5 B-Series VIC: