300-170, Cisco Exams

Shut down or not shut down – Cisco Unified Computing Systems Overview

When you delete a specified VLAN, the ports associated with that VLAN are shut down and no traffic flows. However, the system retains all of the VLAN-to-port mappings for that VLAN. When you re-enable or re-create the specified VLAN, the system automatically reinstates all of the original ports to that VLAN.

If a VLAN group is used on a vNIC and also on a port channel assigned to an uplink, you cannot delete and add VLANs in the same transaction. The act of deleting and adding VLANs in the same transaction causes ENM pinning failure on the vNIC. vNIC configurations are done first, so the VLAN is deleted from the vNIC and a new VLAN is added, but this VLAN is not yet configured on the uplink. Hence, the transaction causes a pinning failure. You must add and delete a VLAN from a VLAN group in separate transactions.

Access ports send only untagged frames and belong to, and carry the traffic of, a single VLAN. Traffic is received and sent in native format with no VLAN tagging. Any frame arriving on an access port is assumed to belong to the VLAN assigned to that port.

You can configure a port in access mode and specify the VLAN that carries the traffic for that interface. If you do not configure a VLAN for an access port, the interface carries the traffic for the default VLAN, which is VLAN 1.

You can change the VLAN membership of an access port by reconfiguring its access VLAN. You must create a VLAN before you can assign it as the access VLAN of an access port. If you change the access VLAN on an access port to a VLAN that has not yet been created, Cisco UCS Manager shuts down that access port.

If an access port receives a packet whose 802.1Q header carries a tag other than the access VLAN value, the port drops the packet without learning its source MAC address. If you assign an access VLAN that is also the primary VLAN of a private VLAN, all access ports with that access VLAN receive all the broadcast traffic for the primary VLAN in private VLAN mode.
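The access-port ingress rules above can be sketched as a small simulation. This is an illustrative Python model, not Cisco software; `Frame` and `access_port_ingress` are hypothetical names introduced here:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    src_mac: str
    vlan_tag: Optional[int] = None  # 802.1Q VLAN ID, or None if untagged

def access_port_ingress(frame: Frame, access_vlan: int = 1) -> Optional[int]:
    """Classify an ingress frame on an access port.

    Untagged frames are assumed to belong to the port's access VLAN.
    Frames tagged with any other VLAN are dropped, and the source MAC
    address is not learned (modeled here by returning None).
    """
    if frame.vlan_tag is None or frame.vlan_tag == access_vlan:
        return access_vlan      # accepted into the access VLAN
    return None                 # dropped: tag differs from the access VLAN

# An untagged frame is assumed to belong to the port's VLAN:
print(access_port_ingress(Frame("aa:bb:cc:00:00:01"), access_vlan=10))            # 10
# A frame tagged with a different VLAN is dropped:
print(access_port_ingress(Frame("aa:bb:cc:00:00:02", vlan_tag=20), access_vlan=10))  # None
```

Note that a frame tagged with the access VLAN itself is still accepted; only mismatched tags cause the drop described above.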

Trunk ports carry traffic for multiple VLANs between switches over a single trunk link. A trunk port can carry untagged packets simultaneously with 802.1Q-tagged packets. When you assign a default port VLAN ID to a trunk port, all untagged traffic travels on that VLAN and is assumed to belong to it. This VLAN is referred to as the native VLAN ID for the trunk port: the VLAN that carries untagged traffic on trunk ports.

The trunk port sends an egressing packet untagged when its VLAN equals the default port VLAN ID; all other egressing packets are tagged by the trunk port. If you do not configure a native VLAN ID, the trunk port uses the default VLAN.
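The egress rule above can be sketched in a few lines of Python. This is a simplified illustration (the function name is hypothetical); it encodes the tag as TPID 0x8100 followed by the tag control field, which for priority 0 is simply the 12-bit VLAN ID:

```python
def trunk_port_egress(vlan: int, payload: bytes, native_vlan: int = 1) -> bytes:
    """Build the on-wire frame body emitted by a trunk port.

    Traffic in the native (default port) VLAN leaves untagged;
    every other VLAN gets an 802.1Q tag prepended.
    """
    if vlan == native_vlan:
        return payload                            # native VLAN: sent untagged
    # TPID 0x8100 + TCI; with priority 0 the TCI equals the VLAN ID.
    tag = (0x8100).to_bytes(2, "big") + vlan.to_bytes(2, "big")
    return tag + payload                          # all other VLANs: tagged

assert trunk_port_egress(1, b"data", native_vlan=1) == b"data"
assert trunk_port_egress(10, b"data", native_vlan=1).startswith(b"\x81\x00")
```

This also makes the note below concrete: a receiver classifies untagged frames by the native VLAN it has configured, so changing the native VLAN changes how existing untagged traffic is interpreted.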

Note

Changing the native VLAN on a trunk port or an access VLAN of an access port flaps the switch interface.

300-180, 300-208, Cisco Exams

Cisco UCS-X System

The Cisco UCS X-Series Modular System, shown in Figure 12-22, is the latest generation of the Cisco UCS. It is a modular system managed from the Cisco Intersight cloud. Here are the major new features:

The system operates in Intersight Managed Mode (IMM), as it is managed from Cisco Intersight.

The new Cisco UCS X9508 chassis has a midplane-free design. The I/O connectivity for the X9508 chassis is accomplished via frontloading, with vertically oriented compute nodes intersecting with horizontally oriented I/O connectivity modules in the rear of the chassis.

Cisco UCS 9108 Intelligent Fabric modules provide connectivity to the upstream Cisco UCS 6400/6500 Fabric Interconnects.

Figure 12-22 Cisco UCS X-System with Cisco Intersight

The new Cisco UCS X9508 chassis provides an adaptable successor to the first generation of UCS chassis. It is designed to be expandable in the future; as proof of this, the X-Fabric slots are reserved for future use. It has optimized cooling flows to support reliable operation over longer periods. The major features are as follows:

A seven-rack-unit (7RU) chassis has 8 front-facing flexible slots. These can house a combination of compute nodes and a pool of future I/O resources, which may include GPU accelerators, disk storage, and nonvolatile memory.

2x Cisco UCS 9108 Intelligent Fabric Modules (IFMs) at the top of the chassis that connect the chassis to upstream Cisco UCS 6400/6500 Series Fabric Interconnects. Each IFM has the following features:

• Up to 200 Gbps of unified fabric connectivity per compute node.

• The Cisco UCS 9108 25G IFM supports 8x 25-Gbps SFP28 uplink ports, while the 100G option supports 8x 100-Gbps QSFP uplink ports. The unified fabric carries management traffic to the Cisco Intersight cloud-operations platform, Fibre Channel over Ethernet (FCoE) traffic, and production Ethernet traffic to the fabric interconnects.

At the bottom are slots ready to house future I/O modules that can flexibly connect the compute modules with I/O devices. This connectivity is called Cisco UCS X-Fabric technology because X is a variable that can evolve with new technology developments.

6x 2800W power supply units (PSUs) provide 54V power to the chassis with N, N+1, and N+N redundancy. A higher voltage allows efficient power delivery with less copper and reduced power loss.

Efficient, 4x 100mm, dual counter-rotating fans deliver industry-leading airflow and power efficiency. Optimized thermal algorithms enable different cooling modes to best support the network environment. Cooling is modular so that future enhancements can potentially handle open- or closed-loop liquid cooling to support even higher-power processors.

The X-Fabric Technology supports 32 lanes of PCIe Gen 4 connectivity to each compute node. Using the Cisco UCS X9416 X-Fabric Modules, each blade can access PCIe devices including up to four GPUs in a Cisco UCS X440p PCIe Node. Combined with two onboard cards, the compute nodes can accelerate workloads with up to six GPUs per node.

The available compute nodes for the Cisco UCS X-System are

Cisco UCS X210c M6 Compute Node:

• CPU: Up to 2x 3rd Gen Intel® Xeon® Scalable Processors with up to 40 cores per processor and 1.5 MB Level 3 cache per core

• Memory: Up to 32x 256 GB DDR4-3200 DIMMs for up to 8 TB of main memory. Configuring up to 16x 512-GB Intel Optane persistent memory DIMMs can yield up to 12 TB of memory.

Cisco UCS X210c M7 Compute Node:

• CPU: Up to 2x 4th Gen Intel Xeon Scalable Processors with up to 60 cores per processor and up to 2.625 MB Level 3 cache per core and up to 112.5 MB per CPU

• Memory: Up to 32x 256 GB DDR5-4800 DIMMs for up to 8 TB of main memory

Cisco UCS X410c M7 Compute Node:

• CPU: 4x 4th Gen Intel Xeon Scalable Processors with up to 60 cores per processor and up to 2.625 MB Level 3 cache per core and up to 112.5 MB per CPU

• Memory: Up to 64x 256 GB DDR5-4800 DIMMs for up to 16 TB of main memory
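The maximum-memory figures quoted for these compute nodes are simply slot count times DIMM capacity. A quick arithmetic check (the 16-DRAM/16-Optane slot split behind the 12 TB figure is an assumption about how the 32 slots are populated):

```python
# Verify the maximum-memory figures from the compute-node specs above.
GB = 1
TB = 1024 * GB

# X210c M6: 32x 256 GB DDR4-3200 DIMMs
x210c_m6_dram = 32 * 256 * GB
assert x210c_m6_dram == 8 * TB       # 8 TB of main memory

# Assumed split: 16x 256 GB DRAM DIMMs + 16x 512 GB Optane PMem DIMMs
x210c_m6_mixed = 16 * 256 * GB + 16 * 512 * GB
assert x210c_m6_mixed == 12 * TB     # 12 TB total memory

# X410c M7: 64x 256 GB DDR5-4800 DIMMs (four-socket node)
x410c_m7 = 64 * 256 * GB
assert x410c_m7 == 16 * TB           # 16 TB of main memory
```

The same multiplication reproduces the X210c M7 figure as well (32 x 256 GB = 8 TB).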