
Cisco UCS-X System

The Cisco UCS X-Series Modular System, shown in Figure 12-22, is the latest generation of the Cisco UCS. It is a modular system managed from the Cisco Intersight cloud. Here are the major new features:

The system operates in Intersight Managed Mode (IMM) because it is managed from the Cisco Intersight cloud.

The new Cisco UCS X9508 chassis has a midplane-free design. The I/O connectivity for the X9508 chassis is accomplished via frontloading, with vertically oriented compute nodes intersecting with horizontally oriented I/O connectivity modules in the rear of the chassis.

Cisco UCS 9108 Intelligent Fabric modules provide connectivity to the upstream Cisco UCS 6400/6500 Fabric Interconnects.

Figure 12-22 Cisco UCS X-System with Cisco Intersight

The new Cisco UCS X9508 chassis provides an adaptable successor to the first generation of UCS chassis. It is designed to be expandable in the future; as proof of this, the X-Fabric slots are intended for future use. It also has optimized cooling flows to support reliable operation over longer periods. The major features are as follows:

A seven-rack-unit (7RU) chassis has 8 front-facing flexible slots. These can house a combination of compute nodes and a pool of future I/O resources, which may include GPU accelerators, disk storage, and nonvolatile memory.

2x Cisco UCS 9108 Intelligent Fabric Modules (IFMs) at the top of the chassis that connect the chassis to upstream Cisco UCS 6400/6500 Series Fabric Interconnects. Each IFM has the following features:

• Up to 200 Gbps of unified fabric connectivity per compute node.

• The Cisco UCS 9108 25G IFM supports 8x 25-Gbps SFP28 uplink ports, while the 100G option supports 8x 100-Gbps QSFP uplink ports. The unified fabric carries management traffic to the Cisco Intersight cloud-operations platform, Fibre Channel over Ethernet (FCoE) traffic, and production Ethernet traffic to the fabric interconnects.

At the bottom are slots ready to house future I/O modules that can flexibly connect the compute modules with I/O devices. This connectivity is called "Cisco UCS X-Fabric technology" because X is a variable that can evolve with new technology developments.

6x 2800W power supply units (PSUs) provide 54V power to the chassis with N, N+1, and N+N redundancy. A higher voltage allows efficient power delivery with less copper and reduced power loss.

Efficient, 4x 100mm, dual counter-rotating fans deliver industry-leading airflow and power efficiency. Optimized thermal algorithms enable different cooling modes to best support the network environment. Cooling is modular so that future enhancements can potentially handle open- or closed-loop liquid cooling to support even higher-power processors.

The X-Fabric Technology supports 32 lanes of PCIe Gen 4 connectivity to each compute node. Using the Cisco UCS X9416 X-Fabric Modules, each blade can access PCIe devices including up to four GPUs in a Cisco UCS X440p PCIe Node. Combined with two onboard cards, the compute nodes can accelerate workloads with up to six GPUs per node.

The available compute nodes for the Cisco UCS X-System are

Cisco UCS X210c M6 Compute Node:

• CPU: Up to 2x 3rd Gen Intel® Xeon® Scalable Processors with up to 40 cores per processor and 1.5 MB Level 3 cache per core

• Memory: Up to 32x 256-GB DDR4-3200 DIMMs for up to 8 TB of main memory. Configuring 16x 512-GB Intel Optane persistent memory DIMMs (8 TB) alongside 16x 256-GB DDR4 DIMMs (4 TB) can yield up to 12 TB of memory.

Cisco UCS X210c M7 Compute Node:

• CPU: Up to 2x 4th Gen Intel Xeon Scalable Processors with up to 60 cores per processor, up to 2.625 MB of Level 3 cache per core, and up to 112.5 MB per CPU

• Memory: Up to 32x 256 GB DDR5-4800 DIMMs for up to 8 TB of main memory

Cisco UCS X410c M7 Compute Node:

• CPU: 4x 4th Gen Intel Xeon Scalable Processors with up to 60 cores per processor, up to 2.625 MB of Level 3 cache per core, and up to 112.5 MB per CPU

• Memory: Up to 64x 256 GB DDR5-4800 DIMMs for up to 16 TB of main memory


Cisco UCS Network Management

The Cisco UCS Fabric Interconnect behaves as a switching device between the servers and the network, and the Cisco UCS Manager is embedded in the fabric interconnect, providing server hardware state abstraction. This section covers switching and server network profile configurations.

UCS Virtual LAN

A virtual LAN (VLAN) is a switched network that is logically segmented by function, project team, or application, without regard to the physical locations of the users. VLANs have the same attributes as physical LANs, but you can group end stations even if they are not physically located on the same LAN segment.

Any switch port can belong to a VLAN. Unicast, broadcast, and multicast packets are forwarded and flooded only to end stations in the VLAN. Each VLAN is considered a logical network, and packets destined for stations that do not belong to the VLAN must be forwarded through a router or bridge.

VLANs are typically associated with IP subnetworks. For example, all of the end stations in a particular IP subnet belong to the same VLAN. To communicate between VLANs, you must route the traffic. By default, a newly created VLAN is operational and in the active state, which means it passes traffic; you can also place a VLAN in the suspended state, in which it does not pass packets.

You can use the Cisco UCS Manager to manage VLANs by doing the following:

Configure named VLANs.

Assign VLANs to an access or trunk port.

Create, delete, and modify VLANs (a scripted example follows this list).
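For environments that automate UCS Manager, the same named-VLAN task can be scripted with the Cisco UCS Python SDK (ucsmsdk). The following is a minimal sketch rather than a definitive procedure; the management IP address, credentials, VLAN name, and VLAN ID are hypothetical placeholders.

from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricVlan import FabricVlan

# Log in to the UCS Manager virtual IP (hypothetical address/credentials).
handle = UcsHandle("198.18.0.10", "admin", "password")
handle.login()

# Named VLANs live under the LAN cloud ("fabric/lan"); "name" is the UCS
# Manager object name and "id" is the 802.1Q VLAN number (1-4094).
vlan = FabricVlan(parent_mo_or_dn="fabric/lan", name="prod-web", id="100")
handle.add_mo(vlan)
handle.commit()

handle.logout()

Deleting a VLAN follows the same pattern with handle.remove_mo(), and modifying one uses handle.set_mo() after changing the attributes.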

VLANs are numbered from 1 to 4094. All configured ports belong to the default VLAN when you first bring up a switch. The default VLAN (VLAN 1) uses only default values. You cannot create, delete, or suspend activity in the default VLAN.

The native VLAN and the default VLAN are not the same. Native refers to VLAN traffic without an 802.1Q header, and the native VLAN assignment is optional. The native VLAN is the only VLAN in a trunk that is not tagged, and its frames are transmitted unchanged.

You can also tag everything and not use a native VLAN throughout your network; the VLAN and its devices remain reachable because switches use VLAN 1 as the native VLAN by default.

The UCS Manager – LAN Uplink Manager configuration page enables you to configure VLANs and to change the native VLAN setting. Changing the native VLAN setting requires a port flap for the change to take effect; otherwise, the port flap is continuous. When you change the native VLAN, there is a loss of connectivity for approximately 20–40 seconds.

Native VLAN guidelines are as follows:

You can configure native VLANs only on trunk ports.

You can change the native VLAN on a UCS vNIC; however, the port flaps and can lead to traffic interruptions.

Cisco recommends using the native VLAN 1 setting to prevent traffic interruptions if using the Cisco Nexus 1000v switches. The native VLAN must be the same for the Nexus 1000v port profiles and your UCS vNIC definition.

If the native VLAN 1 setting is configured, and traffic routes to an incorrect interface, there is an outage, or the switch interface flaps continuously, your disjoint Layer 2 network configuration might have incorrect settings.

Using the native VLAN 1 for management access to all of your devices can potentially cause problems if someone connects another switch on the same VLAN as your management devices.

You configure a VLAN by assigning a number to it. You can delete VLANs or move them from the active operational state to the suspended operational state. If you attempt to create a VLAN with an existing VLAN ID, the switch goes into the VLAN sub-mode but does not create the same VLAN again. Newly created VLANs remain unused until you assign ports to the specific VLAN. All of the ports are assigned to VLAN 1 by default. Depending on the range of the VLAN, you can configure the following parameters for VLANs (except for the default VLAN):


UCS Device Discovery

The chassis connectivity policy determines whether a specific chassis is included in a fabric port channel after chassis discovery. This policy is helpful for users who want to configure one or more chassis differently from what is specified in the global chassis discovery policy. The chassis connectivity policy also allows for different connectivity modes per fabric interconnect, further expanding the level of control offered with regards to chassis connectivity.

By default, the chassis connectivity policy is set to global. This means that connectivity control is configured when the chassis is newly discovered, using the settings configured in the chassis discovery policy. Once the chassis is discovered, the chassis connectivity policy controls whether the connectivity control is set to none or port channel.

Chassis/FEX Discovery

The chassis discovery policy determines how the system reacts when you add a new chassis. The Cisco UCS Manager uses the settings in the chassis discovery policy to determine the minimum threshold for the number of links between the chassis and the fabric interconnect and whether to group links from the IOM to the fabric interconnect in a fabric port channel. In a Cisco UCS Mini setup, chassis discovery policy is supported only on the extended chassis.

The Cisco UCS Manager cannot discover any chassis that is wired for fewer links than are configured in the chassis/FEX discovery policy. For example, if the chassis/FEX discovery policy is configured for four links, the Cisco UCS Manager cannot discover any chassis that is wired for one link or two links. Reacknowledgement of the chassis resolves this issue.
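The global chassis discovery policy can also be set programmatically. The following ucsmsdk sketch assumes the chassis discovery policy object lives at the DN org-root/chassis-discovery and exposes action and link_aggregation_pref properties; the values and credentials are illustrative, so verify the names against your SDK version.

from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("198.18.0.10", "admin", "password")  # hypothetical
handle.login()

# Require four links per IOM for discovery and group them into a fabric
# port channel (DN and property names are assumptions to verify).
policy = handle.query_dn("org-root/chassis-discovery")
policy.action = "4-link"
policy.link_aggregation_pref = "port-channel"
handle.set_mo(policy)
handle.commit()

handle.logout()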

Rack Server Discovery Policy

The rack server discovery policy determines how the system reacts when you add a new rack-mount server. The Cisco UCS Manager uses the settings in the rack server discovery policy to determine whether any data on the hard disks is scrubbed and whether server discovery occurs immediately or needs to wait for explicit user acknowledgment.

The Cisco UCS Manager cannot discover any rack-mount server that has not been correctly cabled and connected to the fabric interconnects. The steps to configure rack server discovery are as follows:

Step 1. In the Navigation pane, click Equipment.

Step 2. Click the Equipment node. In the Work pane, click the Policies tab.

Step 3. Click the Global Policies subtab.

Step 4. In the Rack Server Discovery Policy area, specify the action that you want to occur when a new rack server is added and specify the scrub policy. Then click Save Changes.
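If you prefer to script this instead of using the GUI, a rough ucsmsdk sketch follows. The computeServerDiscPolicy class ID and its action and scrub_policy_name properties are assumptions based on UCS XML API naming; confirm them in your SDK version before use.

from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("198.18.0.10", "admin", "password")  # hypothetical
handle.login()

# Discover new rack servers immediately and apply a scrub policy (class ID
# and property names are assumptions; the policy name is hypothetical).
policy = handle.query_classid("computeServerDiscPolicy")[0]
policy.action = "immediate"           # or "user-acknowledged"
policy.scrub_policy_name = "default"
handle.set_mo(policy)
handle.commit()

handle.logout()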

Initial Server Setup for Standalone UCS C-Series

Use the following procedure to perform initial setup on a UCS C-Series server:

Step 1. Power up the server. Wait for approximately two minutes to let the server boot in standby power during the first bootup. You can verify power status by looking at the Power Status LED:

Off: There is no AC power present in the server.

Amber: The server is in standby power mode. Power is supplied only to the CIMC and some motherboard functions.

Green: The server is in main power mode. Power is supplied to all server components.

Note

Verify server power requirements because some servers (UCS C-240, for example) require 220V instead of 110V.

Note

During bootup, the server beeps once for each USB device that is attached to the server. Even if no external USB devices are attached, there is a short beep for each virtual USB device, such as a virtual floppy drive, CD/DVD drive, keyboard, or mouse. A beep is also emitted if a USB device is hot-plugged or hot-unplugged during the BIOS power-on self-test (POST), or while you are accessing the BIOS Setup utility or the EFI shell.

Step 2. Connect a USB keyboard and VGA monitor by using the supplied keyboard/video/mouse (KVM) cable connected to the KVM connector on the front panel. Alternatively, you can use the VGA and USB ports on the rear panel. However, you cannot use the front-panel VGA and the rear-panel VGA at the same time. If you are connected to one VGA connector and you then connect a video device to the other connector, the first VGA connector is disabled.

Step 3. Open the Cisco IMC Configuration Utility as follows:

Press the Power button to boot the server. Watch for the prompt to press F8.

During bootup, press F8 when prompted to open the Cisco IMC Configuration Utility, as shown in Figure 12-36.

Figure 12-36 Standalone UCS CIMC Configuration Utility

Note

The first time that you enter the Cisco IMC Configuration Utility, you are prompted to change the default password. The default password is password.


Fabric Interconnect Connectivity and Configurations

A fully redundant Cisco Unified Computing System consists of two independent fabric planes: Fabric A and Fabric B. Each plane consists of a central fabric interconnect connected to an I/O module (Fabric Extender) in each blade chassis. The two fabric interconnects are completely independent from the perspective of the data plane; the Cisco UCS can function with a single fabric interconnect if the other fabric is offline or not provisioned (see Figure 12-24).

Figure 12-24 UCS Fabric Interconnect (FI) Status

The following steps show how to determine the primary fabric interconnect:

Step 1. In the Navigation pane, click Equipment.

Step 2. Expand Equipment > Fabric Interconnects.

Step 3. Click the fabric interconnect for which you want to identify the role.

Step 4. In the Work pane, click the General tab.

Step 5. In the General tab, click the down arrows on the High Availability Details bar to expand that area.

Step 6. View the Leadership field to determine whether the fabric interconnect is primary or subordinate.
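The same information is available programmatically, because each fabric interconnect has a management entity that reports its high-availability role. A minimal ucsmsdk sketch, assuming the mgmtEntity class exposes id and leadership properties:

from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("198.18.0.10", "admin", "password")  # hypothetical
handle.login()

# One management entity per fabric interconnect (A and B); "leadership"
# reads "primary" or "subordinate".
for entity in handle.query_classid("mgmtEntity"):
    print("FI %s: %s" % (entity.id, entity.leadership))

handle.logout()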

Note

If the admin password is lost, you can determine the primary and subordinate roles of the fabric interconnects in a cluster by opening the Cisco UCS Manager GUI from the IP addresses of both fabric interconnects. The attempt on the subordinate fabric interconnect fails with the following message: “UCSM GUI is not available on secondary node.”

The fabric interconnect is the core component of the Cisco UCS. Cisco UCS Fabric Interconnects provide uplink access to LAN, SAN, and out-of-band management segments, as shown in Figure 12-25. Cisco UCS infrastructure management is handled through the embedded management software, the Cisco UCS Manager, for both hardware and software management. The Cisco UCS Fabric Interconnects are top-of-rack devices and provide unified access to the Cisco UCS domain.

Figure 12-25 Cisco UCS Components Logical Connectivity

All network endpoints, such as host bus adapters (HBAs) and management entities such as Cisco Integrated Management Controllers (CIMCs), are dual-connected to both fabric planes and thus can work in an active-active configuration.

Virtual port channels (vPCs) are not supported on the fabric interconnects, although the upstream LAN switches to which they connect can be vPC or Virtual Switching System (VSS) peers.

Cisco UCS Fabric Interconnects provide network connectivity and management for the connected servers. They run the Cisco UCS Manager control software and support expansion modules for additional port capacity.


Uplink Connectivity

Fabric interconnect ports configured as uplink ports are used to connect to upstream network switches. You can connect these uplink ports to upstream switch ports as individual links or as links configured as port channels. Port channel configurations provide bandwidth aggregation as well as link redundancy.

You can achieve northbound connectivity from the fabric interconnect through a standard uplink, a port channel, or a virtual port channel configuration. The port channel name and ID configured on the fabric interconnect should match the name and ID configuration on the upstream Ethernet switch.

It is also possible to configure a port channel as a vPC, where port channel uplink ports from a fabric interconnect are connected to different upstream switches. After all uplink ports are configured, you can create a port channel for these ports.
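As a sketch of scripting an uplink port channel, the ucsmsdk managed objects FabricEthLanPc (the port channel) and FabricEthLanPcEp (its member ports) can be used. The port channel ID, member ports, and fabric side below are illustrative assumptions.

from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricEthLanPc import FabricEthLanPc
from ucsmsdk.mometa.fabric.FabricEthLanPcEp import FabricEthLanPcEp

handle = UcsHandle("198.18.0.10", "admin", "password")  # hypothetical
handle.login()

# Port channel 1 on the Fabric A LAN cloud, with uplinks Eth1/17 and
# Eth1/18 as members; adding the parent object commits the children too.
pc = FabricEthLanPc(parent_mo_or_dn="fabric/lan/A", port_id="1", name="po1")
FabricEthLanPcEp(parent_mo_or_dn=pc, slot_id="1", port_id="17")
FabricEthLanPcEp(parent_mo_or_dn=pc, slot_id="1", port_id="18")
handle.add_mo(pc)
handle.commit()

handle.logout()

As noted above, the port channel ID configured on the fabric interconnect must match the configuration on the upstream Ethernet switch.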

Downlink Connectivity

Each fabric interconnect is connected to I/O modules in the Cisco UCS chassis, which provides connectivity to each blade server. Internal connectivity from blade servers to IOMs is transparently provided by the Cisco UCS Manager using the 10BASE-KR Ethernet standard for backplane implementations, and no additional configuration is required. You must configure the connectivity between the fabric interconnect server ports and IOMs. Each IOM, when connected with the fabric interconnect server port, behaves as a line card to the fabric interconnect; hence, IOMs should never be cross-connected to the fabric interconnects. Each IOM is connected directly to a single fabric interconnect.

The Fabric Extender (also referred to as the IOM or FEX) logically extends the fabric interconnects to the blade server. The best analogy is to think of it as a remote line card embedded in the blade server chassis, allowing connectivity to the external world. IOM settings are pushed via the Cisco UCS Manager and are not managed directly. The primary functions of this module are to facilitate blade server I/O connectivity (internal and external), multiplex all I/O traffic up to the fabric interconnects, and help monitor and manage the Cisco UCS infrastructure. Fabric interconnect ports that connect to downlink IOM cards must be configured as server ports. You need to make sure there is physical connectivity between the fabric interconnect and the IOMs, and you must also configure the IOM ports and the global chassis discovery policy.
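A rough ucsmsdk sketch of configuring a fabric interconnect server port follows. The FabricDceSwSrvEp class and the DN fabric/server/sw-A are assumptions based on UCS Manager conventions, and the slot and port values are hypothetical.

from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricDceSwSrvEp import FabricDceSwSrvEp

handle = UcsHandle("198.18.0.10", "admin", "password")  # hypothetical
handle.login()

# Configure port Eth1/9 on Fabric Interconnect A as a server port so the
# attached IOM can be discovered (class name and DN are assumptions).
server_port = FabricDceSwSrvEp(parent_mo_or_dn="fabric/server/sw-A",
                               slot_id="1", port_id="9")
handle.add_mo(server_port)
handle.commit()

handle.logout()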


Ethernet Switching Mode

The Ethernet switching mode determines how the fabric interconnect behaves as a switching device between the servers and the network. The fabric interconnect operates in either of the following Ethernet switching modes:

End-host mode

Switching mode

In end-host mode, the Cisco UCS presents an end host to an external Ethernet network. The external LAN sees the Cisco UCS Fabric Interconnect as an end host with multiple adapters (see Figure 12-29).

Figure 12-29 UCS FI End-Host Mode Ethernet

End-host mode allows the fabric interconnect to act as an end host to the network, representing all servers (hosts) connected to it through vNICs. This behavior is achieved by pinning (either dynamically pinning or hard pinning) vNICs to uplink ports, which provides redundancy to the network, and makes the uplink ports appear as server ports to the rest of the fabric.

In end-host mode, the fabric interconnect does not run the Spanning Tree Protocol (STP); instead, it avoids loops by denying uplink ports from forwarding traffic to each other and by denying egress server traffic on more than one uplink port at a time. End-host mode is the default Ethernet switching mode and should be used if either of the following is used upstream (a configuration sketch follows this list):

Layer 2 switching for Layer 2 aggregation

vPC or VSS aggregation layer
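The switching mode itself is a property of the LAN cloud object and can be read, or changed, programmatically. The following minimal ucsmsdk sketch assumes the LAN cloud object at DN fabric/lan exposes a mode property; note that changing the switching mode reboots the fabric interconnect.

from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("198.18.0.10", "admin", "password")  # hypothetical
handle.login()

lan_cloud = handle.query_dn("fabric/lan")
print("Ethernet switching mode:", lan_cloud.mode)  # "end-host" or "switch"

# To change the mode (left commented out because the FI reboots on commit):
# lan_cloud.mode = "end-host"
# handle.set_mo(lan_cloud)
# handle.commit()

handle.logout()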

Note

When you enable end-host mode, if a vNIC is hard pinned to an uplink port and this uplink port goes down, the system cannot repin the vNIC, and the vNIC remains down.

Server links (vNICs on the blades) are associated with a single uplink port, which may also be a port channel. This association process is called pinning, and the selected external interface is called a pinned uplink port. The pinning process can be statically configured when the vNIC is defined or dynamically configured by the system. In end-host mode, pinning is required for traffic flow to a server.

Static pinning is performed by defining a pin group and associating the pin group with a vNIC. Static pinning should be used in scenarios in which a deterministic path is required. When the target (as shown in Figure 12-30) on Fabric Interconnect A goes down, the corresponding failover mechanism of the vNIC goes into effect, and traffic is redirected to the target port on Fabric Interconnect B.

Figure 12-30 UCS LAN Pinning Group Configuration

If the pinning is not static, the vNIC is pinned to an operational uplink port on the same fabric interconnect, and the vNIC failover mechanisms are not invoked until all uplink ports on that fabric interconnect fail. In the absence of Spanning Tree Protocol, the fabric interconnect uses various mechanisms for loop prevention while preserving an active-active topology.
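Static pinning, described above, can likewise be scripted. The sketch below assumes the FabricLanPinGroup and FabricLanPinTarget managed objects and a particular DN format for the uplink endpoint; all names and port numbers are hypothetical, so verify them in your environment.

from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricLanPinGroup import FabricLanPinGroup
from ucsmsdk.mometa.fabric.FabricLanPinTarget import FabricLanPinTarget

handle = UcsHandle("198.18.0.10", "admin", "password")  # hypothetical
handle.login()

# A LAN pin group whose target is uplink Eth1/17 on Fabric Interconnect A
# (the ep_dn format is an assumption based on UCS Manager DN conventions).
pin_group = FabricLanPinGroup(parent_mo_or_dn="fabric/lan", name="pin-web")
FabricLanPinTarget(parent_mo_or_dn=pin_group, fabric_id="A",
                   ep_dn="fabric/lan/A/phys-slot-1-port-17")
handle.add_mo(pin_group)
handle.commit()

handle.logout()

The pin group is then referenced from the vNIC or vNIC template definition to make the pinning static.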

In the Cisco UCS, two types of Ethernet traffic paths have different characteristics: unicast and multicast/broadcast.

Unicast traffic paths in the Cisco UCS are shown in Figure 12-31. Characteristics of unicast traffic in the Cisco UCS include the following:

Each server link is pinned to exactly one uplink port (or port channel).

Server-to-server Layer 2 traffic is locally switched.

Server-to-network traffic goes out on its pinned uplink port.


Fabric Failover for Ethernet: High-Availability vNIC

To understand the switching mode behavior, you need to understand the fabric-based failover feature for Ethernet in the Cisco UCS. Each adapter in the Cisco UCS is a dual-port adapter that connects to both fabrics (A and B). The two fabrics in the Cisco UCS provide failover protection in the event of planned or unplanned component downtime in one of the fabrics. Typically, host software—such as NIC teaming for Ethernet and PowerPath or multipath I/O (MPIO) for Fibre Channel—provides failover across the two fabrics (see Figure 12-28).

Figure 12-28 UCS Fabric Traffic Failover Example

A vNIC in the Cisco UCS is a host-presented PCI device that is centrally managed by the Cisco UCS Manager. The fabric-based failover feature, which you enable by selecting the high-availability vNIC option in the service profile definition, allows network interface virtualization (NIV)-capable adapters (Cisco virtual interface card, or VIC) and the fabric interconnects to provide active-standby failover for Ethernet vNICs without any NIC-teaming software on the host.
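In a scripted service profile definition, enabling fabric failover corresponds to creating the vNIC with both fabrics specified. A minimal ucsmsdk sketch follows, assuming an existing service profile named web-sp under org-root; the vNIC name and service profile are hypothetical.

from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicEther import VnicEther

handle = UcsHandle("198.18.0.10", "admin", "password")  # hypothetical
handle.login()

# switch_id "A-B" places the primary path on Fabric A with failover to
# Fabric B, with no NIC-teaming software needed on the host.
vnic = VnicEther(parent_mo_or_dn="org-root/ls-web-sp",
                 name="eth0", switch_id="A-B")
handle.add_mo(vnic)
handle.commit()

handle.logout()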

For unicast traffic failover, the fabric interconnect on the new path sends gratuitous Address Resolution Protocol (gARP) messages. This process refreshes the forwarding tables on the upstream switches.

For multicast traffic, the new active fabric interconnect sends an Internet Group Management Protocol (IGMP) Global Leave message to the upstream multicast router. The upstream multicast router responds by sending an IGMP query that is flooded to all vNICs. The host OS responds to these IGMP queries by rejoining all relevant multicast groups. This process forces the hosts to refresh the multicast state in the network in a timely manner.

Cisco UCS fabric failover is an important feature because it reduces the complexity of defining NIC teaming software for failover on the host. It does this transparently in the fabric based on the network property that is defined in the service profile.


Fabric Interconnect and Fabric Extender Connectivity

Fabric Extenders (FEXs) are extensions of the fabric interconnects (FIs) and act as remote line cards to form a distributed modular fabric system. The fabric extension is accomplished through the FEX fabric links, which are the connections between the fabric interconnect and the FEX. A minimum of one connection between the FI and the FEX is required to provide server connectivity. Depending on the FEX model, up to eight links can be connected, providing added bandwidth to the servers.

The latest generation of Cisco UCS Fabric Extenders is the Cisco UCS 2408 FEX. It is used in the Cisco UCS 5108 chassis and allows connectivity to the Cisco UCS 6454, 64108, and 6536 Fabric Interconnects. External connectivity is provided by 8x 25-Gbps FCoE-capable SFP28 ports, which allows for up to 200 Gbps of bandwidth between the Cisco UCS 2408 FEX and the Cisco UCS 6400 or 6500 Series Fabric Interconnect. Because a Cisco UCS 5108 blade chassis always contains two FEXs, one connecting to each fabric interconnect, the combined bandwidth available to the chassis is 400 Gbps.

Internal connectivity is provided by 32x 10-Gbps ports, which, through the midplane, provide 4x 10-Gbps of bandwidth per server slot per Cisco UCS 2408 FEX. With the redundant connectivity of the Cisco UCS 5108, this secures a total of 80 Gbps of redundant bandwidth for each blade server in the chassis. Internal-to-external communication is delivered by the 1.04-Tbps hardware forwarding capability of the FEX.
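The bandwidth figures above follow from simple multiplication; this short Python sketch just reproduces the arithmetic using the numbers quoted in the text.

# Worked arithmetic for the Cisco UCS 2408 FEX bandwidth figures.
uplinks_per_fex = 8                        # 25-Gbps SFP28 uplinks per FEX
per_fex_gbps = uplinks_per_fex * 25        # 200 Gbps from one FEX to its FI
per_chassis_gbps = per_fex_gbps * 2        # 400 Gbps with two FEXs per chassis

lanes_per_slot = 4                         # 10-Gbps lanes per server slot per FEX
per_blade_gbps = lanes_per_slot * 10 * 2   # 80 Gbps redundant per blade (both FEXs)

internal_per_fex_gbps = 32 * 10            # 320 Gbps of internal ports per FEX

print(per_fex_gbps, per_chassis_gbps, per_blade_gbps, internal_per_fex_gbps)
# -> 200 400 80 320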

The Cisco UCS 2304 IOM (Fabric Extender) is an I/O module with 8x 40-Gigabit backplane ports and 4x 40-Gigabit uplink ports (see Figure 12-11). It can be hot-plugged into the rear of a Cisco UCS 5108 blade server chassis. A maximum of two UCS 2304 IOMs can be installed in a chassis. The Cisco UCS 2304 IOM provides chassis management control and blade management control, including control of the chassis, fan trays, power supply units, and blades. It also multiplexes and forwards all traffic from the blade servers in the chassis to the 40-Gigabit Ethernet uplink network ports that connect to the fabric interconnect. The IOM can also connect to a peer IOM to form a cluster interconnect.

Figure 12-11 Cisco UCS 2300 IOM

Figure 12-12 shows how the FEX modules in the blade chassis connect to the FIs. The 5108 chassis accommodates the following FEXs:

Cisco UCS 2408

Cisco UCS 2304

Cisco UCS 2208XP

Cisco UCS 2204XP

Note

The Cisco UCS 2304 Fabric Extender is not compatible with the Cisco UCS 6200 Fabric Interconnect Series.


Cisco UCS Virtualization Infrastructure

The Cisco UCS is a single integrated system with switches, cables, adapters, and servers all tied together and managed by unified management software. Thus, you are able to virtualize every component of the system at every level. The switch port, cables, adapter, and servers can all be virtualized.

Because of the virtualization capabilities at every component of the system, you have the unique ability to provide rapid provisioning of any service on any server on any blade through a system that is wired once. Figure 12-20 illustrates these virtualization capabilities.

The Cisco UCS Virtual Interface Card 1400/14000 Series (Figure 12-20) extends the network fabric directly to both servers and virtual machines so that a single connectivity mechanism can be used to connect both physical and virtual servers with the same level of visibility and control. Cisco VICs provide complete programmability of the Cisco UCS I/O infrastructure, with the number and type of I/O interfaces configurable on demand with a zero-touch model.

Cisco VICs support Cisco Single Connect technology, which provides an easy, intelligent, and efficient way to connect and manage computing in the data center. Cisco Single Connect unifies LAN, SAN, and systems management into one simplified link for rack servers, blade servers, and virtual machines. This technology reduces the number of network adapters, cables, and switches needed and radically simplifies the network, reducing complexity. Cisco VICs can support 256 PCI Express (PCIe) virtual devices, either virtual network interface cards (vNICs) or virtual host bus adapters (vHBAs), with a high rate of I/O operations per second (IOPS), support for lossless Ethernet, and 10/25/40/100-Gbps connection to servers. The PCIe Generation 3 x16 interface helps ensure optimal bandwidth to the host for network-intensive applications, with a redundant path to the fabric interconnect. Cisco VICs support NIC teaming with fabric failover for increased reliability and availability. In addition, they provide a policy-based, stateless, agile server infrastructure for your data center.

Figure 12-20 UCS Virtualization Infrastructure

The VIC 1400/14000 Series is designed exclusively for the M5 generation of UCS B-Series blade servers, C-Series rack servers, and S-Series storage servers. The adapters are capable of supporting 10/25/40/100-Gigabit Ethernet and Fibre Channel over Ethernet (FCoE). The series incorporates Cisco's next-generation converged network adapter (CNA) technology and offers a comprehensive feature set. In addition, the VIC supports Cisco's Data Center Virtual Machine Fabric Extender (VM-FEX) technology. This technology extends the Cisco UCS Fabric Interconnect ports to virtual machines, simplifying server virtualization deployment.

The Cisco UCS VIC 1400/14000 Series provides the following features and benefits (see Figure 12-21):

Stateless and agile platform: The personality of the card is determined dynamically at boot time using the service profile associated with the server. The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, bandwidth, and quality of service (QoS) policies of the PCIe interfaces are all determined using the service profile. The capability to define, create, and use interfaces on demand provides a stateless and agile server infrastructure.

Network interface virtualization: Each PCIe interface created on the VIC is associated with an interface on the Cisco UCS Fabric Interconnect, providing complete network separation for each virtual cable between a PCIe device on the VIC and the interface on the fabric interconnect.

Figure 12-21 Cisco UCS 1400 Virtual Interface Cards (VICs)

UCS M5 B-Series VIC:


Cisco UCS Initial Setup and Management

The Cisco UCS Manager enables you to manage both general and complex server deployments. For example, a general deployment with a pair of fabric interconnects provides the redundant server access layer that you get with the first chassis, and it can scale up to 20 chassis and up to 160 physical servers. These can be a combination of blade and rack-mount servers to support the workload in your environment. As you add more servers, you can continue to perform server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, and auditing.

Beginning with release 4.0(2a), the Cisco UCS Manager extends support for all existing features on the following Cisco UCS hardware unless specifically noted:

Cisco UCS C480 M5 ML Server

Cisco UCS VIC 1495

Cisco UCS VIC 1497

Cisco UCS 6454 Fabric Interconnect

Cisco UCS VIC 1455

Cisco UCS VIC 1457

Cisco UCS C125 M5 Server

By default, the Cisco UCS 6454 Fabric Interconnect, the Cisco UCS 6332 FIs, the Cisco UCS Mini 6324 FIs, and the UCS 6200 Series FIs include centralized management. You can manage the Cisco UCS blade servers and rack-mount servers that are in the same domain from one console. You can also manage the Cisco UCS Mini from the Cisco UCS Manager.

To ensure optimum server performance, you can configure the amount of power that you allocate to servers. You can also set the server boot policy, the location from which the server boots, and the order in which the boot devices are invoked. You can create service profiles for the Cisco UCS B-Series blade servers and the Cisco UCS Mini to assign to servers. Service profiles enable you to assign BIOS settings, security settings, the number of vNICs and vHBAs, and anything else that you want to apply to a server. Initial configuration of fabric interconnects is performed using the console connection. It is essential to maintain symmetric Cisco UCS Manager versions between the fabric interconnects in a domain.

Follow these steps to perform the initial configuration for the Cisco UCS Manager:

Step 1. Power on the fabric interconnect. You see the power-on self-test messages as the fabric interconnect boots.

Step 2. If the system obtains a leased IPv4 or IPv6 address from a DHCP server, go to step 6; otherwise, continue to the next step.

Step 3. Connect to the console port.

Step 4. At the installation method prompt, enter GUI.

Step 5. If the system cannot access a DHCP server, you are prompted to enter the following information:

IPv4 or IPv6 address for the management port on the fabric interconnect

IPv4 subnet mask or IPv6 prefix for the management port on the fabric interconnect

IPv4 or IPv6 address for the default gateway assigned to the fabric interconnect

Note

In a cluster configuration, both fabric interconnects must be assigned the same management interface address type during setup.

Step 6. Copy the web link from the prompt into a web browser and go to the Cisco UCS Manager GUI launch page.

Step 7. On the Cisco UCS Manager GUI launch page, select Express Setup.

Step 8. On the Express Setup page, select Initial Setup and click Submit.

Step 9. In the Cluster and Fabric Setup area, do the following: