Cisco UCS B-Series blade servers are based on Intel Xeon processors (see Figure 12-4). They work with virtualized and nonvirtualized applications to increase performance, energy efficiency, flexibility, and administrator productivity.
With Cisco UCS blade servers, you can quickly deploy stateless physical and virtual workloads through the programmability that Cisco UCS Manager and Cisco SingleConnect technology provide.
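As a rough illustration of that programmability, the following Python sketch uses the Cisco UCS Python SDK (ucsmsdk) to log in to UCS Manager and list the blade servers it manages. The hostname and credentials are placeholders, and the exact attribute names may vary by SDK version.

from ucsmsdk.ucshandle import UcsHandle

# Connect to UCS Manager (hostname and credentials are placeholders)
handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# Query all blade servers known to UCS Manager and print basic inventory
blades = handle.query_classid("ComputeBlade")
for blade in blades:
    print(blade.dn, blade.model, blade.num_of_cpus, blade.total_memory)

handle.logout()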
The Cisco UCS B480 M5 is a full-width blade server that uses second-generation Intel Xeon Scalable processors or Intel Xeon Scalable processors and supports up to 12 TB of memory (or up to 18 TB with Intel Optane DC persistent memory); up to four SAS, SATA, or NVMe drives plus M.2 storage; up to four GPUs; and 160-Gigabit Ethernet connectivity. It offers exceptional levels of performance, flexibility, and I/O throughput to run the most demanding applications.
Figure 12-4 UCS B200 M5 and B480 M5 Blade Servers
The Cisco UCS B200 M5 is a half-width blade server that uses second-generation Intel Xeon Scalable processors or Intel Xeon Scalable processors and supports up to 3 TB of memory (or up to 6 TB with Intel Optane DC persistent memory); up to two SAS, SATA, or NVMe drives plus M.2 storage; up to two GPUs; and up to 80-Gigabit Ethernet connectivity. It offers exceptional levels of performance, flexibility, and I/O throughput to run applications.
A recent addition to the Cisco UCS B-Series is the sixth generation of blade servers, in the form of the B200 M6 blade server. Building on the legacy of the B200 line, it has two CPU sockets and supports third-generation Intel Xeon Scalable processors with up to 40 cores per socket, up to 12 TB of memory, four M.2 drives with RAID support, and two Cisco VIC 1400 series adapters.
Note
The central processing unit (CPU) is designed to control all parts of a computer, improve performance, and support parallel processing; current CPUs are multicore processors. A graphics processing unit (GPU) is used in graphics cards and for image processing, and it can also serve as a coprocessor that accelerates the CPU. In today’s IT world, distributed applications such as artificial intelligence (AI) and deep learning require high-speed, parallel processing. GPUs are the best solution for these distributed applications because they offer high core density (256 cores or more), whereas CPUs contain 8, 16, or at most 32 cores. A CPU can offload some of the compute-intensive and time-consuming portions of the code to the GPU.
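As a minimal sketch of that offload model, the following Python snippet (assuming the CuPy library and a CUDA-capable GPU are available) moves a compute-intensive matrix multiplication from the CPU to the GPU and copies the result back; the array sizes and library choice are illustrative only.

import numpy as np
import cupy as cp   # assumes CuPy and a CUDA-capable GPU are installed

# Build the input data on the CPU (host memory)
a_host = np.random.rand(4096, 4096).astype(np.float32)
b_host = np.random.rand(4096, 4096).astype(np.float32)

# Offload the compute-intensive step: copy the inputs to GPU memory
# and run the matrix multiplication on the GPU's many cores
a_gpu = cp.asarray(a_host)
b_gpu = cp.asarray(b_host)
c_gpu = a_gpu @ b_gpu

# Copy the result back to the CPU for further processing
c_host = cp.asnumpy(c_gpu)
print(c_host.shape)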