Hardware Behind the WowerEdge Dell EMC PowerEdge MX Innovation
At STH, we like to get into a level of depth and evaluate whether products live up to their hype. We first saw the Dell EMC PowerEdge MX a few months ago at Dell EMC World 2018. We then had a briefing, after which we dubbed the PowerEdge MX the WowerEdge. Admittedly, I edited Cliff's title on that piece: WowerEdge: The Dell EMC PowerEdge MX Launch.
At VMworld 2018, we had the opportunity to poke and prod a PowerEdge MX. I do not think Dell EMC will send one our way for our normal in-depth coverage, so we took some time on the show floor to go into depth on what makes this the WowerEdge. Here is a hint: at STH, we believe this chassis is designed to take advantage of 400GbE and next-generation interconnects like Gen-Z, even though Dell EMC has not officially announced support at this juncture.
Poking and Prodding the Dell EMC PowerEdge MX
Cliff did a great job of covering the Dell EMC PowerEdge MX design goals and philosophy in the launch piece we ran. One afternoon at VMworld 2018, when the crowds died down, I was able to get some time with the side-by-side PowerEdge MX setup. The company had two versions of the chassis: a standard chassis and one made of acrylic so you can see through it. A few minutes in, it clicked why this is a powerful platform.
First, taking a look at the front of the platform, we can see Dell EMC is showing off three main types of blades. There is a traditional dual-socket node, a storage node, and a quad-socket server node.
The system is designed to handle a lot of power. For example, the show floor version has six hot-swap 3kW power supplies. These swap from the front, but the power inputs are actually on the rear of the chassis.
Moving to the rear of the chassis, you can see the power plugs at the bottom of the unit. Above them is a fairly massive I/O array. Dell EMC has 25GbE in SFP28, QSFP28 (suggested to be 100GbE), and even standard RJ-45 networking installed.
There are two chassis management modules as well, along with another fan wall holding five large fans. On the front of the chassis, the center partition also houses a fan and duct system running through the chassis; these are hot-swappable units. With this X or "+" pattern of cooling, plus 18kW worth of power supplies, we can infer that this chassis is designed not just for today's ~205W maximum TDP CPUs, but also for higher-power GPUs and higher-power CPUs that will come in future generations.
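The 18kW figure falls straight out of the six 3kW supplies. As a back-of-the-envelope sketch of why this headroom matters, here is the arithmetic; note that the redundancy scheme and the eight-bay slot count are assumptions for illustration, not figures Dell EMC quoted on the floor.

```python
# Rough power-budget math for the chassis shown on the show floor.
# PSU count and wattage are from the article; slot count and N+N
# redundancy are assumptions for illustration only.
PSU_COUNT = 6
PSU_WATTS = 3000
SLOTS = 8  # assumed number of front compute/storage bays

total_w = PSU_COUNT * PSU_WATTS            # raw capacity
n_plus_n_w = (PSU_COUNT // 2) * PSU_WATTS  # usable with assumed N+N redundancy

print(f"Raw capacity:    {total_w} W")        # 18000 W
print(f"N+N redundant:   {n_plus_n_w} W")     # 9000 W
print(f"Per slot (raw):  {total_w // SLOTS} W")
```

Even under a conservative redundancy assumption, each bay can draw well over a kilowatt, far beyond what a pair of ~205W CPUs needs.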
The PowerEdge MX Midplane-less Design
When we viewed the acrylic chassis, we could finally see a touted feature: the lack of a midplane PCB. Midplane PCBs are notorious for a few reasons. First, they often represent a single point of failure, and they are a pain to replace in the rare event that they do fail. The other reason is that they can limit the life of a chassis.
We have lived in a generation where PCIe 3.0 infrastructure has dominated for five generations of Intel Xeon, and it will live on with the next generation of Cascade Lake Xeons. That reign is ending as AMD's "Rome" generation of CPUs will have PCIe 4.0 in the first half of 2019. IBM POWER supports PCIe Gen4. Even Arm-based designs such as Mellanox BlueField and NVIDIA Xavier support PCIe Gen4 connectivity. PCIe 4.0, and the fast-following PCIe Gen5, will demand higher-speed signaling and therefore higher-quality PCB materials or cabling. Current PCIe 3.0 PCBs are unlikely to support PCIe Gen4 traces and would need to be replaced. Removing the midplane means that a difficult-to-service part will not stand in the way of moving to PCIe Gen4 and Gen5.
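To put numbers on that signaling jump, each PCIe generation doubles the per-lane transfer rate, and Gen3 onward uses 128b/130b encoding. A quick sketch of the per-lane and x16 throughput involved:

```python
# Per-lane throughput for recent PCIe generations, from the published
# signaling rates (GT/s) and 128b/130b line encoding (Gen3 onward).
GENS = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}  # GT/s per lane

for gen, gts in GENS.items():
    # 128 payload bits per 130 bits on the wire; 8 bits per byte
    gbytes = gts * (128 / 130) / 8
    print(f"{gen}: {gts} GT/s -> ~{gbytes:.2f} GB/s per lane, "
          f"~{gbytes * 16:.1f} GB/s per x16 link")
```

Doubling the symbol rate twice over (8 to 16 to 32 GT/s) is exactly why trace loss budgets tighten and why PCB material that was fine for Gen3 may not carry Gen4 or Gen5 signals.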
Beyond PCIe, the same concerns apply to PCB designers working with 400GbE and, to some extent, 200GbE networking. Since blade chassis include networking, removing a part that will not scale to next-generation technologies will speed adoption.
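The networking story mirrors the PCIe one: higher aggregate Ethernet speeds come from pushing more bits per electrical lane, not just more lanes. As a rough illustration using common IEEE 802.3 lane configurations (shown here for reference, not as a statement of what the MX ships with):

```python
# Common electrical lane configurations behind Ethernet speeds (IEEE 802.3),
# illustrating why 200GbE and 400GbE push per-lane signaling rates upward.
configs = {
    "100GbE (QSFP28)": (4, 25),  # 4 lanes x 25 Gb/s NRZ
    "200GbE":          (4, 50),  # 4 lanes x 50 Gb/s PAM4
    "400GbE":          (8, 50),  # 8 lanes x 50 Gb/s PAM4
}
for name, (lanes, rate) in configs.items():
    print(f"{name}: {lanes} lanes x {rate} Gb/s = {lanes * rate} Gb/s")
```

The move from 25G NRZ to 50G PAM4 lanes is what stresses the copper: a midplane laid out for 25G signaling becomes the limiting part of the chassis.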
Looking forward, we believe Dell EMC is thinking beyond the next 4-5 years of PCIe and networking upgrades. Dell EMC has been active showing its Gen-Z support as a founding consortium member. We have seen Gen-Z demoed at Dell EMC World 2018 and again at Flash Memory Summit 2018. The company is investing in early Gen-Z adoption. Gen-Z is a memory semantic interconnect that can be used in point-to-point or switched fabric scenarios. The consortium’s goal is to have CPUs, accelerators like FPGAs, memory, and other devices use Gen-Z for memory-centric computing.
At STH, we believe designing the PowerEdge MX without the midplane is a forward-looking design choice that will enable high-speed PCIe, networking, Gen-Z interconnects, and other future technologies as they become available. Removing the midplane removes a major PCB and connector limitation on the system.
Dell EMC PowerEdge MX Modules
As the last part of this piece, we wanted to show off three modules that we saw at the show. The dual socket, quad socket, and storage nodes were present at the Dell EMC booth.
The Dell MX740c is a traditional compute module for Intel Xeon Scalable dual-socket nodes. There are six 2.5″ drive bays in the front. The rear of the sled shows the mezzanine adapter that provides fabric connectivity.
Dell EMC supports double-width quad socket nodes in the Dell EMC MX840c compute node. This compute node stacks two dual socket motherboards atop one another with a custom connector between the two. In this photo, you can see the lever mechanism towards the rear of the node. The connector sits in between the sockets.
Finally, there is the MX5016s storage sled. This unit supports 2.5″ storage drives and pulls out from the chassis during operation to allow hot-swap capabilities.
We have a feeling that this is just the beginning in terms of Dell EMC PowerEdge MX modules.
To some, the PowerEdge MX will look like just another blade server. After getting the briefing and then getting some hands-on time with the hardware, we feel affirmed in affixing the WowerEdge label. To those of us who deal with data center hardware on a daily basis, this kind of design is a departure from the norm. One can get by just fine with today's compute, networking, and storage infrastructure using traditional midplane designs. It takes courage to introduce a design like the Dell EMC PowerEdge MX, without a midplane, today. Dell EMC is giving its customers a glimpse into the future of shared chassis nodes with the PowerEdge MX. It is calling that future Kinetic infrastructure. We see it as forward-thinking design and the future of where the data center is headed.