
Composability has been around in some form for a long time. Consider the Egenera BladeFrame, HP c-Class with Virtual Connect, and Cisco UCS. All could reallocate entire physical servers to different user personalities (MACs, WWPNs, etc.), creating a kind of physical cloud of compute. Need a high-I/O, high-memory physical server for a batch job from midnight to 3 am? No problem. Need to turn your daytime VDI servers into nighttime analytical servers? No problem.

CXL offers something similar, but more granular. Instead of swinging entire servers, you just swing the CXL-attached GPUs from the VDI servers to the Deep Learning servers at 5 pm, then back at 8 am.
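To make that day/night swing concrete, here is a minimal Python sketch of the orchestration logic. The FabricManager class, its reassign method, and the pool names are hypothetical stand-ins for whatever API a real CXL fabric or composability manager would expose; only the scheduling logic is the point.

```python
import datetime

class FabricManager:
    """Stand-in for a real CXL fabric/composability-manager API.
    Purely illustrative; a real fabric would hot-remove the device
    from one host and hot-add it to another over the CXL switch."""
    def __init__(self, assignments):
        # gpu_id -> name of the host pool it is currently attached to
        self.assignments = dict(assignments)

    def reassign(self, gpu_id, pool):
        print(f"moving {gpu_id}: {self.assignments[gpu_id]} -> {pool}")
        self.assignments[gpu_id] = pool

def rebalance(fm, now=None):
    """Swing all GPUs to the Deep Learning pool overnight (5 pm to 8 am),
    back to the VDI pool during the day."""
    now = now or datetime.datetime.now()
    target = "deep-learning" if (now.hour >= 17 or now.hour < 8) else "vdi"
    for gpu_id in list(fm.assignments):
        if fm.assignments[gpu_id] != target:
            fm.reassign(gpu_id, target)

if __name__ == "__main__":
    fm = FabricManager({"gpu0": "vdi", "gpu1": "vdi"})
    rebalance(fm, datetime.datetime(2024, 1, 1, 17, 5))  # 5:05 pm -> DL pool
    rebalance(fm, datetime.datetime(2024, 1, 2, 8, 0))   # 8:00 am -> VDI pool
```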

But what kind of server owner benefits the most from that? Cloud providers. They can use pricing to steer GPU-accelerated VDI users toward the peak VDI usage times, then offer reserved instances for Deep Learning during the overnight hours. And they are not constrained by the CPU and RAM configurations of either the VDI or the DL servers.

I think the primary use case for CXL in traditional, on-premises enterprise environments will be large memory configurations: servers that use CXL to attach large amounts of memory, but that are not really composable in the sense of dynamically reallocating that memory between hosts.
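For what it's worth, on Linux a CXL Type 3 memory expander typically surfaces as a CPU-less, memory-only NUMA node. Here is a small sketch, assuming that layout, of how a host could spot such nodes and then pin a large-memory job to one; the numactl invocation in the closing comment is the standard NUMA tool, nothing CXL-specific.

```python
import pathlib

def cpuless_numa_nodes():
    """List NUMA nodes with no CPUs attached; on many Linux systems a
    CXL memory expander shows up as exactly this kind of node."""
    nodes = []
    for node in sorted(pathlib.Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpulist = (node / "cpulist").read_text().strip()
        if not cpulist:  # empty cpulist means a memory-only node
            nodes.append(node.name)
    return nodes

if __name__ == "__main__":
    mem_only = cpuless_numa_nodes()
    print("memory-only NUMA nodes (likely CXL-attached):", mem_only or "none")
    # To place a big-memory batch job on such a node, one could then run:
    #   numactl --membind=<node> --cpunodebind=0 ./batch_job
```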

Having lived through the promise of InfiniBand, first with Sun Microsystems' proposed "Blue Moon" InfiniBand-backplane blade servers (an attempt at Egenera- or HP Virtual Connect-style virtualized I/O), and later with TopSpin's (later Cisco's) InfiniBand-based "Multi-Fabric I/O" and the "VFrame" software that managed it, I have learned that these esoteric fabrics rarely become mainstream. But they often find an almost embedded use case, such as the back-end memory replication fabric in EMC XtremIO arrays.
