The Evolution of Cloud Storage and Memory

By Gary Kotzur, CTO, Storage Products Group, Marvell

and Jon Haswell, SVP, Firmware, Marvell

The nature of storage is changing more rapidly than ever before. This evolution is driven by expanding volumes of enterprise data and the need for ever greater flexibility and scale to meet rising performance demands.

If you look back 10 or 20 years, there used to be a one-size-fits-all approach to storage. Today, however, there is the public cloud, the private cloud, and the hybrid cloud, which is a combination of both. All these clouds have different storage and infrastructure requirements. What’s more, the data center infrastructure of every hyperscaler and cloud provider is architecturally different and is moving towards a more composable architecture. All of this is driving the need for highly customized cloud storage solutions, and for a comparable solution in the memory domain.

The rise of CXL

This solution is Compute Express Link (CXL), a new industry standard technology for connecting processors, accelerators, I/O devices and memory that runs on top of PCIe. Silicon components based on CXL are now helping to facilitate new cloud data center architectures with significant performance, scaling and efficiency benefits.

This is critical because existing server architectures face multiple memory scaling challenges and lack the ability to locate and share memory resources efficiently. At the same time, workloads such as artificial intelligence (AI), machine learning (ML), analytics, and large-scale search, along with the emergence of the metaverse, are demanding increased memory performance, capacity and composability in the cloud. CXL adds memory semantics to PCIe, such as load/store operations and cache coherency, while enabling CXL-attached devices to achieve much lower latency for data accesses.

In addition, at long last, it enables memory disaggregation. Customers can now implement composable infrastructure in the storage domain, the memory domain, or both, giving them a high level of flexibility to optimize their solutions.

CXL vs. NVMe

One of the first questions people ask is whether CXL complements NVMe or competes with it.

A key difference between the two is that NVMe runs in the storage domain, while CXL is in the memory domain. Because of its superior performance and dramatically lower latency, CXL is unlikely to be displaced by NVMe in the memory domain.

Conversely, it’s fair to ask whether CXL can displace NVMe. After all, there are some scenarios where CXL could be used for storage applications. Overall, however, there will always be a place for NVMe in the storage domain. NVMe is well suited to storage media: its performance advantages enabled it to displace incumbents such as SATA and SAS for NAND-based drives, NVMe-oF extends it with sharing and scalability, and industry convergence on a single storage interface drives down solution costs. While CXL is a great fit for memory-based infrastructure and could be used for storage, NVMe will continue to deliver storage solutions optimized for performance and cost.

The CXL product pipeline

So, what CXL products will come first? At Marvell, our roadmap addresses all the major CXL use cases, including expanders, pooling devices and accelerators, and we also see CXL being leveraged across the rest of our product portfolio.

A key use case for CXL is memory expansion. When we talk to customers, the biggest pain point they always identify is memory. Not only is memory expensive, but there are cases where customers need additional memory, for capacity or performance, that existing server designs cannot accommodate. A CXL memory expander is perfectly suited to addressing these issues.

Another key use case for CXL is memory pooling, which allows many-to-many connections between hosts and memory devices. Today, many customers’ memory usage is inefficient: platforms are provisioned with enough memory to support their most demanding workloads, and as workloads vary, stranded memory cannot be rebalanced, resulting in poor memory utilization. CXL memory pooling allows memory to be shared across CPUs or platforms, so pooled memory can be distributed to meet varying workload needs, reducing total memory requirements and saving costs.
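The provisioning arithmetic behind pooling can be illustrated in a few lines of Python. This is a minimal sketch with entirely hypothetical demand numbers, not a model of any real deployment: with static provisioning, every host must carry enough DRAM for its own peak workload, while a shared pool only needs to cover the peak of the aggregate demand, which is lower whenever hosts do not peak simultaneously.

```python
# Illustrative model of static vs. pooled memory provisioning.
# All numbers are hypothetical, chosen only to show the arithmetic.

def static_provisioned_gib(peak_demand_per_host):
    """Each host is sized for its own worst-case workload."""
    return sum(peak_demand_per_host)

def pooled_provisioned_gib(demand_samples_per_host):
    """A shared pool only needs to cover the peak aggregate demand,
    since hosts rarely hit their individual peaks at the same time."""
    aggregate_over_time = [sum(sample) for sample in zip(*demand_samples_per_host)]
    return max(aggregate_over_time)

# Hypothetical per-host memory demand (GiB) sampled at four points in time.
demand = [
    [256, 512, 128, 256],   # host A
    [512, 128, 256, 128],   # host B
    [128, 256, 512, 256],   # host C
]

static = static_provisioned_gib([max(h) for h in demand])   # every host sized at 512 GiB
pooled = pooled_provisioned_gib(demand)                     # peak aggregate demand
savings = 1 - pooled / static

print(f"static: {static} GiB, pooled: {pooled} GiB, savings: {savings:.0%}")
# → static: 1536 GiB, pooled: 896 GiB, savings: 42%
```

In this toy scenario the pool needs 896 GiB where static provisioning needs 1536 GiB, because no point in time sees all three hosts at peak together. Real savings depend on how correlated the workloads are.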

A recent paper authored by researchers at Microsoft and Carnegie Mellon found that up to 25 percent of memory in a data center is stranded, and that memory disaggregation can reduce overall DRAM requirements by 9-10 percent. A CXL memory pooling solution helps customers overcome this challenge by allowing memory to be dynamically attached to servers as required, increasing overall memory utilization and potentially saving significant dollars in the data center.

Looking to the future, the cloud environment will continue to rapidly change and grow. Marvell is committed to creating solutions that can evolve with it.

For more insights on this topic, listen to our podcast here.
