Latest Articles

No-Compromise 5G Open RAN: Compute Architecture

By Peter Carson, Senior Director Solutions Marketing, Marvell

Introduction 

5G networks are evolving to a cloud-native architecture with Open RAN at the center. This explainer series is aimed at demystifying the challenges and complexity of scaling these emerging open and virtualized radio access networks. Let’s start with the compute architecture.

The Problem 

Open RAN systems based on legacy compute architectures consume an excessive number of CPU cores, and a correspondingly large amount of energy, to support 5G Layer 1 (L1) and other data-centric processing such as security, networking and storage virtualization. As illustrated in the diagram below, this leaves very few host compute resources available for the tasks the server was originally designed to support. These systems typically offload a small subset of 5G L1 functions, such as forward error correction (FEC), from the host to an external FPGA-based accelerator, but they execute that processing offline. This kind of look-aside (offline) processing of time-critical L1 functions outside the data path adds latency that degrades system performance.

Image: Limitations of Open RAN systems based on general-purpose processors
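To make the latency penalty of look-aside processing more tangible, here is a minimal, purely illustrative Python sketch that totals a hypothetical per-packet latency budget for look-aside FEC offload versus an inline accelerator sitting in the data path. The stage names and microsecond values are assumptions chosen for illustration, not measurements of any particular Open RAN system.

```python
# Illustrative only: hypothetical latency budgets for look-aside vs. inline
# 5G L1 acceleration. All stage names and microsecond values are assumptions.

LOOK_ASIDE_US = {
    "host_to_accelerator_dma": 20.0,   # copy the payload out to the external FPGA card
    "fec_processing": 15.0,            # forward error correction on the accelerator
    "accelerator_to_host_dma": 20.0,   # copy the result back into host memory
    "host_reintegration": 10.0,        # CPU re-inserts the result into the L1 pipeline
}

INLINE_US = {
    "fec_processing": 15.0,            # FEC executes inside the data path itself,
                                       # so there is no round trip through host memory
}

def total_latency_us(stages: dict) -> float:
    """Sum the per-stage latencies of a serialized processing chain."""
    return sum(stages.values())

if __name__ == "__main__":
    print(f"look-aside per-packet latency: {total_latency_us(LOOK_ASIDE_US):5.1f} us")
    print(f"inline per-packet latency:     {total_latency_us(INLINE_US):5.1f} us")
```

The point is not the specific numbers but the structure: every look-aside round trip adds transfer and reintegration stages that an inline accelerator in the data path avoids.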

Read More

Next Evolution for Storage Networking: Self-driving SANs

By Todd Owens, Technical Marketing Manager, Marvell

and Jacqueline Nguyen, Marvell Field Marketing Manager

Storage area network (SAN) administrators know they play a pivotal role in keeping mission-critical workloads up and running. The workloads and applications that run on the infrastructure they manage are key to the company’s overall business success.

As with any infrastructure, issues do arise from time to time, and the ability to identify transient links or address SAN congestion quickly and efficiently is paramount. Today, SAN administrators typically rely on proprietary tools and software from the Fibre Channel (FC) switch vendors to monitor SAN traffic. When SAN performance issues arise, they fall back on years of experience to troubleshoot them.

What creates congestion in a SAN anyway?

Refresh cycles for servers and storage are typically shorter and more frequent than those of the SAN infrastructure. As a result, servers and storage arrays running at different speeds end up connected to the same SAN: legacy servers and storage arrays may connect at 16GFC, while newer servers and storage connect at 32GFC.

Fibre Channel SANs use buffer credits to manage the flow of traffic in the SAN. When a slower device intermixes with faster devices on the SAN, response times to buffer credit requests can slow down, causing what is called “slow drain” congestion. This is a well-known issue in FC SANs that can be time-consuming to troubleshoot, and with newer FC-NVMe arrays the problem can be magnified. But these days are soon coming to an end with the introduction of what we can refer to as the self-driving SAN.
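To build intuition for how slow drain develops, the toy Python model below tracks the egress queue at a switch port whose attached device returns buffer credits (R_RDY) more slowly than frames arrive from a faster host. The arrival rate, drain rate, credit count and time scale are hypothetical placeholders, not parameters of any real 16GFC or 32GFC deployment.

```python
# Illustrative only: a toy model of Fibre Channel buffer-credit flow control.
# A "slow drain" device returns credits more slowly than frames arrive, so the
# switch queue feeding it grows without bound. All numbers are hypothetical.

def simulate_slow_drain(arrival_rate: int, drain_rate: int,
                        buffer_credits: int, ticks: int) -> int:
    """Return the switch queue depth after `ticks` time steps."""
    in_flight = 0  # frames sent to the device whose credits have not yet returned
    queued = 0     # frames waiting at the switch for an available credit
    for _ in range(ticks):
        # The device completes frames and returns credits at its slower drain rate.
        completed = min(in_flight, drain_rate)
        in_flight -= completed
        # New frames arrive from the faster host and join the switch queue.
        queued += arrival_rate
        # The switch may only forward frames while the device has free buffers.
        sendable = min(queued, buffer_credits - in_flight)
        in_flight += sendable
        queued -= sendable
    return queued

if __name__ == "__main__":
    # A host pushing frames twice as fast as the attached array can drain them:
    backlog = simulate_slow_drain(arrival_rate=32, drain_rate=16,
                                  buffer_credits=64, ticks=100)
    print(f"frames still queued at the switch after 100 ticks: {backlog}")
```

In a real SAN the backlog does not stay local: once the queue consumes shared switch buffers, traffic to unrelated, healthy devices can also stall, which is part of why slow drain is so time-consuming to diagnose by hand.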

Read More

Optical Technologies for 5G Access Networks

By Matt Bolig, Director, Product Marketing, Networking Interconnect, Marvell

There’s been a lot written about 5G wireless networks in recent years. It’s easy to see why: 5G technology supports game-changing applications like autonomous driving and smart city infrastructure. Building out the infrastructure to bring this new reality to fruition will take many years and hundreds of billions of dollars globally, as Figure 1 below illustrates.

Figure 1: Cumulative Global 5G RAN Capex in $B (source: Dell’Oro, July 2021)

When considering where capital is invested in 5G, one underappreciated aspect is just how much wired infrastructure is required to move massive amounts of data through these wireless networks. 

Read More

Marvell and Ingrasys Collaborate to Power Ceph Cluster with EBOF in Data Centers

By Khurram Malik, Senior Manager, Technical Marketing, Marvell

A massive amount of data is being generated at the edge, in the data center and in the cloud, driving scale-out Software-Defined Storage (SDS), which, in turn, is enabling the industry to modernize data centers for large-scale deployments. Ceph is an open-source, distributed object storage and massively scalable SDS platform, contributed to by a wide range of major high-performance computing (HPC) and storage vendors. Ceph BlueStore back-end storage removes the Ceph cluster performance bottleneck by allowing users to store objects directly on raw block devices and bypass the file system layer, which is especially critical in boosting the adoption of NVMe SSDs in the Ceph cluster. A Ceph cluster with an Ethernet Bunch of Flash (EBOF) provides a scalable, high-performance and cost-optimized solution and is a perfect use case for many HPC applications. Traditional data storage technology leverages special-purpose compute, networking and storage hardware to optimize performance and requires proprietary software for management and administration. As a result, it is neither easy for IT organizations to scale out nor feasible, from a CAPEX and OPEX perspective, to deploy petabyte- or exabyte-scale data storage.

Ingrasys (a subsidiary of Foxconn) is collaborating with Marvell to introduce an EBOF storage solution that truly enables a scale-out architecture for data center deployments. The EBOF architecture disaggregates storage from compute, provides virtually limitless scalability and better utilization of NVMe SSDs, and deploys single-ported NVMe SSDs in a high-availability configuration at the enclosure level with no single point of failure.

Image: Ceph cluster with EBOF in data centers

Ceph is deployed on commodity hardware and built on multi-petabyte storage clusters, and it is highly flexible due to its distributed nature. Using EBOF in a Ceph cluster allows added storage capacity to scale up and scale out at an optimized cost and facilitates high-bandwidth utilization of the SSDs. A typical rack-level Ceph solution includes a networking switch for client and cluster connectivity; a minimum of three monitor nodes per cluster for high availability and resiliency; and Object Storage Daemon (OSD) hosts for data storage, replication and data recovery operations. Traditionally, Ceph recommends a minimum of three replicas to distribute copies of the data and to ensure the copies are stored on different storage nodes, but this reduces usable capacity and consumes more bandwidth. Another challenge is that data redundancy and replication are compute-intensive and add significant latency. To overcome these challenges, Ingrasys has introduced a more efficient Ceph cluster rack developed with its own management software, the Ingrasys Composable Disaggregate Infrastructure (CDI) Director.
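For a feel of what storing objects in such a cluster looks like from the client side, here is a minimal sketch using the python-rados bindings. The pool name, object key and configuration path are assumptions for illustration, and it presumes a reachable Ceph cluster with the python-rados package installed; replica counts such as the three-replica default discussed above are governed by the pool and CRUSH configuration rather than by this client code.

```python
# Illustrative only: storing and reading one object in a Ceph pool via the
# python-rados client. Pool name, object key and conffile path are assumptions.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # connect with the default client identity
cluster.connect()
try:
    pool = "ebof-demo-pool"                # hypothetical pool name
    if not cluster.pool_exists(pool):
        cluster.create_pool(pool)          # replication follows the cluster defaults
    ioctx = cluster.open_ioctx(pool)
    try:
        ioctx.write_full("demo-object", b"hello from a Ceph client")  # write the whole object
        data = ioctx.read("demo-object")                              # read it back
        print(data.decode())
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```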

Read More

Still the One: Why Fibre Channel Will Remain the Gold Standard for Storage Connectivity

By Todd Owens, Technical Marketing Manager, Marvell

For the past two decades, Fibre Channel has been the gold standard protocol in Storage Area Networking (SAN) and has been a mainstay in the data center for mission-critical workloads, providing high-availability connectivity between servers, storage arrays and backup devices. If you’re new to this market, you may have wondered whether the technology’s origin has some kind of British backstory. Actually, the spelling of “Fibre” simply reflects the fact that the protocol supports not only optical fiber but also copper cabling, though the latter only over much shorter distances.

During this same period, servers have matured into multicore, high-performance machines running significant amounts of virtualization. Storage arrays have moved away from rotating disks to flash and NVMe storage devices that deliver higher performance at much lower latencies. New storage solutions based on hyperconverged infrastructure have come to market, allowing applications to move out of the data center and closer to the edge of the network. Ethernet networks have gone from 10Mbps to 100Gbps and beyond. Given these changes, one might assume that Fibre Channel’s best days are in the past.

The reality is that Fibre Channel technology remains the gold standard for server-to-storage connectivity because it has not stood still; it continues to evolve to meet the demands of today’s most advanced compute and storage environments. There are several reasons Fibre Channel is still favored over other protocols, such as Ethernet or InfiniBand, for server-to-storage connectivity.

Read More