

Latest Articles

October 27th, 2020

Unleashing a Better Gaming Experience with NVMe RAID

By Shahar Noy, Senior Director, Product Marketing

You are an avid gamer. You spend countless hours in forums deciding between ASUS TUF components and researching the Radeon RX 500 versus the GeForce RTX 20 series, to ensure games show at their best on your hard-earned PC gaming rig. You made your selection and can’t stop bragging about your system’s ray tracing capabilities and how realistic the “Forza Motorsport 7” view from your McLaren F1 GT cockpit is when you drive through the legendary Le Mans circuit at dusk. You are very proud of your machine, and the year 2020 is turning out to be good: Microsoft finally launched the gorgeous-looking “Flight Simulator 2020,” and CD Projekt just announced that the beloved and award-winning “The Witcher 3” is about to get an upgrade to take advantage of the myriad of hardware updates available to serious gamers like you. You have your dream system in hand and life can’t be better.

You are frustrated, though, because something doesn’t feel right. Booting your PC takes forever and, on top of this, loading new, texture-rich games takes very, very long. Worst of all, some games have a prolonged “no action” period between scenes. You are puzzled about the latter, until you learn that game developers use many tricks to mask slow scene (“asset”) load times. In advanced games, when the player moves between two rich environments whose textures and models each fill memory, the developers have to add a long staircase, an elevator ride or a winding corridor to buy enough time (sometimes up to 30 seconds!) to ditch the old assets and load the new ones. You start realizing that upgrading your HDD or SSD might be all it takes to get your system to the next level and provide the ultimate gaming experience. A faster SSD that can match the CPU and GPU capabilities of your system will ensure storage is no longer the bottleneck, enabling improved system boot time, faster game loading, and quicker world and scene changes to keep the experience active and engaging.

You consult with your friends and they show off their polished new Ryzen 9 systems, but you are a diehard Intel fan and won’t even consider giving up your i5. You are worried, because they have a new platform with PCIe Gen4 and you are a generation behind with your Gen3 system. Your best buddy, who just got an offer to join a pro Overwatch team gunning for next year’s OWL Grand Finals, urges you to invest in a new system with Gen4 interfaces to boost your storage performance, but you want to enjoy your existing rig for longer and your budget is limited. We know 2020 has been tough on many of us, but every cloud has a silver lining and, in this case, it is Marvell’s collaboration with Western Digital, which has introduced one of the highest-performing SSD solutions to unlock more of your existing system’s potential.

Benefits of NVMe RAID

Marvell and Western Digital have been listening to gamers and collaborated on the introduction of Western Digital’s WD_BLACK™ AN1500 NVMe™ SSD Add-In-Card to provide a better storage solution for gamers who are on PCIe Gen3 platforms (or even PCIe Gen4) and need more performance. By using Marvell’s advanced native hardware NVMe RAID engine, Western Digital can now double the capacity and significantly improve the performance over a single WD_Black™ SN750 NVMe™ SSD without making any compromises. Marvell’s NVMe RAID technology was designed to connect two Gen3 SSDs through an extremely low-latency engine and combine them into one bigger and faster SSD. The benefit of NVMe RAID is that the throughput of read and write operations to any file is multiplied by the number of drives, since reads and writes are done concurrently. The two SSDs run in parallel and are exposed to the host as a single drive, unlocking higher capacity and throughput. In some ways you can think of this technology as combining Mario and Luigi into one mega character with greater powers who can easily take down Bowser in “Super Mario Bros. 3”!
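To make the striping idea concrete, here is a minimal sketch of RAID 0 address math in Python. It is purely illustrative (the chunk size and drive count are made-up parameters, not details of Marvell's engine), but it shows why one large sequential transfer keeps both member drives busy at once:

```python
# Minimal RAID 0 (striping) address math; illustrative only.
STRIPE_BLOCKS = 256   # blocks per stripe chunk (made-up value)
NUM_DRIVES = 2        # two member SSDs exposed as one logical drive

def map_lba(logical_lba: int) -> tuple[int, int]:
    """Map a logical block address to (drive index, physical LBA)."""
    chunk, offset = divmod(logical_lba, STRIPE_BLOCKS)
    drive = chunk % NUM_DRIVES                        # chunks alternate drives
    physical = (chunk // NUM_DRIVES) * STRIPE_BLOCKS + offset
    return drive, physical

# Four consecutive chunks land on drives 0, 1, 0, 1, so a long
# sequential read is serviced by both SSDs concurrently.
print([map_lba(lba)[0] for lba in range(0, 4 * STRIPE_BLOCKS, STRIPE_BLOCKS)])
# -> [0, 1, 0, 1]
```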

Here is a quick comparison between the new WD_BLACK AN1500 NVMe SSD Add-In-Card and WD_BLACK SN750 NVMe SSD at similar capacity:

 

                      WD_BLACK AN1500    WD_BLACK SN750
PCIe Interface        Gen3 x8            Gen3 x4
Capacity              2TB                2TB
Performance (Spec):
  Sequential Read     6.5GB/s            3.4GB/s
  Random Write        780,000 IOPS       560,000 IOPS


When you consider which SSD will work best for gaming, there are two critical metrics that provide good “real life” performance indicators: sustained read performance and mixed read/write performance.

Sustained read indicates how fast a game can load and how quickly new assets stream in during play; the faster it is, the quicker your GPU can process the data and render it to the screen, providing a better overall gaming experience.

AN1500 (red) peaks above 5GB/s and even beats a Sabrent PCIe Gen4 drive
(Source: https://www.tomshardware.com/reviews/wd-black-an1500-nvme-aic-ssd-review/2)

A single SN750 hits 3GB/s with big blocks
(Source: https://www.tomshardware.com/reviews/wd-black-sn750-ssd,5957-2.html)

In the charts above, WD_BLACK AN1500 is 66% faster than WD_BLACK SN750 at the same 2TB overall SSD capacity (roughly 5GB/s versus 3GB/s). It is remarkable to see the Gen3 WD_BLACK AN1500 beat other Gen4 SSDs and come close to Samsung’s latest Gen4 SSD! The WD_BLACK AN1500 performance advantage, coupled with the ever-growing file size of games, can improve game load times by minutes and make new scenes available seconds earlier.

Mixed read/write, on the other hand, is important when your OS or game starts saving fragments of data. The OS constantly updates small fragments of the meta-data in case your system crashes and you need to recover your boot with minimum disruptions. The game itself can autosave the details of your progress and although it requires only small infrequent writes, it is extremely important that those be serviced quickly by the SSD. This enables the SSD to focus back on its main task – sustaining fast read performance in order to transfer big chunks of rich graphics data to the GPU and ensure quick game and scene load times.

In Boot, the WD_BLACK AN1500 came in first by a wide margin yet again, with 174,143 IOPS at a latency of 183.8µs
(Source: https://www.storagereview.com/review/wd_black-an1500-aic-ssd-review)

The WD_BLACK SN750 peaked in third place with 115,170 IOPS at a latency of 282.5µs
(Source: https://www.storagereview.com/review/wd-black-sn750-nvme-ssd-review)

VDI Boot is a good indicator of mixed read/write transactions during an enterprise-grade boot process. In this test the SSD is taxed with mixed operations, and the results reflect real-world random performance (IOPS) and latency. The WD_BLACK AN1500 delivered 51% more IOPS than the WD_BLACK SN750 (174,143 versus 115,170) and was the only Gen3 drive to stay below 200µs latency, clearing small fragmented writes very quickly, minimizing any SSD bottleneck, and getting back to servicing the intense read requests.

Summary

It is 2020 and it is not all gloomy. Marvell and Western Digital collaborated to provide these superhero-like storage speeds and elevate the gaming experience of your system to a whole new level. With the WD_BLACK AN1500 NVMe SSD Add-In-Card, fully equipped with an integrated heatsink and RGB lighting, you will be able to spend more time playing and less time waiting! Enjoy circling the Eiffel Tower piloting your 787 in “Flight Simulator” with quicker game and map loading times. Immerse yourself in “The Witcher 3”, controlling Geralt of Rivia as he combs through the open world of the Continent for monsters that much faster. Lastly, with your faster machine you can practice your skills more often and rise through the ranks in the Overwatch Open and Contenders Divisions, so next time you team up with your friends in a 6v6 match, there is no question who is heading into the finals!

October 20th, 2020

Network Visibility in the Borderless Enterprise – White Paper

By Gidi Navon, Principal System Architect, Marvell

Enterprise networks are changing, adapting and expanding to become a borderless enterprise. Visibility tools must evolve to meet the new requirements of an enterprise that now extends beyond the traditional campus — across multi-cloud environments to the edge.

Much like a brain needs eyes and sensors to function, smart networks in the borderless enterprise need visibility to “see” into the network. Network visibility is now more important than ever to help drive smart networking infrastructure that is intent-based, automated and self-healing. As the borderless enterprise grows, the number and types of network users, as well as the complexity of the networks themselves, continue to evolve. Visibility tools are pivotal to supporting these transitions.

To be predictive and to safely navigate through this digital transformation, networks need to be built from switches that look beyond the obvious and provide intelligent telemetry information. Such information can be analyzed and provide proactive infrastructure automation, forensic analytics and mitigations.

To learn about the state-of-the-art visibility tools and how they are evolving to address the new smart borderless enterprise networks, download the white paper:

October 9th, 2020

Matt Murphy Joins CNBC’s Jim Cramer for a Post Investor Day Discussion on Mad Money

By Stacey Keegan, Vice President, Corporate Marketing, Marvell

Following the company’s 2020 Investor Day, Marvell President and CEO, Matt Murphy, joined Jim Cramer on CNBC’s Mad Money to discuss yesterday’s event highlights. Calling out significant growth opportunities across Marvell’s key market segments – including #5G, #DataCenter, #Cloud and #Automotive – Murphy noted that adoption of both 5G and cloud remains in the early innings and that Marvell is well positioned to see continued benefits from these long-term growth markets.

As working from home accelerates digital transformation, Marvell is building the next generation data infrastructure semiconductor technology that will power the world’s progress.



Watch more videos on the Marvell YouTube Channel and subscribe.

October 7th, 2020

Ethernet Advanced Features for Automotive Applications

By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell

Ethernet standards comprise a long list of features and solutions that have been developed over the years to address real network needs as well as security threats. Now, developers of Ethernet In-Vehicle Networks (IVN) can easily balance functionality and cost by choosing the specific features they would like to have in their car’s network.

The roots of Ethernet technology date to 1973, when Bob Metcalfe, a researcher at Xerox’s Palo Alto Research Center (who later founded 3Com), wrote a memo entitled “Alto Ethernet,” which described how to connect computers over short-distance copper cable. With the explosion of PC-based Local Area Networks (LAN) in businesses and corporations in the 1980s, the growth of client/server LAN architectures continued, and Ethernet started to become the connectivity technology of choice for these networks. However, the advancement that made Ethernet the most successful networking technology ever was the start of standardization efforts under the IEEE 802.3 group.

The 10 Mb/s derivative of Ethernet was first approved by the IEEE Standards Board in 1983, and subsequently published in 1985 as IEEE Std 802.3-1985.  The process of standardization of Ethernet, and the subsequent membership in IEEE 802 standards, has been extremely beneficial to Ethernet’s growth, enabling multi-vendor support and interoperability as a wide variety of physical layers have been added.  Since the original Ethernet standard began, data rates from 1 Mb/s to 400 Gb/s have been added on a wide variety of media and reaches (cable lengths), all under a seamless architecture in IEEE 802. 

Recently, IEEE 802.3 added automotive reaches and rates to its application base, enabling lightweight, high-speed single-pair connectivity for automobiles. Building on the already mature base of LAN technologies in the IT space, automotive networks have rapidly scaled to include speeds from 10 Mb/s to 10 Gb/s and are currently working on reaching speeds beyond 10 Gb/s. These networks are expected to fulfill a variety of applications currently served by mixed networks with proprietary protocols.

Ethernet Advanced Features for Automotive Applications

1)    Switching

The essence of a network is addressing and switching – the capability to send data between specific nodes that share the same network. One of the most important attributes of Ethernet network/switching is the capability to send the traffic between two nodes over different routes in the network.

Addressing devices and switching through multiple routes provides redundancy that is critical for the functionality and reliability of the IVN. The switching architecture of the Ethernet LAN is based on the IEEE 802.1 standard. It defines the link security, overall network management, and the higher protocol layers above the Media Access Control (MAC).

Ethernet switching naturally creates another very important benefit for the IVN: the ability to support a wide range of network topologies including mesh, star, ring, daisy-chain, tree and bus (as shown in Figure 2). This allows system and domain developers to choose the optimal topology for each domain, while leveraging the same basic components.


Figure 2

The payload size of data packets sent over Ethernet is variable, allowing maximum flexibility for carrying different types of application loads. In addition, Ethernet’s native support of broadcast and multicast allows high efficiency, with low latency for each of these topologies.

2)    Ethernet PHY Speeds

The first IEEE-standard Automotive Ethernet PHY to be published was 100BASE-T1, developed under the 802.3bw task force. This standard was ratified in 2015 and specifies 100Mbps Ethernet on single-pair, unshielded automotive cable. Today, 100BASE-T1 has been adopted by many original equipment manufacturers (OEMs), and most luxury and mid-range cars use 100M Ethernet networking.

100BASE-T1 was not alone for long: in 2016, the next generation of automotive Ethernet, 1000BASE-T1, was published. Developed in parallel with 100BASE-T1, the 1000BASE-T1 PHY specification, known as 802.3bp, provided gigabit networking. 1000BASE-T1 PHY products were introduced to the market in 2017 and are now entering mass production.

In 2019 and 2020, Automotive Ethernet added both lower speeds (10 Mb/s) and multigigabit speeds. The latest Automotive Ethernet PHY standard development for 2.5 Gbps, 5 Gbps, and 10Gbps, called IEEE 802.3ch, was completed in early 2020.

Currently, Automotive Ethernet PHY standards are in progress for speeds higher than 10Gbps. The first effort to develop a pre-standard set of specifications is being done in the NAV (Networking for Autonomous Vehicles) Alliance (www.nav-alliance.org), under Technical Working Group 1 (TWG1). In addition, a new task force, called IEEE 802.3cy, for “Greater than 10 Gb/s Automotive Ethernet Electrical PHYs,” began its activities in July 2020, with an objective to develop an automotive PHY for data rates of 25 Gbps, 50 Gbps and 100 Gbps.

3)    Ethernet MAC Speeds

IEEE 802.3 developed standards for MAC at rates ranging from 10Mbps all the way up to 100Gbps (200Gbps and 400Gbps were also developed, but these rates today require multiple channels of 100Gbps). These standards had previously been developed and proven for LAN and data center applications, and today they also find applications in automotive networking.

Specifically, Ethernet supports rates of 10Mbps, 100Mbps, 1Gbps, 2.5Gbps, 5Gbps, 10Gbps, 25Gbps, 50Gbps and 100Gbps. These MAC rates open the door for future automotive network speeds beyond 10Gbps, for high speed backbone.

4)    Asymmetrical Ethernet

Automotive Ethernet is capable of symmetric traffic rates, meaning it transports data at the same speed in both directions on a single-pair automotive cable. This capability makes it the preferred technology for the network backbone. However, Ethernet can also operate in an asymmetrical mode when needed. In 2009, the Ethernet standards group developed a set of protocols for efficiently handling asymmetric and time-varying traffic loads known as Energy Efficient Ethernet (EEE).

EEE provides a method to reduce power consumption during periods of low data activity. In its normal mode of operation, an Ethernet link consumes power in both directions, even when a link is idle and no data is being transmitted. Based on the IEEE 802.3az standard, EEE uses a Low Power Idle (LPI) mode to reduce the energy consumption of a link when no packets/data are being sent.

The standard specifies a signaling protocol to achieve power saving during idle time by exchanging LPI indications to signal the transition to low-power mode when there is no traffic. LPI indicates when a link can go idle, and when the link needs to resume after a predefined delay.
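As a conceptual illustration only (the state names, threshold and delay below are invented for the example, not taken from the 802.3az signaling), the LPI idea amounts to a small link state machine: drop into low power once the link has been idle long enough, and pay a predefined wake delay when traffic resumes.

```python
# Conceptual sketch of Low Power Idle (LPI); not the 802.3az protocol itself.
IDLE_THRESHOLD_US = 50   # idle time before entering low power (invented value)
WAKE_DELAY_US = 16       # predefined delay to resume the link (invented value)

class EeeLink:
    def __init__(self) -> None:
        self.state = "ACTIVE"
        self.idle_us = 0

    def tick(self, has_traffic: bool) -> int:
        """Advance one microsecond; return wake latency charged to traffic."""
        if has_traffic:
            self.idle_us = 0
            if self.state == "LOW_POWER":
                self.state = "ACTIVE"      # LPI exit: pay the wake delay
                return WAKE_DELAY_US
            return 0
        self.idle_us += 1
        if self.state == "ACTIVE" and self.idle_us >= IDLE_THRESHOLD_US:
            self.state = "LOW_POWER"       # signal LPI, power down the link
        return 0

link = EeeLink()
latencies = [link.tick(has_traffic=(t % 200 == 0)) for t in range(1, 601)]
print(sum(latencies))  # the link sleeps between bursts; each wake costs 16µs
```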

The asymmetrical mode is useful for camera and sensor links. On these links, data (video) is sent at high speed (multiple gigabits per second) from the camera to the SoC/GPU. In the other direction (from SoC to camera), only control signals need to be sent, at much lower speeds (megabits per second) – these can leverage the EEE mode for power saving.

The 100BASE-T1 automotive Ethernet PHY did not specify a low-power mode; one was added in 1000BASE-T1. As it became clear that supporting energy-efficient asymmetric traffic would be important in the automotive networking world, the 2.5G, 5G and 10Gb/s PHYs of 802.3ch refined these energy-efficiency concepts by allowing a slow wake. This mode works with a longer delay to re-establish traffic, and is especially useful on asymmetric links.

5)    Virtual Local Area Network (VLAN)

VLANs work by applying identifiers (known as 802.1Q tags) to network packets and handling these tags at switching nodes, creating the appearance and functionality of network traffic that is physically on a single network but that acts as if it is split between separate networks. This way, VLANs keep traffic from different applications separate, despite being connected to the same physical network. VLANs also allow grouping of nodes and data sources together, even if they are not directly connected to the same switch. Because VLANs can be easily configured, system design and data source deployment are greatly simplified. In Automotive, VLAN is used to isolate traffic from different applications or domains, and can route video from different sources over the same physical link and/or isolate traffic that requires higher priority. 

VLAN traffic can be routed, multicast and broadcast. In addition, VLANs support Quality of Service and traffic prioritization using the 802.1p standard, allowing for the efficient bandwidth utilization needed in advanced IVNs.
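As a minimal sketch of what those identifiers look like on the wire (the priority and VLAN ID values here are arbitrary examples), the four added tag bytes are the 0x8100 TPID followed by a 16-bit tag control field holding the 3-bit priority (PCP), the drop-eligible bit (DEI) and the 12-bit VLAN ID (VID):

```python
import struct

# Sketch of the four 802.1Q tag bytes inserted after the source MAC:
# TPID 0x8100, then a 16-bit TCI holding PCP (3 bits), DEI (1) and VID (12).
def vlan_tag(pcp: int, dei: int, vid: int) -> bytes:
    assert 0 <= pcp < 8 and 0 <= dei < 2 and 0 <= vid < 4096
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

# Arbitrary example: priority 6 for a latency-sensitive stream on VLAN 100.
print(vlan_tag(pcp=6, dei=0, vid=100).hex())  # -> 8100c064
```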

6)    Precision Time Protocol

The vision-analysis algorithms in a car require either simultaneous sampling of multiple sensors or knowledge of the time at which each measurement was taken. As these measurements are taken by different sensors and cameras, and carried over different routes (cables, repeaters, hubs and switches), time synchronization must be maintained between all the nodes in the car down to very short intervals.

The IEEE 802.1AS (Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks) standard allows for synchronization of timing. It leverages IEEE 1588v2 and uses a special PTP profile to select the best clock source in the system as the master clock for all nodes. Additionally, clock redundancy and rapid failover are easily supported using these protocols.
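At its core, the IEEE 1588 synchronization exchange produces four timestamps, from which each node computes its clock offset and the path delay. A worked sketch with made-up nanosecond values:

```python
# IEEE 1588 offset/delay arithmetic from the four exchange timestamps (ns).
# t1: master sends Sync; t2: slave receives it;
# t3: slave sends Delay_Req; t4: master receives it. Values are made up.
t1, t2, t3, t4 = 1_000, 1_650, 2_000, 2_450

offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay, assumed symmetric

print(f"offset={offset}ns delay={delay}ns")  # offset=100.0ns delay=550.0ns
```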

7)    Audio Video Bridging (AVB/TSN)

Audio Video Bridging (AVB) is a method to transport audio and video (AV) streams over Ethernet-based networks while ensuring the highest Quality of Service (QoS). QoS guarantees the ability to dependably run high-priority applications and traffic on a network with limited capacity. This is handled by the 802.1 suite of standards originally known as AVB, and now known as Time-Sensitive Networking (TSN). Advanced Driver Assistance Systems (ADAS) rely on AVB to get data from cameras and sensors in a timely manner, at a low, controlled latency, with guaranteed bandwidth. The IEEE 802.1 AVB Task Group developed standards to meet these requirements, including 802.1Qat Stream Reservation, as well as 802.1Qav Queuing and Forwarding for AV Bridges.

AVB IEEE standards define signaling, transport, and synchronization of the audio and video streams. In essence, AVB works by reserving a fraction of the available Ethernet bandwidth for AVB traffic. Its main attributes for Ethernet networks include:

  • Precise timing of streaming in conjunction with PTP, supporting low-jitter media clocks and accurate synchronization of multiple streams.
  • A reservation protocol that enables an endpoint device to notify the various network elements to reserve the resources necessary to support its stream (sketched below).
  • Defined queuing-and-forwarding rules that ensure an AV stream passes through the network within the delay specified in the reservation.
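A minimal sketch of the admission-control idea behind stream reservation follows. The 75% cap mirrors the commonly cited default limit on reservable AVB bandwidth; the link speed and stream rates are illustrative:

```python
# Sketch of AVB/SRP-style admission control. The 75% cap mirrors the commonly
# cited default limit on reservable bandwidth; other numbers are illustrative.
LINK_BPS = 1_000_000_000      # 1 Gb/s automotive link
MAX_AVB_FRACTION = 0.75       # fraction of the link reservable for AVB streams

class Link:
    def __init__(self) -> None:
        self.reserved_bps = 0

    def reserve(self, stream_bps: int) -> bool:
        """Admit the stream only if the AVB budget still has room for it."""
        if self.reserved_bps + stream_bps <= LINK_BPS * MAX_AVB_FRACTION:
            self.reserved_bps += stream_bps
            return True    # bridge guarantees bandwidth and bounded delay
        return False       # refused: stream would overrun the reserved budget

link = Link()
print(link.reserve(400_000_000))  # True: first camera stream fits
print(link.reserve(400_000_000))  # False: 800M would exceed the 750M budget
```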

In November 2012, the AVB Task Group was renamed the “Time-Sensitive Networking (TSN) Task Group”; TSN is an enhancement of AVB that adds specifications to expand the range, functionality and applications of the standard.

The TSN suite of standards also relies on IEEE standards outside of the 802 family, such as IEEE 1722. IEEE 1722 – the Layer 2 Audio/Video Transport Protocol (AVTP) for Time-Sensitive Applications in a Bridged Local Area Network – sets the presentation time (time-stamping) for each AV stream and manages latencies.

The AVnu Alliance (www.avnu.org) is an industry forum dedicated to the advancement of AV transport through the adoption of IEEE 802.1 AVB/TSN and the related IEEE 1722 standard. The Alliance is used by most OEM and Tier-1 companies to define a complete Ethernet-based solution for audio and video in IVNs.

8)    MAC-PHY Security

Media Access Control Security (MACSec) is an IEEE 802.1AE industry-standard security technology that secures data transmissions over Ethernet networks. MACSec is used for authentication, encryption and validation of the integrity of packets sent between peer nodes, and provides point-to-point security on Ethernet links.

MACSec is capable of identifying and preventing security threats, such as intrusion, man-in-the-middle, masquerading, passive wiretapping and playback attacks in the IVN.
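MACSec's default cipher suite is GCM-AES-128, which encrypts the payload and authenticates the frame. As a loose illustration of that primitive (using the pyca/cryptography package; this is not a MACSec implementation, which would derive its nonce from the secure channel identifier and packet number), note how tampering with the authenticated bytes makes verification fail:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Loose illustration of the GCM-AES-128 primitive. A real MACSec data path
# derives the nonce from the secure channel identifier and packet number and
# authenticates the MAC header as associated data; here both are stand-ins.
key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
header = bytes(12)                      # stand-in for destination/source MACs
payload = b"steering-angle: 12.5"

sealed = AESGCM(key).encrypt(nonce, payload, header)   # ciphertext + ICV tag
assert AESGCM(key).decrypt(nonce, sealed, header) == payload

try:  # a man-in-the-middle altering one authenticated byte is detected
    AESGCM(key).decrypt(nonce, sealed, b"\x01" + header[1:])
except Exception:
    print("tampering detected: integrity check failed")
```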

Additionally, an Ethernet MACSec root node can be used as the security center for all domains in the car, including lower-speed CAN, LIN, USB and others. This can be achieved by using one or more trunking ports from an Ethernet switch to an Ethernet-supporting gateway, bridging those legacy networks.

9)    Power Over Cable

One great advantage of copper-cabled Ethernet for automotive networking is the ability to deliver power over the same wires as data, which in turn saves weight in the vehicle. This is especially important in the case of cameras and sensors that are mounted all around the vehicle.

The IEEE 802.3bu standard, which was ratified in 2016, defines specifications and parameters for adding standardized power to single-pair cabling. The standard defines a power delivery protocol that supports multiple voltages and classes of power delivery for each voltage. It includes assured fault protection and detection capabilities for identifying device signatures, as well as direct communication with devices to determine accurate and safe power delivery. Total power delivery over the automotive cable ranges from 0.5W up to 50W.


Marvell’s automotive Ethernet product roadmap for IVN includes the most comprehensive set of solutions in the market, enabling our customers to build vehicle networks for low-, mid- and high-end cars, all the way to fully autonomous cars. This roadmap includes a broad range of switches, PHYs, controllers and bridges (at Ethernet speeds from 10Mbps up to 10Gbps and higher), advanced security features, and support for the latest industry requirements for AVB/TSN features in IVN.

The latest addition to the Marvell Ethernet PHY roadmap is the 88Q2220 and 88Q2221, the first 1000BASE-T1 automotive PHY product family for secure networking, with support for IEEE 802.1AE MACSec. In addition, these ultra-low-power Gigabit PHY products support the latest OPEN Alliance TC10 standard for 1000BASE-T1 Sleep and Wake-up modes.

In our next blog, we will discuss Ethernet QoS for IVN, the related standards and features, as well as the AVnu certification of Marvell automotive switch products.

August 31st, 2020

Arm Processors in the Data Center

By Raghib Hussain, Chief Strategy Officer and Executive Vice President, Networking and Processors Group

Last week, Marvell announced a change in our strategy for ThunderX, our Arm-based server-class processor product line. I’d like to take the opportunity to put some more context around that announcement, and our future plans in the data center market.

ThunderX is a product line that we started at Cavium, prior to our merger with Marvell in 2018. At Cavium, we had built many generations of successful processors for infrastructure applications, including our Nitrox security processor and OCTEON infrastructure processor. These processors have been deployed in the world’s most demanding data-plane applications such as firewalls, routers, SSL-acceleration, cellular base stations, and Smart NICs. Today, OCTEON is the most scalable and widely deployed multicore processor in the market.

As co-founder of Cavium, I had a strong belief that Arm-based processors also had a role to play in next generation data centers. One size simply doesn’t fit all anymore, so we started the ThunderX product line for the server market. It was a bold move, and we knew it would take significant time and investment to come to fruition. In fact, we have spent six years now building multiple generations of products, developing the ecosystem, the software, and working with customers to qualify systems for production deployment in large data centers. ThunderX2 was the industry’s first Arm-based processor capable of powering dual socket servers that could go toe-to-toe with x86-based solutions, and clearly established the performance credentials for Arm in the server market. We moved the bar higher yet again with ThunderX3, as we discussed at Hot Chips 32.

Today, we see strong ecosystem support and a significant opportunity for Arm-based processors in the data center. But the real market opportunity for server-class Arm processors is in customized solutions, optimized for the use cases at hyperscale data center operators. This should be no surprise, as the power of the Arm architecture has always been in its ability to be integrated into highly optimized designs tailored for specific use cases, and we see hyperscale datacenter applications as no different.

Our rich IP portfolio, decades of processor expertise with Nitrox, OCTEON and ThunderX, combined with our new custom ASIC capability and investment in the latest TSMC 5nm process node, puts Marvell in a unique position to address this market opportunity. So, to us, this market-driven change just makes sense. We look forward to partnering with our customers and helping to deliver highly optimized solutions tailored to their unique needs.

August 28th, 2020

Matt Murphy Talks Marvell’s Market Traction on CNBC’s Squawk Alley

By Stacey Keegan, Vice President, Corporate Marketing, Marvell

Marvell President and CEO, Matt Murphy, discussed Marvell’s second-quarter earnings beat this morning with the CNBC Squawk Alley team.

Marvell’s growth is being driven by our success in our key data infrastructure end markets. In particular, in 5G wireless infrastructure we have seen four consecutive quarters of sequential growth. Right now, this is particularly pronounced in China, where 5G is being rolled out. But with other countries working on rollout plans, and four of the top five base station vendors as Marvell customers, the growth from 5G is just beginning.

Marvell also has a large and growing data center business, spanning both enterprise on-prem datacenters and now the cloud. We announced last quarter that cloud is now over 10% of our revenue and growing fast. The reason we are seeing strong growth is that we are producing the key storage and security products for cloud. This includes chips for huge multi-terabyte hard drives, where all cloud data is stored. It also includes our networking products, which doubled year-over-year. And finally, growth in this area includes Marvell’s custom products, which came to us through a recent acquisition. This is how several of the larger datacenter operators like to buy chips: we build exactly what they want.

Watch the full interview here.

August 27th, 2020

How to Reap the Benefits of NVMe over Fabric in 2020

By Todd Owens, Technical Marketing Manager, Marvell

As native Non-Volatile Memory Express (NVMe®) shared-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMB – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

Of course, NVMe technology itself is not new, and is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set that is specific to memory-based storage, provides increased performance designed to run over PCIe 3.0 or PCIe 4.0 bus architectures, and — offering 64,000 command queues with 64,000 commands per queue, or over four billion outstanding commands in theory — can provide much more scalability than other storage protocols.


Unfortunately, most of the NVMe in use today is held captive in the system in which it is installed. While there are a few storage vendors offering NVMe arrays on the market today, the vast majority of enterprise datacenter and mid-market customers are still using traditional storage area networks, running SCSI protocol over either Fibre Channel or Ethernet Storage Area Networks (SAN).

The newest storage networks, however, will be enabled by what we call NVMe over Fabric (NVMe-oF) networks. As with SCSI today, NVMe-oF will offer users a choice of transport protocols. Today, there are three standard protocols that will likely make significant headway into the marketplace. These include:

  • NVMe over Fibre Channel (FC-NVMe)
  • NVMe over RoCE RDMA (NVMe/RoCE)
  • NVMe over TCP (NVMe/TCP)

If NVMe over Fabrics is to achieve its true potential, however, three major elements need to align. First, users will need an NVMe-capable storage network infrastructure in place. Second, all of the major operating system (O/S) vendors will need to provide support for NVMe-oF. Third, customers will need disk array systems that feature native NVMe. Let’s look at each of these in order.

  1. NVMe Storage Network Infrastructure

In addition to Marvell, several leading network and SAN connectivity vendors support one or more varieties of NVMe-oF infrastructure today. This storage network infrastructure (also called the storage fabric) is made up of two main components: the host adapter, which provides server connectivity to the storage fabric; and the switch infrastructure, which provides all the traffic routing, monitoring and congestion management.

For FC-NVMe, today’s enhanced 16Gb Fibre Channel (FC) host bus adapters (HBA) and 32Gb FC HBAs already support FC-NVMe. This includes the Marvell® QLogic® 2690 series Enhanced 16GFC, 2740 series 32GFC and 2770 Series Enhanced 32GFC HBAs.

On the Fibre Channel switch side, no significant changes are needed to transition from SCSI-based connectivity to NVMe technology, as the FC switch is agnostic about the payload data. The job of the FC switch is simply to route FC frames from point to point and deliver them in order with the lowest possible latency. That means any 16GFC or greater FC switch is fully FC-NVMe compatible.

A key decision regarding FC-NVMe infrastructure, however, is whether or not to support both legacy SCSI and next-generation NVMe protocols simultaneously. When customers eventually deploy new NVMe-based storage arrays (and many will over the next three years), they are not going to simply discard their existing SCSI-based systems. In most cases, customers will want individual ports on individual server HBAs that can communicate using both SCSI and NVMe, concurrently. Fortunately, Marvell’s QLogic 16GFC/32GFC portfolio does support concurrent SCSI and NVMe, all with the same firmware and a single driver. This use of a single driver greatly reduces complexity compared to alternative solutions, which typically require two (one for FC running SCSI and another for FC-NVMe).

If we look at Ethernet, which is the other popular transport protocol for storage networks, there is one option for NVMe-oF connectivity today and a second option on the horizon. Currently, customers can already deploy NVMe/RoCE infrastructure to support NVMe connectivity to shared storage. This requires RoCE RDMA-enabled Ethernet adapters in the host, and Ethernet switching that is configured to support a lossless Ethernet environment. There are a variety of 10/25/50/100GbE network adapters on the market today that support RoCE RDMA, including the Marvell FastLinQ® 41000 Series and the 45000 Series adapters. 

On the switching side, most 10/25/100GbE switches that have shipped in the past 2-3 years support data center bridging (DCB) and priority flow control (PFC), and can provide the lossless Ethernet environment needed for a low-latency, high-performance NVMe/RoCE fabric.

While customers may have to reconfigure their networks to enable these features and set up the lossless fabric, these features will likely be supported in any newer Ethernet switch or director. One point of caution: with lossless Ethernet networks, scalability is typically limited to only 1 or 2 hops. For high scalability environments, consider alternative approaches to the NVMe storage fabric.

One such alternative is NVMe/TCP. This is a relatively new protocol (NVM Express Group ratification in late 2018), and as such is not widely available today. However, the advantage of NVMe/TCP is that it runs on today’s TCP stack, leveraging TCP’s congestion control mechanisms. That means there’s no need for a tuned environment (like that required with NVMe/RoCE), and NVMe/TCP can scale right along with your network. Think of NVMe/TCP in the same way as you do iSCSI today. Like iSCSI, NVMe/TCP will provide good performance, work with existing infrastructure, and be highly scalable. For those customers seeking the best mix of performance and ease of implementation, NVMe/TCP will be the best bet.
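Because the transport is ordinary TCP, reaching an NVMe/TCP target requires nothing more exotic than a socket. The sketch below stops at the connection (a real host would then exchange initialization PDUs per the NVMe/TCP specification), and the target address is a made-up example:

```python
import socket

# NVMe/TCP rides a plain TCP connection, so transport setup is just a socket
# to the target's NVMe-oF port (4420 by convention). A real host would next
# send an Initialize Connection Request PDU and negotiate parameters per the
# NVMe/TCP specification; that exchange is omitted here.
TARGET_IP = "192.0.2.10"    # hypothetical target address (TEST-NET range)
NVME_TCP_PORT = 4420

with socket.create_connection((TARGET_IP, NVME_TCP_PORT), timeout=5) as sock:
    # Ordinary TCP congestion control now governs the fabric; no DCB/PFC
    # lossless tuning is required, unlike NVMe/RoCE.
    print("TCP transport established:", sock.getpeername())
```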

Because there is limited operating system (O/S) support for NVMe/TCP (more on this below), I/O vendors are not currently shipping firmware and drivers that support NVMe/TCP. But a few, like Marvell, have adapters that, from a hardware standpoint, are NVMe/TCP-ready; all that will be required is a firmware update in the future to enable the functionality. Notably, Marvell will support NVMe over TCP with full hardware offload on its FastLinQ adapters in the future. This will enable our NVMe/TCP adapters to deliver high performance and low latency that rivals NVMe/RoCE implementations.

  2. Operating System Support

While it’s great that there is already infrastructure to support NVMe-oF implementations, that’s only the first part of the equation. Next comes O/S support. When it comes to support for NVMe-oF, the major O/S vendors are all in different places; here is a current (August 2020) summary. The major Linux distributions from RHEL and SUSE support both FC-NVMe and NVMe/RoCE and have limited support for NVMe/TCP. VMware, beginning with ESXi 7.0, supports both FC-NVMe and NVMe/RoCE but does not yet support NVMe/TCP. Microsoft Windows Server currently uses an SMB-Direct network protocol and offers no support for any NVMe-oF technology today.

With VMware ESXi 7.0, be aware of a couple of caveats: VMware does not currently support FC-NVMe or NVMe/RoCE in vSAN or with vVols implementations. However, support for these configurations, along with support for NVMe/TCP, is expected in future releases.

  3. Storage Array Support

A few storage array vendors have released mid-range and enterprise-class storage arrays that are NVMe-native. NetApp sells arrays, available today, that support both NVMe/RoCE and FC-NVMe. Pure Storage offers NVMe arrays that support NVMe/RoCE, with plans to support FC-NVMe and NVMe/TCP in the future. In late 2019, Dell EMC introduced its PowerMax line of flash storage that supports FC-NVMe. This year and next, other storage vendors will bring to market arrays that support both NVMe/RoCE and FC-NVMe. We expect storage arrays that support NVMe/TCP to become available in the same time frame.

Future-proof your investments by anticipating NVMe-oF tomorrow

Altogether, we are not too far away from having all the elements in place to make NVMe-oF a reality in the data center. If you expect the servers you are deploying today to operate for the next five years, there is no doubt they will need to connect to NVMe-native storage during that time. So plan ahead.

The key from an I/O and infrastructure perspective is to make sure you are laying the groundwork today to be able to implement NVMe-oF tomorrow. Whether that’s Fibre Channel or Ethernet, customers should be deploying I/O technology that supports NVMe-oF today. Specifically, that means deploying 16GFC enhanced or 32GFC HBAs and switching infrastructure for Fibre Channel SAN connectivity. This includes the Marvell QLogic 2690, 2740 or 2770-series Fibre Channel HBAs. For Ethernet, this includes Marvell’s FastLinQ 41000/45000 series Ethernet adapter technology.

These advances represent a big leap forward and will deliver great benefits to customers. The sooner we build industry consensus around the leading protocols, the faster these benefits can be realized.

For more information on Marvell Fibre Channel and Ethernet technology, go to www.marvell.com. For technology specific to our OEM customer servers and storage, go to www.marvell.com/hpe or www.marvell.com/dell.