
Archive for the ‘Ethernet Adapters and Controllers’ Category

August 27th, 2020

How to Reap the Benefits of NVMe over Fabric in 2020

By Todd Owens, Technical Marketing Manager, Marvell

As native Non-Volatile Memory Express (NVMe®) shared-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

Of course, NVMe technology itself is not new, and is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set that is specific to memory-based storage, delivers increased performance designed to run over PCIe 3.0 or PCIe 4.0 bus architectures, and — offering 64,000 command queues with 64,000 commands per queue — can scale far beyond other storage protocols.
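To put those queueing numbers in perspective, here is a quick back-of-the-envelope comparison in Python. The AHCI/SATA baseline of a single queue with 32 commands is an assumption added here for contrast; it is not a figure from this article.

```python
# Quick arithmetic on the queueing figures above (the AHCI/SATA baseline of a
# single queue with 32 commands is an assumed comparison point).
nvme_queues, nvme_queue_depth = 64_000, 64_000
ahci_queues, ahci_queue_depth = 1, 32

print(f"NVMe outstanding commands: {nvme_queues * nvme_queue_depth:,}")  # 4,096,000,000
print(f"AHCI outstanding commands: {ahci_queues * ahci_queue_depth:,}")  # 32
```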


Unfortunately, most of the NVMe in use today is held captive in the system in which it is installed. While there are a few storage vendors offering NVMe arrays on the market today, the vast majority of enterprise datacenter and mid-market customers are still using traditional storage area networks, running SCSI protocol over either Fibre Channel or Ethernet Storage Area Networks (SAN).

The newest storage networks, however, will be enabled by what we call NVMe over Fabric (NVMe-oF) networks. As with SCSI today, NVMe-oF will offer users a choice of transport protocols. Today, there are three standard protocols that will likely make significant headway into the marketplace. These include:

  • NVMe over Fibre Channel (FC-NVMe)
  • NVMe over RoCE RDMA (NVMe/RoCE)
  • NVMe over TCP (NVMe/TCP)

If NVMe over Fabrics is to achieve its true potential, however, there are three major elements that need to align. First, users will need an NVMe-capable storage network infrastructure in place. Second, all of the major operating system (O/S) vendors will need to provide support for NVMe-oF. Third, customers will need disk array systems that feature native NVMe. Let’s look at each of these in order.

  1. NVMe Storage Network Infrastructure

In addition to Marvell, several leading network and SAN connectivity vendors support one or more varieties of NVMe-oF infrastructure today. This storage network infrastructure (also called the storage fabric), is made up of two main components: the host adapter that provides server connectivity to the storage fabric; and the switch infrastructure that provides all the traffic routing, monitoring and congestion management.

For FC-NVMe, today’s Enhanced 16Gb Fibre Channel (FC) host bus adapters (HBAs) and 32Gb FC HBAs already support FC-NVMe. This includes the Marvell® QLogic® 2690 series Enhanced 16GFC, 2740 series 32GFC and 2770 series Enhanced 32GFC HBAs.

On the Fibre Channel switch side, no significant changes are needed to transition from SCSI-based connectivity to NVMe technology, as the FC switch is agnostic about the payload data. The job of the FC switch is simply to route FC frames from point to point and deliver them in order, with the lowest possible latency. That means any 16GFC or greater FC switch is fully FC-NVMe compatible.

A key decision regarding FC-NVMe infrastructure, however, is whether or not to support both legacy SCSI and next-generation NVMe protocols simultaneously. When customers eventually deploy new NVMe-based storage arrays (and many will over the next three years), they are not going to simply discard their existing SCSI-based systems. In most cases, customers will want individual ports on individual server HBAs that can communicate using both SCSI and NVMe, concurrently. Fortunately, Marvell’s QLogic 16GFC/32GFC portfolio does support concurrent SCSI and NVMe, all with the same firmware and a single driver. This use of a single driver greatly reduces complexity compared to alternative solutions, which typically require two (one for FC running SCSI and another for FC-NVMe).
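As a rough illustration of what the host side can look like once an FC-NVMe capable HBA and driver are in place, the sketch below runs an NVMe discovery over the FC transport using the standard Linux nvme-cli tool. The WWNN/WWPN strings are placeholders only, and many Linux distributions handle FC-NVMe discovery and connection automatically through udev auto-connect scripts, so treat this as an assumption-laden example rather than a required procedure.

```python
# Minimal sketch (assumptions: Linux host with nvme-cli, an FC-NVMe capable HBA
# driver loaded, and placeholder WWNN/WWPN values -- replace with real port names).
import subprocess

host_traddr   = "nn-0x20000024ff000001:pn-0x21000024ff000001"   # initiator port (placeholder)
target_traddr = "nn-0x20000024ff00aaaa:pn-0x21000024ff00aaaa"   # target port (placeholder)

# Query the discovery controller behind the target FC port for NVMe subsystems.
subprocess.run([
    "nvme", "discover",
    "--transport=fc",
    f"--host-traddr={host_traddr}",
    f"--traddr={target_traddr}",
], check=True)
```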

If we look at Ethernet, which is the other popular transport protocol for storage networks, there is one option for NVMe-oF connectivity today and a second option on the horizon. Currently, customers can already deploy NVMe/RoCE infrastructure to support NVMe connectivity to shared storage. This requires RoCE RDMA-enabled Ethernet adapters in the host, and Ethernet switching that is configured to support a lossless Ethernet environment. There are a variety of 10/25/50/100GbE network adapters on the market today that support RoCE RDMA, including the Marvell FastLinQ® 41000 Series and the 45000 Series adapters. 
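Before attempting NVMe/RoCE, it can be useful to confirm that the host actually exposes RDMA-capable devices. Below is a minimal sketch, assuming a Linux host with standard sysfs; it simply lists whatever RDMA devices the kernel has registered.

```python
# Minimal sketch: list the RDMA devices the kernel has registered. A RoCE-capable
# adapter with its driver loaded should show up here. Paths are standard Linux sysfs.
import os

rdma_root = "/sys/class/infiniband"
if os.path.isdir(rdma_root):
    for dev in sorted(os.listdir(rdma_root)):
        with open(os.path.join(rdma_root, dev, "node_type")) as f:
            print(f"{dev}: node_type = {f.read().strip()}")
else:
    print("No RDMA devices found -- RoCE-capable adapter or driver not present.")
```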

On the switching side, most 10/25/100GbE switches that have shipped in the past 2-3 years support data center bridging (DCB) and priority flow control (PFC), and can support the lossless Ethernet environment needed to support a low-latency, high-performance NVMe/RoCE fabric.

While customers may have to reconfigure their networks to enable these features and set up the lossless fabric, these features will likely be supported in any newer Ethernet switch or director. One point of caution: with lossless Ethernet networks, scalability is typically limited to only 1 or 2 hops. For high scalability environments, consider alternative approaches to the NVMe storage fabric.

One such alternative is NVMe/TCP. This is a relatively new protocol (NVM Express Group ratification in late 2018), and as such is not widely available today. However, the advantage of NVMe/TCP is that it runs on today’s TCP stack, leveraging TCP’s congestion control mechanisms. That means there’s no need for a tuned environment (like that required with NVMe/RoCE), and NVMe/TCP can scale right along with your network. Think of NVMe/TCP in the same way as you do iSCSI today. Like iSCSI, NVMe/TCP will provide good performance, work with existing infrastructure, and be highly scalable. For those customers seeking the best mix of performance and ease of implementation, NVMe/TCP will be the best bet.
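To illustrate the "think of it like iSCSI" point, here is a minimal sketch of connecting to an NVMe/TCP target with the standard Linux nvme-cli tool, assuming a host whose kernel includes the nvme_tcp module and whose nvme-cli build supports the TCP transport (support is still limited, as discussed next). The target address and subsystem NQN are placeholders.

```python
# Minimal sketch (assumptions: Linux host with the nvme_tcp kernel module and an
# NVMe/TCP-capable nvme-cli; target address and subsystem NQN are placeholders).
import subprocess

target_ip     = "192.168.10.50"                   # placeholder target address
subsystem_nqn = "nqn.2020-08.example:nvme-pool1"  # placeholder subsystem NQN

subprocess.run(["modprobe", "nvme_tcp"], check=True)
subprocess.run([
    "nvme", "connect",
    "--transport=tcp",
    f"--traddr={target_ip}",
    "--trsvcid=4420",                              # standard NVMe-oF I/O port
    f"--nqn={subsystem_nqn}",
], check=True)

subprocess.run(["nvme", "list"], check=True)       # confirm the new namespaces appear
```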

Because there is limited operating system (O/S) support for NVMe/TCP (more on this below), I/O vendors are not currently shipping firmware and drivers that support NVMe/TCP. But a few, like Marvell, have adapters that, from a hardware standpoint, are NVMe/TCP-ready; all that will be required is a firmware update in the future to enable the functionality. Notably, Marvell will support NVMe over TCP with full hardware offload on its FastLinQ adapters in the future. This will enable our NVMe/TCP adapters to deliver high performance and low latency that rivals NVMe/RoCE implementations.

  2. Operating System Support

While it’s great that there is already infrastructure to support NVMe-oF implementations, that’s only the first part of the equation. Next comes O/S support. When it comes to support for NVMe-oF, the major O/S vendors are all in different places – here is a summary as of August 2020. The major Linux distributions from Red Hat and SUSE support both FC-NVMe and NVMe/RoCE and have limited support for NVMe/TCP. VMware, beginning with ESXi 7.0, supports both FC-NVMe and NVMe/RoCE but does not yet support NVMe/TCP. Microsoft Windows Server currently uses the SMB Direct network protocol and offers no support for any NVMe-oF technology today.

With VMware ESXi 7.0, be aware of a couple of caveats: VMware does not currently support FC-NVMe or NVMe/RoCE in vSAN or with vVols implementations. However, support for these configurations, along with support for NVMe/TCP, is expected in future releases.

  3. Storage Array Support

A few storage array vendors have released mid-range and enterprise-class storage arrays that are NVMe-native. NetApp sells arrays that support both NVMe/RoCE and FC-NVMe, and they are available today. Pure Storage offers NVMe arrays that support NVMe/RoCE, with plans to support FC-NVMe and NVMe/TCP in the future. In late 2019, Dell EMC introduced its PowerMax line of flash storage that supports FC-NVMe. This year and next, other storage vendors will be bringing arrays to market that support both NVMe/RoCE and FC-NVMe. We expect storage arrays that support NVMe/TCP to become available in the same time frame.

Future-proof your investments by anticipating NVMe-oF tomorrow

Altogether, we are not too far away from having all the elements in place to make NVMe-oF a reality in the data center. If you expect the servers you are deploying today to operate for the next five years, there is no doubt they will need to connect to NVMe-native storage during that time. So plan ahead.

The key from an I/O and infrastructure perspective is to make sure you are laying the groundwork today to be able to implement NVMe-oF tomorrow. Whether that’s Fibre Channel or Ethernet, customers should be deploying I/O technology that supports NVMe-oF today. Specifically, that means deploying Enhanced 16GFC or 32GFC HBAs and switching infrastructure for Fibre Channel SAN connectivity. This includes the Marvell QLogic 2690, 2740 or 2770 series Fibre Channel HBAs. For Ethernet, this includes Marvell’s FastLinQ 41000/45000 series Ethernet adapter technology.

These advances represent a big leap forward and will deliver great benefits to customers. The sooner we build industry consensus around the leading protocols, the faster these benefits can be realized.

For more information on Marvell Fibre Channel and Ethernet technology, go to www.marvell.com. For technology specific to our OEM customer servers and storage, go to www.marvell.com/hpe or www.marvell.com/dell.

August 20th, 2020

Navigating Product Name Changes for Marvell Ethernet Adapters at HPE

By Todd Owens, Technical Marketing Manager, Marvell

Hewlett Packard Enterprise (HPE) recently updated its product naming protocol for the Ethernet adapters in its HPE ProLiant and HPE Apollo servers. Its new approach is to include the ASIC vendor’s name in the HPE adapter’s product name. This commonsense approach eliminates the need for model-number decoder rings on the part of Channel Partners and the HPE field team and provides everyone with more visibility and clarity. The change also aligns more closely with the approach HPE has been taking with its “Open” adapters on HPE ProLiant Gen10 Plus servers. All of this is good news for everyone in the server sales ecosystem, including the end user. The products’ core SKU numbers remain the same, too, which is also good.

For HPE Ethernet adapters for HPE ProLiant Gen10 Plus and HPE Apollo Gen10 Plus servers, the name changes were fairly basic. Under this new naming protocol, HPE moved the name of the adapter’s manufacturer to the front and added “for HPE” to the end. For example, what was previously named “HPE Ethernet 10/25Gb 2-port SFP28 QL41232HLCU Adapter” is now “Marvell QL41232HLCU Ethernet 10/25Gb 2-port SFP28 Adapter for HPE”. The model number, QL41232HLCU, did not change.
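Purely as an illustration of that renaming rule (the function below is not an HPE or Marvell tool), the transformation can be expressed in a few lines of Python:

```python
# Illustrative only: apply the Gen10 Plus renaming rule described above --
# the vendor moves to the front, "for HPE" is appended, the model number is unchanged.
def rename_for_hpe(old_name: str, vendor: str, model: str) -> str:
    remainder = old_name.replace("HPE ", "", 1).replace(f"{model} ", "", 1)
    return f"{vendor} {model} {remainder} for HPE"

old_name = "HPE Ethernet 10/25Gb 2-port SFP28 QL41232HLCU Adapter"
print(rename_for_hpe(old_name, "Marvell", "QL41232HLCU"))
# -> Marvell QL41232HLCU Ethernet 10/25Gb 2-port SFP28 Adapter for HPE
```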

The table below shows the new naming for the HPE adapters using Marvell FastLinQ I/O technology and makes it very easy to match up ASIC technology, connection type and form factor across the different products.

HPE SKU    | ORIGINAL HPE MODEL | NEW SKU DESCRIPTION
867707-B21 | 521T               | HPE Ethernet 10Gb 2-port BASE-T QL41401-A2G Adapter
P08446-B21 | 524SFP+            | HPE Ethernet 10Gb 2-port SFP+ QL41401-A2G Adapter
652503-B21 | 530SFP+            | HPE Ethernet 10Gb 2-port SFP+ 57810S Adapter
656596-B21 | 530T               | HPE Ethernet 10Gb 2-port BASE-T 57810S Adapter
700759-B21 | 533FLR-T           | HPE FlexFabric 10Gb 2-port FLR-T 57810S Adapter
700751-B21 | 534FLR-SFP+        | HPE FlexFabric 10Gb 2-port FLR-SFP+ 57810S Adapter
764302-B21 | 536FLR-T           | HPE FlexFabric 10Gb 4-port FLR-T 57840S Adapter
867328-B21 | 621SFP28           | HPE Ethernet 10/25Gb 2-port SFP28 QL41401-A2G Adapter
867334-B21 | 622FLR-SFP28       | HPE Ethernet 10/25Gb 2-port FLR-SFP28 QL41401-A2G CNA


Inevitably, there are a few challenges with the new approach, especially for the adapters used in Gen10 servers. The first is that the firmware in the adapters is not changing. So, when a customer boots up the server, the old model information, such as 524SFP+, will be displayed on the system management screens. The same applies to information passed from the adapter to other management software, such as HPE Network Orchestrator. However, in HPE’s configuration tools – One Config Advanced (OCA) – only the new names and model numbers appear, with no mention of the original numbers. This could create confusion when you’re configuring a system and it boots up, displaying a different model number than the one you are actually using.

Additionally, it is going to take some time for operating system vendors like VMware and Microsoft to update their hardware compatibility listings. Today, you can go to the VMware Compatibility Guide (VCG) and search on a 621SFP28 with no problem. But search on a QL41401 or QL41401-A2G, and you will come up empty. HPE is also working on updating its QuickSpec documents with the new naming, and that will take some time as well.

So, while the model-number decoder rings are no longer required, you will need easy-to-access cross-references to match the new names to the old models. To support you on this, we have updated all our key collateral for HPE-specific Marvell® FastLinQ® Ethernet adapters on the Marvell HPE Microsite. These documents were updated to include not only the new product names that HPE has implemented, but the original model number references as well.

Here are some links to the updated collateral:

Why Marvell FastLinQ for HPE? First, we are a strategic supplier to HPE for I/O technology. In fact, HPE Synergy I/O is based on Marvell FastLinQ technology. Value-add features like storage offload for iSCSI and FCoE and network partitioning are key to enabling HPE to deliver composable network connectivity on their flagship blade solutions.

In addition to storage offload, Marvell provides HPE with unique features such as Universal RDMA and SmartAN® technology. Universal RDMA provides the HPE customer with the ability to run either RoCE RDMA or iWARP RDMA protocols on a single adapter. So, as their needs for implementing RDMA protocols change, there is no need to change adapters. SmartAN technology automatically configures the adapter ports for the proper 10GbE or 25GbE bandwidth, and – based on the type of switch the adapter is connected to and the physical cabling connection – adjusts the forward error correction settings. FastLinQ adapters also support a variety of other offloads including SR-IOV, DPDK and tunneling. This minimizes the impact I/O traffic management has on the host CPU, freeing up CPU resources to do more important work.

Our team of I/O experts stands ready to help you differentiate your solutions based on industry leading I/O technology and features for HPE servers. If you need help selecting the right I/O technology for your HPE customer, contact our field sales and application engineering experts using the Contacts link on our Marvell HPE Microsite.

July 28th, 2020

Living on the Network Edge: Security

By Alik Fishman, Senior Product Marketing Manager, Marvell


In our series Living on the Network Edge, we have looked at the trends driving Intelligence, Performance and Telemetry to the network edge. In this installment, let’s look at the changing role of network security and the ways integrating security capabilities in network access can assist in effectively streamlining policy enforcement, protection, and remediation across the infrastructure.

Cybersecurity threats are now a daily struggle for businesses, which are experiencing a huge increase in hacked and breached data from sources increasingly common in the workplace, like mobile and IoT devices. Not only is the number of security breaches going up, they are also increasing in severity and duration, with the average lifecycle from breach to containment lasting nearly a year [1] and presenting expensive operational challenges. With the digital transformation and the emerging technology landscape (remote access, cloud-native models, proliferation of IoT devices, etc.) dramatically impacting networking architectures and operations, new security risks are being introduced. To address this, enterprise infrastructure is on the verge of a remarkable change, elevating network intelligence, performance, visibility and security [2].

COVID-19 has been a wake-up call for accelerating digital transformation, as companies with a greater digital presence are showing more resiliency [3]. The workforce is expected to transform post-COVID-19, with 20-45% [4] becoming distributed and working remotely, either from home or from smaller distributed office spaces. The change in the working environment and the accelerated migration to hybrid-cloud and multi-cloud are driving a new normal: the borderless enterprise is now a reality, pushing network infrastructure to add the end-to-end management, automation and security functionality needed to support businesses in this new digital era. As mobility and cloud applications extend traditional boundaries, the borderless enterprise becomes increasingly vulnerable, and a broader attack surface is no longer contained within well-defined and defended perimeters. Cracks are showing. Remote workers’ identities and devices are the new security perimeter, with 70% of all breaches originating at endpoints, according to IDC research [5].

This is where embedded security in network access provides essential frontline protection at the entry points for malicious attacks, by enforcing zero-trust access policies. No traffic is trusted from the outset, and traffic is not carried in the clear within networking devices throughout the infrastructure. Network telemetry and integrated security safeguards capable of inspecting workloads at line rate team up with security appliances and AI-analytics tools to intelligently flag suspicious traffic and rapidly detect threats. Segmentation of security zones and agile group policy enforcement limit areas of exposure, prevent lateral movement, and enable quick remediation. IEEE 802.1AE MACsec encryption on all ports secures data throughout the network and prevents intrusion. Monitoring control-protocol exceptions and activating rate limiters add layers of protection to the control and management planes, preventing DDoS attacks. Integrated secure boot and secure storage protect against counterfeiting and attempts to compromise network hardware and software.

Cybersecurity is now a dominant priority for every organization as each adapts to a post-COVID-19 world. Network-embedded security is on the rise, becoming a powerful ally in the fight against ever-evolving security threats. In this dynamic world, what can your network do to secure your assets?

Living on the Network Edge

What steps are you taking to bolster your network for living on the edge? Telemetry, Intelligence, Performance and Security are critical technologies for the growing borderless campus as mobility and cloud applications proliferate and drive networking functions. Learn more at: https://www.marvell.com/solutions/enterprise.html.

###

[1] https://www.varonis.com/blog/cybersecurity-statistics
[2] Cisco 2019 Global Networking Trends Survey
[3] Morgan Stanley, 2Q20 CIO Survey: IT Hardware Takeaways
[4] Dell’Oro Group Ethernet Switch – Campus five-year forecast, 2020-2024
[5] Forbes 2020 Roundup Of Cybersecurity Forecasts And Market Estimates

May 13th, 2019

FastLinQ® NICs + Red Hat SDN

By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

A bit of validation once in a while is good for all of us – that’s pretty true whether you are the one providing it or, conversely, the one receiving it.  Most of the time it seems to be me that is giving out validation rather than getting it.  Like the other day when my wife tried on a new dress and asked me, “How do I look?”  Now, of course, we all know there is only one way to answer a question like that – if they want to avoid sleeping on the couch at least.

Recently, the Marvell team received some well-deserved validation for its efforts. The FastLinQ 45000/41000 series of high-performance Ethernet Network Interface Controllers (NICs) that we supply to the industry, which supports 10/25/50/100GbE operation, is now fully qualified by Red Hat for Fast Data Path (FDP) 19.B.

Figure 1: The FastLinQ 45000 and 41000 Ethernet Adapter Series from Marvell

Red Hat FDP is employed in an extensive array of the products found within the Red Hat portfolio – such as the Red Hat OpenStack Platform (RHOSP), as well as the Red Hat OpenShift Container Platform and Red Hat Virtualization (RHV).  Having FDP-qualification means that FastLinQ can now address a far broader scope of the open-source Software Defined Networking (SDN) use cases – including Open vSwitch (OVS), Open vSwitch with the Data Plane Development Kit (OVS-DPDK), Single Root Input/Output Virtualization (SR-IOV) and Network Functions Virtualization (NFV).

The engineers at Marvell worked closely with our counterparts at Red Hat on this project, in order to ensure that the FastLinQ feature set would operate in conjunction with the FDP production channel. This involved many hours of complex, in-depth testing.  By being FDP 19.B qualified, Marvell FastLinQ Ethernet Adapters can enable seamless SDN deployments with RHOSP 14, RHEL 8.0, RHEV 4.3 and OpenShift 3.11.

Widely recognized as the data networking ‘Swiss Army Knife,’ our FastLinQ 45000/41000 Ethernet adapters benefit from a highly flexible, programmable architecture. This architecture is capable of delivering up to 68 million small packets per second, provides 240 SR-IOV virtual functions, and supports tunneling while maintaining stateless offloads. As a result, customers have the hardware they need to seamlessly implement and manage even the most challenging network workloads in what is becoming an increasingly virtualized landscape. Supporting Universal RDMA (concurrent RoCE, RoCEv2 and iWARP operation), unlike most competing NICs, they offer a highly scalable and flexible solution. Learn more here.
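As an example of putting those SR-IOV virtual functions to work, the sketch below uses the standard Linux sysfs interface to create VFs on a port. The interface name and VF count are placeholders, and the same mechanism applies to any SR-IOV-capable NIC, not just FastLinQ.

```python
# Minimal sketch (assumptions: Linux host, SR-IOV enabled in BIOS and driver,
# run as root; "ens2f0" is a placeholder -- use the actual port name).
iface = "ens2f0"
dev   = f"/sys/class/net/{iface}/device"

with open(f"{dev}/sriov_totalvfs") as f:
    print("VFs supported on this port:", f.read().strip())

with open(f"{dev}/sriov_numvfs", "w") as f:
    f.write("8")                      # create 8 virtual functions on this port
```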

 

Validation feels good. Thank you to the Red Hat and Marvell teams!

February 20th, 2019

NVMe/TCP – Simplicity is the Key to Innovation

By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

Whether it is the aesthetics of the iPhone or a work of art like Monet’s ‘Water Lilies’, simplicity is often a very attractive trait. I hear this resonate in everyday examples from my own life – with my boss at work, whose mantra is “make it simple”, and my wife of 15 years telling my teenage daughter “beauty lies in simplicity”. For the record, both of these statements generally fall upon deaf ears.

The Non-Volatile Memory Express (NVMe) technology that is now driving the progression of data storage is another place where the value of simplicity is starting to be recognized, particularly with the advent of the NVMe-over-Fabrics (NVMe-oF) topology that is just about to start seeing deployment. The simplest and most trusted of Ethernet fabrics, namely Transmission Control Protocol (TCP), has now been confirmed as an approved NVMe-oF standard by the NVMe Group[1].


Figure 1: All the NVMe fabrics currently available

Just to give a bit of background information here, NVMe basically enables the efficient utilization of flash-based Solid State Drives (SSDs) by accessing them over a high-speed interface, like PCIe, and using a streamlined command set that is specifically designed for flash implementations. Now, by definition, NVMe is limited to the confines of a single server, which presents a challenge when looking to scale out NVMe and access it from any element within the data center. This is where NVMe-oF comes in. All Flash Arrays (AFAs), Just a Bunch of Flash (JBOF) or Fabric-Attached Bunch of Flash (FBOF) and Software Defined Storage (SDS) architectures will each be able to incorporate a front end that has NVMe-oF connectivity as its foundation. As a result, the effectiveness with which servers, clients and applications are able to access external storage resources will be significantly enhanced.

A series of ‘fabrics’ have now emerged for scaling out NVMe. The first of these was Ethernet Remote Direct Memory Access (RDMA) – in both its RDMA over Converged Ethernet (RoCE) and Internet Wide-Area RDMA Protocol (iWARP) derivatives. It was followed soon after by NVMe-over-Fibre Channel (FC-NVMe), and then by fabrics based on FCoE, InfiniBand and OmniPath.

But with so many fabric options already out there, why is it necessary to come up with another one? Do we really need NVMe-over-TCP (NVMe/TCP) too? Well, RDMA-based NVMe fabrics (whether RoCE or iWARP) are supposed to deliver the extremely low latency that NVMe requires via a myriad of different technologies – like zero copy and kernel bypass – driven by specialized Network Interface Controllers (NICs). However, there are several factors which hamper this, and these need to be taken into account.

  • Firstly, most of the earlier fabrics (like RoCE/iWARP) have no existing install base for storage networking to speak of (Fibre Channel is the only notable exception to this). For example, of the 12 million 10GbE+ NIC ports currently in operation within enterprise data centers, less than 5% have any RDMA capability (according to my quick back-of-the-envelope calculations).
  • The most popular RDMA protocol (RoCE) mandates a lossless network on which to run (and this in turn requires highly skilled network engineers that command higher salaries). Even then, this protocol is prone to congestion problems, adding to further frustration.
  • Finally, and perhaps most telling, the two RDMA protocols (RoCE and iWARP) are mutually incompatible.

Unlike any other NVMe fabric, the pervasiveness of TCP is huge – it is absolutely everywhere. TCP/IP is the fundamental foundation of the Internet, and every single Ethernet NIC and network out there supports the TCP protocol. With TCP, availability and reliability are simply not issues that need to be worried about. Extending the scale of NVMe over a TCP fabric seems like the logical thing to do.

NVMe/TCP is fast (especially when using Marvell FastLinQ 10/25/50/100GbE NICs, as they have a built-in full offload for NVMe/TCP), it leverages existing infrastructure, and it keeps things inherently simple. That is a beautiful prospect for any technologist, and it is attractive to company CIOs worried about budgets too.

So, once again, simplicity wins in the long run!

[1] https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/

October 18th, 2018

Looking to Converge? HPE Launches Next Gen Marvell FastLinQ CNAs

By Todd Owens, Technical Marketing Manager, Marvell

Converging network and storage I/O onto a single wire can drive significant cost reductions in the small to mid-size data center by reducing the number of connections required. Fewer adapter ports mean fewer cables, optics and switch ports consumed, all of which reduces OPEX in the data center. Customers can take advantage of converged I/O by deploying Converged Network Adapters (CNAs) that provide not only networking connectivity but also storage offloads for iSCSI and FCoE.

Just recently, HPE has introduced two new CNAs based on Marvell® FastLinQ® 41000 Series technology. The HPE StoreFabric CN1200R 10GBASE-T Converged Network Adapter and HPE StoreFabric CN1300R 10/25Gb Converged Network Adapter are the latest additions in HPE’s CNA portfolio. These are the only HPE StoreFabric CNAs to also support Remote Direct Memory Access (RDMA) technology (concurrently with storage offloads).

As we all know, the amount of data being generated continues to increase and that data needs to be stored somewhere. Recently, we are seeing an increase in the number of iSCSI connected storage devices in mid-market, branch and campus environments. iSCSI is great for these environments because it is easy to deploy, it can run on standard Ethernet, and there are a variety of new iSCSI storage offerings available, like Nimble and MSA all flash storage arrays (AFAs).

One challenge with iSCSI is the load it puts on the server CPU for storage traffic processing when using software initiators – a common approach to storage connectivity. To combat this, storage administrators can turn to CNAs with full iSCSI protocol offload. Offloading transfers the burden of processing the storage I/O from the CPU to the adapter.

Figure 1: Benefits of Adapter Offloads

As Figure 1 shows, Marvell-driven testing demonstrates that hardware offload in FastLinQ 10/25GbE adapters can reduce CPU utilization by as much as 50% compared to an Ethernet NIC with software initiators. This means less burden on the CPU, allowing you to add more virtual machines per server and potentially reducing the number of physical servers required. A small item like an intelligent I/O adapter from Marvell can provide significant TCO savings.
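A back-of-the-envelope model of what that CPU reduction can mean for consolidation is sketched below. All of the numbers (core count, software-initiator overhead, VM sizing) are illustrative assumptions, not measured data.

```python
# Illustrative model only: translate the ~50% storage-I/O CPU reduction cited
# above into freed cores and potential additional VMs per server.
cores_per_server  = 32      # assumed host core count
sw_iscsi_overhead = 0.20    # assumed fraction of CPU consumed by a software initiator
offload_reduction = 0.50    # reduction with full hardware offload (per Figure 1)

freed_cores = cores_per_server * sw_iscsi_overhead * offload_reduction
print(f"CPU cores freed per server: {freed_cores:.1f}")

vcpus_per_vm = 2            # assumed VM size
print(f"Potential extra VMs per server: {int(freed_cores / vcpus_per_vm)}")
```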

Another challenge has been the latency associated with Ethernet connectivity. This can now be addressed with RDMA technology. iWARP, RDMA over Converged Ethernet (RoCE) and iSCSI Extensions for RDMA (iSER) all allow I/O transactions to be performed directly between application memory and the adapter, bypassing the O/S kernel. This speeds transactions and reduces overall I/O latency. The result is better performance and faster applications.

The new HPE StoreFabric CNAs become the ideal devices for converging network and iSCSI storage traffic for HPE ProLiant and Apollo customers. The HPE StoreFabric CN1300R 10/25GbE CNA supports plenty of bandwidth that can be allocated to both the network and storage traffic. In addition, with support for Universal RDMA (support for both iWARP and RoCE) as well as iSER, this adapter provides significantly lower latency than standard network adapters for both the network and storage traffic.

The HPE StoreFabric 1300R also supports a technology Marvell calls SmartAN™, which allows the adapter to automatically configure itself when transitioning between 10GbE and 25GbE networks. This is key because at 25GbE speeds, Forward Error Correction (FEC) can be required, depending on the cabling used. To make things more complex, there are two different types of FEC that can be implemented. To eliminate all the complexity, SmartAN automatically configures the adapter to match the FEC, cabling and switch settings for either 10GbE or 25GbE connections, with no user intervention required.
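To see the kind of manual work SmartAN removes, here is a hedged sketch of checking and forcing FEC by hand on a Linux host with ethtool. The interface name and the chosen FEC mode are placeholders, and exact option support depends on the driver and ethtool version.

```python
# Minimal sketch (assumptions: Linux host with a reasonably recent ethtool and a
# driver that reports FEC; "ens2f0" and the "rs" mode are placeholders).
import subprocess

iface = "ens2f0"

# Show the currently configured/active FEC mode on the port.
subprocess.run(["ethtool", "--show-fec", iface], check=True)

# Manually force RS-FEC, which certain 25GbE cabling may require.
subprocess.run(["ethtool", "--set-fec", iface, "encoding", "rs"], check=True)
```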

When budget is the key concern, the HPE StoreFabric CN1200R is the perfect choice. Supporting 10GBASE-T connectivity, this adapter connects to existing CAT6A copper cabling using RJ-45 connections. This eliminates the need for more expensive DAC cables or optical transceivers. The StoreFabric CN1200R also supports RDMA protocols (iWARP, RoCE and iSER) for lower overall latency.

Why converge though? It’s all about a tradeoff between cost and performance. If we do the math to compare the cost of deploying separate LAN and storage networks versus a converged network, we can see that converging I/O greatly reduces the complexity of the infrastructure and can cut acquisition costs in half. There are also additional long-term cost savings associated with managing one network instead of two.

Figure 2: Eight Server Network Infrastructure Comparison

In this pricing scenario, we are looking at eight servers connecting to separate LAN and SAN environments versus connecting to a single converged environment as shown in figure 2.

Table 1: LAN/SAN versus Converged Infrastructure Price Comparison

The converged environment price is 55% lower than the separate network approach. The downside is the potential storage performance impact of moving from a Fibre Channel SAN in the separate network environment to a converged iSCSI environment. The iSCSI performance can be increased by implementing a lossless Ethernet environment using Data Center Bridging and Priority Flow Control along with RoCE RDMA. This does add significant networking complexity but will improve the iSCSI performance by reducing the number of interrupts for storage traffic.
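For readers who want to rerun the comparison with their own numbers, here is a simple cost model in the spirit of Table 1. Every price below is a placeholder assumption for illustration, not an HPE list price.

```python
# Illustrative cost model only (all prices are placeholder assumptions) for the
# eight-server scenario: separate LAN + FC SAN versus a single converged fabric.
servers        = 8
nic_price      = 400     # dual-port Ethernet NIC (assumed)
hba_price      = 900     # dual-port FC HBA (assumed)
lan_port_price = 150     # per Ethernet switch port (assumed)
san_port_price = 400     # per FC switch port (assumed)
cna_price      = 700     # dual-port CNA (assumed)
cvg_port_price = 200     # per converged Ethernet switch port (assumed)

separate  = servers * (nic_price + hba_price) \
          + servers * 2 * (lan_port_price + san_port_price)
converged = servers * cna_price + servers * 2 * cvg_port_price

print(f"Separate LAN + SAN: ${separate:,}")
print(f"Converged fabric:   ${converged:,}")
print(f"Savings:            {100 * (1 - converged / separate):.0f}%")
```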

One additional scenario for these new adapters is in Hyper-Converged Infrastructure (HCI) implementations. With HCI, software defined storage is used. This means storage within the servers is shared across the network. Common implementations include Windows Storage Spaces Direct (S2D) and VMware vSAN Ready Node deployments. Both the HPE StoreFabric CN1200R BASE-T and CN1300R 10/25GbE CNAs are certified for use in either of these HCI implementations.

Figure 3: FastLinQ Technology Certified for Microsoft WSSD and VMware vSAN Ready Node

In summary, the new CNAs from the HPE StoreFabric group provide high-performance, low-cost connectivity for converged environments. With support for 10Gb and 25Gb Ethernet bandwidths, iWARP and RoCE RDMA, and the ability to automatically negotiate changes between 10GbE and 25GbE connections with SmartAN™ technology, these are the ideal I/O connectivity options for small to mid-size server and storage networks. To get the most out of your server investments, choose Marvell FastLinQ Ethernet I/O technology, which is engineered from the start with performance, total cost of ownership, flexibility and scalability in mind.

For more information on converged networking, contact one of our HPE experts in the field to talk through your requirements. Just use the HPE Contact Information link on our HPE Microsite at www.marvell.com/hpe.

August 3rd, 2018

Infrastructure Powerhouse: Marvell and Cavium become one!

By Todd Owens, Technical Marketing Manager, Marvell

Marvell’s acquisition of Cavium closed on July 6th, 2018, and the integration is well under way. Cavium is now a wholly-owned subsidiary of Marvell. Our combined mission as Marvell is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. The combination of the two companies makes for an infrastructure powerhouse, serving a variety of customers in the Cloud/Data Center, Enterprise/Campus, Service Provider, SMB/SOHO, Industrial and Automotive industries.

For our business with HPE, the first thing you need to know is that it is business as usual. The folks you engaged with on the I/O and processor technology we provided to HPE before the acquisition are the same folks you engage with now. Marvell is a leading provider of storage technologies, including ultra-fast read channels, high-performance processors and transceivers, that are found in the vast majority of hard disk drive (HDD) and solid-state drive (SSD) modules used in HPE ProLiant and HPE Storage products today.

Our industry leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to provide connectivity for HPE Server and Storage solutions. The focus for these products will continue to be the intelligent I/O of choice for HPE, with the performance, flexibility, and reliability we are known for.

Marvell’s Portfolio of FastLinQ Ethernet and QLogic Fibre Channel I/O Adapters

We will continue to provide ThunderX2® Arm® processor technology for high-performance computing servers like the HPE Apollo 70. We will also continue to provide the Ethernet networking technology that is embedded into HPE Servers and Storage today, and the Marvell ASIC technology used for the iLO 5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.

iLO 5 for HPE ProLiant Gen10 is deployed on Marvell SoCs

That sounds great, but what’s going to change over time?
The combined company now has a much broader portfolio of technology to help HPE deliver best-in-class solutions at the edge, in the network and in the data center.

Marvell has industry-leading switching technology from 1GbE to 100GbE and beyond. This enables us to deliver connectivity from the IoT edge, to the data center and the cloud. Our Intelligent NIC technology provides compression, encryption and more to enable customers to analyze network traffic faster and more intelligently than ever before. Our security solutions and enhanced SoC and Processor capabilities will help our HPE design-in team collaborate with HPE to innovate next-generation server and storage solutions.

Down the road, you’ll also see a shift in our branding and in where you access information. While our product-specific brands, like ThunderX2 for Arm, QLogic for Fibre Channel and FastLinQ for Ethernet, will remain, many things will transition from Cavium to Marvell. Our web-based resources will start to change, as will our email addresses. For example, you can now access our HPE Microsite at www.marvell.com/hpe. Soon, you’ll be able to contact us at “hpesolutions@marvell.com” as well. The collateral you leverage today will be updated over time. In fact, this has already started with updates to our HPE-specific Line Card, our HPE Ethernet Quick Reference Guide, our Fibre Channel Quick Reference Guides and our presentation materials. Updates will continue over the next few months.

In summary, we are bigger and better. We are one team that is more focused than ever to help HPE, their partners and customers thrive with world-class technology we can bring to bear. If you want to learn more, engage with us today. Our field contact details are here. We are all excited for this new beginning to make “I/O and Infrastructure Matter!” each and every day.