Archive for the ‘Ethernet Adapters and Controllers’ Category


FastLinQ® NICs + Red Hat SDN

By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

A bit of validation once in a while is good for all of us – that’s pretty true whether you are the one providing it or, conversely, the one receiving it.  Most of the time it seems to be me that is giving out validation rather than getting it.  Like the other day when my wife tried on a new dress and asked me, “How do I look?”  Now, of course, we all know there is only one way to answer a question like that – if they want to avoid sleeping on the couch at least.

Recently, the Marvell team received some well-deserved validation for its efforts. The FastLinQ 45000/41000 series of high-performance Ethernet Network Interface Controllers (NICs) that we supply to the industry, which supports 10/25/50/100GbE operation, is now fully qualified by Red Hat for Fast Data Path (FDP) 19.B.

Figure 1: The FastLinQ 45000 and 41000 Ethernet Adapter Series from Marvell

Red Hat FDP is employed across an extensive array of products in the Red Hat portfolio – such as the Red Hat OpenStack Platform (RHOSP), the Red Hat OpenShift Container Platform and Red Hat Virtualization (RHV). FDP qualification means that FastLinQ can now address a far broader scope of open-source Software Defined Networking (SDN) use cases – including Open vSwitch (OVS), Open vSwitch with the Data Plane Development Kit (OVS-DPDK), Single Root Input/Output Virtualization (SR-IOV) and Network Functions Virtualization (NFV).

The engineers at Marvell worked closely with our counterparts at Red Hat on this project to ensure that the FastLinQ feature set operates correctly with the FDP production channel. This involved many hours of complex, in-depth testing. With FDP 19.B qualification, Marvell FastLinQ Ethernet Adapters can enable seamless SDN deployments with RHOSP 14, RHEL 8.0, RHV 4.3 and OpenShift 3.11.

Widely recognized as the data networking ‘Swiss Army Knife,’ our FastLinQ 45000/41000 Ethernet adapters are built on a highly flexible, programmable architecture. That architecture can deliver up to 68 million small-packet-per-second performance and 240 SR-IOV virtual functions, and it supports tunneling while maintaining stateless offloads. As a result, customers have the hardware they need to seamlessly implement and manage even the most challenging network workloads in an increasingly virtualized landscape. Unlike most competing NICs, they also support Universal RDMA (concurrent RoCE, RoCEv2 and iWARP operation), making them a highly scalable and flexible solution.  Learn more here.
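For readers who want to see what putting those SR-IOV virtual functions to work can look like, below is a minimal sketch of how virtual functions are typically enabled on a Linux host through the standard kernel sysfs interface. The interface name and VF count are placeholder assumptions for illustration only; the actual limits and any FastLinQ-specific tuning depend on the adapter, firmware and driver in use.

# Minimal sketch: enabling SR-IOV virtual functions via the Linux sysfs ABI.
# "ens1f0" and the VF count are placeholders, not a recommended configuration.
from pathlib import Path

def enable_sriov_vfs(interface: str, num_vfs: int) -> None:
    """Request num_vfs virtual functions on the given interface (requires root)."""
    dev = Path(f"/sys/class/net/{interface}/device")
    total = int((dev / "sriov_totalvfs").read_text())  # maximum VFs the device reports
    if num_vfs > total:
        raise ValueError(f"{interface} supports at most {total} VFs")
    # The VF count can only be changed when it is currently zero,
    # so reset it before requesting the new value.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_sriov_vfs("ens1f0", 16)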

Figure 2: SDN powered by the FastLinQ NIC packet processing engine


Validation feels good. Thank you to the Red Hat and Marvell teams!


NVMe/TCP – Simplicity is the Key to Innovation

By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

Whether it is the aesthetics of the iPhone or a work of art like Monet’s ‘Water Lilies’, simplicity is often a very attractive trait. I hear this resonate in everyday examples from my own life – with my boss at work, whose mantra is “make it simple”, and my wife of 15 years telling my teenage daughter that “beauty lies in simplicity”. For the record, both of these statements generally fall upon deaf ears.

The Non-Volatile Memory Express (NVMe) technology that is now driving the progression of data storage is another place where the value of simplicity is starting to be recognized – in particular with the advent of the NVMe-over-Fabrics (NVMe-oF) topology that is just about to start seeing deployment. The simplest and most trusted of Ethernet-based transports, namely the Transmission Control Protocol (TCP), has now been approved as an NVMe-oF standard transport by the NVM Express organization[1].

Figure 1: All the NVMe fabrics currently available

Just to give a bit of background information here, NVMe enables the efficient utilization of flash-based Solid State Drives (SSDs) by accessing them over a high-speed interface, like PCIe, and using a streamlined command set specifically designed for flash implementations. By definition, though, NVMe is limited to the confines of a single server, which presents a challenge when looking to scale out NVMe and access it from any element within the data center. This is where NVMe-oF comes in. All Flash Arrays (AFAs), Just a Bunch of Flash (JBOF) or Fabric-Attached Bunch of Flash (FBOF) and Software Defined Storage (SDS) architectures will each be able to incorporate a front end that has NVMe-oF connectivity as its foundation. As a result, the effectiveness with which servers, clients and applications are able to access external storage resources will be significantly enhanced.

A series of ‘fabrics’ have now emerged for scaling out NVMe. The first of these was Ethernet Remote Direct Memory Access (RDMA) – in both its RDMA over Converged Ethernet (RoCE) and Internet Wide-Area RDMA Protocol (iWARP) derivatives. It was soon followed by NVMe over Fibre Channel (FC-NVMe), and then by fabrics based on FCoE, InfiniBand and Omni-Path.

But with so many fabric options already out there, why is it necessary to come up with another one? Do we really need NVMe-over-TCP (NVMe/TCP) too? Well, RDMA-based NVMe fabrics (whether RoCE or iWARP) are supposed to deliver the extremely low latency that NVMe requires via a myriad of different technologies – like zero-copy and kernel bypass – driven by specialized Network Interface Controllers (NICs). However, there are several factors which hamper this, and these need to be taken into account.

  • Firstly, most of the earlier fabrics (like RoCE/iWARP) have no existing install base for storage networking to speak of (Fibre Channel is the only notable exception). For example, of the 12 million 10GbE+ NIC ports currently in operation within enterprise data centers, less than 5% have any RDMA capability (according to my quick back-of-the-envelope calculations).
  • The most popular RDMA protocol (RoCE) mandates a lossless network on which to run (which in turn requires highly skilled network engineers who command higher salaries). Even then, this protocol is prone to congestion problems, adding further frustration.
  • Finally, and perhaps most telling, the two RDMA protocols (RoCE and iWARP) are mutually incompatible.

Unlike any other NVMe fabric, TCP is hugely pervasive – it is absolutely everywhere. TCP/IP is the fundamental foundation of the Internet, and every single Ethernet NIC and network out there supports the TCP protocol. With TCP, availability and reliability are simply not issues that need to be worried about. Extending the scale of NVMe over a TCP fabric seems like the logical thing to do.

NVMe/TCP is fast (especially when using Marvell FastLinQ 10/25/50/100GbE NICs, as they have a built-in full offload for NVMe/TCP), it leverages existing infrastructure and it keeps things inherently simple. That is a beautiful prospect for any technologist, and it is also attractive to company CIOs worried about budgets.
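To illustrate just how simple that is in practice, here is a minimal sketch (assuming a Linux host with the standard nvme-cli utility and the nvme_tcp kernel module loaded) of discovering and connecting to an NVMe/TCP target. The target address and subsystem NQN are placeholders, not a real configuration.

# Minimal sketch: attaching an NVMe/TCP namespace with standard Linux tooling.
# The address and NQN below are illustrative placeholders.
import subprocess

TARGET_IP = "192.168.10.20"                       # placeholder NVMe/TCP target address
TARGET_NQN = "nqn.2019-06.example.com:storage1"   # placeholder subsystem NQN

# Discover the subsystems the target exposes over TCP.
subprocess.run(["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", "4420"], check=True)

# Connect to one subsystem; it then appears as a local /dev/nvmeXnY block device.
subprocess.run(["nvme", "connect", "-t", "tcp", "-a", TARGET_IP, "-s", "4420", "-n", TARGET_NQN], check=True)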

So, once again, simplicity wins in the long run!

[1] https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/


Looking to Converge? HPE Launches Next Gen Marvell FastLinQ CNAs

By Todd Owens, Technical Marketing Manager, Marvell

Converging network and storage I/O onto a single wire can drive significant cost reductions in the small to mid-size data center by reducing the number of connections required. Fewer adapter ports mean fewer cables, optics and switch ports consumed, all of which reduces OPEX in the data center. Customers can take advantage of converged I/O by deploying Converged Network Adapters (CNAs) that provide not only network connectivity but also storage offloads for iSCSI and FCoE.

HPE recently introduced two new CNAs based on Marvell® FastLinQ® 41000 Series technology. The HPE StoreFabric CN1200R 10GBASE-T Converged Network Adapter and the HPE StoreFabric CN1300R 10/25Gb Converged Network Adapter are the latest additions to HPE’s CNA portfolio. These are the only HPE StoreFabric CNAs that also support Remote Direct Memory Access (RDMA) technology, concurrently with storage offloads.

As we all know, the amount of data being generated continues to increase, and that data needs to be stored somewhere. Recently, we have seen an increase in the number of iSCSI-connected storage devices in mid-market, branch and campus environments. iSCSI is great for these environments because it is easy to deploy, it runs on standard Ethernet, and there are a variety of new iSCSI storage offerings available, like the Nimble and MSA all-flash arrays (AFAs).

One challenge with iSCSI is the load it puts on the server CPU for storage traffic processing when using software initiators – a common approach to storage connectivity. To combat this, storage administrators can turn to CNAs with full iSCSI protocol offload. Offloading transfers the burden of processing the storage I/O from the CPU to the adapter.
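For context, the sketch below shows roughly what that software-initiator path looks like on a Linux host with the standard open-iscsi tools; this is the host-CPU work that a full-offload CNA takes over (an offloaded initiator is configured through the adapter’s own management tools instead). The portal address and target IQN are placeholders.

# Rough sketch of the Linux software-initiator path (open-iscsi) that a
# full-offload CNA replaces. Portal and IQN values are placeholders.
import subprocess

PORTAL = "192.168.20.30:3260"                    # placeholder iSCSI portal
TARGET_IQN = "iqn.2018-04.com.example:array01"   # placeholder target IQN

# Discover the targets offered by the portal.
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)

# Log in; from here on the kernel initiator processes all storage I/O on the
# host CPU, which is exactly the work an iSCSI offload adapter removes.
subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"], check=True)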

Figure 1: Benefits of Adapter Offloads

As Figure 1 shows, Marvell-driven testing found that hardware offload in FastLinQ 10/25GbE adapters can reduce CPU utilization by as much as 50% compared to an Ethernet NIC using software initiators. This means less burden on the CPU, allowing you to add more virtual machines per server and potentially reducing the number of physical servers required. A small item like an intelligent I/O adapter from Marvell can deliver significant TCO savings.

Another challenge has been the latency associated with Ethernet connectivity. This can now be addressed with RDMA technology. iWARP, RDMA over Converged Ethernet (RoCE) and iSCSI Extensions for RDMA (iSER) all allow I/O transactions to be performed directly between application memory and the adapter, bypassing the O/S kernel. This speeds up transactions and reduces overall I/O latency. The result is better performance and faster applications.

The new HPE StoreFabric CNAs are ideal devices for converging network and iSCSI storage traffic for HPE ProLiant and Apollo customers. The HPE StoreFabric CN1300R 10/25GbE CNA provides plenty of bandwidth that can be allocated to both network and storage traffic. In addition, with support for Universal RDMA (both iWARP and RoCE) as well as iSER, this adapter provides significantly lower latency than standard network adapters for both the network and storage traffic.

The HPE StoreFabric CN1300R also supports a technology Marvell calls SmartAN™, which allows the adapter to automatically configure itself when transitioning between 10GbE and 25GbE networks. This is key because at 25GbE speeds, Forward Error Correction (FEC) can be required, depending on the cabling used. To make things more complex, there are two different types of FEC that can be implemented. To eliminate all this complexity, SmartAN automatically configures the adapter to match the FEC, cabling and switch settings for either 10GbE or 25GbE connections, with no user intervention required.

When budget is the key concern, the HPE StoreFabric CN1200R is the perfect choice. Supporting 10GBASE-T connectivity, this adapter connects to existing CAT6A copper cabling using RJ-45 connections. This eliminates the need for more expensive DAC cables or optical transceivers. The StoreFabric CN1200R also supports RDMA protocols (iWARP, RoCE and iSER) for lower overall latency.

Why converge though? It’s all about the tradeoff between cost and performance. If we do the math to compare the cost of deploying separate LAN and storage networks versus a converged network, we can see that converging I/O greatly reduces the complexity of the infrastructure and can cut acquisition costs by half. There are also additional long-term cost savings associated with managing one network instead of two.

Figure 2: Eight Server Network Infrastructure Comparison

In this pricing scenario, we are looking at eight servers connecting to separate LAN and SAN environments versus connecting to a single converged environment, as shown in Figure 2.

Table 1: LAN/SAN versus Converged Infrastructure Price Comparison

The converged environment price is 55% lower than the separate network approach. The downside is the potential storage performance impact of moving from a Fibre Channel SAN in the separate network environment to a converged iSCSI environment. The iSCSI performance can be increased by implementing a lossless Ethernet environment using Data Center Bridging and Priority Flow Control along with RoCE RDMA. This does add significant networking complexity but will improve the iSCSI performance by reducing the number of interrupts for storage traffic.

One additional scenario for these new adapters is in Hyper-Converged Infrastructure (HCI) implementations. With HCI, software defined storage is used. This means storage within the servers is shared across the network. Common implementations include Windows Storage Spaces Direct (S2D) and VMware vSAN Ready Node deployments. Both the HPE StoreFabric CN1200R BASE-T and CN1300R 10/25GbE CNAs are certified for use in either of these HCI implementations.

Figure 3: FastLinQ Technology Certified for Microsoft WSSD and VMware vSAN Ready Node

In summary, the new CNAs from the HPE StoreFabric group provide high-performance, low-cost connectivity for converged environments. With support for 10Gb and 25Gb Ethernet bandwidths, iWARP and RoCE RDMA, and the ability to automatically negotiate changes between 10GbE and 25GbE connections with SmartAN™ technology, these are the ideal I/O connectivity options for small to mid-size server and storage networks. To get the most out of your server investments, choose Marvell FastLinQ Ethernet I/O technology, which is engineered from the start with performance, total cost of ownership, flexibility and scalability in mind.

For more information on converged networking, contact one of our HPE experts in the field to talk through your requirements. Just use the HPE Contact Information link on our HPE Microsite at www.marvell.com/hpe.


Infrastructure Powerhouse: Marvell and Cavium become one!

By Todd Owens, Technical Marketing Manager, Marvell

Marvell’s acquisition of Cavium closed on July 6, 2018, and the integration is well under way. Cavium is now a wholly-owned subsidiary of Marvell. Our combined mission as Marvell is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. The combination of the two companies makes for an infrastructure powerhouse, serving a variety of customers in the Cloud/Data Center, Enterprise/Campus, Service Provider, SMB/SOHO, Industrial and Automotive industries.

For our business with HPE, the first thing you need to know is that it is business as usual. The folks you engaged with on the I/O and processor technology we provided to HPE before the acquisition are the same folks you engage with now. Marvell is a leading provider of storage technologies, including ultra-fast read channels, high-performance processors and transceivers, which are found in the vast majority of hard disk drive (HDD) and solid-state drive (SSD) modules used in HPE ProLiant and HPE Storage products today.

Our industry leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to provide connectivity for HPE Server and Storage solutions. The focus for these products will continue to be the intelligent I/O of choice for HPE, with the performance, flexibility, and reliability we are known for.

Marvell’s Portfolio of FastLinQ Ethernet and QLogic Fibre Channel I/O Adapters

We will continue to provide ThunderX2® Arm® processor technology for high-performance computing servers like the HPE Apollo 70. We will also continue to provide the Ethernet networking technology that is embedded into HPE Servers and Storage today, as well as the Marvell ASIC technology used for the iLO 5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.

iLO 5 for HPE ProLiant Gen10 is deployed on Marvell SoCs

That sounds great, but what’s going to change over time?
The combined company now has a much broader portfolio of technology to help HPE deliver best-in-class solutions at the edge, in the network and in the data center.

Marvell has industry-leading switching technology from 1GbE to 100GbE and beyond. This enables us to deliver connectivity from the IoT edge, to the data center and the cloud. Our Intelligent NIC technology provides compression, encryption and more to enable customers to analyze network traffic faster and more intelligently than ever before. Our security solutions and enhanced SoC and Processor capabilities will help our HPE design-in team collaborate with HPE to innovate next-generation server and storage solutions.

Down the road, you’ll also see a shift in our branding and where you access information. While our product-specific brands, like ThunderX2 for Arm, QLogic for Fibre Channel and FastLinQ for Ethernet, will remain, many things will transition from Cavium to Marvell. Our web-based resources will start to change, as will our email addresses. For example, you can now access our HPE Microsite at www.marvell.com/hpe. Soon, you’ll be able to contact us at “hpesolutions@marvell.com” as well. The collateral you leverage today will be updated over time. In fact, this has already started with updates to our HPE-specific Line Card, our HPE Ethernet Quick Reference Guide, our Fibre Channel Quick Reference Guides and our presentation materials. Updates will continue over the next few months.

In summary, we are bigger and better. We are one team that is more focused than ever to help HPE, their partners and customers thrive with world-class technology we can bring to bear. If you want to learn more, engage with us today. Our field contact details are here. We are all excited for this new beginning to make “I/O and Infrastructure Matter!” each and every day.


Cavium FastLinQ Ethernet Adapters Available for HPE Cloudline Servers

By Todd Owens, Technical Marketing Manager, Marvell

Are you considering deploying HPE Cloudline servers in your hyper-scale environment? If you are, be aware that HPE now offers select Cavium FastLinQ® 10GbE and 10/25GbE Adapters as options for HPE Cloudline CL2100, CL2200 and CL3150 Gen10 Servers. The adapters supported on the HPE Cloudline servers are shown in Table 1 below.

Table 1: Cavium FastLinQ 10GbE and 10/25GbE Adapters for HPE Cloudline Servers

As today’s hyper-scale environments grow, the Ethernet I/O needs go well beyond providing basic L2 NIC connectivity. Faster processors, increasing scale, high-performance NVMe and SSD storage, and the need for better performance and lower latency have started to shift some of the performance bottlenecks from servers and storage to the network itself. That means architects of these environments need to rethink their connectivity options.

While HPE already has some good I/O offerings for Cloudline from other vendors, having Cavium FastLinQ adapters in the portfolio increases the I/O features and capabilities available. Advanced features from Cavium, like Universal RDMA, SmartAN™, DPDK, NPAR and SR-IOV, allow architects to design more flexible and scalable hyper-scale environments.

Cavium’s advanced feature set provides offload technologies that shift the burden of managing the I/O from the O/S and CPU to the adapter itself. Some of the benefits of offloading I/O tasks include:

  • Lower CPU utilization, freeing up resources for applications or greater VM scalability
  • Accelerated processing of small-packet I/O with DPDK
  • Time savings through automated adapter connectivity between 10GbE and 25GbE
  • Reduced latency through direct memory access for I/O transactions, increasing performance
  • Network isolation and QoS at the VM level, improving VM application performance
  • Reduced TCO with heterogeneous management

Cavium FastLinQ Adapter and HPE Cloudline Gen10 Server

To deliver these benefits, customers can take advantage of some or all the advanced features in the Cavium FastLinQ Ethernet adapters for HPE Cloudline. Here’s a list of some of the technologies available in these adapters.

* Source: Demartek findings
Table 2: Advanced Features in Cavium FastLinQ Adapters for HPE Cloudline

Network Partitioning (NPAR) partitions the physical ports into multiple functions on the PCIe bus. This makes a dual-port adapter appear to the host O/S as if it were eight individual NICs. Furthermore, the bandwidth of each partition can be fine-tuned in increments of 500Mbps, providing full Quality of Service on each connection. SR-IOV is an additional virtualization offload these adapters support; it moves management of VM-to-VM traffic from the host hypervisor to the adapter, which frees up CPU resources and reduces VM-to-VM latency.
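As a rough illustration (the interface name below is a placeholder), the virtual functions that SR-IOV exposes can be enumerated straight from the Linux sysfs tree, where each one appears as its own PCIe function:

# Minimal sketch: listing the SR-IOV virtual functions behind a physical port.
# "ens1f0" is a placeholder interface name.
from pathlib import Path

def list_vfs(interface: str) -> list:
    """Return the PCIe addresses of the VFs exposed by the given interface."""
    dev = Path(f"/sys/class/net/{interface}/device")
    # Each VF appears as a 'virtfnN' symlink pointing at its own PCIe function.
    return sorted(link.resolve().name for link in dev.glob("virtfn*"))

if __name__ == "__main__":
    for addr in list_vfs("ens1f0"):
        print(addr)  # e.g. 0000:3b:02.0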

Remote Direct Memory Access (RDMA) is an offload that routes I/O traffic directly from the adapter to the host memory. This bypasses the O/S kernel and can improve performance by reducing latency. The Cavium adapters support what is called Universal RDMA, which is the ability to support both RoCEv2 and iWARP protocols concurrently. This provides network administrators more flexibility and choice for low latency solutions built with HPE Cloudline servers.
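A quick way to sanity-check that an RDMA-capable path is actually available on a Linux host is to look at what the kernel has registered; both RoCE and iWARP devices show up under /sys/class/infiniband. Here is a minimal sketch, offered only as an illustration:

# Minimal sketch: listing the RDMA devices the Linux kernel has registered
# (RoCE- and iWARP-capable NICs both appear under /sys/class/infiniband).
from pathlib import Path

rdma_root = Path("/sys/class/infiniband")
rdma_devices = sorted(p.name for p in rdma_root.iterdir()) if rdma_root.exists() else []

if rdma_devices:
    print("RDMA-capable devices:", ", ".join(rdma_devices))
else:
    print("No RDMA devices registered (driver not loaded or NIC not RDMA-capable)")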

SmartAN is a Cavium technology, available on the 10/25GbE adapters, that addresses issues related to bandwidth matching and the need for Forward Error Correction (FEC) when switching between 10GbE and 25GbE connections. For 25GbE connections, either Reed-Solomon FEC (RS-FEC) or Fire Code FEC (FC-FEC) is required to correct bit errors that occur at higher bandwidths. For the details behind SmartAN technology, you can refer to the Marvell technology brief here.
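For comparison, the sketch below shows the kind of manual FEC housekeeping that SmartAN is designed to make unnecessary, using the standard ethtool FEC options on a Linux host. The interface name is a placeholder, and FEC support depends on the NIC, driver and ethtool version.

# Minimal sketch: inspecting and manually forcing FEC settings with ethtool,
# the sort of per-link tuning SmartAN automates. "ens1f0" is a placeholder.
import subprocess

IFACE = "ens1f0"  # placeholder interface name

# Show the FEC modes the link currently supports and uses.
subprocess.run(["ethtool", "--show-fec", IFACE], check=True)

# Force Reed-Solomon FEC (RS-FEC), typically required on 25GbE links.
subprocess.run(["ethtool", "--set-fec", IFACE, "encoding", "rs"], check=True)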

Support for Data Plane Development Kit (DPDK) offloads accelerates the processing of small-packet I/O transmissions. This is especially important for applications in Telco NFV and high-frequency trading environments.

For simplified management, Cavium provides a suite of utilities that allow configuration and monitoring of the adapters across all the popular O/S environments, including Microsoft Windows Server, VMware and Linux. Cavium’s unified management suite includes the QCC GUI, CLI and vCenter plug-ins, as well as PowerShell Cmdlets for scripting configuration commands across multiple servers. Cavium’s unified management utilities can be downloaded from www.cavium.com.

Each of the Cavium adapters shown in Table 1 supports all of the capabilities noted above, and they are available in stand-up PCIe or OCP 2.0 form factors for use in the HPE Cloudline Gen10 Servers. One question you may have is how these adapters compare to other offerings for Cloudline and to those offered in HPE ProLiant servers. For that, we can look at the comparison chart in Table 3.

Table 3: Comparison of I/O Features by Ethernet Supplier

Given that Cloudline is targeted for hyper-scale service provider customers with large and complex networks, the Cavium FastLinQ Ethernet adapters for HPE Cloudline offer administrators much more capability and flexibility than other I/O offerings. If you are considering HPE Cloudline servers, then you should also consider Cavium FastLinQ as your I/O of choice.