

Posts Tagged ‘Fibre Channel’

August 18th, 2020

From Strong Awareness to Decisive Action: Meet Mr. QLogic

By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

Marvell® Fibre Channel HBAs are getting a promotion and here is the announcement email –

I am pleased to announce the promotion of “Mr. QLogic® Fibre Channel” to Senior Transport Officer, Storage Connectivity at Enterprise Datacenters Inc. Mr. QLogic has been an excellent partner and instrumental in optimizing mission-critical enterprise application access to external storage over the past 20 years. When Mr. QLogic first arrived at Enterprise Datacenters, block storage was in disarray and efficiently scaling out performance seemed like an insurmountable challenge. Mr. QLogic quickly established himself as a go-to leader and trusted partner for enabling low-latency access to external storage across disk and flash. Mr. QLogic successfully collaborated with other industry leaders like Brocade and Mr. Cisco MDS to lay the groundwork for a broad set of innovative technologies under the StorFusion™ umbrella. In his new role, Mr. QLogic will further extend the value of StorFusion by bringing awareness of Storage Area Network (SAN) congestion into the server, while taking decisive action to prevent bottlenecks that may degrade mission-critical enterprise application performance.

Please join me in congratulating QLogic on this well-deserved promotion.

Soon after any big promotion, reality sets in and everyone asks what you are going to do for them and how you are going to add value in your new role. Will you live up to the expectations?

Let’s take a journey together (virtually) in this three-part blog and find out how Mr. QLogic delivers!

Part 1: Heterogeneous SANs and Flash bring in new challenges

In the era of rapid digitalization, work from home, mobility and data explosion, increasing numbers of organizations rely on a robust digital infrastructure to sustain and grow their businesses. Hosted business-critical applications must perform at high capacity and in a predictable manner. Fibre Channel remains the connection of choice between server applications and storage array data. FC SANs must remain free of congestion so workloads can perform at their peak; otherwise, businesses risk interruption from stalled or poorly performing applications.

Fibre Channel is known for its ultra-reliability since it is implemented on a dedicated network with buffer-to-buffer credits. For a real-life parallel, think of a guaranteed parking spot at your destination, and knowing it’s there before you leave your driveway. That worked well between two directly connected devices, and until recently, congestion in FC SANs was generally an isolated problem. However, SAN congestion has the potential to become a more severe problem, occurring in more places in the datacenter, for the following reasons:

  1. Heterogeneous SAN Speeds: While datacenters have seen widespread deployment of 16GFC and 32GFC, 4GFC and 8GFC still remain part of the fabric. Server and storage FC ports operating at mismatched speeds tend to congest the links between them.
  2. CapEx and OpEx Pressure: Amidst the global pandemic, businesses are under unprecedented pressure to optimize OpEx and increase their bottom lines. More applications and VMs are being hosted on the same servers, increasing stress on existing SANs and making previously balanced SANs prone to congestion.
  3. Flash Storage: FC SANs are no longer limited in performance by spinning media. With the advent of NVMe and FC-NVMe, SANs are being pushed to their limits, and existing links (4/8GFC) may not be able to sustain all-flash array (AFA) bandwidth, creating oversubscription scenarios that lead to congestion and congestion spreading.
  4. Data Explosion: Datasets and databases are growing in scale, and so are the Fibre Channel SANs that support these applications. Scale-out SAN architectures, with more ports and endpoints per domain, mean that a single congestion event can spread to and impact a wide set of applications and services.

FC SANs are lossless networks, and all frames sent must be acknowledged by the receiver. The sender (e.g. a storage array) will stop sending frames if these acknowledgments are not received. Simply put, the inability of a receiver (e.g. an FC HBA) to accept frames at the expected rate results in congestion. Congestion can occur due to misbehaving end devices, physical errors or oversubscription. Oversubscription due to the reasons outlined above is typically the main culprit.
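To make the credit mechanism concrete, here is a minimal sketch (Python, purely illustrative and not Marvell’s implementation) of buffer-to-buffer credit flow control: the sender may only transmit while it holds credits, and a receiver that returns credits more slowly than the sender consumes them causes frames to back up.

```python
# Toy model of FC buffer-to-buffer credit flow control (illustrative only).
# The sender transmits only while it holds credits; a slow receiver returns
# credits (R_RDY) at a lower rate, so frames back up at the sender.

from collections import deque

BB_CREDITS = 8          # credits granted at fabric login (hypothetical value)
SEND_RATE = 4           # frames the sender wants to transmit per tick
RETURN_RATE = 2         # credits the slow receiver returns per tick

credits = BB_CREDITS
backlog = deque()       # frames waiting because no credit is available
in_flight = 0           # frames sent but not yet acknowledged with an R_RDY

for tick in range(10):
    # The application hands new frames to the HBA.
    backlog.extend(f"frame-{tick}-{i}" for i in range(SEND_RATE))

    # Transmit only while credits remain (buffer-to-buffer flow control).
    sent = 0
    while credits > 0 and backlog:
        backlog.popleft()
        credits -= 1
        in_flight += 1
        sent += 1

    # The oversubscribed receiver frees buffers (returns R_RDY) slowly.
    returned = min(RETURN_RATE, in_flight)
    in_flight -= returned
    credits += returned

    print(f"tick {tick}: sent={sent} credits={credits} backlog={len(backlog)}")
```

Once the initial credits are exhausted, the sender can transmit only as fast as the slow receiver returns them, and the backlog grows every tick: the oversubscription scenario described above.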


Introducing the “Aware and Decisive” FC HBAs

Marvell’s QLogic Fibre Channel HBAs (a.k.a. Mr. QLogic) and their StorFusion set of end-to-end orchestration and management capabilities are being extended to build a solution that is “aware” of the performance conditions of the SAN fabric and “decisive” in the actions it can take to prevent and mitigate conditions, like congestion, that can degrade the fabric and thus application performance.

Marvell QLogic Universal SAN Congestion Mitigation (USCM) technology works independently and in coordination with Brocade and Cisco FC fabrics to mitigate SAN congestion by enabling congestion detection, notification, and avoidance. QLogic’s “awareness” capability means that it can either be informed of or automatically detect signs of congestion. QLogic’s “decisive” action capability is intended to prevent or resolve congestion by either switching traffic to a more optimized path or quarantining slower devices to lower-priority virtual channels.
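As a rough mental model only (the names below are hypothetical, not the QLogic driver or firmware API), the “aware and decisive” behavior can be pictured as a simple detect-and-act loop: awareness comes from either a fabric notification or locally observed counters, and the decisive action is to move traffic to a healthier path or deprioritize the slow device.

```python
# Illustrative sketch only: hypothetical names, not the QLogic driver API.
# It shows the general shape of an "aware and decisive" loop: detect
# congestion (either notified by the fabric or measured locally), then act.

from dataclasses import dataclass

@dataclass
class PortStats:
    fabric_notification: bool   # the fabric reported congestion on this path
    queue_depth: int            # locally observed outstanding I/Os
    queue_limit: int            # threshold that suggests a slow device

def is_congested(stats: PortStats) -> bool:
    """Awareness: informed by the fabric or detected from local counters."""
    return stats.fabric_notification or stats.queue_depth > stats.queue_limit

def mitigate(path: str, stats: PortStats, alternate_paths: list[str]) -> str:
    """Decisive action: prefer a healthier path, otherwise deprioritize."""
    if not is_congested(stats):
        return f"{path}: no action"
    if alternate_paths:
        return f"{path}: switch traffic to {alternate_paths[0]}"
    return f"{path}: quarantine slow device to a lower-priority virtual channel"

print(mitigate("path-A", PortStats(True, 10, 64), ["path-B"]))
print(mitigate("path-C", PortStats(False, 120, 64), []))
```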

Available immediately, QLogic Enhanced 16GFC (2690 Series) and Enhanced 32GFC (2770 Series) Adapters deliver a wide range of these SAN congestion management capabilities.

Preview of Part 2 and Part 3

If you think Mr. QLogic is up to something here and has the right vision, motivation and expertise to rescue FC SANs from Congestion, then come back again for the sequel(s).

In Part 2, I will talk about the underlying industry standards-based technology, Fabric Performance Impact Notifications (FPINs), that forms the heart of the QLogic USCM solution.

In Part 3, you will see the technology in action and the uniqueness of the Marvell QLogic solution – for it is the one that can deliver on congestion management for both Brocade and Cisco SAN Fabrics.

August 12th, 2020

Put a Cherry on Top! Introducing FC-NVMe v2

By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

Once upon a time, data centers confronted a big problem – how to enable business-critical applications on servers to access distant storage with exceptional reliability. In response, the brightest storage minds invented Fibre Channel. Its ultra-reliability came from being implemented on a dedicated network with buffer-to-buffer credits. For a real-life parallel, think of a guaranteed parking spot at your destination, and knowing it’s there before you leave your driveway. That worked fairly well. But as technology evolved and storage changed from spinning media to flash memory with NVMe interfaces, the same bright minds developed FC-NVMe. This solution delivered a native NVMe storage transport without necessitating rip-and-replace, by enabling existing 16GFC and 32GFC HBAs and switches to carry FC-NVMe. Then came a better understanding of how cosmic rays affect high-speed networks, occasionally flipping a subset of bits and introducing errors.

Challenges in High-Speed Networks

But cosmic rays aren’t the only cause of bit errors. Soldering issues, vibrations due to heavy industrial equipment, as well as hardware and software bugs are all problematic, too. While engineers have responded with various protection mechanisms – ECC memories, shielding and FEC algorithms, among others – bit errors and lost frames still afflict all modern high-speed networks. In fact, the higher the speed of the network (e.g. 32GFC), the higher the probability of errors. Obviously, errors are never good; detecting them and re-transmitting lost frames slows down the network. But things get worse when the detection is left to the upper layers (SCSI or NVMe protocols) that must overcome their own quirks and sluggishness. Considering the low latency and high resiliency data center operators expect from FC-NVMe, these errors must be handled more efficiently, and much earlier in the process. In response, industry leaders like Marvell QLogic, along with other Fibre Channel leaders, came together to define FC-NVMe v2.

Introducing FC-NVMe v2

Intending to detect and recover from errors in a manner befitting a low-latency NVMe transport, FC-NVMe v2 (standardized by the T11 committee in August 2020) does not rely on SCSI- or NVMe-layer error recovery. Rather, it automatically implements low-level error recovery mechanisms in the Fibre Channel link layer – solutions that work up to 30x faster than previous methods. These new and enhanced mechanisms include:

  • FLUSH: A new FC-NVMe link service that can quickly determine whether a sent frame failed to reach its destination. It works this way: if two seconds pass without the QLogic FC HBA getting a response back regarding a transmitted frame, it sends a FLUSH to the same destination (like sending a second car to the same parking spot, to see if it is occupied). If the FLUSH gets to the destination, we know that the original frame went missing en route, and the stack does not need to wait the typical 60 seconds to detect a missing frame (hence the 30x faster).
  • RED: Another new FC-NVMe link service, called Responder Error Detected (RED), essentially does the same lost frame detection but in the other direction. If a receiver knows it was supposed to get something but did not, it quickly sends out a RED rather than waiting on the slower, upper-layer protocols to detect the loss.
  • NVMe_SR: Once either FLUSH or RED detects a lost frame, NVMe_SR (NVMe Retransmit) kicks in, and enables the re-transmission of whatever got lost the first time.

Complicated? Not at all — these work in the background, automatically. FLUSH, RED, and NVMe_SR are the cherries on top of great underlying technology to deliver FC-NVMe v2!
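For readers who like to see the sequence spelled out, here is a small sketch (hypothetical helpers, not the actual HBA firmware) of the recovery flow described above: probe with FLUSH after roughly two seconds of silence, and if the probe is answered but the original frame was lost, retransmit immediately instead of waiting for the traditional upper-layer timeout.

```python
# Illustrative sketch only: a toy link and hypothetical helpers, not actual
# HBA firmware. If no response arrives within ~2 seconds, probe with FLUSH;
# if the probe is answered but the original frame was lost, retransmit via
# NVMe_SR right away instead of waiting the traditional ~60-second timeout.

FLUSH_TIMEOUT_S = 2      # per the blog: probe after two seconds of silence
LEGACY_TIMEOUT_S = 60    # typical upper-layer detection that FLUSH avoids

class ToyTransport:
    """A stand-in link that drops the first copy of a frame when asked to."""
    def __init__(self, drop_first: bool):
        self.drop_first = drop_first
        self.attempts = 0

    def send(self, frame: str) -> bool:
        """Send a frame; returns True if a response came back in time."""
        self.attempts += 1
        return not (self.drop_first and self.attempts == 1)

    def send_flush(self) -> bool:
        """Send a FLUSH probe; True means the path itself is healthy."""
        return True

def send_with_fast_recovery(frame: str, link: ToyTransport) -> str:
    if link.send(frame):                      # response within ~2 s
        return "delivered"
    if link.send_flush():                     # FLUSH reached the destination
        link.send(frame)                      # NVMe_SR: retransmit right away
        return f"recovered in ~{FLUSH_TIMEOUT_S}s via FLUSH + NVMe_SR"
    return f"escalated after ~{LEGACY_TIMEOUT_S}s upper-layer timeout"

print(send_with_fast_recovery("nvme-write-1", ToyTransport(drop_first=True)))
print(send_with_fast_recovery("nvme-write-2", ToyTransport(drop_first=False)))
```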

Marvell QLogic FC HBAs with FC-NVMe v2

Marvell leads the T11 standards committee that defined the FC-NVMe v2 standard, and later this year will enable support for FC-NVMe v2 in QLogic 2690 Enhanced 16GFC and 2770 Enhanced 32GFC HBAs. Customers should expect a seamless transition to FC-NVMe v2 via a simple software upgrade, fulfilling our promise to avoid a disruptive rip-and-replace modernization of existing Fibre Channel infrastructure.

So, for your business-critical applications that rely on Fibre Channel infrastructure, go for Marvell QLogic FC-NVMe v2, shake off sluggish error recovery, and do more with Fibre Channel! Learn more at Marvell.com.

March 2nd, 2018

Connecting Shared Storage – iSCSI or Fibre Channel

By Todd Owens, Technical Marketing Manager, Marvell

At Cavium, we provide adapters that support a variety of protocols for connecting servers to shared storage including iSCSI, Fibre Channel over Ethernet (FCoE) and native Fibre Channel (FC). One of the questions we get quite often is which protocol is best for connecting servers to shared storage? The answer is, it depends.

We can simplify the answer by eliminating FCoE, as it has proven to be a great protocol for converging the edge of the network (server to top-of-rack switch), but not really effective for multi-hop connectivity, taking servers through a network to shared storage targets. That leaves us with iSCSI and FC.

Typically, people equate iSCSI with lower cost and ease of deployment because it works on the same kind of Ethernet network that servers and clients are already running on. These same folks equate FC as expensive and complex, requiring special adapters, switches and a “SAN Administrator” to make it all work.

This may have been the case in the early days of shared storage, but things have changed as the intelligence and performance of the storage network environment has evolved. What customers need to do is look at the reality of what they need from a shared storage environment and make a decision based on cost, performance and manageability. For this blog, I’m going to focus on these three criteria and compare 10Gb Ethernet (10GbE) with iSCSI hardware offload and 16Gb Fibre Channel (16GFC).

Before we crunch numbers, let me start by saying that shared storage requires a dedicated network, regardless of the protocol. The idea that iSCSI can be run on the same network as the server and client network traffic may be feasible for small or medium environments with just a couple of servers, but for any environment with mission-critical applications, or with, say, four or more servers connecting to a shared storage device, a dedicated storage network is strongly advised to increase reliability and eliminate performance problems related to network issues.

Now that we have that out of the way, let’s start by looking at the cost difference between iSCSI and FC. We have to take into account the costs of the adapters, optics/cables and switch infrastructure. Here’s the list of Hewlett Packard Enterprise (HPE) components I will use in the analysis. All prices are based on published HPE list prices.

Notes:
  1. Optical transceivers are needed at both the adapter and switch ports for 10GbE networks, so the cost per port is two times the transceiver cost.
  2. FC switch pricing includes full-featured management software and licenses.
  3. FC Host Bus Adapters (HBAs) ship with transceivers, so only one additional transceiver is needed for the switch port.

So if we do the math, the cost per port looks like this:

10GbE iSCSI with SFP+ Optics = $437 + $2,734 + $300 = $3,471

10GbE iSCSI with 3-meter Direct Attach Cable (DAC) = $437 + $269 + $300 = $1,006

16GFC with SFP+ Optics = $773 + $405 + $1,400 = $2,578

So iSCSI is the lowest-priced option if DAC cables are used. Note, in my example I chose a 3-meter cable length, but even if you choose shorter or longer cables (HPE supports cable lengths from 0.65 to 7 meters), this is still the lowest-cost connection option. Surprisingly, the cost of the 10GbE optics makes the iSCSI solution with optical connections the most expensive configuration. When using fiber optic cables, the 16GFC configuration is lower cost.
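For reference, the per-port sums above can be reproduced with a few lines of arithmetic; the component prices are simply the figures quoted in this post (HPE list prices at the time of writing), not current pricing.

```python
# Reproduces the cost-per-port arithmetic above. Prices are the HPE list
# prices quoted in this post (adapter, optics/cable, switch port); they are
# illustrative only and will differ from current pricing.

configs = {
    "10GbE iSCSI with SFP+ optics": (437, 2734, 300),
    "10GbE iSCSI with 3m DAC":      (437, 269, 300),
    "16GFC with SFP+ optics":       (773, 405, 1400),
}

for name, (adapter, connect, switch_port) in configs.items():
    total = adapter + connect + switch_port
    print(f"{name}: ${total:,} per port")
# -> $3,471, $1,006 and $2,578 per port, respectively
```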

So what are the trade-offs with DAC versus SFP+ options? It really comes down to distance and the number of connections required. DAC cables can only span up to 7 meters or so, which means customers have only limited reach within or across racks. If customers have multiple racks or distance requirements of more than 7 meters, FC becomes the more attractive option from a cost perspective. Also, DAC cables are bulky, and when cabling 10 or more ports, the cable bundles can become unwieldy to deal with.

On the performance side, let’s look at the differences. iSCSI adapters have impressive specifications of 10Gbps bandwidth and 1.5 million IOPS, which offers very good performance. For FC, we have 16Gbps of bandwidth and 1.3 million IOPS. So FC has more bandwidth and iSCSI can deliver slightly more transactions. Well, that is, if you take the specifications at face value. If you dig a little deeper, here are some things we learn:

  • 16GFC delivers full line-rate performance for block storage data transfers. Today’s 10GbE iSCSI runs on the Ethernet protocol with Data Center Bridging (DCB), which makes this a lossless transmission protocol like FC. However, the iSCSI commands are transferred via Transmission Control Protocol (TCP)/IP, which adds significant overhead to the headers of each packet (a rough per-frame estimate follows this list). Because of this inefficiency, the actual bandwidth for iSCSI traffic is usually well below the stated line rate. This gives 16GFC the clear advantage in terms of bandwidth performance.
  • iSCSI provides the best IOPS performance for block sizes below 2K. Figure 1 shows the IOPS performance of Cavium iSCSI with hardware offload. Figure 2 shows the IOPS performance of Cavium’s QLogic 16GFC adapter, which delivers better IOPS for block sizes of 4K and above when compared to iSCSI.
  • Latency is an order of magnitude lower for FC compared to iSCSI. The latency of Brocade Gen 5 (16Gb) FC switching (using a cut-through switch architecture) is in the 700-nanosecond range, while for 10GbE it is in the range of 5 to 50 microseconds. The impact of latency gets compounded should the user implement 10GBASE-T connections in the iSCSI adapters, which adds another significant hit to the latency equation for iSCSI.
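To put a rough number on the header-overhead point in the first bullet above, here is a back-of-the-envelope comparison using standard frame sizes. Assumptions: a 1500-byte Ethernet MTU with no jumbo frames, IPv4 and TCP headers without options, one 48-byte iSCSI basic header segment per frame (large PDUs spread across several frames carry it only once, so this is a pessimistic case), and full-size FC frames with a 2112-byte payload.

```python
# Back-of-the-envelope per-frame header overhead: iSCSI over TCP/IP on
# standard Ethernet versus native Fibre Channel framing. Illustrative only.

# iSCSI over TCP/IP on standard (non-jumbo) Ethernet
eth_header = 14 + 4            # Ethernet header + FCS
ip_tcp = 20 + 20               # IPv4 + TCP headers (no options)
iscsi_bhs = 48                 # iSCSI basic header segment (per PDU)
iscsi_payload = 1500 - ip_tcp - iscsi_bhs

# Native Fibre Channel
fc_overhead = 4 + 24 + 4 + 4   # SOF + frame header + CRC + EOF
fc_payload = 2112              # maximum FC frame payload

print(f"iSCSI: {eth_header + ip_tcp + iscsi_bhs} header bytes for up to "
      f"{iscsi_payload} data bytes per frame")
print(f"FC:    {fc_overhead} header bytes for up to "
      f"{fc_payload} data bytes per frame")
```

The FC frame carries a larger payload with a fraction of the per-frame header bytes, which is part of why it sustains line rate more easily.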

Figure 1: Cavium’s iSCSI Hardware Offload IOPS Performance

 

Figure 2: Cavium’s QLogic 16Gb FC IOPS performance

If we look at manageability, this is where things have probably changed the most. Keep in mind, Ethernet network management hasn’t really changed much. Network administrators create virtual LANs (VLANs) to separate network traffic and reduce congestion. These network administrators have a variety of tools and processes that allow them to monitor network traffic, run diagnostics and make changes on the fly when congestion starts to impact application performance. The same management approach applies to the iSCSI network and can be done by the same network administrators.

On the FC side, companies like Cavium and HPE have made significant improvements on the software side of things to simplify SAN deployment, orchestration and management. Technologies like fabric-assigned port worldwide name (FA-WWN) from Cavium and Brocade enable the SAN administrator to configure the SAN without having HBAs available, and allow a failed server to be replaced without having to reconfigure the SAN fabric. Cavium and Brocade have also teamed up to improve FC SAN diagnostics capability with Gen 5 (16Gb) Fibre Channel fabrics by implementing features such as Brocade ClearLink™ diagnostics, Fibre Channel ping (FC ping), Fibre Channel traceroute (FC traceroute), link cable beacon (LCB) technology and more. HPE’s Smart SAN for HPE 3PAR provides the storage administrator the ability to zone the fabric and map the servers and LUNs to an HPE 3PAR StoreServ array from the HPE 3PAR StoreServ management console.

Another way to look at manageability is the number of administrators on staff. In many enterprise environments, there are typically dozens of network administrators. In those same environments, there may be less than a handful of “SAN” administrators. Yes, there are lots of LAN-connected devices that need to be managed and monitored, but far fewer SAN-connected devices. The point is, it doesn’t take an army to manage a SAN with today’s FC management software from vendors like Brocade.

So what is the right answer between FC and iSCSI? Well, it depends. If application performance is the biggest criterion, it’s hard to beat the combination of bandwidth, IOPS and latency of a 16GFC SAN. If compatibility and commonality with existing infrastructure is a critical requirement, 10GbE iSCSI is a good option (assuming the 10GbE infrastructure exists in the first place). If security is a key concern, FC is the best choice; when is the last time you heard of an FC network being hacked into? And if cost is the key criterion, iSCSI with a DAC or 10GBASE-T connection is a good choice, understanding the trade-off in latency and bandwidth performance.

So, in very general terms, FC is the best choice for enterprise customers who need high performance, mission-critical capability, high reliability and scalable shared storage connectivity. For smaller customers who are more cost sensitive, iSCSI is a great alternative. iSCSI is also a good protocol for pre-configured systems like hyper-converged storage solutions, providing simple connectivity to existing infrastructure.

As a wise manager once told me many years ago, “If you start with the customer and work backwards, you can’t go wrong.” So the real answer is to understand what the customer needs and design the best solution to meet those needs based on the information above.