
August 31st, 2020

Arm processors in the Data Center

By Raghib Hussain, Chief Strategy Officer and Executive Vice President, Networking and Processors Group

Last week, Marvell announced a change in our strategy for ThunderX, our Arm-based server-class processor product line. I’d like to take the opportunity to put some more context around that announcement, and our future plans in the data center market.

ThunderX is a product line that we started at Cavium, prior to our merger with Marvell in 2018. At Cavium, we had built many generations of successful processors for infrastructure applications, including our Nitrox security processor and OCTEON infrastructure processor. These processors have been deployed in the world’s most demanding data-plane applications such as firewalls, routers, SSL-acceleration, cellular base stations, and Smart NICs. Today, OCTEON is the most scalable and widely deployed multicore processor in the market.

As co-founder of Cavium, I had a strong belief that Arm-based processors also had a role to play in next generation data centers. One size simply doesn’t fit all anymore, so we started the ThunderX product line for the server market. It was a bold move, and we knew it would take significant time and investment to come to fruition. In fact, we have spent six years now building multiple generations of products, developing the ecosystem, the software, and working with customers to qualify systems for production deployment in large data centers. ThunderX2 was the industry’s first Arm-based processor capable of powering dual socket servers that could go toe-to-toe with x86-based solutions, and clearly established the performance credentials for Arm in the server market. We moved the bar higher yet again with ThunderX3, as we discussed at Hot Chips 32.

Today, we see strong ecosystem support and a significant opportunity for Arm-based processors in the data center. But the real market opportunity for server-class Arm processors is in customized solutions, optimized for the use cases at hyperscale data center operators. This should be no surprise, as the power of the Arm architecture has always been in its ability to be integrated into highly optimized designs tailored for specific use cases, and we see hyperscale datacenter applications as no different.

Our rich IP portfolio and decades of processor expertise with Nitrox, OCTEON and ThunderX, combined with our new custom ASIC capability and investment in the latest TSMC 5nm process node, put Marvell in a unique position to address this market opportunity. So to us, this market-driven change just makes sense. We look forward to partnering with our customers and helping to deliver highly optimized solutions tailored to their unique needs.

August 28th, 2020

Matt Murphy Talks Marvell’s Market Traction on CNBC’s Squawk Alley

By Stacey Keegan, Vice President, Corporate Marketing, Marvell

Marvell President and CEO Matt Murphy discussed Marvell’s second-quarter earnings beat this morning with the CNBC Squawk Alley team.

Marvell’s growth is being driven by our success in our key data infrastructure end markets. In 5G wireless infrastructure in particular, we have seen four consecutive quarters of sequential growth. Right now, this is most pronounced in China, where 5G is being rolled out. But with other countries working on rollout plans, and four of the top five base station vendors as Marvell customers, the growth from 5G is just beginning.

Marvell also has a large and growing data center business, spanning both enterprise on-prem datacenters and now the cloud. We announced last quarter that cloud is now over 10% of our revenue and growing fast. The reason we are seeing strong growth is that we are producing the key storage and security products for cloud. This includes chips for huge multi-terabyte hard drives, where all cloud data is stored. It also includes our networking products, which doubled year-over-year. And finally, growth in this area includes Marvell’s custom products that came to us through a recent acquisition. Several of the larger datacenter operators prefer to buy chips this way: we build exactly what they want.

Watch the full interview here.

August 27th, 2020

How to Reap the Benefits of NVMe over Fabric in 2020

By Todd Owens, Technical Marketing Manager, Marvell

As native Non-volatile Memory Express (NVMe®) share-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

Of course, NVMe technology itself is not new; it is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set specific to memory-based storage, delivers increased performance, is designed to run over PCIe 3.0 or PCIe 4.0 bus architectures, and — offering 64,000 command queues with 64,000 commands per queue — provides far more scalability than other storage protocols.
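To put those queue numbers in perspective, here is a quick back-of-the-envelope comparison. The AHCI figures for legacy SATA (one queue, 32 commands) are a well-known contrast point I am adding for illustration, not from this article:

```python
# Rough comparison of maximum outstanding commands per device.
nvme_queues, nvme_depth = 64_000, 64_000   # NVMe figures cited above
ahci_queues, ahci_depth = 1, 32            # classic AHCI (SATA) limits

nvme_total = nvme_queues * nvme_depth
ahci_total = ahci_queues * ahci_depth

print(f"NVMe: {nvme_total:,} outstanding commands")  # NVMe: 4,096,000,000 outstanding commands
print(f"AHCI: {ahci_total:,} outstanding commands")  # AHCI: 32 outstanding commands
```

Real deployments use far fewer queues than the protocol maximum, but the headroom is what enables NVMe to scale with many-core hosts.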


Unfortunately, most of the NVMe in use today is held captive in the system in which it is installed. While there are a few storage vendors offering NVMe arrays on the market today, the vast majority of enterprise datacenter and mid-market customers are still using traditional storage area networks, running the SCSI protocol over either Fibre Channel or Ethernet storage area networks (SANs).

The newest storage networks, however, will be enabled by what we call NVMe over Fabric (NVMe-oF) networks. As with SCSI today, NVMe-oF will offer users a choice of transport protocols. Today, there are three standard protocols that will likely make significant headway into the marketplace. These include:

  • NVMe over Fibre Channel (FC-NVMe)
  • NVMe over RoCE RDMA (NVMe/RoCE)
  • NVMe over TCP (NVMe/TCP)
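On Linux hosts, these transports are typically exercised through the open-source nvme-cli utility, which selects the transport with its -t flag. Below is a minimal sketch for the two IP-based transports; the address, port and NQN are placeholders, and note that FC targets are addressed by WWNN/WWPN rather than an IP address, so the FC form differs:

```python
# Sketch: assemble Linux nvme-cli "connect" commands for the IP-based
# NVMe-oF transports listed above. Address, port and NQN are placeholders.
def nvme_connect_cmd(transport: str, addr: str, nqn: str, port: int = 4420) -> str:
    if transport not in {"rdma", "tcp"}:  # NVMe/RoCE uses the "rdma" transport type
        raise ValueError(f"unsupported transport: {transport}")
    return f"nvme connect -t {transport} -a {addr} -s {port} -n {nqn}"

for t in ("rdma", "tcp"):
    print(nvme_connect_cmd(t, "192.168.1.10", "nqn.2020-08.com.example:array1"))
```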

If NVMe over Fabrics is to achieve its true potential, however, there are three major elements that need to align. First, users will need an NVMe-capable storage network infrastructure in place. Second, all of the major operating system (O/S) vendors will need to provide support for NVMe-oF. Third, customers will need disk array systems that feature native NVMe. Let’s look at each of these in order.

  1. NVMe Storage Network Infrastructure

In addition to Marvell, several leading network and SAN connectivity vendors support one or more varieties of NVMe-oF infrastructure today. This storage network infrastructure (also called the storage fabric), is made up of two main components: the host adapter that provides server connectivity to the storage fabric; and the switch infrastructure that provides all the traffic routing, monitoring and congestion management.

For FC-NVMe, today’s enhanced 16Gb Fibre Channel (FC) host bus adapters (HBA) and 32Gb FC HBAs already support FC-NVMe. This includes the Marvell® QLogic® 2690 series Enhanced 16GFC, 2740 series 32GFC and 2770 Series Enhanced 32GFC HBAs.

On the Fibre Channel switch side, no significant changes are needed to transition from SCSI-based connectivity to NVMe technology, as the FC switch is agnostic about the payload data. The job of the FC switch is to just route FC frames from point to point and deliver them in order, with the lowest latency required. That means any 16GFC or greater FC switch is fully FC-NVMe compatible.

A key decision regarding FC-NVMe infrastructure, however, is whether or not to support both legacy SCSI and next-generation NVMe protocols simultaneously. When customers eventually deploy new NVMe-based storage arrays (and many will over the next three years), they are not going to simply discard their existing SCSI-based systems. In most cases, customers will want individual ports on individual server HBAs that can communicate using both SCSI and NVMe, concurrently. Fortunately, Marvell’s QLogic 16GFC/32GFC portfolio does support concurrent SCSI and NVMe, all with the same firmware and a single driver. This use of a single driver greatly reduces complexity compared to alternative solutions, which typically require two (one for FC running SCSI and another for FC-NVMe).

If we look at Ethernet, which is the other popular transport protocol for storage networks, there is one option for NVMe-oF connectivity today and a second option on the horizon. Currently, customers can already deploy NVMe/RoCE infrastructure to support NVMe connectivity to shared storage. This requires RoCE RDMA-enabled Ethernet adapters in the host, and Ethernet switching that is configured to support a lossless Ethernet environment. There are a variety of 10/25/50/100GbE network adapters on the market today that support RoCE RDMA, including the Marvell FastLinQ® 41000 Series and the 45000 Series adapters. 

On the switching side, most 10/25/100GbE switches that have shipped in the past 2-3 years support data center bridging (DCB) and priority flow control (PFC), and can support the lossless Ethernet environment needed to support a low-latency, high-performance NVMe/RoCE fabric.

While customers may have to reconfigure their networks to enable these features and set up the lossless fabric, these features will likely be supported in any newer Ethernet switch or director. One point of caution: with lossless Ethernet networks, scalability is typically limited to only 1 or 2 hops. For high scalability environments, consider alternative approaches to the NVMe storage fabric.

One such alternative is NVMe/TCP. This is a relatively new protocol (NVM Express Group ratification in late 2018), and as such is not widely available today. However, the advantage of NVMe/TCP is that it runs on today’s TCP stack, leveraging TCP’s congestion control mechanisms. That means there’s no need for a tuned environment (like that required with NVMe/RoCE), and NVMe/TCP can scale right along with your network. Think of NVMe/TCP in the same way as you do iSCSI today. Like iSCSI, NVMe/TCP will provide good performance, work with existing infrastructure, and be highly scalable. For those customers seeking the best mix of performance and ease of implementation, NVMe/TCP will be the best bet.

Because there is limited operating system (O/S) support for NVMe/TCP (more on this below), I/O vendors are not currently shipping firmware and drivers that support NVMe/TCP. But a few, like Marvell, have adapters that, from a hardware standpoint, are NVMe/TCP-ready; all that will be required is a firmware update in the future to enable the functionality. Notably, Marvell will support NVMe over TCP with full hardware offload on its FastLinQ adapters in the future. This will enable our NVMe/TCP adapters to deliver high performance and low latency that rivals NVMe/RoCE implementations.

  2. Operating System Support

While it’s great that there is already infrastructure to support NVMe-oF implementations, that’s only the first part of the equation. Next comes O/S support. When it comes to support for NVMe-oF, the major O/S vendors are all in different places. As of August 2020: the major Linux distributions from RHEL and SUSE support both FC-NVMe and NVMe/RoCE and have limited support for NVMe/TCP. VMware, beginning with ESXi 7.0, supports both FC-NVMe and NVMe/RoCE but does not yet support NVMe/TCP. Microsoft Windows Server currently uses the SMB Direct network protocol and offers no support for any NVMe-oF technology today.

With VMware ESXi 7.0, be aware of a couple of caveats: VMware does not currently support FC-NVMe or NVMe/RoCE in vSAN or with vVols implementations. However, support for these configurations, along with support for NVMe/TCP, is expected in future releases.
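The state of play can be captured in a small lookup table. This sketch simply transcribes the August 2020 summary above into code:

```python
# NVMe-oF operating-system support as of August 2020, per the text above.
SUPPORT = {
    "RHEL/SUSE Linux": {"FC-NVMe": True, "NVMe/RoCE": True, "NVMe/TCP": "limited"},
    "VMware ESXi 7.0": {"FC-NVMe": True, "NVMe/RoCE": True, "NVMe/TCP": False},
    "Windows Server":  {"FC-NVMe": False, "NVMe/RoCE": False, "NVMe/TCP": False},
}

def supports(os_name: str, transport: str) -> bool:
    """Return True only for full, unqualified support."""
    return SUPPORT[os_name][transport] is True

print(supports("VMware ESXi 7.0", "FC-NVMe"))    # True
print(supports("RHEL/SUSE Linux", "NVMe/TCP"))   # False ("limited" is not full support)
```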

  3. Storage Array Support

A few storage array vendors have released midrange and enterprise-class storage arrays that are NVMe-native. NetApp sells arrays, available today, that support both NVMe/RoCE and FC-NVMe. Pure Storage offers NVMe arrays that support NVMe/RoCE, with plans to support FC-NVMe and NVMe/TCP in the future. In late 2019, Dell EMC introduced its PowerMax line of flash storage that supports FC-NVMe. This year and next, other storage vendors will be bringing arrays to market that support both NVMe/RoCE and FC-NVMe. We expect storage arrays that support NVMe/TCP to become available in the same time frame.

Future-proof your investments by anticipating NVMe-oF tomorrow

Altogether, we are not too far away from having all the elements in place to make NVMe-oF a reality in the data center. If you expect the servers you are deploying today to operate for the next five years, there is no doubt they will need to connect to NVMe-native storage during that time. So plan ahead.

The key from an I/O and infrastructure perspective is to make sure you are laying the groundwork today to be able to implement NVMe-oF tomorrow. Whether that’s Fibre Channel or Ethernet, customers should be deploying I/O technology that supports NVMe-oF today. Specifically, that means deploying 16GFC enhanced or 32GFC HBAs and switching infrastructure for Fibre Channel SAN connectivity. This includes the Marvell QLogic 2690, 2740 or 2770-series Fibre Channel HBAs. For Ethernet, this includes Marvell’s FastLinQ 41000/45000 series Ethernet adapter technology.

These advances represent a big leap forward and will deliver great benefits to customers. The sooner we build industry consensus around the leading protocols, the faster these benefits can be realized.

For more information on Marvell Fibre Channel and Ethernet technology, go to www.marvell.com. For technology specific to our OEM customer servers and storage, go to www.marvell.com/hpe or www.marvell.com/dell.

August 20th, 2020

Navigating Product Name Changes for Marvell Ethernet Adapters at HPE

By Todd Owens, Technical Marketing Manager, Marvell

Hewlett Packard Enterprise (HPE) recently updated its product naming protocol for the Ethernet adapters in its HPE ProLiant and HPE Apollo servers. Its new approach is to include the ASIC model vendor’s name in the HPE adapter’s product name. This commonsense approach eliminates the need for model number decoder rings on the part of Channel Partners and the HPE Field team and provides everyone with more visibility and clarity. This change also aligns more with the approach HPE has been taking with their “Open” adapters on HPE ProLiant Gen10 Plus servers. All of this is good news for everyone in the server sales ecosystem, including the end user. The products’ core SKU numbers remain the same, too, which is also good.

For HPE Ethernet adapters for HPE ProLiant Gen10 Plus and HPE Apollo Gen10 Plus servers, the name changes were fairly basic. Under this new naming protocol, HPE moved the name of the adapter’s manufacturer to the front and added “for HPE” to the end. For example, what was previously named “HPE Ethernet 10/25Gb 2-port SFP28 QL41232HLCU Adapter” is now “Marvell QL41232HLCU Ethernet 10/25Gb 2-port SFP28 Adapter for HPE”. The model number, QL41232HLCU, did not change.
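The renaming rule is mechanical enough to express in a few lines of code. A sketch — the format assumption ("HPE &lt;description&gt; &lt;model&gt; Adapter") is mine, and real names include variants such as "CNA" that this simple version does not handle:

```python
import re

# Rearrange an old-style HPE adapter name into the new vendor-first form.
def rename(old: str, vendor: str = "Marvell") -> str:
    m = re.fullmatch(r"HPE (.+) (\S+) Adapter", old)
    if not m:
        raise ValueError(f"unexpected name format: {old}")
    description, model = m.groups()
    return f"{vendor} {model} {description} Adapter for HPE"

print(rename("HPE Ethernet 10/25Gb 2-port SFP28 QL41232HLCU Adapter"))
# -> Marvell QL41232HLCU Ethernet 10/25Gb 2-port SFP28 Adapter for HPE
```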

The table below shows the new naming for the HPE adapters using Marvell FastLinQ I/O technology and makes it very easy to match up ASIC technology, connection type and form factor across the different products.

HPE SKU     | ORIGINAL HPE MODEL | NEW SKU DESCRIPTION
867707-B21  | 521T               | HPE Ethernet 10Gb 2-port BASE-T QL41401-A2G Adapter
P08446-B21  | 524SFP+            | HPE Ethernet 10Gb 2-port SFP+ QL41401-A2G Adapter
652503-B21  | 530SFP+            | HPE Ethernet 10Gb 2-port SFP+ 57810S Adapter
656596-B21  | 530T               | HPE Ethernet 10Gb 2-port BASE-T 57810S Adapter
700759-B21  | 533FLR-T           | HPE FlexFabric 10Gb 2-port FLR-T 57810S Adapter
700751-B21  | 534FLR-SFP+        | HPE FlexFabric 10Gb 2-port FLR-SFP+ 57810S Adapter
764302-B21  | 536FLR-T           | HPE FlexFabric 10Gb 4-port FLR-T 57840S Adapter
867328-B21  | 621SFP28           | HPE Ethernet 10/25Gb 2-port SFP28 QL41401-A2G Adapter
867334-B21  | 622FLR-SFP28       | HPE Ethernet 10/25Gb 2-port FLR-SFP28 QL41401-A2G CNA


Inevitably, there are a few challenges with the new approach, especially for the adapters used in Gen10 servers. The first is that the firmware in the adapters is not changing. So, when a customer boots up the server, the old model information, such as 524SFP+, will be displayed on the system management screens. The same applies to information passed from the adapter to other management software, such as HPE Network Orchestrator. However, in HPE’s configuration tools – One Config Advanced (OCA) – only the new names and model numbers appear, with no mention of the original numbers. This could create confusion when you’re configuring a system and it boots up, displaying a different model number than the one you are actually using.

Additionally, it is going to take some time for operating system vendors like VMware and Microsoft to update their hardware compatibility listings. Today, you can go to the VMware Compatibility Guide (VCG) and search on a 621SFP28 with no problem. But search on a QL41401 or QL41401-A2G, and you will come up empty. HPE is also working on updating its QuickSpec documents with the new naming, and that will take some time as well.

So, while the model number decoder rings are no longer required, you will need easy-to-access cross-references to match the new names to the old models. To support you on this, we have updated all our key collateral for HPE-specific Marvell® FastLinQ® Ethernet adapters on the Marvell HPE Microsite. These documents were updated to include not only the new product names that HPE has implemented, but the original model number references as well.


Why Marvell FastLinQ for HPE? First, we are a strategic supplier to HPE for I/O technology. In fact, HPE Synergy I/O is based on Marvell FastLinQ technology. Value-add features like storage offload for iSCSI and FCoE and network partitioning are key to enabling HPE to deliver composable network connectivity on their flagship blade solutions.

In addition to storage offload, Marvell provides HPE with unique features such as Universal RDMA and SmartAN® technology. Universal RDMA provides the HPE customer with the ability to run either RoCE RDMA or iWARP RDMA protocols on a single adapter. So, as their needs for implementing RDMA protocols change, there is no need to change adapters. SmartAN technology automatically configures the adapter ports for the proper 10GbE or 25GbE bandwidth, and – based on the type of switch the adapter is connected to and the physical cabling connection – adjusts the forward error correction settings. FastLinQ adapters also support a variety of other offloads including SR-IOV, DPDK and tunneling. This minimizes the impact I/O traffic management has on the host CPU, freeing up CPU resources to do more important work.

Our team of I/O experts stands ready to help you differentiate your solutions based on industry leading I/O technology and features for HPE servers. If you need help selecting the right I/O technology for your HPE customer, contact our field sales and application engineering experts using the Contacts link on our Marvell HPE Microsite.

August 18th, 2020

From Strong Awareness to Decisive Action: Meet Mr. QLogic

By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

Marvell® Fibre Channel HBAs are getting a promotion and here is the announcement email –

I am pleased to announce the promotion of “Mr. QLogic® Fibre Channel” to Senior Transport Officer, Storage Connectivity at Enterprise Datacenters Inc. Mr. QLogic has been an excellent partner and instrumental in optimizing mission-critical enterprise application access to external storage over the past 20 years. When Mr. QLogic first arrived at Enterprise Datacenters, block storage was in disarray and efficiently scaling out performance seemed like an insurmountable challenge. Mr. QLogic quickly established himself as a go-to leader and trusted partner for enabling low-latency access to external storage across disk and flash. Mr. QLogic successfully collaborated with other industry leaders like Brocade and Mr. Cisco MDS to lay the groundwork for a broad set of innovative technologies under the StorFusion™ umbrella. In his new role, Mr. QLogic will further extend the value of StorFusion by bringing awareness of Storage Area Network (SAN) congestion into the server, while taking decisive action to prevent bottlenecks that may degrade mission-critical enterprise application performance.

Please join me in congratulating QLogic on this well-deserved promotion.

Soon after any big promotion, reality sets in and everyone asks what you are going to do for them and how are you going to add value in your new role. Will you live up to the expectations?

Let’s take a journey together (virtually) in this three-part blog and find out how Mr. QLogic delivers!

Part 1: Heterogeneous SANs and Flash bring in new challenges

In the era of rapid digitalization, work from home, mobility and data explosion, increasing numbers of organizations rely on a robust digital infrastructure to sustain and grow their businesses. Hosted business-critical applications must perform at high capacity and in a predictable manner. Fibre Channel remains the connection of choice between server applications and storage array data. FC SANs must remain free of congestion so workloads can perform at their peak; otherwise, businesses risk interruption from stalled or poorly performing applications.

Fibre Channel is known for its ultra-reliability since it is implemented on a dedicated network with buffer-to-buffer credits. For a real-life parallel, think of a guaranteed parking spot at your destination, and knowing it’s there before you leave your driveway. That worked well between two directly connected devices and until the recent past, congestion in FC SANs was generally an isolated problem. However, SAN Congestion has the potential to become a more severe problem occurring at more places in the datacenter for the following reasons:

  • Heterogeneous SAN Speeds: While datacenters have seen widespread deployment of 16GFC and 32GFC, 4/8GFC links still remain part of the fabric. Server and storage FC ports operating at mismatched speeds tend to congest the links between them.
  • CapEx and OpEx Pressure: Amidst the global pandemic, businesses are under unprecedented pressure to optimize OpEx and increase their bottom lines. More applications/VMs are being hosted on the same servers, increasing stress on existing SANs and making previously balanced SANs prone to congestion.
  • Flash Storage: FC SANs are no longer limited in performance due to spinning media. With the advent of NVMe and FC-NVMe, SANs are being pushed to their limits and existing links (4/8GFC) may not be able to sustain AFA bandwidth, creating oversubscription scenarios that lead to congestion and congestion spreading.
  • Data Explosion: Datasets and databases are growing in scale and so are the Fibre Channel SANs that support these applications. Scale out SAN architectures, more ports and end points in a domain, mean that a singular congestion event can spread to and impact a wide set of applications and services.

FC SANs are lossless networks, and all frames sent must be acknowledged by the receiver. The sender (e.g. a storage array) will stop sending frames if these acknowledgments are not received. Simply put, the inability of a receiver (e.g. an FC HBA) to accept frames at the expected rate results in congestion. Congestion can occur due to misbehaving end devices, physical errors or oversubscription. Oversubscription due to the reasons outlined above is typically the main culprit.
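A toy model of buffer-to-buffer credit flow control illustrates how a single slow receiver throttles a sender; all numbers here are illustrative, not taken from any real device:

```python
# Toy model: a sender consumes one B2B credit per frame and stalls at zero;
# the receiver returns credits only as fast as it can drain its buffers.
def frames_sent(credits: int, send_rate: int, drain_rate: int, ticks: int) -> int:
    outstanding = 0  # frames delivered but not yet credited back
    sent = 0
    for _ in range(ticks):
        burst = min(send_rate, credits)          # can't exceed available credits
        credits -= burst
        outstanding += burst
        sent += burst
        returned = min(drain_rate, outstanding)  # receiver drains and returns credits
        credits += returned
        outstanding -= returned
    return sent

# A receiver that keeps up sustains the full send rate...
print(frames_sent(credits=8, send_rate=4, drain_rate=4, ticks=100))  # 400
# ...while a slow receiver drags throughput down toward its own drain rate.
print(frames_sent(credits=8, send_rate=4, drain_rate=1, ticks=100))
```

In a real fabric the slow device also holds credits on shared inter-switch links, which is how localized congestion spreads to unrelated traffic.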


Introducing the “Aware and Decisive” FC HBAs

Marvell’s QLogic Fibre Channel HBAs (a.k.a. Mr. QLogic) and the StorFusion set of end-to-end orchestration and management capabilities are being extended into a solution that is “aware” of the performance conditions of the SAN fabric and “decisive” in the actions it can take to prevent and mitigate conditions, like congestion, that can degrade the fabric and thus application performance.

Marvell QLogic Universal SAN Congestion Mitigation (USCM) technology works independently and in coordination with Brocade and Cisco FC fabrics to mitigate SAN congestion by enabling congestion detection, notification, and avoidance.  QLogic congestion “Awareness” capability means that it can either be informed of or automatically detect signs of congestion. QLogic “Decisive” action capability intends to prevent or resolve congestion by either switching traffic to a more optimized path or quarantining slower devices to lower priority virtual channels.

Available immediately, QLogic Enhanced 16GFC (2690 Series) and Enhanced 32GFC (2770 Series) Adapters have the ability to deliver a wide range of these SAN Congestion Management capabilities.

Preview of Part 2 and Part 3

If you think Mr. QLogic is up to something here and has the right vision, motivation and expertise to rescue FC SANs from Congestion, then come back again for the sequel(s).

In Part 2, I will talk about the underlying industry standards-based technology, Fabric Performance Impact Notifications (FPINs), that forms the heart of the QLogic USCM solution.

In Part 3, you will see the technology in action and the uniqueness of the Marvell QLogic solution – for it is the one that can deliver on congestion management for both Brocade and Cisco SAN Fabrics.

August 12th, 2020

Put a Cherry on Top! Introducing FC-NVMe v2

By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

Once upon a time, data centers confronted a big problem – how to enable business-critical applications on servers to access distant storage with exceptional reliability. In response, the brightest storage minds invented Fibre Channel. Its ultra-reliability came from being implemented on a dedicated network and buffer-to-buffer credits. For a real-life parallel, think of a guaranteed parking spot at your destination, and knowing it’s there before you leave your driveway. That worked fairly well. But as technology evolved and storage changed from spinning media to flash memory with NVMe interfaces, the same bright minds developed FC-NVMe. This solution delivered a native NVMe storage transport without necessitating rip-and-replace by enabling existing 16GFC and 32GFC HBAs and switches to do FC-NVMe. Then came a better understanding of how cosmic rays affect high-speed networks, occasionally flipping a subset of bits, introducing errors.

Challenges in High-Speed Networks

But cosmic rays aren’t the only cause of bit errors. Soldering issues, vibrations due to heavy industrial equipment, as well as hardware and software bugs are all problematic, too. While engineers have responded with various protection mechanisms – ECC memories, shielding and FEC algorithms, among others – bit errors and lost frames still afflict all modern high-speed networks. In fact, the higher the speed of the network (e.g. 32GFC), the higher the probability of errors. Obviously, errors are never good; detecting them and re-transmitting lost frames slows down the network. But things get worse when the detection is left to the upper layers (SCSI or NVMe protocols) that must overcome their own quirks and sluggishness. Considering the low latency and high resiliency data center operators expect from FC-NVMe, these errors must be handled more efficiently, and much earlier in the process. In response, industry leaders like Marvell QLogic, along with other Fibre Channel leaders, came together to define FC-NVMe v2.

Introducing FC-NVMe v2

Intending to detect and recover from errors in a manner befitting a low-latency NVMe transport, FC-NVMe v2 (standardized by the T11 committee in August 2020) does not rely on the SCSI or NVMe layer error recovery. Rather, it automatically implements low-level error recovery mechanisms in the Fibre Channel’s link layer – solutions that work up to 30x faster than previous methods. These new and enhanced mechanisms include:

  • FLUSH: A new FC-NVMe link service that quickly determines whether a sent frame reached its destination. It works this way: if two seconds pass without the QLogic FC HBA getting a response back regarding a transmitted frame, it sends a FLUSH to the same destination (like sending a second car to the same parking spot to see if it is occupied). If the FLUSH gets to the destination, we know that the original frame went missing en route, and the stack does not need to wait the typical 60 seconds to detect a missing frame (hence the 30x faster).
  • RED: Another new FC-NVMe link service, called Responder Error Detected (RED), essentially does the same lost frame detection but in the other direction. If a receiver knows it was supposed to get something but did not, it quickly sends out a RED rather than waiting on the slower, upper-layer protocols to detect the loss.
  • NVMe_SR: Once either FLUSH or RED detects a lost frame, NVMe_SR (NVMe Retransmit) kicks in, and enables the re-transmission of whatever got lost the first time.
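The “30x faster” figure above is simple arithmetic on the two timeouts:

```python
# FLUSH detects a lost frame after a ~2 s exchange timeout instead of the
# traditional ~60 s upper-layer (SCSI/NVMe) timeout cited above.
legacy_timeout_s = 60
flush_timeout_s = 2

speedup = legacy_timeout_s / flush_timeout_s
print(f"Detection speedup: {speedup:.0f}x")  # Detection speedup: 30x
```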

Complicated? Not at all — these work in the background, automatically. FLUSH, RED, and NVMe_SR are the cherries on top of great underlying technology to deliver FC-NVMe v2!

Marvell QLogic FC HBAs with FC-NVMe v2

Marvell leads the T11 standards committee that defined the FC-NVMe v2 standard, and later this year will enable support for FC-NVMe v2 in QLogic 2690 Enhanced 16GFC and 2770 Enhanced 32GFC HBAs. Customers should expect a seamless transition to FC-NVMe v2 via a simple software upgrade, fulfilling our promise to avoid a disruptive rip-and-replace modernization of existing Fibre Channel infrastructure.

So, for your business-critical applications that rely on Fibre Channel infrastructure, go for Marvell QLogic FC-NVMe v2, shake off the sluggish error recovery, and do more with Fibre Channel! Learn more at Marvell.com.

July 28th, 2020

Living on the Network Edge: Security

By Alik Fishman, Senior Product Marketing Manager, Marvell


In our series Living on the Network Edge, we have looked at the trends driving Intelligence, Performance and Telemetry to the network edge. In this installment, let’s look at the changing role of network security and the ways integrating security capabilities in network access can assist in effectively streamlining policy enforcement, protection, and remediation across the infrastructure.

Cybersecurity threats are now a daily struggle for businesses, which are experiencing a huge increase in hacked and breached data from sources increasingly common in the workplace, like mobile and IoT devices. Not only is the number of security breaches going up, breaches are also increasing in severity and duration, with the average lifecycle from breach to containment lasting nearly a year1 and presenting expensive operational challenges. With digital transformation and the emerging technology landscape (remote access, cloud-native models, proliferation of IoT devices, etc.) dramatically impacting networking architectures and operations, new security risks are being introduced. To address this, enterprise infrastructure is on the verge of a remarkable change, elevating network intelligence, performance, visibility and security2.

COVID-19 has been a wake-up call for accelerating digital transformation – as companies with greater digital presences show more resiliency3. The workforce is expected to transform post-COVID-19 with 20-45%4 becoming distributed and working remotely, either from home or from smaller distributed office spaces. The change in the working environment and accelerated migration to hybrid-cloud and multi-cloud drives a new normal, and the borderless enterprise is now a reality – driving network infrastructure to add end-to-end management, automation and security functionalities needed to support businesses in this new digital era. As mobility and cloud applications extend traditional boundaries and this borderless enterprise becomes increasingly vulnerable, a broader attack surface is no longer contained within well-defined and defended perimeters. Cracks are showing. Remote workers’ identities and devices are the new security perimeter with 70% of all breaches originating at endpoints, according to IDC research5.

This is where embedded security in network access provides essential frontline protection, blocking malicious attacks at entry points by enforcing zero-trust access policies. No traffic is trusted from the outset, and traffic is never in the clear within networking devices throughout the infrastructure. Network telemetry and integrated security safeguards capable of inspecting workloads at line rate team up with security appliances and AI-analytic tools to intelligently flag suspicious traffic and rapidly detect threats. Segmentation of security zones and agile group-policy enforcement limit areas of exposure, prevent lateral movement, and enable quick remediation. IEEE 802.1AE MACsec encryption on all ports secures data throughout the network and prevents intrusion. Monitoring control-protocol exceptions and activating rate limiters add layers of protection to the control and management planes, preventing DDoS attacks. Integrated secure boot and secured storage protect against counterfeit attempts to compromise network hardware and software.

Cybersecurity is now a dominant priority for every organization as each adapts to a post-COVID-19 world. Network-embedded security is on the rise and poised to become a powerful ally in the battle against ever-evolving security threats. In this dynamic world, what can your network do to secure your assets?

Living on the Network Edge

What steps are you taking to bolster your network for living on the edge? Telemetry, Intelligence, Performance and Security are critical technologies for the growing borderless campus as mobility and cloud applications proliferate and drive networking functions. Learn more at: https://www.marvell.com/solutions/enterprise.html.

###

1 https://www.varonis.com/blog/cybersecurity-statistics
2 Cisco 2019 Global Networking Trends Survey
3 Morgan Stanley, 2Q20 CIO Survey: IT Hardware Takeaways
4 Dell’Oro Group Ethernet Switch – Campus five-year forecast, 2020-2024
5 Forbes 2020 Roundup Of Cybersecurity Forecasts And Market Estimates