
Latest Articles

April 2nd, 2021

Marvell Enables O-RAN to Help 5G Fulfill its True Potential

By the Marvell PR Team

At the most recent FierceWireless 5G Blitz Week, some of the world’s leading 5G innovators met via webinar to discuss the potential of O-RAN and the challenges of the ongoing 5G rollout. In a keynote, Raj Singh, EVP and General Manager of Marvell’s Processors Business Group, explored the accelerating shift to O-RAN, an emerging open architecture for Radio Access Networks that enables customers to create better 5G applications by mixing and matching RAN technology from different vendors.

O-RAN architectures are compelling because they increase competition among vendors, reduce costs, and offer customers greater flexibility to combine RAN elements according to their application’s specific use cases. However, in addition to their obvious benefits, O-RAN solutions also raise operator concerns including potential challenges with integration, legacy support, interoperability and security – issues that Marvell and other companies in the Open RAN Policy Coalition are addressing through shared standards, proven solutions and innovative approaches.

As Raj noted, open RAN doesn’t change what is being processed, but where. Marvell’s O-RAN solutions address Radio Units, Distributed Units and Centralized Units. We provide reference hardware designs and protocol stack software implementations. He added that while the transition from RAN to O-RAN may take years, the market need is urgent, because general-purpose processors are sub-optimal for L1 functions.

In a separate roundtable discussion on O-RAN, Raj joined panelists from Rethink Technology Research, Vodafone, Facebook Connectivity, and Analog Devices to explore ways in which network operators, technology influencers and system integrators are working to expedite the availability of O-RAN in the 5G marketplace.

Panelists noted that once the O-RAN transition is complete, it will offer customers greater supply chain flexibility and diversity, better performance, more collaboration, bigger economies of scale, lower capital expenditures, and also drive further innovation.

That said, work remains in translating this vision into reality, because customers are unwilling to sacrifice familiar features, standards and security during the transition. “You can’t just say ‘oh, it’s Open RAN, and it doesn’t have to do this, or doesn’t have to do that,’” Raj said. “You have to have the same capacity, reliability, and feature parity that exists in networks today.”

These discussions with industry leaders demonstrate the significant inroads being made in advancing the O-RAN architecture.

Fierce Wireless 5G Spring Blitz replay available here

O-RAN Round Table replay available here

January 29th, 2021

Full Steam Ahead! Marvell Ethernet Device Bridge Receives Avnu Certification

By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell

and John Bergen, Sr. Product Marketing Manager, Automotive Business Unit, Marvell

In the early decades of American railroad construction, competing companies laid their tracks at different widths. Such inconsistent standards drove inefficiencies, preventing the easy exchange of rolling stock from one railroad to the next, and impeding the infrastructure from coalescing into a unified national network. Only in the 1860s, when a national standard emerged – 4 feet, 8-1/2 inches – did railroads begin delivering their true, networked potential.

Some one hundred and sixty years later, as Marvell and its competitors race to reinvent the world’s transportation networks, universal design standards are more important than ever. Recently, Marvell’s 88Q5050 Ethernet Device Bridge became the first of its type in the automotive industry to receive Avnu certification. The certification confirms that the device meets exacting new technical standards that facilitate the exchange of information between diverse in-car networks and enable today’s data-dependent vehicles to operate smoothly, safely and reliably.

Avnu, the industry alliance focused on promoting an interoperable ecosystem based on IEEE 802.1 standards for Time Sensitive Networking (TSN), issues product certifications based on testing by approved third-party labs. In this case, the 88Q5050 was tested and approved by the University of New Hampshire InterOperability Laboratory, known as UNH-IOL.

The TSN standards defined by IEEE provide a “toolbox” of specifications designed to meet the networking requirements of today’s automotive, industrial, and professional A/V industries. Designed to enable low-latency, time-aware networking with guaranteed delivery of time-critical data across today’s Ethernet networks, the TSN specifications encompass 17 approved standards to date, with more in development.
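One of the best-known tools in that TSN toolbox is the IEEE 802.1Qbv time-aware shaper, which divides a repeating transmission cycle into time slots and opens the transmission gates of specific traffic classes in each slot. The Python sketch below illustrates the idea with invented slot durations and class assignments; it is not a representation of any Marvell product’s configuration.

```python
# Minimal illustration of a TSN time-aware shaper (IEEE 802.1Qbv) schedule.
# A gate control list divides a repeating cycle into slots; each slot opens
# the gates of certain traffic classes. All values here are assumptions
# made up for illustration.

CYCLE_US = 1000  # total cycle length in microseconds (assumed)

# (slot duration in microseconds, traffic classes whose gates are open)
gate_control_list = [
    (250, {7}),        # protected slot: only class 7 (time-critical sensor data)
    (750, {0, 1, 2}),  # remainder of the cycle: best-effort classes
]

def open_gates(t_us):
    """Return the set of traffic classes allowed to transmit at time t_us."""
    offset = t_us % CYCLE_US
    for duration, classes in gate_control_list:
        if offset < duration:
            return classes
        offset -= duration
    return set()
```

Because the schedule repeats every cycle, a time-critical frame always finds a reserved window free of best-effort traffic, which is how TSN bounds worst-case latency.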

Issues of precise timing and low latency are critical in many applications, but especially so in today’s motor vehicles, whose drivers rely on instant and reliable feedback from cameras, blind-spot indicators, lidar, radar and other safety systems. Marvell’s product line, of which the 88Q5050 is just one device, provides automotive companies and suppliers with the means – and confidence – to know that critical data will not be competing for bandwidth, and that information will flow quickly, consistently and accurately, without interference from potentially malicious actors.

Figure 1 shows a typical automotive architecture in which cameras, lidar and radar operate as AVB/TSN talkers, while AVB/TSN switches (such as the AVB-enabled 88Q5050) provide guaranteed delivery of low-latency, time-critical sensor data to the automotive CPUs, which act as AVB listeners for processing.

Figure 1: Automotive Architecture

Figure 2: AVB Topology

The 88Q5050 is an 8-port Ethernet switch with 4 fixed IEEE 100BASE-T1 ports and 4 additional configurable ports selected from: 1x IEEE 100BASE-T1, 1x IEEE 100BASE-TX, 2x MII/RMII/RGMII, 1x GMII and 1x SGMII. The switch offers local and remote management capabilities for easy access and configuration of the device.

The switch also incorporates hardware security features designed into the silicon itself, intended to prevent hacks or compromises to data streamed within the vehicle, whether through an unguarded tire-pressure sensor or any other unexpected vulnerability.

This advanced switch, an end-to-end solution, employs deep packet inspection techniques and Trusted Boot functionality to deliver the industry’s most secure automotive Ethernet switch. To further enhance security, it supports both blacklisting and whitelisting of addresses on all of its Ethernet ports. Guaranteed to perform at or beyond its specifications, the switch has already secured design wins with several leading automotive OEMs.
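The blacklist/whitelist filtering described above is enforced in the switch hardware; the toy Python model below only sketches the policy logic to make the two modes concrete. The class name, method names and example addresses are illustrative assumptions, not the device’s API.

```python
# Toy model of per-port MAC address filtering in the two modes the text
# describes. A whitelist forwards only listed addresses; a blacklist drops
# listed addresses and forwards everything else. Illustration only; the
# real 88Q5050 performs this filtering in hardware.

class PortFilter:
    def __init__(self, mode="blacklist"):
        assert mode in ("blacklist", "whitelist")
        self.mode = mode
        self.addresses = set()

    def add(self, mac):
        # Store addresses case-insensitively.
        self.addresses.add(mac.lower())

    def allows(self, mac):
        listed = mac.lower() in self.addresses
        # Whitelist: forward only if listed. Blacklist: forward only if NOT listed.
        return listed if self.mode == "whitelist" else not listed
```

A whitelist on, say, the port facing a tire-pressure sensor would forward frames only from that sensor’s known address, so a rogue device plugged into the same port would be silently dropped.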

Figure 3: 88Q5050 Block Diagram

High universal standards, however, don’t just emerge out of thin air. They are the result of committed partnership, as stakeholders seek the efficiencies and benefits that flow from standardization, quality and reliability. In establishing America’s unified railroad gauge, leaders eventually decided to go with the existing European standard, because American engineers wanted the flexibility to import British locomotives, known for their power and reliability. But where did that European standard itself originate? After all, 4 feet 8-1/2 inches seems like an idiosyncratic, if not arbitrary, measurement.

Some historians have traced its measurement to the standard width of Roman roads and bridges, built in ancient times for chariots – vehicles whose size was limited by what an average horse could pull. [1] Like a spiderweb across Europe, the Middle East and North Africa, this Roman network comprised an estimated 50,000 miles of paved roads, connecting an empire and changing history.

Today, Marvell’s commitment to invest and advance the frontiers of automotive silicon, the standards we champion, and the networks they enable, just might change the future.


January 14th, 2021

What’s Next in System Integration and Packaging? New Approaches to Networking and Cloud Data Center Chip Design

By Wolfgang Sauter, Customer Solutions Architect - Packaging, Marvell

The continued evolution of 5G wireless infrastructure and high-performance networking is driving the semiconductor industry to unprecedented technological innovations, signaling the end of traditional scaling on Single-Chip Module (SCM) packaging. With the move to 5nm process technology and beyond, 50T switches, 112G SerDes and other silicon design thresholds, it seems that we may have finally reached the end of the road for Moore’s Law.1 The remarkable and stringent requirements coming down the pike for next-generation wireless, compute and networking products have created the need for more innovative approaches. So what comes next to keep up with these challenges? Novel partitioning concepts and integration at the package level are becoming game-changing strategies to address the many challenges facing these application spaces.

During the past two years, leaders in the industry have started to embrace these new approaches to modular design, partitioning and package integration. In this paper, we will look at what is driving the main application spaces and how packaging plays into next-generation system architectures, especially as it relates to networking and cloud data center chip design.

What’s Driving Main Application Spaces?

First, let’s take a look at different application spaces and how package integration is critical to enabling next-generation product solutions. In the wireless application space, the market can be further subdivided into handheld and infrastructure devices. Handheld devices are driven by ultimate density, memory and RF integration to support power and performance requirements while achieving reasonable consumer price points. Wireless infrastructure products in support of 5G will drive antenna arrays with RF integration and, on the baseband side, require a modular approach to enable scalable products that meet power, thermal and cost requirements in a small area. In the data center, next-generation products will need next-node performance and power efficiency to keep up with demand. Key drivers here are the insatiable need for memory bandwidth and the switch to scalable compute systems with high chip-to-chip bandwidth. Wired networking products already need more silicon area than can fit in a reticle, along with more bandwidth between chips and off-module. This pushes design toward larger package sizes with lower loss, as well as a huge amount of power coupled with high-bandwidth memory (HBM) integration.

The overarching trend, then, is to integrate more function (and therefore more silicon) into any given product. This task is especially difficult because many of the different functions don’t necessarily want to reside on the same chip: these include I/O functions, analog and RF content, and DRAM technologies. SoCs simply can’t fit all the content needed into one chip. In addition, IP schedules and technology readiness aren’t always aligned. For instance, processors for compute applications may be well suited to move to the next node, whereas interface IP, such as SerDes, may not be ready for that node until perhaps a year later.

How does the package play into this?

All of these requirements mean that we as semiconductor solution providers must now get “more than Moore” out of the package: we need to get more data and more functionality out of the package, while driving more cost out.

As suitable packaging solutions become increasingly complex and expensive, the need to focus on optimized architectures becomes imperative. The result is a balancing act between the cost, area and complexity of the chip versus the package. Spending more on the package may be a wise call if it helps to significantly reduce chip cost (e.g., splitting a large chip into two halves). But the opposite may be true when the package complexity starts overwhelming the product cost, as can now frequently be seen on complex 2.5D products with HBM integration. Therefore, the industry is starting to embrace new packaging and architectural concepts such as modular packages, chiplet design with chip-to-chip interfaces, and known-good-die (KGD) integrated packages. An example of this was the announcement of the second-generation AMD EPYC “Rome” chiplet design, which marries its 7nm Zen 2 cores with a 14nm I/O die. As articulated in the introductory review by Anton Shilov of AnandTech at the time of its announcement, “Separating CPU chiplets from the I/O die has its advantages because it enables AMD to make the CPU chiplets smaller as physical interfaces (such as DRAM and Infinity Fabric) do not scale that well with shrinks of process technology. Therefore, instead of making CPU chiplets bigger and more expensive to manufacture, AMD decided to incorporate DRAM and some other I/O into a separate chip.”
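Why splitting a large chip into halves can lower chip cost follows from basic yield arithmetic. Under the classic Poisson yield model, die yield is Y = exp(-A·D) for die area A and defect density D, so a defect scraps less silicon when the silicon is split into smaller dies. The area and defect-density figures below are invented purely for illustration:

```python
# Back-of-the-envelope yield comparison: one large die versus two
# half-size chiplets, using the simple Poisson model Y = exp(-A * D).
# Both numbers below are assumptions chosen only to make the point.
import math

D = 0.1   # defects per cm^2 (assumed)
A = 8.0   # total silicon area in cm^2 (assumed large design)

# Monolithic die: a single defect anywhere scraps the entire area.
yield_monolithic = math.exp(-A * D)

# Two half-size chiplets: a defect scraps only the half it lands in,
# so the fraction of silicon that survives is the per-half yield.
yield_per_half = math.exp(-(A / 2) * D)
```

With these assumed numbers the monolithic yield is exp(-0.8) versus exp(-0.4) per chiplet, so noticeably more good silicon comes off each wafer, which is the cost lever the partitioning argument relies on (partially offset, of course, by the extra package complexity).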

These new approaches are revolutionizing chip design as we know it. As the industry moves toward modularity, interface IP and package technology must be co-optimized. Interface requirements must be optimized for low power and high efficiency, while enabling a path to communicate with chips from other suppliers. These new packaging and systems designs must also be compatible with industry specs. The package requirements must enable lower loss in the package while also enabling higher data bandwidth (i.e. a larger package, or alternative data transfer through cables, CPO, etc.).

What’s Next for Data Center Packaging and Design?

This is the first in a two-part series about the challenges and exciting breakthroughs happening in systems integration and packaging as the industry moves beyond the traditional Moore’s Law model. In the next segment we will discuss how packaging and deep package expertise are beginning to share center stage with architecture design to create a new sweet spot for integration and next-generation modular design. We will also focus on how these new chip offerings will unleash opportunities specifically in the data center including acceleration, smartNICs, process, security and storage offload. As we embark on this new era of chip design, we will see how next-generation ASICs will help meet the expanding demands of wired networking and Cloud Data Center chip design to power the data center all the way to the network edge.

# # #

1 Moore’s Law, an observation first articulated by Gordon Moore in 1965 and later revised, projected that the number of transistors in integrated circuit chips would double roughly every two years.

January 11th, 2021

Industry’s First NVMe Boot Device for HPE® ProLiant® and HPE Apollo Servers Delivers Simple, Secure and Reliable Boot Solution based on Innovative Technology from Marvell

By Todd Owens, Technical Marketing Manager, Marvell

Today, operating systems (OSs) like VMware recommend that OS data be kept completely separate from user data using non-network RAID storage. This is a best practice for any virtualized operating system, including VMware, Microsoft Azure Stack HCI (Storage Spaces Direct) and Linux. Thanks to innovative flash memory technology from Marvell, a new secure, reliable and easy-to-use OS boot solution is now available for Hewlett Packard Enterprise (HPE) servers.

While 32GB micro-SD or USB boot device options are available today, VMware requires as much as 128GB of storage for the OS and Microsoft Storage Spaces Direct needs 200GB, so these solutions simply don’t have the storage capacity needed. Using hardware RAID controllers and disk drives in the server bays is another option, but this adds significant cost and complexity to a server configuration just to meet the OS requirement. The proper solution for separating the OS from user data is the HPE NS204i-p NVMe OS Boot Device.

Thanks to innovative NVMe technology from Marvell, this award-winning boot device from HPE delivers reliable and easy-to-use server boot capability for system administrators deploying HPE servers. The HPE NS204i-p is a standalone solution that requires a single PCIe slot in the server and uses standard OS NVMe drivers to provide 480GB of secure RAID 1 storage for the OS data. No special software is required for RAID management, as everything is self-contained in the hardware.

The target customer is anyone using HPE ProLiant or HPE Apollo servers with VMware or any other virtual server OS looking to optimize server platform density by eliminating the need to use drive bays for OS data storage. That also means it is ideal for any hyperconverged infrastructure solution, as they are all based on virtual server operating systems. This includes HPE vSAN ready nodes, HPE ProLiant DX servers for Nutanix, Microsoft Azure Stack HCI implementations and HPE Nimble DHCI, to name a few. This also provides another option for HPE ProLiant customers wanting to deploy diskless servers that connect to shared storage disk arrays, such as HPE Primera or HPE Nimble storage.

The HPE NS204i-p is the industry’s first NVMe boot controller. This single-slot PCIe controller includes 480GB of integrated RAID 1 storage that provides complete isolation of the OS data from the user data in the server. The HPE NS204i-p supports secure firmware update and is designed to allow the OS to be installed on the device using mirrored RAID 1 storage. The new boot device is supported on select HPE ProLiant and HPE Apollo servers and systems.

It should be noted that no user data should be stored on the NS204i-p, only the server OS and boot data. The drives used are read-intensive, so they are ideal for booting an OS but are not optimal for random access to user data. In addition, only two lanes of the PCIe bus are connected to these drives, meaning random access performance would be limited. The enterprise-class M.2 drives support Power Loss Protection and are replaceable should a failure occur.
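The two-lane limit translates directly into a throughput ceiling. The post does not state which PCIe generation the drives use, so the quick arithmetic below assumes PCIe Gen3 signaling (8 GT/s per lane with 128b/130b line coding) purely for illustration:

```python
# Rough ceiling on per-drive throughput over two PCIe lanes.
# Assumption: PCIe Gen3 (8 GT/s per lane, 128b/130b encoding); protocol
# overhead (TLP headers, flow control) would reduce real throughput further.

GT_PER_LANE = 8e9        # raw transfers per second per Gen3 lane (assumed)
ENCODING = 128 / 130     # 128b/130b line-code efficiency
LANES = 2                # lanes wired to each M.2 drive, per the text

usable_bits_per_s = GT_PER_LANE * ENCODING * LANES
usable_GB_per_s = usable_bits_per_s / 8 / 1e9  # bytes per second, in GB/s
```

Under these assumptions the link tops out just under 2 GB/s, which is ample for OS boot and logging but well below what a wider NVMe data drive would offer for user workloads, consistent with the guidance to keep user data off the device.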

The solution is centered around the Marvell 88NR2241 NVMe RAID accelerator. This device connects to two 480GB M.2 NVMe drives over a PCIe bus on the board and provides two critical functions. First, the Marvell 88NR2241 ASIC provides automatic RAID 1 protection and management for data stored on the drives. In addition, it presents a virtual drive to the host CPU over the server PCIe bus, making the entire device look just like a single 480GB NVMe drive. Because it is presented as a standard NVMe drive, native OS NVMe drivers are used for VMware, Microsoft and Linux operating systems. With no special drivers or software to install, this solution is greatly simplified as compared to traditional RAID controllers or software RAID installations.
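Conceptually, the 88NR2241 does two things at once: it duplicates every write across both M.2 drives and presents the pair to the host as a single NVMe drive. The toy in-memory model below sketches that behavior; it is an illustration of the RAID 1 concept, not the device’s firmware, and all names are assumptions.

```python
# Toy model of a RAID 1 virtual drive: the host addresses one logical
# device, while every write is mirrored to two backing drives and reads
# can be served from either copy. Illustration only.

class Raid1VirtualDrive:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.drives = [dict(), dict()]  # two backing drives: lba -> data

    def write(self, lba, data):
        # The accelerator duplicates each host write to both drives.
        for drive in self.drives:
            drive[lba] = data

    def read(self, lba):
        # Either copy can satisfy the read; if one drive has failed
        # (or lost the block), the surviving mirror still serves it.
        for drive in self.drives:
            if lba in drive:
                return drive[lba]
        return None
```

Because the host only ever sees the single virtual drive, standard in-box NVMe drivers suffice, which is exactly why the text notes that no special drivers or RAID software need to be installed.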

Management for the HPE NS204i-p is built into the device itself. There is no user configuration or setup required. The administrator can access the device using the HPE iLO and UEFI system management console. Device configuration details, firmware versions and RAID configuration information can be viewed by the administrator. This information is read-only, and there is nothing for the administrator to configure on the device. The administrator can execute what is called “Media Patrol” on the device, which performs a sector-by-sector data consistency check on the two M.2 NVMe drives. But that’s it. There is nothing else to configure or manage on the HPE NS204i-p.
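A Media Patrol pass amounts to walking the mirror block by block and flagging any location where the two copies disagree. The sketch below models that idea with plain dictionaries standing in for the two M.2 drives; the function name and structure are illustrative assumptions, since the real check runs inside the device.

```python
# Illustrative "Media Patrol"-style consistency scan over a RAID 1 pair:
# compare the two mirrored copies sector by sector and report any logical
# block addresses (LBAs) where they differ. Drives are modeled as dicts
# mapping LBA -> data; a missing key stands for an unreadable/blank sector.

def media_patrol(drive_a, drive_b, num_blocks):
    mismatches = []
    for lba in range(num_blocks):
        if drive_a.get(lba) != drive_b.get(lba):
            mismatches.append(lba)
    return mismatches
```

On a healthy mirror the scan returns an empty list; any reported LBA marks a sector whose two copies have silently diverged and needs repair from the good copy.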

In summary, when deploying HPE ProLiant servers or HPE Apollo systems with virtualized operating systems, the HPE NS204i-p NVMe OS Boot Device is an ideal default setup for server configurations. This device provides a simple, secure, reliable and cost-effective way to separate the OS and user data within the virtualized server. For more information view the product overview video, or for questions contact the Marvell HPE team.

December 1st, 2020

Superior Performance in the Borderless Enterprise – White Paper

By Gidi Navon, Principal System Architect, Marvell

The current environment and an expected “new normal” are driving the transition to a borderless enterprise that must support increasing performance requirements and evolving business models. The infrastructure is seeing growth in the number of endpoints (including IoT) and escalating demand for data such as high-definition content. Ultimately, wired and wireless networks are being stretched as data-intensive applications and cloud migrations continue to rise.

A refresh cycle of the enterprise network is necessarily transpiring. Innovative devices with high-speed SerDes and new Ethernet port types fabricated in advanced process nodes are achieving higher performance and scale on similar power envelopes. This refresh includes next-generation stackable switches for the access layer connected to Wi-Fi access points; alongside these are evolving aggregation switches with high-speed fiber transceivers and core switches based on new chassis architectures. 

For an in-depth analysis of the performance requirements of the borderless enterprise, with solutions based on the newly announced Prestera® devices to help address these needs, download the white paper.

November 12th, 2020

Flash Memory Summit Names Marvell a 2020 Best of Show Award Winner

By Lindsey Moore, Marketing Coordinator, Marvell

Marvell wins FMS Award for Most Innovative Technology

Flash Memory Summit, the industry’s largest trade show dedicated to flash memory and solid-state storage technology, presented its 2020 Best of Show Awards yesterday in a virtual ceremony. Marvell, alongside Hewlett Packard Enterprise (HPE), was named a winner for “Most Innovative Flash Memory Technology” in the controller/system category for the Marvell NVMe RAID accelerator in the HPE OS Boot Device.

Last month, Marvell introduced the industry’s first native NVMe RAID 1 accelerator, a state-of-the-art technology for virtualized, multi-tenant cloud and enterprise data center environments which demand optimized reliability, efficiency, and performance. HPE is the first of Marvell’s partners to support the new accelerator in the HPE NS204i-p NVMe OS Boot Device offered on select HPE ProLiant servers and HPE Apollo systems. The solution lowers data center total cost of ownership (TCO) by offloading RAID 1 processing from costly and precious server CPU resources, maximizing application processing performance.

“Enterprise IT environments need to protect the integrity of flash data storage while delivering an optimized, application-level user experience,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to recognize the Marvell NVMe RAID Accelerator for efficiently offloading RAID 1 processing and directly connecting to two NVMe M.2 SSDs allowing the HPE OS boot solution to consume a single PCIe slot.”

More information about the 15th Annual Flash Memory Summit Best of Show Award Winners can be found here.

October 30th, 2020

Adhir Mattu Named Global Winner of the 2020 Bay Area CIO of the Year ORBIE Awards

By Lindsey Moore, Marketing Coordinator, Marvell

The BayAreaCIO recognized chief information officers in eight key categories – Leadership, Super Global, Global, Large Enterprise, Enterprise, Large Corporate, Corporate, and Nonprofit/Public Sector.

“The BayAreaCIO ORBIE winners demonstrate the value great leadership creates. Especially in these uncertain times, CIOs are leading in unprecedented ways and enabling the largest work-from-home experiment in history,” according to Lourdes Gipson, Executive Director of BayAreaCIO. “The ORBIE Awards are meaningful because they are judged by peers – CIOs who understand how difficult this job is and why great leadership matters.”

The CIO of the Year ORBIE Awards is the premier technology executive recognition program in the United States. Since its inception in 1998, over 1,200 CIOs have been honored as finalists and over 300 CIO of the Year winners have received the prestigious ORBIE Award. The ORBIE honors chief information officers who have demonstrated excellence in technology leadership.

Congratulations again to our very own Adhir Mattu on this honorable win!

To learn more about the winners, click here.