Marvell Blog

Featuring technology ideas and solutions worth sharing.

Latest Articles

March 17th, 2017

Three Days, Two Speaking Sessions and One New Product Line: Marvell Sets the (IEEE 802.1BR) Standard for Data Center Solutions at the 2017 OCP U.S. Summit

By Michael Zimmerman, Vice President and General Manager, CSIBU

At last week’s 2017 OCP U.S. Summit, it was impossible to miss the buzz and activity happening at Marvell’s booth. Taking our mantra #MarvellOnTheMove to heart, the team worked tirelessly throughout the week to present and demo Marvell’s vision for the future of the data center, which came to fruition with the launch of our newest Prestera® PX Passive Intelligent Port Extender (PIPE) family.

But we’re getting ahead of ourselves…


Marvell kicked off OCP with two speaking sessions from its leading technologists. Yaniv Kopelman, Networking CTO of the Networking Group, presented “Extending the Lifecycle of 3.2T Switches,” a discussion on the concept of port extender technology and how to apply it to future data center architecture. Michael Zimmerman, vice president and general manager of the Networking Group, then spoke on “Modular Networking” and teased Marvell’s first modular solution based on port extender technology.

Throughout the show, customers, media and attendees visited Marvell’s booth to see our breakthrough innovations that are leading the disaggregation of the cloud network infrastructure industry. These products included:

Marvell’s Prestera PX PIPE family purpose-built to reduce power consumption, complexity and cost in the data center

Marvell’s 88SS1092 NVMe SSD controller designed to help boost next-generation storage and data center systems

Marvell’s Prestera 98CX84xx switch family designed to help data centers break the 1W per 25G port barrier for 25G Top-of-Rack (ToR) applications

Marvell’s ARMADA® 64-bit ARM®-based modular SoCs developed to improve the flexibility, performance and efficiency of servers and network appliances in the data center

Marvell’s Alaska® C 100G/50G/25G Ethernet transceivers which enable low-power, high-performance and small form factor solutions

We’re especially excited to introduce our PIPE solution on the heels of OCP because of the dramatic impact we anticipate it will have on the data center…

Until now, data centers with 10GbE and 25GbE port speeds have been challenged with achieving lower operating expense (OPEX) and capital expenditure (CAPEX) costs as their bandwidth needs increase. As the industry’s first purpose-built port extender supporting the IEEE 802.1BR standard, Marvell’s PIPE solution is a revolutionary approach that makes it possible to deploy ToR switches at half the power and cost of a traditional Ethernet switch.

Marvell’s PIPE solution enables data centers to be architected with a very simple, low-cost, low-power port extender in place of a traditional ToR switch, pushing the heavy switching functionality upstream. As the industry today transitions from 10GbE to 25GbE and from 40GbE to 100GbE port speeds, data centers are also in need of a modular building block to bridge the variety of current and next-generation port speeds. Marvell’s PIPE family provides a flexible and scalable solution to simplify and accelerate such transitions, offering multiple configuration options of Ethernet connectivity for a range of port speeds and port densities.


Amidst all of the announcements, speaking sessions and demos, our very own George Hervey, principal architect, also sat down with Semiconductor Engineering’s Ed Sperling for a Tech Talk. In the whiteboard session, George discussed the power efficiency of networking in the enterprise and how costs can be saved by right-sizing Ethernet equipment.

The 2017 OCP U.S. Summit was filled with activity for Marvell, and we can’t wait to see how our customers benefit from our suite of data center solutions. In the meantime, we’re here to help with all of your data center needs, questions and concerns as we watch the industry evolve.

What were some of your OCP highlights? Did you get a chance to stop by the Marvell booth at the show? Tweet us at @marvellsemi to let us know, and check out all of the activity from last week. We want to hear from you!

March 13th, 2017

Port Extender Technology Changes Network Switch Landscape

By George Hervey, Principal Architect, Marvell


Our lives are increasingly dependent on cloud-based computing and storage infrastructure. Whether at home, at work, or on the move with our smartphones and other mobile computing devices, cloud compute and storage resources are omnipresent. It is no surprise, therefore, that the demands on such infrastructure are growing at an alarming rate, especially as the trends of big data and the Internet of Things start to make their impact. With an increasing number of applications and users, traffic is believed to be growing at 30x per annum, and even up to 100x in some cases. Such growth leaves Moore’s law and new chip developments unable to keep up with the needs of the computing and network infrastructure. These factors are driving data and communication network providers to invest in multiple parallel computing and storage resources as a way of scaling to meet demand. It is now common for cloud data centers to have hundreds if not thousands of servers that need to be connected together.

Interconnecting all of these compute and storage appliances is becoming a real challenge, as more and more switches are required. Within a data center, the classic approach to networking is hierarchical: an individual rack uses a leaf switch – also termed a top-of-rack or ToR switch – to connect within the rack, a spine switch serves a series of racks, and a core switch ties the whole center together. And, like the servers and storage appliances themselves, these switches all need to be managed. In the recent past, data center network switches and the associated management and control software typically came from one or two vendors, but things are changing fast. Most of the leading cloud service providers, with their significant buying power and technical skills, recognized that they could save substantial cost by designing and building their own network equipment. Many in the data center industry saw this as the first step in disaggregating the network hardware from the management software controlling it. With no shortage of software engineers, the cloud providers took management software development in-house while outsourcing the hardware design. While that, in part, satisfied the commercial needs of the data center operators, from a technical and operational management perspective nothing was simplified, leaving a huge number of switches to be managed.

The first breakthrough in simplifying network complexity came in 2009 with the introduction of what we now know as the port extender. The concept rests on the observation that many nodes in the network don’t need the extensive management capabilities most switches have. Essentially this introduces a parent/child relationship: the controlling switch, the parent, is the managed switch, and the child, the port extender, is fed from it. This port extender approach was ratified as the IEEE 802.1BR standard in 2012 and is now widely supported by network switch vendors. With less technical complexity within the port extenders, the expectation was a lower per-unit cost compared to a full bridge switch, in addition to power savings.
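To make the parent/child relationship concrete, here is a minimal Python sketch of a controlling bridge tracking the downstream ports its port extenders expose. The class names, attributes and port counts are purely illustrative assumptions, not part of the 802.1BR specification or any Marvell API.

```python
# Minimal illustration of the 802.1BR parent/child idea: the controlling
# bridge (parent) holds all of the management state, while each port
# extender (child) simply fans out downstream ports back to the parent.
# All names and port counts here are hypothetical.

class PortExtender:
    """Child device: fans out downstream ports, holds no management logic."""
    def __init__(self, name, downstream_ports, port_speed_gbps):
        self.name = name
        self.downstream_ports = downstream_ports
        self.port_speed_gbps = port_speed_gbps

class ControllingBridge:
    """Parent device: the only managed entity; forwarding decisions live here."""
    def __init__(self, name):
        self.name = name
        self.extenders = []

    def attach(self, extender):
        self.extenders.append(extender)

    def logical_ports(self):
        # Every downstream port on every extender appears as a logical port
        # of the parent bridge, so one management point covers them all.
        return sum(e.downstream_ports for e in self.extenders)

bridge = ControllingBridge("cb-1")
for i in range(4):  # e.g., four racks served by one controlling bridge
    bridge.attach(PortExtender(f"pe-{i}", downstream_ports=48, port_speed_gbps=10))

print("Managed entities:", 1)                              # only the controlling bridge
print("Logical ports presented:", bridge.logical_ports())  # 4 x 48 = 192
```

The point of the model is the asymmetry: management complexity stays in one parent, no matter how many child extenders are attached.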

The controlling bridge and port extender approach has certainly helped to drive simplicity into network switch management, but that’s not the end of the story. Look under the lid of a port extender and you’ll find the same switch chip being used as in the parent bridge. We have moved forward, sort of. Without a chip specifically designed as a port extender, switch vendors have continued to use their standard chipsets, without realizing the potential cost and power savings. However, the truly modular approach to network switching has taken a leap forward with the launch of Marvell’s 802.1BR-compliant port extender IC, termed PIPE (passive intelligent port extender), which enables interoperability with a controlling bridge switch from any of the industry’s leading OEMs. It also offers attractive cost and power consumption benefits, the very thing whose absence took the shine off the initial interest in port extender technology. Seen as the second stage of network disaggregation, this approach effectively decouples the port connectivity from the processing power in the parent switch, creating a far more modular approach to networking. The parent switch no longer needs to know what type of equipment it is connecting to: all the logic and processing can be focused in the parent, and the port I/O taken care of in the port extender.

Marvell’s Prestera® PIPE family targets data centers operating at 10GbE and 25GbE speeds that are challenged to achieve lower CAPEX and OPEX as the need for bandwidth increases. The Prestera PIPE family will facilitate the deployment of top-of-rack switches at half the cost and power consumption of a traditional Ethernet switch. The PIPE approach also includes a fast failover and resiliency function, essential for providing continuity and high availability for critical infrastructure.

March 8th, 2017

NVMe-based Network Fabrics Blow Through Legacy Rotational Media Limitations in the Data Center: Speed and Cost Benefits of NVMe SSD Shared Storage Now in Its Second Generation

By Nick Ilyadis, VP of Portfolio Technology, Marvell

Marvell Debuts 88SS1092 Second-Gen NVM Express SSD Controller at OCP Summit  

SSDs in the Data Center: NVMe and Where We’ve Been
When solid-state drives (SSDs) were first introduced into the data center, the infrastructure mandated that they work within the confines of then-current bus technologies, such as Serial ATA (SATA) and Serial Attached SCSI (SAS), which were developed for rotational media. Even the fastest hard disk drives (HDDs), of course, couldn’t keep up with an SSD, but neither could those legacy pipelines, which created a bottleneck that hampered the full exploitation of SSD technology. PCI Express (PCIe), a high-bandwidth bus technology already in place as a transport layer for networking, graphics and other add-in cards, became the next viable option, but the PCIe interface still relied on old HDD-based SCSI or SATA protocols. Thus the NVM Express (NVMe) industry working group was formed to create a standardized set of protocols and commands developed for the PCIe bus, allowing multiple paths that could take advantage of the full benefits of SSDs in the data center. The NVMe specification was designed from the ground up to deliver high-bandwidth, low-latency storage access for current and future NVM technologies.

The NVMe interface provides an optimized command issue and completion path. It supports parallel operation with up to 64K I/O queues and up to 64K commands per queue. Additionally, support was added for many enterprise capabilities like end-to-end data protection (compatible with T10 DIF and DIX standards), enhanced error reporting and virtualization. All in all, NVMe is a scalable host controller interface designed to address the needs of enterprise, data center and client systems that utilize PCIe-based solid-state drives, helping to maximize SSD performance.
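As a rough sketch of what that queueing model looks like from the host’s point of view, the Python snippet below mimics a single submission/completion queue pair. The 64K-entries-per-queue limit follows the figure above; the toy command format and method names are illustrative assumptions, not the actual NVMe driver interface.

```python
from collections import deque

# Toy model of an NVMe submission/completion queue pair. Real NVMe queues
# are ring buffers in host memory with doorbell registers; a deque stands
# in for each ring here. A host may create many such pairs (for example,
# one per CPU core) to exploit parallelism.

MAX_QUEUE_ENTRIES = 65_536  # up to 64K commands per I/O queue

class QueuePair:
    def __init__(self, qid, depth=1024):
        assert depth <= MAX_QUEUE_ENTRIES
        self.qid = qid
        self.depth = depth
        self.submission = deque()
        self.completion = deque()

    def submit(self, opcode, lba, blocks):
        # Host places a command in the submission queue (on real hardware it
        # then rings a doorbell so the controller fetches the new entries).
        if len(self.submission) >= self.depth:
            raise RuntimeError("submission queue full")
        self.submission.append({"opcode": opcode, "lba": lba, "blocks": blocks})

    def process(self):
        # Controller consumes commands and posts completion entries.
        while self.submission:
            cmd = self.submission.popleft()
            self.completion.append({"status": "success", **cmd})

qp = QueuePair(qid=1)
qp.submit("read", lba=0, blocks=8)
qp.submit("write", lba=128, blocks=16)
qp.process()
print(len(qp.completion), "completions posted")  # -> 2
```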

SSD Network Fabrics
New NVMe controllers from companies like Marvell allow data centers to share storage to further maximize cost and performance efficiencies. By creating SSD network fabrics, a cluster of SSDs can be formed to pool storage from individual servers and maximize overall data center storage. In addition, by creating a common enclosure shared across servers, data can be transported for shared access. These new compute models therefore allow data centers not only to fully exploit the fast performance of SSDs, but also to deploy those SSDs more economically throughout the data center, lowering overall cost and streamlining maintenance. Instead of adding more SSDs to individual servers, under-utilized SSDs can be tapped into and redeployed for use by over-allocated servers.

Here’s a simple example of how these network fabrics work: if a system has ten servers, each with an SSD sitting on the PCIe bus, an SSD cluster can be formed from those SSDs to provide not only additional storage, but also a means to pool and share data access. If, say, one server is only 10 percent utilized while another is over-allocated, the SSD cluster allows more storage for the over-allocated server without having to add SSDs to individual servers. Multiply the example by hundreds of servers, and the cost, maintenance and performance efficiencies skyrocket.
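A back-of-the-envelope version of that ten-server example, using hypothetical capacities and utilization figures, shows how pooling turns stranded capacity into shared headroom:

```python
# Hypothetical figures for the ten-server example above: each server has one
# 1 TB NVMe SSD on its PCIe bus. The utilization numbers are made up purely
# to illustrate how a pooled SSD cluster lets an over-allocated server
# borrow capacity that would otherwise sit idle.

ssd_capacity_tb = 1.0
utilization = [0.10, 0.95, 0.60, 0.40, 0.75, 0.30, 0.85, 0.50, 0.20, 0.65]

used = sum(u * ssd_capacity_tb for u in utilization)
total = len(utilization) * ssd_capacity_tb

print(f"Total pooled capacity: {total:.1f} TB")
print(f"Capacity in use:       {used:.2f} TB")
print(f"Headroom any server can draw on: {total - used:.2f} TB")

# Without pooling, the 95%-utilized server would need a new SSD even though
# the 10%-utilized server has roughly 0.9 TB sitting idle.
```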

Marvell helped pave the way for these new types of compute models for the data center when it introduced its first NVMe SSD controller. That product supported up to four lanes of PCIe 3.0, and was suitable for full 4GB/s or 2GB/s end points depending on host system customization. It enabled unparalleled IOPS performance using NVMe’s advanced command handling. In order to fully utilize the high-speed PCIe connection, Marvell’s innovative NVMe design facilitated PCIe link data flows by deploying massive hardware automation, which helped alleviate legacy host-control bottlenecks and unleash the true performance of the flash.

Second-Generation NVMe Controllers are Here!
This first product has now been followed by the Marvell 88SS1092 second-generation NVMe SSD controller, which has passed in-house SSD validation and third-party OS/platform compatibility testing. The Marvell® 88SS1092 is therefore ready to boost next-generation storage and data center systems, and is being debuted at the Open Compute Project (OCP) Summit March 8 and 9 in San Jose, Calif.

The Marvell 88SS1092 is Marvell’s second-generation NVMe SSD controller, capable of PCIe 3.0 x4 end points to provide a full 4GB/s interface to the host and help remove performance bottlenecks. While the new controller advances a solid-state storage system to a more fully flash-optimized architecture for greater performance, it also includes Marvell’s third-generation error-correcting low-density parity check (LDPC) technology for additional reliability, an endurance boost and TLC NAND device support on top of MLC NAND.
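The 4GB/s figure for a PCIe 3.0 x4 link can be sanity-checked with a quick calculation. The numbers below use the PCIe 3.0 signaling rate of 8 GT/s per lane and its 128b/130b encoding, and ignore packet and protocol overhead, so the result is a raw link ceiling rather than delivered throughput:

```python
# Rough check of the PCIe 3.0 x4 bandwidth quoted above. Protocol overhead
# (TLP headers, flow control, etc.) is ignored, so this is an upper bound.

gt_per_s = 8e9        # PCIe 3.0: 8 giga-transfers per second per lane
encoding = 128 / 130  # 128b/130b line-coding efficiency
lanes = 4

bytes_per_s = gt_per_s * encoding * lanes / 8  # 8 bits per byte
print(f"Raw PCIe 3.0 x4 bandwidth: {bytes_per_s / 1e9:.2f} GB/s")  # ~3.94 GB/s
```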

Today, the speed and cost benefits of NVMe SSD shared storage are not only a reality, but are now in their second generation. The network paradigm has shifted. By using the NVMe protocol, designed from the ground up to exploit the full performance of SSDs, new compute models are being created without the limitations of legacy rotational media. SSD performance can be maximized, while SSD clusters and new network fabrics enable pooled storage and shared data access. The hard work of the NVMe working group is becoming a reality for today’s data center, as new controllers and technology help optimize the performance and cost efficiencies of SSD technology.

Marvell 88SS1092 Second-Generation NVMe SSD Controller
[Chart: new process and advanced NAND controller design]

March 1st, 2017

Marvell at the Forefront of Connecting the Cars of Tomorrow, Today

By Alex Tan, Director, Automotive Solutions Group

When you sit in a car today, the focal point of the interior is likely the infotainment system. From displaying vehicle diagnostics and parking assistance to enabling multimedia streaming, phone calls, navigation and more, the infotainment system has become the touchpoint of the in-vehicle connectivity experience.

In order for drivers to take full advantage of these advanced features, internal vehicle data networks need to provide high bandwidth and seamless connectivity so these technologies can effectively communicate with each other. However, with multiple in-vehicle systems using different interfaces and connectivity technologies, how can we bridge the communication to get them to speak the same language?

The IEEE’s Ethernet standards act as the connectivity backbone that seamlessly links the different domains of the car, such as infotainment and Advanced Driver Assistance Systems (ADAS). Marvell is proud to have played an instrumental role in the development of the IEEE 802.3bp 1000BASE-T1 PHY standard, which enables data to be distributed between in-vehicle systems over a flexible, low-cost, high-bandwidth network. In October 2015, Marvell introduced the 88Q2112 automotive Ethernet physical layer (PHY) transceiver, the industry’s first 1000BASE-T1 automotive Ethernet PHY transceiver, based on the IEEE’s draft 1000BASE-T1 spec. Leveraging our advanced wireless and Ethernet technology, the 1000BASE-T1 solution supports uncompressed HD video, ideal for distributing camera and sensor data in ADAS applications. In the infotainment space, gigabit Ethernet over a single unshielded twisted pair copper cable is a logical solution for transporting audio, video and voice data at higher data rates and resolutions. Marvell’s 88Q2112 PHY transceiver enables automakers to use one Ethernet switch to connect the multiple advanced features of tomorrow’s cars.

Furthering our commitment to automotive innovation, in April 2016 we opened the Marvell Automotive Center of Excellence (ACE), a first-of-its-kind automotive networking technology development center. Located in Ettlingen, Germany, ACE aims to expand development and education efforts to advance the architecture of future connected, intelligent cars.
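To get a feel for why gigabit-class links matter for uncompressed video, a quick calculation shows how much of a 1000BASE-T1 link a single HD stream would consume. The resolutions and pixel formats below are generic examples and ignore Ethernet and protocol overhead; they are not figures taken from the 88Q2112 documentation:

```python
# Approximate bandwidth of uncompressed video streams, to show why a
# gigabit automotive link is needed. Formats, frame rates and the 4:2:2
# pixel depth are illustrative assumptions; link overhead is ignored.

def video_mbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e6

streams = {
    "720p30, YUV 4:2:2 (16 bpp)":  video_mbps(1280, 720, 30, 16),
    "1080p30, YUV 4:2:2 (16 bpp)": video_mbps(1920, 1080, 30, 16),
}

for name, mbps in streams.items():
    print(f"{name}: {mbps:.0f} Mb/s")

# 720p30 needs roughly 442 Mb/s and 1080p30 roughly 995 Mb/s, so a single
# uncompressed HD stream already approaches the capacity of a 1 Gb/s link.
```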

We showcased Marvell’s advanced auto connectivity solutions at the 2016 IEEE-SA Ethernet & IP @ Automotive Technology Day (E&IP@ATD) in Paris this past September, demonstrating how our technology supports multiple HD video streams with up to 4K resolution. Covering the exciting activities at E&IP@ATD, Tadashi Nezu of Nikkei wrote about our automotive connectivity leadership, noting that Marvell is rapidly coming to the forefront of the market. Nezu also lauded the company for its early Ethernet development efforts, noting how Marvell quickly developed a solution compliant with the draft IEEE 802.3bp 1000BASE-T1 standard, before the specifications were even finalized.

Earlier this month, we presented our solutions at the heart of the world’s automotive development at the 3rd annual Automotive Ethernet Congress in Munich. Manfred Kunz, head of development at the ACE, spoke about automotive Ethernet security, while Christopher Mash, senior manager of automotive system architecture and field applications, co-presented with Bosch and Continental, who shared their experience with the new 1000BASE-T1 technology. We showcased several automotive Ethernet solutions across nine customer booths, including the world’s first 1000BASE-T1 automotive Ethernet system, industry-leading intelligent security on the new 88Q5050 switch and a new platform demonstrating Marvell’s 10Gb capability for automotive.

The event was a success, drawing over 700 attendees, as well as speakers and exhibitors from over 20 countries.

Automotive Ethernet Congress, Munich, Germany

As automotive technology continues to advance rapidly and data continues to play a fundamental role in the future of connected cars, we look forward to continuing to innovate and collaborate with our auto partners to further accelerate car connectivity.

February 3rd, 2017

Super Bowl LI Scores a Touchdown on Tech

By Sander Arts, Interim VP of Marketing

With Super Bowl Sunday just around the corner, we’re reminded of last year’s game that took place just a few blocks away from Marvell’s campus in the heart of Silicon Valley. Taking inspiration from the locale, Super Bowl 50 was undoubtedly the most tech-savvy edition of the game to date. The Denver Broncos and Carolina Panthers played at Levi’s Stadium in Santa Clara, one of the most technologically advanced venues in the country and the first stadium to feature 40 gigabits per second of internet capacity. TechRepublic reported that 10.15 terabytes of data were transferred across the network during the game, with cloud storage, social networking and web surfing accounting for the top three applications transferring data on Levi’s Wi-Fi network. What was even more impressive was Levi’s Stadium’s mobile app, which enabled attendees to order food and beverages in advance, find the shortest bathroom and concession lines and access game highlights in high definition.

But where does the game go from here? With sports fans being more engaged and connected than ever, how can technology continue enhancing the fan experience for Super Bowl 51?

NRG Stadium, Houston, TX
Source: Wikipedia

This year, the mobile app worth cheering for is Fox Sports Go. Fans unable to watch the New England Patriots and Atlanta Falcons face off live in Houston on Sunday can still get up close to the game in virtual reality. Fox Sports will stream the game live on its app, which can be viewed in VR using a Samsung Gear headset or Google Cardboard. The app’s “virtual suite” will offer viewers various viewpoints of the game, and even those without a VR headset can experience the game in 360-degree video.

However, we can’t forget that for many viewers, the Super Bowl commercials are just as entertaining as the game itself. With the price of a 30-second ad reaching nearly $5 million this year, brands are, more than ever, using this opportunity to release some of the funniest, strangest and most powerful ads to meet viewers’ high expectations. This Sunday, we’re especially looking forward to the technology commercials, such as the Kia Niro and Ford “Go Further” ads, which will highlight advancements in connected car technology. As consumers become increasingly interested in automotive technology, we can expect to see more Super Bowl commercials highlighting data and connectivity both this year and in the years to come.

Last year’s record-breaking data usage is just one example of how important Wi-Fi and connectivity have become in our fast-paced world, especially at events such as the Super Bowl, where instant streaming and sharing play an essential role in the viewers’ experience. At last year’s game, 15.9 terabytes of data were transferred via the Distributed Antenna System, 2.5 times the amount transferred at the previous year’s Super Bowl. Will the record be broken again this Sunday?

As we tune in to the biggest TV event of the year, we look forward to seeing how technology will up the ante at Super Bowl 51. From the amount of data being transferred to fans sharing their experience on social media, it’s sure to be a touchdown performance!

You can follow Sander Arts on Twitter @Sander1Arts

January 27th, 2017

Flashback Friday: How Santa Clara Valley Traded Fruit Trees for Silicon

By Sander Arts, Interim VP of Marketing

Before Silicon Valley became synonymous with high-tech and innovation, it was known as the “Valley of Heart’s Delight.” This expansive piece of land boasting acres of orchards just south of San Francisco would grow to become the future home to some of the world’s most innovative tech companies.


Looking back at the area’s history, it’s no surprise that it would someday inspire a culture where groundbreaking tech comes to life in a garage, college dorm room or even at a kitchen table. The story of Silicon Valley itself begins in a garage, with two Stanford grads, William Hewlett and David Packard, who founded Hewlett-Packard in 1939. Over the following decade, entrepreneurs and scientists would come to the Santa Clara Valley to explore radio, military and electronic technology, laying the groundwork for the future hotbed of innovation.


By 1953, notable tech companies had begun to establish themselves in Santa Clara Valley with property in Stanford Industrial Park, closely followed by the area’s first semiconductor company in 1956. With the Space Race in full force throughout the 1960s, the country experienced a heightened focus on advanced silicon technologies, and the valley began to take shape as the nation’s hub for advanced high tech. Due to the sheer number of silicon companies in the park and growing attention on the semiconductor industry, the area was officially coined “Silicon Valley” in an Electronic News series published in 1971.

Over the next 30 years, some of the world’s most groundbreaking technology companies would make their way to Silicon Valley. Marvell officially became a part of the phenomenon in 1995, just as the dot-com boom was taking off. Since then, we’ve seen firsthand how the tech scene has grown exponentially through innovation and entrepreneurship. The startup mentality that originated in 1939, and which continues to boom in the 21st century, inspires us to keep improving, innovating and challenging the technology of today. At Marvell, we are confident in our vision for the future of silicon, data and cloud technologies and look forward to being a part of the next generation of great entrepreneurs, thinkers and tinkerers in Silicon Valley.


January 20th, 2017

The New Scaling Paradigm: Ethernet Port Extenders

By Michael Zimmerman, Vice President and General Manager, CSIBU

Over the last three decades, Ethernet has grown to be the unifying communications infrastructure across all industries. More than 3M Ethernet ports are deployed daily across all speeds, from Fast Ethernet to 100GbE. In enterprise and carrier deployments, a combination of stackable "pizza box" switches and high-density chassis-based switches is used to address the growth in Ethernet. However, over the past several years, the Ethernet landscape has continued to change. With Ethernet deployment and innovation happening fastest in the data center, Ethernet switch architecture built for the data center dominates and forces adoption by the enterprise and carrier markets. This paradigm shift has made architecture decisions in the data center critical and influential across all Ethernet markets. However, the data center deployment model is different.
 

How Data Centers are Different

Ethernet port deployment in data centers tends to be uniform: the same port speed, whether 10GbE, 25GbE or 50GbE, is deployed to every server through a top-of-rack (ToR) switch and then aggregated in multiple CLOS layers. The ultimate goal is to pack as many Ethernet ports at the highest commercially available speed onto the Ethernet switch as possible, and to make it the most economical and power efficient. The end point connected to the ToR switch is the server NIC, which is typically the highest available speed in the market (currently 10/25GbE, moving to 25/50GbE). Today, switches with 128 ports of 25GbE are in deployment, going to 64x 100GbE and beyond in the next few years. But while data centers are moving to higher port density, higher port speeds and homogeneous deployment, there is still a substantial market for lower speeds such as 10GbE that continues to be deployed and must be served economically. The innovation in data centers drives higher density and higher port speeds, but many segments of the market still need a solution with lower port speeds at different densities. How can this problem be solved?
 

Bridging the Gap

Fortunately, the technology to bridge lower speed ports to higher density switches has existed for several years. The IEEE codified port extension in the 802.1BR standard, which defines the protocol needed to fan out ports from an originating higher-speed port. In essence, one high-end, high-port-density switch can fan out hundreds or even thousands of lower speed ports, as the sketch below illustrates. The high-density switch is the control bridge, while the devices which fan out the lower speed ports are the port extenders.
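As an illustration of that fan-out, with hypothetical port counts rather than a specific product configuration, one high-density control bridge can present well over a thousand lower-speed ports:

```python
# Hypothetical fan-out: a control bridge with 32 x 100GbE ports, where each
# 100GbE port feeds a port extender breaking out 40 x 10GbE downstream ports
# (a 4:1 oversubscription, within the range discussed below). All counts are
# illustrative, not a specific Marvell configuration.

bridge_ports_100g = 32      # 100GbE ports on the control bridge
extender_ports_10g = 40     # 10GbE downstream ports per port extender

total_10g_ports = bridge_ports_100g * extender_ports_10g
print(f"10GbE ports presented behind one control bridge: {total_10g_ports}")  # 1280
```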
 

Why Use Port Extenders

In addition to re-packaging the data center switch as a control bridge, there are several unique advantages for using port extenders:

  1. Port extenders are only a fraction of the cost, power and board space of any other solution aimed at serving Ethernet ports.
  2. Port extenders have very little or no software. This simplifies operational deployment, since the managed entities are limited to the high-end control bridges.
  3. Port extenders communicate with any high-end switch via the standard 802.1BR protocol (additional options, such as Marvell DSA or programmable headers, are also possible).
  4. Port extenders work well with any speed transition: 100GbE to 10GbE ports, 400GbE to 25GbE ports, etc.
  5. Port extenders can operate at any downstream speed: 1GbE, 2.5GbE, 10GbE, 25GbE, etc.
  6. Port extenders can be oversubscribed or non-oversubscribed, meaning the ratio of upstream to downstream bandwidth can be programmed from 1:1 to 1:4 (depending on the application), as shown in the sketch after this list. This by itself can lower cost and power by a factor of 4x.
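Here is the sketch referenced in point 6: a quick comparison, with hypothetical port counts, of how much upstream bandwidth a port extender needs at different oversubscription ratios.

```python
# Hypothetical port extender serving 48 x 10GbE downstream ports. At 1:1 the
# uplinks must match the downstream bandwidth; at 1:4 only a quarter of that
# uplink capacity is needed, which is where the cost and power savings come
# from. All figures are illustrative.

downstream_gbps = 48 * 10   # 480 Gb/s of downstream capacity
uplink_speed_gbps = 100     # 100GbE uplinks toward the control bridge

for ratio in (1, 2, 4):     # upstream:downstream = 1:ratio
    upstream_needed = downstream_gbps / ratio
    uplinks = -(-upstream_needed // uplink_speed_gbps)  # ceiling division
    print(f"1:{ratio} oversubscription -> {upstream_needed:.0f} Gb/s upstream, "
          f"{int(uplinks)} x 100GbE uplinks")
```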

 
 

 

Marvell Port Extenders

Marvell has launched multiple purpose-built port extender products, which allow fan-out of 1GbE and 10GbE ports from 40GbE and 100GbE higher speed ports. Along with the silicon solution, software reference code is available and can be easily integrated with a control bridge. Marvell has conducted interoperability tests with a variety of control bridge switches, including the leading switches in the market. The benchmarked design offers a 2x cost reduction and 2x power savings. SDK, data sheet and design package are available. Marvell IEEE 802.1BR port extenders are shipping to the market now. Contact your sales representative for more information.