Marvell Blog

Featuring technology ideas and solutions worth sharing

Marvell

Archive for the ‘Data Center’ Category

May 1st, 2019

Revolutionizing Data Center Architectures for the New Era in Connected Intelligence

By George Hervey, Principal Architect, Marvell

Though established mega-scale cloud data center architectures adequately supported global data demands for many years, a fundamental change is taking place. Emerging 5G, industrial automation, smart city and autonomous car applications are driving the need for data to be directly accessible at the network edge. New data center architectures are needed to support these requirements, including reduced power consumption, low latency and smaller footprints, as well as composable infrastructure.

Composability disaggregates data storage resources, providing a more flexible and efficient platform through which data center requirements can be met. It does, of course, need cutting-edge switch solutions to support it. Capable of running at 12.8Tbps, the Marvell® Prestera® CX 8500 Ethernet switch portfolio has two key innovations that are set to redefine data center architectures: Forwarding Architecture using Slices of Terabit Ethernet Routers (FASTER) technology and Storage Aware Flow Engine (SAFE) technology.

With FASTER and SAFE technologies, the Marvell Prestera CX 8500 family can reduce overall network costs by more than 50%; lower power, space and latency; and pinpoint exactly where congestion issues are occurring by providing complete per-flow visibility.

View the video below to learn more about how Marvell Prestera CX 8500 devices represent a revolutionary approach to data center architectures.


August 3rd, 2018

IOPs and Latency

By admin

Shared storage performance has a significant impact on overall system performance. That’s why system administrators try to understand it and plan accordingly. A shared storage subsystem has three components: storage system software (host), the storage network (switches and HBAs) and the storage array.

Storage performance can be measured at all three levels and aggregated to get to the subsystem performance. This can get quite complicated. Fortunately, storage performance can effectively be represented using two simple metrics: Input/Output operations per Second (IOPS) and Latency. Knowing these two values for a target workload, a user can optimize the performance of a storage system.

Let’s look at what these key factors are and how to use them to optimize storage performance.

What is IOPS?
IOPS is a standard unit of measurement for the maximum number of reads and writes to a storage device in a given unit of time (typically one second). IOPS represents the number of transactions that can be performed, not the number of bytes transferred. To calculate throughput, multiply the IOPS figure by the block size used in the I/O.

IOPS is a neutral measure of performance and can be used in a benchmark where two systems are compared using same block sizes and read/write mix.

What is Latency?
Latency is the total time from a requested operation being issued to the requestor receiving a response. Latency includes the time spent in all subsystems, and is a good indicator of congestion in the system.
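The relationship between these metrics can be sketched in a few lines of Python. The figures below are purely illustrative, not measurements of any particular device:

```python
def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Throughput (MB/s) = IOPS x block size."""
    return iops * block_size_kb / 1024

# A drive rated at 100,000 IOPS with 4 KB blocks sustains roughly 390 MB/s:
print(throughput_mb_s(100_000, 4))

# Little's Law ties IOPS to latency: with 32 outstanding I/Os (queue depth)
# and an average latency of 0.2 ms, the sustainable rate is roughly 160,000 IOPS.
queue_depth = 32
latency_s = 0.0002
print(queue_depth / latency_s)
```

This is also why the same array can report very different IOPS figures under different conditions: a meaningful comparison holds block size and read/write mix constant, as noted above.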

Find out more about Marvell’s QLogic Fibre Channel adapter technology at:

https://www.marvell.com/fibre-channel-adapters-and-controllers/qlogic-fibre-channel-adapters/

April 2nd, 2018

Understanding Today’s Network Telemetry Requirements

By Tal Mizrahi, Feature Definition Architect, Marvell

There have, in recent years, been fundamental changes to the way in which networks are implemented, as data demands have necessitated a wider breadth of functionality and elevated degrees of operational performance. Accompanying all this is a greater need for accurate measurement of such performance benchmarks in real time, plus in-depth analysis in order to identify and subsequently resolve any underlying issues before they escalate.

The rapidly accelerating speeds and rising levels of complexity exhibited by today’s data networks mean that monitoring activities of this kind are becoming increasingly difficult to execute. Consequently, more sophisticated and inherently flexible telemetry mechanisms are now being mandated, particularly for data center and enterprise networks.

A broad spectrum of different options is available when looking to extract telemetry material, whether that be passive monitoring, active measurement, or a hybrid approach. An increasingly common practice is to piggy-back telemetry information onto the data packets passing through the network. This tactic is utilized within both in-situ OAM (IOAM) and in-band network telemetry (INT), as well as in an alternate marking performance measurement (AM-PM) context.
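As a rough illustration of piggy-backing, each node on the path appends its own metadata to packets in transit, and a collector at the far end reads the accumulated records off. The sketch below is a toy model in Python with hypothetical node names; the real IOAM and INT encodings are compact binary header formats defined by their respective specifications:

```python
import time

def add_hop_telemetry(packet: dict, node_id: str) -> dict:
    """Append this node's metadata to the packet (INT/IOAM-style piggy-backing)."""
    packet.setdefault("telemetry", []).append(
        {"node": node_id, "ts_ns": time.time_ns()}
    )
    return packet

# Simulate a packet traversing three switches:
pkt = {"payload": b"data"}
for hop in ("leaf-1", "spine-1", "leaf-2"):
    pkt = add_hop_telemetry(pkt, hop)

# The sink can now recover the exact path and per-hop timestamps:
print([rec["node"] for rec in pkt["telemetry"]])  # ['leaf-1', 'spine-1', 'leaf-2']
```

Comparing consecutive timestamps then yields per-hop latency, which is exactly the kind of per-flow visibility that hybrid telemetry approaches aim to provide.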

At Marvell, our approach is to provide a diverse and versatile toolset through which a wide variety of telemetry approaches can be implemented, rather than being confined to a specific measurement protocol. To learn more about this subject, including longstanding passive and active measurement protocols, and the latest hybrid-based telemetry methodologies, please view the video below and download our white paper.

WHITE PAPER, Network Telemetry Solutions for Data Center and Enterprise Networks

January 11th, 2018

Storing the World’s Data

By Marvell, PR Team

Storage is the foundation for a data-centric world, but how tomorrow’s data will be stored is the subject of much debate. What is clear is that data growth will continue to rise significantly. According to a report compiled by IDC titled ‘Data Age 2025’, the amount of data created will grow at an almost exponential rate. This amount is predicted to surpass 163 Zettabytes by the middle of the next decade (almost 8 times what it is today, and nearly 100 times what it was back in 2010). Increasing use of cloud-based services, the widespread roll-out of Internet of Things (IoT) nodes, virtual/augmented reality applications, autonomous vehicles, machine learning and the whole ‘Big Data’ phenomenon will all play a part in the new data-driven era that lies ahead.

Further down the line, the building of smart cities will lead to an additional ramp up in data levels, with highly sophisticated infrastructure being deployed in order to alleviate traffic congestion, make utilities more efficient, and improve the environment, to name a few. A very large proportion of the data of the future will need to be accessed in real-time. This will have implications on the technology utilized and also where the stored data is situated within the network. Additionally, there are serious security considerations that need to be factored in, too.

So that data centers and commercial enterprises can keep overhead under control and make operations as efficient as possible, they will look to follow a tiered storage approach, using the most appropriate storage media so as to lower the related costs. Decisions on the media utilized will be based on how frequently the stored data needs to be accessed and the acceptable degree of latency. This will require the use of numerous different technologies to make it fully economically viable – with cost and performance being important factors.
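A tiering policy of this kind can be sketched as a simple decision rule. The thresholds and tier labels below are illustrative assumptions for the sake of example, not vendor guidance:

```python
def choose_tier(accesses_per_day: int, max_latency_ms: float) -> str:
    """Pick the cheapest medium that still meets the data's access profile."""
    if max_latency_ms < 1:
        return "NVMe SSD"   # hot, latency-critical data
    if accesses_per_day > 10:
        return "SATA SSD"   # warm, frequently accessed data
    if accesses_per_day > 0:
        return "HDD"        # cool, capacity-optimized data
    return "archive"        # rarely accessed, cost-optimized

print(choose_tier(1000, 0.2))  # NVMe SSD
print(choose_tier(50, 5))      # SATA SSD
print(choose_tier(1, 50))      # HDD
```

Real tiering engines weigh many more inputs (object size, retention policy, egress cost), but the principle is the same: access frequency and latency tolerance determine the most economical medium.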

There is now a wide variety of different storage media options out there. Some are long established, while others are still emerging. In certain applications, hard disk drives (HDDs) are being replaced by solid state drives (SSDs), and within the SSD space the migration from SATA to NVMe is enabling the full performance capabilities of SSD technology. HDD capacities continue to increase substantially, and their overall cost effectiveness also adds to their appeal. The immense data storage requirements generated by the cloud mean that HDDs are witnessing considerable traction in this space.

There are other forms of memory on the horizon that will help to address the challenges that increasing storage demands will set. These range from higher capacity 3D stacked flash to completely new technologies, such as phase-change memory with its rapid write times and extensive operational lifespan. The advent of NVMe over Fabrics (NVMe-oF) based interfaces offers the prospect of high bandwidth, ultra-low latency SSD data storage that is at the same time extremely scalable.

Marvell was quick to recognize the ever-growing importance of data storage, has continued to make this sector a major focus, and has established itself as the industry’s leading supplier of both HDD controllers and merchant SSD controllers.

Within a period of only 18 months after its release, Marvell managed to ship over 50 million of its 88SS1074 SATA SSD controllers with NANDEdge™ error-correction technology. Thanks to its award-winning 88NV11xx series of small form factor DRAM-less SSD controllers (based on a 28nm CMOS semiconductor process), the company is able to offer the market high performance NVMe memory controller solutions that are optimized for incorporation into compact, streamlined handheld computing equipment, such as tablet PCs and ultrabooks. These controllers are capable of supporting read speeds of 1600MB/s, while only drawing minimal power from the available battery reserves. Marvell also offers solutions like its 88SS1092 NVMe SSD controller, designed for new compute models that enable the data center to share storage data and further maximize cost and performance efficiencies.

The unprecedented growth in data means that more storage will be required. Emerging applications and innovative technologies will drive new ways of increasing storage capacity, improving latency and ensuring security. Marvell is in a position to offer the industry a wide range of technologies to support data storage requirements, addressing both SSD and HDD implementations and covering all accompanying interface types from SAS and SATA through to PCIe and NVMe.

Check out www.marvell.com to learn more about how Marvell is storing the world’s data.

January 10th, 2018

Moving the World’s Data

By Marvell, PR Team

The way in which data is moved via wireline and wireless connectivity is going through major transformations. The dynamics that are causing these changes are being seen across a broad cross section of different sectors.

Within our cars, the new features and functionality that are being incorporated mean that the traditional CAN and LIN based communication technology is no longer adequate. More advanced in-vehicle networking needs to be implemented which is capable of supporting multi-Gigabit data rates, in order to cope with the large quantities of data that high resolution cameras, more sophisticated infotainment, automotive radar and LiDAR will produce. With CAN, LIN and other automotive networking technologies not offering viable upgrade paths, it is clear that Ethernet will be the basis of future in-vehicle network infrastructure – offering the headroom needed as automobile design progresses towards the long term goal of fully autonomous vehicles. Marvell is already proving itself to be ahead of the game here, following the announcement of the industry’s first secure automotive gigabit Ethernet switch, which delivers the speeds now being required by today’s data-heavy automotive designs, while also ensuring secure operation is maintained and the threat of hacking or denial of service (DoS) attacks is mitigated.

Within the context of modern factories and processing facilities, the arrival of Industry 4.0 will allow greater levels of automation, through use of machine-to-machine (M2M) communication. This communication can enable the access of data — data that is provided by a multitude of different sensor nodes distributed throughout the site. The ongoing in-depth analysis of this data is designed to ultimately bring improvements in efficiency and productivity for the modern factory environment. Ethernet capable of supporting Gigabit data rates has shown itself to be the prime candidate and it is already experiencing extensive implementation. Not only will this meet the speed and bandwidth requirements needed, but it also has the robustness that is mandatory in such settings (dealing with high temperatures, ESD strikes, exposure to vibrations, etc.) and the low latency characteristics that are essential for real-time control/analysis. Marvell has developed highly sophisticated Gigabit Ethernet transceivers with elevated performance that are targeted at such applications.

Within data centers things are changing too, but in this case the criteria involved are somewhat different. Here it is more about how to deal with the large volumes of data involved, while keeping the associated capital and operational expenses in check. Marvell has been championing a more cost effective and streamlined approach through its Prestera® PX Passive Intelligent Port Extender (PIPE) products. These present data center engineers with a modular approach to deploy network infrastructure that meets their specific requirements, rather than having to add further layers of complexity unnecessarily that will only serve to raise the cost and the power consumption. The result is a fully scalable, more economical and energy efficient solution.

In the wireless domain, ever greater pressure is being placed upon WLAN hardware – in the home, office, municipal and retail environments. As well as increasing user densities and overall data capacity to contend with, network operators and service providers need to be able to address alterations that are now occurring in user behavior too. Wi-Fi connectivity is no longer just about downloading data; increasingly, the uploading of data will be an important consideration. This will be needed for a range of different applications including augmented reality gaming, the sharing of HD video content and cloud-based creative activities. In order to address this, Wi-Fi technology will need to exhibit enhanced bandwidth capabilities on its uplink as well as its downlink.

The introduction of the much anticipated 802.11ax protocol is set to radically change how Wi-Fi is implemented. Not only will this allow far greater user densities to be supported (thereby meeting the coverage demands of places where large numbers of people are in need of Internet access, such as airports, sports stadia and concert venues), it also offers greater uplink/downlink data capacity – supporting multi-Gigabit operation in both directions. Marvell is looking to drive things forward via its portfolio of recently unveiled multi-Gigabit 802.11ax Wi-Fi system-on-chips (SoCs), which are the first in the industry to have orthogonal frequency-division multiple access (OFDMA) and multi-user MIMO operation on both the downlink and the uplink.

Check out www.marvell.com to learn more about how Marvell is moving the world’s data.

January 10th, 2018

Marvell Demonstrates Edge Computing by Extending Google Cloud to the Network Edge with Pixeom Edge Platform at CES 2018

By Maen Suleiman, Senior Software Product Line Manager, Marvell

The adoption of multi-gigabit networks and planned roll-out of next generation 5G networks will continue to create greater available network bandwidth as more and more computing and storage services get funneled to the cloud. Increasingly, applications running on IoT and mobile devices connected to the network are becoming more intelligent and compute-intensive. However, with so many resources being channeled to the cloud, there is strain on today’s networks.

Instead of following a conventional centralized cloud model, next generation architecture will require a much greater proportion of its intelligence to be distributed throughout the network infrastructure. High performance computing hardware (accompanied by the relevant software) will need to be located at the edge of the network. A distributed model of operation should provide the compute and security functionality required for edge devices, enable compelling real-time services and overcome inherent latency issues for applications like automotive, virtual reality and industrial computing. With these applications, analytics of high resolution video and audio content is also needed.

Through use of its high performance ARMADA® embedded processors, Marvell is able to demonstrate a highly effective solution that will facilitate edge computing implementation on the Marvell MACCHIATObin™ community board using the ARMADA 8040 system on chip (SoC). At CES® 2018, Marvell and Pixeom teams will be demonstrating a fully effective, but not costly, edge computing system using the Marvell MACCHIATObin community board in conjunction with the Pixeom Edge Platform to extend functionality of Google Cloud Platform™ services at the edge of the network. The Marvell MACCHIATObin community board will run Pixeom Edge Platform software that is able to extend the cloud capabilities by orchestrating and running Docker container-based micro-services on the Marvell MACCHIATObin community board.

Currently, the transmission of data-heavy, high resolution video content to the cloud for analysis purposes places a lot of strain on network infrastructure, proving to be both resource-intensive and also expensive. Using Marvell’s MACCHIATObin hardware as a basis, Pixeom will demonstrate its container-based edge computing solution which provides video analytics capabilities at the network edge. This unique combination of hardware and software provides a highly optimized and straightforward way to enable more processing and storage resources to be situated at the edge of the network. The technology can significantly increase operational efficiency levels and reduce latency.

The Marvell and Pixeom demonstration deploys Google TensorFlow™ micro-services at the network edge to enable a variety of different key functions, including object detection, facial recognition, text reading (for name badges, license plates, etc.) and intelligent notifications (for security/safety alerts). This technology encompasses the full scope of potential applications, covering everything from video surveillance and autonomous vehicles, right through to smart retail and artificial intelligence. Pixeom offers a complete edge computing solution, enabling cloud service providers to package, deploy, and orchestrate containerized applications at scale, running on premise “Edge IoT Cores.” To accelerate development, Cores come with built-in machine learning, FaaS, data processing, messaging, API management, analytics, offloading capabilities to Google Cloud, and more.

The MACCHIATObin community board uses Marvell’s ARMADA 8040 processor, which features a quad-core 64-bit ARMv8 CPU (running at up to 2.0GHz), and supports up to 16GB of DDR4 memory and a wide array of different I/Os. Through use of Linux® on the Marvell MACCHIATObin board, the multifaceted Pixeom Edge IoT platform can facilitate implementation of edge computing servers (or cloudlets) at the periphery of the cloud network. Marvell will be able to show the power of this popular hardware platform to run advanced machine learning, data processing, and IoT functions as part of Pixeom’s demo. The role-based access features of the Pixeom Edge IoT platform also mean that developers situated in different locations can collaborate with one another in order to create compelling edge computing implementations. Pixeom supplies all the edge computing support needed to allow users of Marvell embedded processors to establish their own edge-based applications, thus offloading operations from the center of the network.

Marvell will also be demonstrating the compatibility of its technology with the Google Cloud platform, which enables the management and analysis of deployed edge computing resources at scale. Here, once again the MACCHIATObin board provides the hardware foundation needed by engineers, supplying them with all the processing, memory and connectivity required.

Those visiting Marvell’s suite at CES (Venetian, Level 3 – Murano 3304, 9th-12th January 2018, Las Vegas) will be able to see a series of different demonstrations of the MACCHIATObin community board running cloud workloads at the network edge. Make sure you come by!

November 6th, 2017

The USR-Alliance – Enabling an Open Multi-Chip Module (MCM) Ecosystem

By Gidi Navon, System Architect, Marvell

The semiconductor industry is witnessing exponential growth and rapid changes to its bandwidth requirements, as well as increasing design complexity, the emergence of new processes and the integration of multi-disciplinary technologies. All this is happening against a backdrop of shorter development cycles and fierce competition. Other technology-driven industry sectors, such as software and hardware, are addressing similar challenges by creating open alliances and open standards. This blog does not attempt to list all the open alliances that now exist; the Open Compute Project, Open Data Path and the Linux Foundation are just a few of the most prominent examples. One technological area that still hasn’t embraced such open collaboration is the Multi-Chip Module (MCM), where multiple semiconductor dies are packaged together, thereby creating a combined system in a single package.

The MCM concept has been around for a while, generating multiple technological and market benefits, including:

  • Improved yield – Instead of creating large monolithic dies with low yield and higher cost (which sometimes cannot even be fabricated), splitting the silicon into multiple dies can significantly improve the yield of each building block and of the combined solution. Better yield consequently translates into reduced costs.
  • Optimized process – The final MCM product is a mix-and-match of units in different fabrication processes, which enables optimization of the process selection for specific IP blocks with similar characteristics.
  • Multiple fabrication plants – Different fabs, each with its own unique capabilities, can be utilized to create a given product.
  • Product variety – New products are easily created by combining different numbers and types of devices to form innovative and cost‑optimized MCMs.
  • Short product cycle time – Dies can be upgraded independently, which promotes ease in the addition of new product capabilities and/or the ability to correct any issues within a given die. For example, integrating a new type of I/O interface can be achieved without having to re-spin other parts of the solution that are stable and don’t require any change (thus avoiding waste of time and money).
  • Economy of scale – Each die can be reused in multiple applications and products, increasing its volume and yield as well as the overall return on the initial investment made in its development.

Sub-dividing large semiconductor devices and mounting the resulting dies on an MCM has, in effect, made the MCM the new printed circuit board (PCB): providing a smaller footprint, lower power, higher performance and expanded functionality.

Now, imagine that the benefits listed above are not confined to a single chip vendor, but instead are shared across the industry as a whole. By opening and standardizing the interface between dies, it is possible to introduce a true open platform, wherein design teams in different companies, each specializing in different technological areas, are able to create a variety of new products beyond the scope of any single company in isolation.

This is where the USR Alliance comes into action. The alliance has defined an Ultra Short Reach (USR) link, optimized for communication across the very short distances between the components contained in a single package. This link provides high bandwidth with less power and smaller die size than existing very short reach (VSR) PHYs which cross package boundaries and connectors and need to deal with challenges that simply don’t exist inside a package. The USR PHY is based on a multi-wire differential signaling technique optimized for MCM environments.

There are many applications in which the USR link can be implemented. Examples include CPUs, switches and routers, FPGAs, DSPs, analog components and a variety of long reach electrical and optical interfaces.

Figure 1: Example of a possible MCM layout

Marvell is an active promoter member of the USR Alliance and is working to create an ecosystem of interoperable components, interconnects, protocols and software that will help the semiconductor industry bring more value to the market.  The alliance is working on creating PHY, MAC and software standards and interoperability agreements in collaboration with the industry and other standards development organizations, and is promoting the development of a full ecosystem around USR applications (including certification programs) to ensure widespread interoperability.

To learn more about the USR Alliance visit: www.usr-alliance.org