Archive for the ‘Storage’ Category

November 12th, 2020

Flash Memory Summit Names Marvell a 2020 Best of Show Award Winner

By Lindsey Moore, Marketing Coordinator, Marvell

Marvell wins FMS Award for Most Innovative Technology

Flash Memory Summit, the industry’s largest trade show dedicated to flash memory and solid-state storage technology, presented its 2020 Best of Show Awards yesterday in a virtual ceremony. Marvell, alongside Hewlett Packard Enterprise (HPE), was named a winner for “Most Innovative Flash Memory Technology” in the controller/system category for the Marvell NVMe RAID accelerator in the HPE OS Boot Device.

Last month, Marvell introduced the industry’s first native NVMe RAID 1 accelerator, a state-of-the-art technology for virtualized, multi-tenant cloud and enterprise data center environments which demand optimized reliability, efficiency, and performance. HPE is the first of Marvell’s partners to support the new accelerator in the HPE NS204i-p NVMe OS Boot Device offered on select HPE ProLiant servers and HPE Apollo systems. The solution lowers data center total cost of ownership (TCO) by offloading RAID 1 processing from costly and precious server CPU resources, maximizing application processing performance.
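As a refresher on what RAID 1 actually does (this is a generic illustration, not Marvell's hardware implementation), the minimal Python sketch below models the mirroring behavior the accelerator performs in silicon: every write is duplicated to both M.2 SSDs and a read can be served from either copy, so the OS boot volume survives the loss of one drive.

```python
# Minimal, illustrative model of RAID 1 mirroring (hypothetical; the actual
# accelerator does this in hardware, off the host CPU).
class Raid1Volume:
    def __init__(self, drive_a: dict, drive_b: dict):
        # Each "drive" is modeled as a simple block-address -> data mapping.
        self.members = (drive_a, drive_b)

    def write(self, lba: int, data: bytes) -> None:
        # A write is mirrored to both members before it is acknowledged.
        for member in self.members:
            member[lba] = data

    def read(self, lba: int) -> bytes:
        # A read can be served by either member; if one drive fails,
        # the surviving copy keeps the boot volume intact.
        for member in self.members:
            if lba in member:
                return member[lba]
        raise KeyError(f"LBA {lba} not written")

ssd0, ssd1 = {}, {}
volume = Raid1Volume(ssd0, ssd1)
volume.write(0, b"boot sector")
assert ssd0[0] == ssd1[0] == b"boot sector"   # both copies exist
print(volume.read(0))
```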

“Enterprise IT environments need to protect the integrity of flash data storage while delivering an optimized, application-level user experience,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to recognize the Marvell NVMe RAID Accelerator for efficiently offloading RAID 1 processing and directly connecting to two NVMe M.2 SSDs allowing the HPE OS boot solution to consume a single PCIe slot.”

More information about the 15th Annual Flash Memory Summit Best of Show Award Winners can be found here.

October 27th, 2020

Unleashing a Better Gaming Experience with NVMe RAID

By Shahar Noy, Senior Director, Product Marketing

You are an avid gamer. You spend countless hours in forums deciding between ASUS TUF components and researching the Radeon RX 500 or GeForce RTX 20 series, to ensure games show at their best on your hard-earned PC gaming rig. You made your selection and can’t stop bragging about your system’s ray tracing capabilities and how realistic the “Forza Motorsport 7” view from your McLaren F1 GT cockpit looks when you drive through the legendary Le Mans circuit at dusk. You are very proud of your machine, and the year 2020 is turning out to be good: Microsoft finally launched the gorgeous-looking “Flight Simulator 2020,” and CD Projekt just announced that the beloved and award-winning “The Witcher 3” is about to get an upgrade to take advantage of the myriad of hardware updates available to serious gamers like you. You have your dream system in hand, and life couldn’t be better.

You are frustrated though, because something doesn’t feel right. Booting your PC takes forever, and on top of this, loading new, texture-rich games takes a very long time; worst of all, some games have a prolonged “no action” period between scenes. You are puzzled about the latter, but you just learned that game developers use many tricks to mask slow scene (“asset”) load times. In advanced games, when moving between two rich environments whose textures and models fill memory, developers add a long staircase, an elevator ride or a winding corridor to buy enough time (sometimes up to 30 seconds!) to ditch the old assets and load the new ones. You start realizing that upgrading your HDD or SSD might be all it takes to get your system to the next level and provide the ultimate gaming experience. A faster SSD that can match the CPU and GPU capabilities of your system will ensure storage is no longer the bottleneck, enabling improved system boot time, faster game loading, and quicker world and scene changes to keep the experience active and engaging.

You consult with your friends, and they show off their polished new Ryzen 9 builds, but you are a diehard Intel fan and won’t even consider giving up your i5. You are worried because they have a new platform with PCIe Gen4 and you are a generation behind with your Gen3 system. Your best buddy, who just got an offer to join a pro Overwatch team gunning for next year’s OWL Grand Finals, urges you to invest in a new system with Gen4 interfaces to boost your storage performance, but you want to enjoy your existing rig for longer and your budget is limited. We know 2020 has been tough on many of us, but every cloud has a silver lining and, in this case, it is Marvell’s collaboration with Western Digital, which has introduced one of the highest-performing SSD solutions to unlock more of your existing system’s potential.

Benefits of NVMe RAID

Marvell and Western Digital have been listening to gamers and collaborated on the introduction of Western Digital’s WD_BLACK™ AN1500 NVMe™ SSD Add-In-Card to provide a better storage solution for gamers who are on PCIe Gen3 platforms (or even PCIe Gen4) and need more performance. By using Marvell’s advanced native hardware NVMe RAID engine, Western Digital can now double the capacity and significantly improve the performance over a single WD_BLACK™ SN750 NVMe™ SSD without making any compromises. Marvell’s NVMe RAID technology was designed to connect two Gen3 SSDs through an extremely low latency engine and combine them into one bigger and faster SSD. The benefit of NVMe RAID is that the throughput of read and write operations to any file is multiplied by the number of drives, since reads and writes are done concurrently. The two SSDs run in parallel and are exposed to the host as a single drive, unlocking higher capacity and throughput capabilities. In some ways you can think of this technology as combining Mario and Luigi into one mega character with greater powers who can easily take down Bowser in “Super Mario 3”!
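To make the “two drives exposed as one” idea concrete, here is a minimal Python sketch of striping under assumed parameters – the 128KB stripe size and the address mapping are illustrative only, not the AN1500’s actual internals. Consecutive stripes of the logical volume alternate between the two member SSDs, which is why a large sequential transfer can be serviced by both drives at once.

```python
STRIPE_SIZE = 128 * 1024   # assumed stripe (chunk) size; illustrative only
NUM_DRIVES = 2             # two member SSDs behind one logical volume

def map_logical_to_member(logical_offset: int):
    """Map a byte offset on the combined volume to (member drive, member offset)."""
    stripe_index = logical_offset // STRIPE_SIZE
    within_stripe = logical_offset % STRIPE_SIZE
    drive = stripe_index % NUM_DRIVES
    member_offset = (stripe_index // NUM_DRIVES) * STRIPE_SIZE + within_stripe
    return drive, member_offset

# A 1MB sequential read touches stripes on both drives, so its bandwidth is
# roughly the sum of what each SSD can deliver on its own.
for offset in range(0, 1024 * 1024, STRIPE_SIZE):
    drive, member_offset = map_logical_to_member(offset)
    print(f"logical offset {offset:>8} -> drive {drive}, member offset {member_offset}")
```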

Here is a quick comparison between the new WD_BLACK AN1500 NVMe SSD Add-In-Card and the WD_BLACK SN750 NVMe SSD at similar capacity:

                          WD_BLACK AN1500     WD_BLACK SN750
  PCIe Interface          Gen3 x8             Gen3 x4
  Capacity                2TB                 2TB
  Performance (Spec)
    Sequential Read       6.5GB/s             3.4GB/s
    Random Write          780,000 IOPS        560,000 IOPS


When you consider what SSD will work best for gaming there are two critical metrics which provide good “real life” performance indicators – the sustained read performance and the mixed read/write performance.

The sustained read will indicate how fast the game can load and how quickly new assets load during a game; the faster it is, the quicker your GPU can process the data and render it to the screen, providing a better overall gaming experience.

[Chart] The WD_BLACK AN1500 (red) peaks above 5GB/s and even beats a Sabrent PCIe Gen4 drive.
Source: https://www.tomshardware.com/reviews/wd-black-an1500-nvme-aic-ssd-review/2

[Chart] A single WD_BLACK SN750 hits 3GB/s with large blocks.
Source: https://www.tomshardware.com/reviews/wd-black-sn750-ssd,5957-2.html

In the chart above, the WD_BLACK AN1500 is 66% faster than the WD_BLACK SN750 at the same overall 2TB SSD capacity. It is amazing to see the WD_BLACK AN1500’s Gen3 performance beat other Gen4 SSDs and close in on Samsung’s latest Gen4 SSD! The WD_BLACK AN1500 performance advantage, coupled with the ever-growing file size of games, can improve game load times by minutes and make new scenes available seconds earlier.

Mixed read/write, on the other hand, is important when your OS or game starts saving fragments of data. The OS constantly updates small fragments of the meta-data in case your system crashes and you need to recover your boot with minimum disruptions. The game itself can autosave the details of your progress and although it requires only small infrequent writes, it is extremely important that those be serviced quickly by the SSD. This enables the SSD to focus back on its main task – sustaining fast read performance in order to transfer big chunks of rich graphics data to the GPU and ensure quick game and scene load times.
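If you want to measure these two metrics on your own drive, one common approach is to script the widely used fio benchmarking tool, as in the rough sketch below. The target path, block sizes and queue depths are illustrative assumptions; fio must be installed separately, and the job should point at a scratch file, never at a drive holding data you care about.

```python
import subprocess

# Hypothetical scratch file on the SSD under test -- adjust before running.
TARGET = "/mnt/test/fio_scratch"

def run_fio(name: str, extra_args: list) -> None:
    """Run one fio job with a common baseline (direct I/O, 60-second run)."""
    cmd = [
        "fio", f"--name={name}", f"--filename={TARGET}",
        "--ioengine=libaio", "--direct=1", "--size=4G",
        "--runtime=60", "--time_based", "--group_reporting",
    ] + extra_args
    subprocess.run(cmd, check=True)

# Sustained sequential read: large blocks, deep queue -- the game/asset
# loading pattern discussed above.
run_fio("seq-read", ["--rw=read", "--bs=1M", "--iodepth=32"])

# Mixed read/write: mostly reads with a trickle of small random writes,
# roughly mimicking OS metadata updates and game autosaves.
run_fio("mixed-rw", ["--rw=randrw", "--rwmixread=90", "--bs=4k", "--iodepth=8"])
```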

[Chart] In the Boot test, the WD_BLACK AN1500 came in first by a wide margin yet again, at 174,143 IOPS with a latency of 183.8µs.
Source: https://www.storagereview.com/review/wd_black-an1500-aic-ssd-review

[Chart] The WD_BLACK SN750 peaked in third place, at 115,170 IOPS with a latency of 282.5µs.
Source: https://www.storagereview.com/review/wd-black-sn750-nvme-ssd-review

VDI Boot is a good indicator of mixed read/write transactions during an enterprise-grade boot process. In this test the SSD is taxed with mixed operations, and the results indicate real-world random I/O (IOPS) and latency measurements. Here the WD_BLACK AN1500’s IOPS were 51% higher than the WD_BLACK SN750’s, and it was the only Gen3 drive to deliver sub-200µs latency, clearing small fragmented writes very quickly, minimizing the SSD bottleneck, and getting back to servicing the intense read requests.

Summary

It is 2020, and it is not all gloomy. Marvell and Western Digital collaborated to provide these superhero-like storage speeds to elevate the gaming experience of your system to a whole new level. With the WD_BLACK AN1500 NVMe SSD Add-In-Card, fully equipped with an integrated heatsink and RGB lighting, you will be able to spend more time playing and less time waiting! Enjoy circling the Eiffel Tower piloting your 787 in “Flight Simulator” with quicker game and map loading times. Immerse yourself in “The Witcher 3”, controlling Geralt of Rivia as he combs more quickly through the open world of the Continent for monsters. Lastly, with your faster machine you can practice your skills more often and rise through the ranks in the Overwatch Open and Contenders Divisions, so the next time you team up with your friends in a 6v6 match, there is no question who is heading into the finals!

August 3rd, 2018

Infrastructure Powerhouse: Marvell and Cavium become one!

By Todd Owens, Technical Marketing Manager, Marvell

Marvell’s acquisition of Cavium closed on July 6th, 2018, and the integration is well under way. Cavium is now a wholly-owned subsidiary of Marvell. Our combined mission as Marvell is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. The combination of the two companies makes for an infrastructure powerhouse, serving a variety of customers in the Cloud/Data Center, Enterprise/Campus, Service Provider, SMB/SOHO, Industrial and Automotive industries.

For our business with HPE, the first thing you need to know is that it is business as usual. The folks you engaged with on the I/O and processor technology we provided to HPE before the acquisition are the same folks you engage with now. Marvell is a leading provider of storage technologies, including ultra-fast read channels, high-performance processors and transceivers, found in the vast majority of hard disk drive (HDD) and solid-state drive (SSD) modules used in HPE ProLiant and HPE Storage products today.

Our industry-leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to provide connectivity for HPE Server and Storage solutions. These products will continue to be the intelligent I/O of choice for HPE, with the performance, flexibility and reliability we are known for.

Marvell’s Portfolio of FastLinQ Ethernet and QLogic Fibre Channel I/O Adapters

We will continue to provide ThunderX2® Arm® processor technology for HPC servers like the HPE Apollo 70 for high-performance compute applications. We will also continue to provide the Ethernet networking technology that is embedded into HPE Servers and Storage today, as well as the Marvell ASIC technology used for the iLO 5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.

iLO 5 for HPE ProLiant Gen10 is deployed on Marvell SoCs

That sounds great, but what’s going to change over time?
The combined company now has a much broader portfolio of technology to help HPE deliver best-in-class solutions at the edge, in the network and in the data center.

Marvell has industry-leading switching technology from 1GbE to 100GbE and beyond. This enables us to deliver connectivity from the IoT edge, to the data center and the cloud. Our Intelligent NIC technology provides compression, encryption and more to enable customers to analyze network traffic faster and more intelligently than ever before. Our security solutions and enhanced SoC and Processor capabilities will help our HPE design-in team collaborate with HPE to innovate next-generation server and storage solutions.

Down the road, you’ll see a shift in our branding and in where you access information as well. While our product-specific brands, like ThunderX2 for Arm processors, QLogic for Fibre Channel and FastLinQ for Ethernet, will remain, many things will transition from Cavium to Marvell. Our web-based resources will start to change, as will our email addresses. For example, you can now access our HPE Microsite at www.marvell.com/hpe. Soon, you’ll be able to contact us at hpesolutions@marvell.com as well. The collateral you leverage today will be updated over time. In fact, this has already started with updates to our HPE-specific Line Card, our HPE Ethernet Quick Reference Guide, our Fibre Channel Quick Reference Guides and our presentation materials. Updates will continue over the next few months.

In summary, we are bigger and better. We are one team that is more focused than ever to help HPE, their partners and customers thrive with world-class technology we can bring to bear. If you want to learn more, engage with us today. Our field contact details are here. We are all excited for this new beginning to make “I/O and Infrastructure Matter!” each and every day.

April 5th, 2018

VMware vSAN ReadyNode Recipes Can Use Substitutions

By Todd Owens, Technical Marketing Manager, Marvell

When you are baking a cake, at times you substitute in different ingredients to make the result better. The same can be done with VMware vSAN ReadyNode configurations or recipes. Some changes to the documented configurations can make the end solution much more flexible and scalable.

VMware allows certain elements within a vSAN ReadyNode bill of materials (BOM) to be substituted. In this VMware blog, the author outlines the server elements in the BOM that can change, including:

  • CPU
  • Memory
  • Caching Tier
  • Capacity Tier
  • NIC
  • Boot Device

However, changes can only be made with devices that are certified as supported by VMware. The list of certified I/O devices can be found in the VMware vSAN Compatibility Guide, and the full portfolio of NICs, FlexFabric Adapters and Converged Network Adapters from HPE and Cavium is supported.

If we zero in on the HPE recipes for vSAN ReadyNode configurations, here are the substitutions you can make for I/O adapters.

Ok, so we know what substitutions we can make in these vSAN storage solutions. What are the benefits to the customer for making this change?

There are several benefits to the HPE/Cavium technology compared to the other adapter offerings.

  • HPE 520/620 Series adapters support Universal RDMA – the ability to support both RoCE and iWARP RDMA protocols with the same adapter.
    • Why Does This Matter? Universal RDMA offers flexibility of choice when low latency is a requirement. RoCE works great if customers have already deployed a lossless Ethernet infrastructure. iWARP is a great choice for greenfield environments, as it works on existing networks, doesn’t require the complexity of lossless Ethernet and thus scales far more easily.
  • Concurrent Network Partitioning (NPAR) and SR-IOV
    • NPAR (Network Partitioning) allows for virtualization of the physical adapter port, while SR-IOV offload moves management of the VM network from the hypervisor (CPU) to the adapter. With HPE/Cavium adapters, these two technologies can work together to optimize the connectivity for virtual server environments and offload the hypervisor (and thus the CPU) from managing VM traffic, while providing full Quality of Service at the same time.
  • Storage Offload
    • Ability to reduce CPU utilization by offering iSCSI or FCoE Storage offload on the adapter itself. The freed-up CPU resources can then be used for other, more critical tasks and applications. This also reduces the need for dedicated storage adapters, connectivity hardware and switches, lowering overall TCO for storage connectivity.
  • Offloads in general – In addition to the RDMA, storage and SR-IOV offloads mentioned above, HPE/Cavium Ethernet adapters also support TCP/IP stateless offloads and DPDK small-packet acceleration offloads. Each of these offloads moves work from the CPU to the adapter, reducing the CPU utilization associated with I/O activity. As mentioned in my previous blog, because these offloads bypass tasks in the OS kernel, they also mitigate any performance issues associated with Spectre/Meltdown vulnerability fixes on x86 systems.
  • Adapter management integration with vCenter – All HPE/Cavium Ethernet adapters are managed by Cavium’s QCC utility, which can be fully integrated into VMware vCenter. This provides a much simpler approach to I/O management in vSAN configurations.

In summary, if you are looking to deploy vSAN ReadyNode, you might want to fit in a substitution or two on the I/O front to take advantage of all the intelligent capabilities available in Ethernet I/O adapters from HPE/Cavium. Sure, the standard ingredients work, but the right substitution will make things more flexible, scalable and deliver an overall better experience for your client.

March 2nd, 2018

Connecting Shared Storage – iSCSI or Fibre Channel

By Todd Owens, Technical Marketing Manager, Marvell

At Cavium, we provide adapters that support a variety of protocols for connecting servers to shared storage including iSCSI, Fibre Channel over Ethernet (FCoE) and native Fibre Channel (FC). One of the questions we get quite often is which protocol is best for connecting servers to shared storage? The answer is, it depends.

We can simplify the answer by eliminating FCoE, as it has proven to be a great protocol for converging the edge of the network (server to top-of-rack switch), but not really effective for multi-hop connectivity, taking servers through a network to shared storage targets. That leaves us with iSCSI and FC.

Typically, people equate iSCSI with lower cost and ease of deployment because it works on the same kind of Ethernet network that servers and clients are already running on. These same folks equate FC as expensive and complex, requiring special adapters, switches and a “SAN Administrator” to make it all work.

This may have been the case in the early days of shared storage, but things have changed as the intelligence and performance of the storage network environment has evolved. What customers need to do is look at the reality of what they need from a shared storage environment and make a decision based on cost, performance and manageability. For this blog, I’m going to focus on these three criteria and compare 10Gb Ethernet (10GbE) with iSCSI hardware offload and 16Gb Fibre Channel (16GFC).

Before we crunch numbers, let me start by saying that shared storage requires a dedicated network, regardless of the protocol. The idea that iSCSI can be run on the same network as the server and client network traffic may be feasible for small or medium environments with just a couple of servers, but for any environment with mission-critical applications or with say four or more servers connecting to a shared storage device, a dedicated storage network is strongly advised to increase reliability and eliminate performance problems related to network issues.

Now that we have that out of the way, let’s start by looking at the cost difference between iSCSI and FC. We have to take into account the costs of the adapters, optics/cables and switch infrastructure. Here’s the list of Hewlett Packard Enterprise (HPE) components I will use in the analysis. All prices are based on published HPE list prices.

Notes:
  1. Optical transceivers are needed at both the adapter and switch ports for 10GbE networks, so the cost per port is two times the transceiver cost.
  2. FC switch pricing includes full-featured management software and licenses.
  3. FC Host Bus Adapters (HBAs) ship with transceivers, so only one additional transceiver is needed for the switch port.

So if we do the math, the cost per port looks like this:

10GbE iSCSI with SFP+ Optics = $437 + $2,734 + $300 = $3,471

10GbE iSCSI with 3-meter Direct Attach Cable (DAC) = $437 + $269 + $300 = $1,006

16GFC with SFP+ Optics = $773 + $405 + $1,400 = $2,578

So iSCSI is the lowest price if DAC cables are used. Note that in my example I chose a 3-meter cable length, but even if you choose shorter or longer cables (HPE supports 0.65- to 7-meter cable lengths), this is still the lowest-cost connection option. Surprisingly, the cost of the 10GbE optics makes the iSCSI solution with optical connections the most expensive configuration. When using fiber optic cables, the 16GFC configuration is lower cost.
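As a quick sanity check, the per-port math above (adapter + optics/cable + switch port, using the HPE list prices quoted earlier) can be reproduced in a few lines:

```python
# Per-port cost = adapter + optics/cable + switch port (USD, HPE list prices above).
options = {
    "10GbE iSCSI with SFP+ optics": 437 + 2734 + 300,
    "10GbE iSCSI with 3m DAC":      437 + 269 + 300,
    "16GFC with SFP+ optics":       773 + 405 + 1400,
}

for name, cost in sorted(options.items(), key=lambda item: item[1]):
    print(f"{name:32} ${cost:,}")
# -> DAC-based iSCSI is cheapest ($1,006), optical 16GFC is in the middle
#    ($2,578), and optical iSCSI is the most expensive ($3,471).
```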

So what are the trade-offs with DAC versus SFP+ options? It really comes down to distance and the number of connections required. DAC cables can only span up to 7 meters or so, which means customers have only limited reach within or across racks. If customers have multiple racks or distance requirements of more than 7 meters, FC becomes the more attractive option from a cost perspective. Also, DAC cables are bulky, and when cabling 10 or more ports, the cable bundles can become unwieldy to deal with.

On the performance side, let’s look at the differences. iSCSI adapters have impressive specifications of 10Gbps bandwidth and 1.5 million IOPS, which offers very good performance. For FC, we have 16Gbps of bandwidth and 1.3 million IOPS. So FC has more bandwidth and iSCSI can deliver slightly more transactions – that is, if you take the specifications at face value. If you dig a little deeper, here are some things we learn:

  • 16GFC delivers full line-rate performance for block storage data transfers. Today’s 10GbE iSCSI runs on the Ethernet protocol with Data Center Bridging (DCB), which makes it a lossless transmission protocol like FC. However, the iSCSI commands are transferred via Transmission Control Protocol (TCP)/IP, which adds significant overhead to the headers of each packet. Because of this inefficiency, the actual bandwidth for iSCSI traffic is usually well below the stated line rate (a rough header-overhead calculation follows Figure 2 below). This gives 16GFC the clear advantage in terms of bandwidth performance.
  • iSCSI provides the best IOPS performance for block sizes below 2K. Figure 1 shows IOPS performance of Cavium iSCSI with hardware offload. Figure 2 shows IOPS performance of Cavium’s QLogic 16GFC adapter and you can see better IOPS performance for 4K and above, when compared to iSCSI.
  • Latency is an order of magnitude lower for FC compared to iSCSI. Latency of Brocade Gen 5 (16Gb) FC switching (using cut-through switch architecture) is in the 700 nanoseconds range and for 10GbE it is in the range of 5 to 50 microseconds. The impact of latency gets compounded with iSCSI should the user implement 10GBASE-T connections in the iSCSI adapters. This adds another significant hit to the latency equation for iSCSI.

Figure 1: Cavium’s iSCSI Hardware Offload IOPS Performance

 

Figure 2: Cavium’s QLogic 16Gb FC IOPS performance
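To illustrate the header-overhead point from the bandwidth bullet above, here is a simplified back-of-the-envelope calculation of iSCSI protocol efficiency at a standard 1500-byte MTU. The header sizes are the usual textbook values, and the result ignores the iSCSI PDU header as well as TCP/IP processing cost, retransmissions and congestion – the factors that push real-world iSCSI throughput further below line rate.

```python
# Back-of-the-envelope iSCSI protocol efficiency at a standard 1500-byte MTU.
MTU            = 1500                 # Ethernet payload per frame (bytes)
ETH_OVERHEAD   = 14 + 4 + 8 + 12      # header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20              # IPv4 + TCP headers inside each frame

wire_bytes = MTU + ETH_OVERHEAD       # bytes actually on the wire per frame
payload    = MTU - IP_TCP_HEADERS     # bytes left for iSCSI data per frame
efficiency = payload / wire_bytes

line_rate_gbps = 10.0
print(f"protocol efficiency : {efficiency:.1%}")                    # ~94.9%
print(f"best-case payload   : {line_rate_gbps * efficiency:.2f} Gb/s "
      f"(~{line_rate_gbps * efficiency / 8:.2f} GB/s)")
```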

If we look at manageability, this is where things have probably changed the most. Keep in mind, Ethernet network management hasn’t really changed much. Network administrators create virtual LANs (vLANs) to separate network traffic and reduce congestion. These network administrators have a variety of tools and processes that allow them to monitor network traffic, run diagnostics and make changes on the fly when congestion starts to impact application performance. The same management approach applies to the iSCSI network and can be done by the same network administrators.

On the FC side, companies like Cavium and HPE have made significant improvements on the software side of things to simplify SAN deployment, orchestration and management. Technologies like fabric-assigned port worldwide name (FA-WWN) from Cavium and Brocade enable the SAN administrator to configure the SAN without having HBAs available, and allow a failed server to be replaced without having to reconfigure the SAN fabric. Cavium and Brocade have also teamed up to improve the FC SAN diagnostics capability of Gen 5 (16Gb) Fibre Channel fabrics by implementing features such as Brocade ClearLink™ diagnostics, Fibre Channel ping (FC ping) and Fibre Channel traceroute (FC traceroute), link cable beacon (LCB) technology and more. HPE’s Smart SAN for HPE 3PAR provides the storage administrator the ability to zone the fabric and map the servers and LUNs to an HPE 3PAR StoreServ array from the HPE 3PAR StoreServ management console.

Another way to look at manageability is the number of administrators on staff. In many enterprise environments there are typically dozens of network administrators; in those same environments, there may be fewer than a handful of SAN administrators. Yes, there are lots of LAN-connected devices that need to be managed and monitored, but far fewer SAN-connected devices. The point is that it doesn’t take an army to manage a SAN with today’s FC management software from vendors like Brocade.

So what is the right answer between FC and iSCSI? Well, it depends. If application performance is the biggest criterion, it’s hard to beat the combination of bandwidth, IOPS and latency of a 16GFC SAN. If compatibility and commonality with existing infrastructure is a critical requirement, 10GbE iSCSI is a good option (assuming the 10GbE infrastructure exists in the first place). If security is a key concern, FC is the best choice – when is the last time you heard of an FC network being hacked into? And if cost is the key criterion, iSCSI with a DAC or 10GBASE-T connection is a good choice, understanding the trade-off in latency and bandwidth performance.

So in very general terms, FC is the best choice for enterprise customers who need high-performance, mission-critical capability, high reliability and scalable shared storage connectivity. For smaller customers who are more cost sensitive, iSCSI is a great alternative. iSCSI is also a good protocol for pre-configured systems like hyper-converged storage solutions, providing simple connectivity to existing infrastructure.

As a wise manager once told me many years ago, “If you start with the customer and work backwards, you can’t go wrong.” So the real answer is understand what the customer needs and design the best solution to meet those needs based on the information above.

January 11th, 2018

Storing the World’s Data

By Marvell, PR Team

Storage is the foundation for a data-centric world, but how tomorrow’s data will be stored is the subject of much debate. What is clear is that data growth will continue to rise significantly. According to a report compiled by IDC titled ‘Data Age 2025’, the amount of data created will grow at an almost exponential rate. This amount is predicted to surpass 163 Zettabytes by the middle of the next decade (almost 8 times what it is today, and nearly 100 times what it was back in 2010). Increasing use of cloud-based services, the widespread roll-out of Internet of Things (IoT) nodes, virtual/augmented reality applications, autonomous vehicles, machine learning and the whole ‘Big Data’ phenomenon will all play a part in the new data-driven era that lies ahead.
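Taking the IDC figures quoted above at face value (roughly 163 ZB by 2025, about an eighth of that today and about a hundredth of it back in 2010), a couple of lines of arithmetic show the implied compound annual growth rate:

```python
# Implied compound annual growth rate from the 'Data Age 2025' figures above
# (values are approximate, derived from "8x today" and "100x 2010" relative
# to the 163 ZB forecast for 2025).
zb_2025 = 163.0
zb_2018 = zb_2025 / 8      # "almost 8 times what it is today"
zb_2010 = zb_2025 / 100    # "nearly 100 times what it was back in 2010"

cagr_recent = (zb_2025 / zb_2018) ** (1 / (2025 - 2018)) - 1
cagr_long   = (zb_2025 / zb_2010) ** (1 / (2025 - 2010)) - 1

print(f"2018-2025 implied growth: {cagr_recent:.0%}/year")   # ~35%/year
print(f"2010-2025 implied growth: {cagr_long:.0%}/year")     # ~36%/year
```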

Further down the line, the building of smart cities will lead to an additional ramp-up in data levels, with highly sophisticated infrastructure being deployed in order to alleviate traffic congestion, make utilities more efficient, and improve the environment, to name a few examples. A very large proportion of the data of the future will need to be accessed in real time. This will have implications for the technology utilized and also for where the stored data is situated within the network. Additionally, there are serious security considerations that need to be factored in, too.

So that data centers and commercial enterprises can keep overhead under control and make operations as efficient as possible, they will look to follow a tiered storage approach, using the most appropriate storage media so as to lower the related costs. Decisions on the media utilized will be based on how frequently the stored data needs to be accessed and the acceptable degree of latency. This will require the use of numerous different technologies to make it fully economically viable – with cost and performance being important factors.

There are now a wide variety of different storage media options out there. In some cases these are long established while in others they are still in the process of emerging. Hard disk drives (HDDs) in certain applications are being replaced by solid state drives (SSDs), and with the migration from SATA to NVMe in the SSD space, NVMe is enabling the full performance capabilities of SSD technology. HDD capacities are continuing to increase substantially and their overall cost effectiveness also adds to their appeal. The immense data storage requirements that are being warranted by the cloud mean that HDD is witnessing considerable traction in this space.

There are other forms of memory on the horizon that will help to address the challenges that increasing storage demands will set. These range from higher capacity 3D stacked flash to completely new technologies, such as phase-change with its rapid write times and extensive operational lifespan. The advent of NVMe over fabrics (NVMf) based interfaces offers the prospect of high bandwidth, ultra-low latency SSD data storage that is at the same time extremely scalable.

Marvell was quick to recognize the ever-growing importance of data storage and has continued to make this sector a major focus moving forward, establishing itself as the industry’s leading supplier of both HDD controllers and merchant SSD controllers.

Within a period of only 18 months after its release, Marvell managed to ship over 50 million of its 88SS1074 SATA SSD controllers with NANDEdge™ error-correction technology. Thanks to its award-winning 88NV11xx series of small form factor DRAM-less SSD controllers (based on a 28nm CMOS semiconductor process), the company is able to offer the market high-performance NVMe memory controller solutions that are optimized for incorporation into compact, streamlined handheld computing equipment, such as tablet PCs and ultra-books. These controllers are capable of supporting read speeds of 1600MB/s while drawing only minimal power from the available battery reserves. Marvell also offers solutions like its 88SS1092 NVMe SSD controller, designed for new compute models that enable the data center to share storage data to further maximize cost and performance efficiencies.

The unprecedented growth in data means that more storage will be required. Emerging applications and innovative technologies will drive new ways of increasing storage capacity, improving latency and ensuring security. Marvell is in a position to offer the industry a wide range of technologies to support data storage requirements, addressing both SSD and HDD implementations and covering all accompanying interface types, from SAS and SATA through to PCIe and NVMe.

Check out www.marvell.com to learn more about how Marvell is storing the world’s data.

December 13th, 2017

The Marvell NVMe DRAM-less SSD Controller Proves Victorious at the 2017 ACE Awards

By Sander Arts, Interim VP of Marketing, Marvell

Key representatives of the global technology sector gathered at the San Jose Convention Center last week to hear the recipients of this year’s Annual Creativity in Electronics (ACE) Awards announced. This prestigious awards event, which is organized in conjunction with the leading electronics engineering magazines EDN and EE Times, highlights the most innovative products announced in the last 12 months, as well as recognizing visionary executives and the most promising new start-ups. A panel made up of the editorial teams of these magazines, plus several highly respected independent judges, was involved in selecting the winner in each category.

The 88NV1160 high-performance controller for non-volatile memory express (NVMe), which was introduced by Marvell earlier this year, fought off tough competition from companies like Diodes Inc. and Cypress Semiconductor to win the coveted Logic/Interface/Memory category. Marvell gained two further nominations at the awards – with the 98PX1012 Prestera PX Passive Intelligent Port Extender (PIPE) also featured in the Logic/Interface/Memory category, while the 88W8987xA automotive wireless combo SoC was among those cited in the Automotive category.

Designed for inclusion in the next generation of streamlined portable computing devices (such as high-end tablets and ultra-books), the 88NV1160 NVMe solid-state drive (SSD) controllers are able to deliver 1600MB/s read speeds while simultaneously keeping the power consumption required for such operations extremely low (<1.2W). Based on a 28nm low power CMOS process, each of these controller ICs has a dual core 400MHz Arm® Cortex®-R5 processor embedded into it.

Through incorporation of a host memory buffer, the 88NV1160 exhibits far lower latency than competing devices. It is this that is responsible for accelerating the read speeds supported. By utilizing its embedded SRAM, the controller does not need to rely on an external DRAM memory – thereby simplifying the memory controller implementation. As a result, there is a significant reduction in the board space required, as well as a lowering of the overall bill-of-materials costs involved.
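For the curious, the host memory buffer capability a DRAM-less controller advertises can be inspected on a Linux system with the nvme-cli utility. The sketch below is a rough illustration – the device path is hypothetical, and root privileges plus nvme-cli are required – that simply parses the HMPRE/HMMIN fields (reported in 4KiB units) from the 'nvme id-ctrl' output.

```python
import re
import subprocess

DEVICE = "/dev/nvme0"   # hypothetical device path; requires root and nvme-cli

def hmb_capability(device: str) -> dict:
    """Return the Host Memory Buffer sizes (bytes) a controller advertises.

    HMPRE (preferred) and HMMIN (minimum) are listed by 'nvme id-ctrl'
    in units of 4 KiB; non-zero values mean the controller can borrow
    host DRAM instead of relying on an on-board DRAM chip.
    """
    out = subprocess.run(["nvme", "id-ctrl", device],
                         capture_output=True, text=True, check=True).stdout
    sizes = {}
    for field in ("hmpre", "hmmin"):
        match = re.search(rf"^{field}\s*:\s*(\d+)", out, re.MULTILINE)
        sizes[field] = int(match.group(1)) * 4096 if match else None
    return sizes

print(hmb_capability(DEVICE))
```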

The 88NV1160’s proprietary NANDEdge™ low-density parity check error-correction functionality raises SSD endurance and ensures that long-term system reliability is upheld throughout the end product’s entire operational lifespan. The controller’s built-in 256-bit AES encryption engine ensures that stored metadata is safeguarded from potential security breaches. Furthermore, these DRAM-less ICs are very compact, enabling them to benefit from multi-chip package integration.

Consumers now expect their portable electronics equipment to possess a lot more computing resources, so that they can access the exciting array of new software apps becoming available: making use of cloud-based services, enjoying augmented reality and gaming. At the same time, such items of equipment need to support longer periods between battery recharges, so as to further enhance the user experience. This calls for advanced ICs combining strong processing capabilities with improved power efficiency, and that is where the 88NV1160 comes in.

“We’re excited to honor this robust group for their dedication to their craft and efforts in bettering the industry for years to come,” said Nina Brown, Vice President of Events at UBM Americas. “The judging panel was given the difficult task of selecting winners from an incredibly talented group of finalists and we’d like to thank all of those participants for their amazing work and also honor their achievements. These awards aim to shine a light on the best in today’s electronics realm and this group is the perfect example of excellence within both an important and complex industry.”