Marvell Blog

Featuring technology ideas and solutions worth sharing

Archive for the ‘Storage’ Category

August 3rd, 2018

Infrastructure Powerhouse: Marvell and Cavium become one!

By Todd Owens, Technical Marketing Manager

Marvell’s acquisition of Cavium closed on July 6th, 2018, and the integration is well under way; Cavium is now a wholly owned subsidiary of Marvell. Our combined mission as Marvell is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. Together, the two companies form an infrastructure powerhouse, serving customers across the Cloud/Data Center, Enterprise/Campus, Service Provider, SMB/SOHO, Industrial and Automotive markets.

For our business with HPE, the first thing you need to know is that it is business as usual. The people you engaged with on the I/O and processor technology we provided to HPE before the acquisition are the same people you engage with now. Marvell is a leading provider of storage technologies, including ultra-fast read channels, high-performance processors and transceivers, which are found in the vast majority of hard disk drive (HDD) and solid-state drive (SSD) modules used in HPE ProLiant and HPE Storage products today.

Our industry-leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to provide connectivity for HPE Server and Storage solutions. These products will continue to be the intelligent I/O of choice for HPE, delivering the performance, flexibility and reliability we are known for.

Marvell’s Portfolio of FastLinQ Ethernet and QLogic Fibre Channel I/O Adapters

We will continue to provide ThunderX2® Arm® processor technology for high-performance computing (HPC) servers such as the HPE Apollo 70. We will also continue to provide the Ethernet networking technology that is embedded into HPE Servers and Storage today, as well as the Marvell ASIC technology used for the iLO 5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.

iLO 5 for HPE ProLiant Gen10 is deployed on Marvell SoCs

That sounds great, but what’s going to change over time?
The combined company now has a much broader portfolio of technology to help HPE deliver best-in-class solutions at the edge, in the network and in the data center.

Marvell has industry-leading switching technology from 1GbE to 100GbE and beyond. This enables us to deliver connectivity from the IoT edge, to the data center and the cloud. Our Intelligent NIC technology provides compression, encryption and more to enable customers to analyze network traffic faster and more intelligently than ever before. Our security solutions and enhanced SoC and Processor capabilities will help our HPE design-in team collaborate with HPE to innovate next-generation server and storage solutions.

Down the road, you’ll also see a shift in our branding and in where you access information. While our product-specific brands, like ThunderX2 for Arm, QLogic for Fibre Channel and FastLinQ for Ethernet, will remain, many things will transition from Cavium to Marvell. Our web-based resources will start to change, as will our email addresses. For example, you can now access our HPE Microsite at www.marvell.com/hpe, and soon you’ll be able to contact us at hpesolutions@marvell.com as well. The collateral you leverage today will be updated over time; in fact, this has already started with updates to our HPE-specific Line Card, our HPE Ethernet Quick Reference Guide, our Fibre Channel Quick Reference Guides and our presentation materials. Updates will continue over the next few months.

In summary, we are bigger and better. We are one team, more focused than ever on helping HPE, its partners and its customers thrive with the world-class technology we bring to bear. If you want to learn more, engage with us today; our field contact details are here. We are all excited for this new beginning and for making “I/O and Infrastructure Matter!” each and every day.

April 5th, 2018

VMware vSAN ReadyNode Recipes Can Use Substitutions

By Todd Owens, Technical Marketing Manager

When you are baking a cake, you sometimes substitute different ingredients to make the result better. The same can be done with VMware vSAN ReadyNode configurations, or recipes. Some changes to the documented configurations can make the end solution much more flexible and scalable.

VMware allows certain elements within a vSAN ReadyNode bill of materials (BOM) to be substituted. In this VMware blog, the author outlines the server elements in the BOM that can change, including:

  • CPU
  • Memory
  • Caching Tier
  • Capacity Tier
  • NIC
  • Boot Device

However, changes can only be made with devices that are certified as supported by VMware. The list of certified I/O devices can be found in the VMware vSAN Compatibility Guide, and the full portfolio of NICs, FlexFabric Adapters and Converged Network Adapters from HPE and Cavium is supported.

If we zero in on the HPE recipes for vSAN ReadyNode configurations, here are the substitutions you can make for I/O adapters.

OK, so we know what substitutions we can make in these vSAN storage solutions. What are the benefits to the customer of making this change?

There are several benefits to the HPE/Cavium technology compared to the other adapter offerings.

  • HPE 520/620 Series adapters support Universal RDMA – the ability to run both RoCE and iWARP RDMA protocols on the same adapter.
    • Why does this matter? Universal RDMA offers flexibility of choice when low latency is a requirement. RoCE works well if customers have already deployed a lossless Ethernet infrastructure. iWARP is a great choice for greenfield environments because it works on existing networks and does not require the complexity of lossless Ethernet, so it scales much more easily.
  • Concurrent Network Partitioning (NPAR) and SR-IOV
    • NPAR (Network Partitioning) allows the physical adapter port to be virtualized, while SR-IOV offload moves management of the VM network from the hypervisor (CPU) to the adapter. With HPE/Cavium adapters, these two technologies work together to optimize connectivity for virtual server environments and offload the hypervisor (and thus the CPU) from managing VM traffic, while providing full Quality of Service at the same time.
  • Storage Offload
    • Ability to reduce CPU utilization by offering iSCSI or FCoE Storage offload on the adapter itself. The freed-up CPU resources can then be used for other, more critical tasks and applications. This also reduces the need for dedicated storage adapters, connectivity hardware and switches, lowering overall TCO for storage connectivity.
  • Offloads in general – In addition to the RDMA, storage and SR-IOV offloads mentioned above, HPE/Cavium Ethernet adapters also support TCP/IP stateless offloads and DPDK small-packet acceleration. Each of these offloads moves work from the CPU to the adapter, reducing the CPU utilization associated with I/O activity. As mentioned in my previous blog, because these offloads bypass tasks in the OS kernel, they also mitigate the performance impact of the Spectre/Meltdown vulnerability fixes on x86 systems.
  • Adapter management integration with vCenter – All HPE/Cavium Ethernet adapters are managed by Cavium’s QCC utility, which can be fully integrated into VMware vCenter. This provides a much simpler approach to I/O management in vSAN configurations.

In summary, if you are looking to deploy vSAN ReadyNode, you might want to fit in a substitution or two on the I/O front to take advantage of all the intelligent capabilities available in Ethernet I/O adapters from HPE/Cavium. Sure, the standard ingredients work, but the right substitution will make things more flexible, scalable and deliver an overall better experience for your client.

March 2nd, 2018

Connecting Shared Storage – iSCSI or Fibre Channel

By Todd Owens, Technical Marketing Manager

At Cavium, we provide adapters that support a variety of protocols for connecting servers to shared storage including iSCSI, Fibre Channel over Ethernet (FCoE) and native Fibre Channel (FC). One of the questions we get quite often is which protocol is best for connecting servers to shared storage? The answer is, it depends.

We can simplify the answer by eliminating FCoE, as it has proven to be a great protocol for converging the edge of the network (server to top-of-rack switch), but not really effective for multi-hop connectivity, taking servers through a network to shared storage targets. That leaves us with iSCSI and FC.

Typically, people equate iSCSI with lower cost and ease of deployment because it works on the same kind of Ethernet network that servers and clients are already running on. These same folks equate FC as expensive and complex, requiring special adapters, switches and a “SAN Administrator” to make it all work.

This may have been the case in the early days of shared storage, but things have changed as the intelligence and performance of the storage network environment has evolved. What customers need to do is look at the reality of what they need from a shared storage environment and make a decision based on cost, performance and manageability. For this blog, I’m going to focus on these three criteria and compare 10Gb Ethernet (10GbE) with iSCSI hardware offload and 16Gb Fibre Channel (16GFC).

Before we crunch numbers, let me start by saying that shared storage requires a dedicated network, regardless of the protocol. The idea that iSCSI can run on the same network as server and client traffic may be feasible for small or medium environments with just a couple of servers, but for any environment with mission-critical applications, or with, say, four or more servers connecting to a shared storage device, a dedicated storage network is strongly advised to increase reliability and eliminate performance problems related to network issues.

Now that we have that out of the way, let’s start by looking at the cost difference between iSCSI and FC. We have to take into account the costs of the adapters, optics/cables and switch infrastructure. Here’s the list of Hewlett Packard Enterprise (HPE) components I will use in the analysis. All prices are based on published HPE list prices.

Notes:
  1. Optical transceivers are needed at both the adapter and switch ports for 10GbE networks, so the cost per port is two times the transceiver cost.
  2. FC switch pricing includes full-featured management software and licenses.
  3. FC Host Bus Adapters (HBAs) ship with transceivers, so only one additional transceiver is needed for the switch port.

So if we do the math, the cost per port looks like this:

10GbE iSCSI with SFP+ Optics = $437 + $2,734 + $300 = $3,471

10GbE iSCSI with 3-meter Direct Attach Cable (DAC) = $437 + $269 + $300 = $1,006

16GFC with SFP+ Optics = $773 + $405 + $1,400 = $2,578

So iSCSI is the lowest price if DAC cables are used. Note that in my example I chose a 3-meter cable, but even if you choose shorter or longer cables (HPE supports lengths from 0.65 to 7 meters), this is still the lowest-cost connection option. Surprisingly, the cost of the 10GbE optics makes the iSCSI solution with optical connections the most expensive configuration; when using fiber optic cables, the 16GFC configuration costs less.
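For readers who want to rerun this comparison with their own pricing, here is a minimal Python sketch of the per-port cost arithmetic above. The component prices are the HPE list prices used in this post, and the $1,367 per-transceiver figure is simply half of the $2,734 optics line item per note 1; treat all of the numbers as illustrative placeholders rather than current pricing.

```python
# Per-port connection cost sketch using the HPE list prices quoted above.
# All prices are illustrative; substitute current list prices as needed.

def port_cost(adapter, cabling, switch_port):
    """Total cost to connect one server port to one switch port."""
    return adapter + cabling + switch_port

options = {
    # 10GbE optics: a transceiver is needed at BOTH the adapter and the
    # switch end (note 1), so the cabling cost is 2 x $1,367.
    "10GbE iSCSI, SFP+ optics": port_cost(437, 2 * 1367, 300),
    # A DAC cable is a single assembly with no separate transceivers.
    "10GbE iSCSI, 3m DAC":      port_cost(437, 269, 300),
    # FC HBAs ship with transceivers, so only the switch-side optic is added.
    "16GFC, SFP+ optics":       port_cost(773, 405, 1400),
}

for name, cost in options.items():
    print(f"{name}: ${cost:,} per port")
```

Swapping in updated list prices, different cable choices or switch models immediately shows how the crossover between DAC-based iSCSI and 16GFC shifts.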

So what are the trade-offs between DAC and SFP+ optics? It really comes down to distance and the number of connections required. DAC cables can only span about 7 meters, which gives customers limited reach within or across racks. If customers have multiple racks or distance requirements beyond 7 meters, FC becomes the more attractive option from a cost perspective. DAC cables are also bulky, and when cabling ten or more ports the cable bundles can become unwieldy to deal with.

On the performance side, let’s look at the differences. iSCSI adapters have impressive specifications of 10Gbps bandwidth and 1.5 million IOPS, which offers very good performance. For FC, we have 16Gbps of bandwidth and 1.3 million IOPS. So FC has more bandwidth and iSCSI can deliver slightly more transactions – that is, if you take the specifications at face value. If you dig a little deeper, here are some things we learn:

  • 16GFC delivers full line-rate performance for block storage data transfers. Today’s 10GbE iSCSI runs over Ethernet with Data Center Bridging (DCB), which makes it a lossless transmission protocol like FC. However, the iSCSI commands are carried over TCP/IP, which adds significant header overhead to each packet. Because of this inefficiency, the actual bandwidth for iSCSI traffic is usually well below the stated line rate (see the sketch after this list), giving 16GFC the clear advantage in bandwidth performance.
  • iSCSI provides the best IOPS performance for block sizes below 2K. Figure 1 shows the IOPS performance of Cavium iSCSI with hardware offload. Figure 2 shows the IOPS performance of Cavium’s QLogic 16GFC adapter, which delivers better IOPS performance for 4K blocks and above when compared to iSCSI.
  • Latency is an order of magnitude lower for FC than for iSCSI. The latency of Brocade Gen 5 (16Gb) FC switching (using a cut-through switch architecture) is in the 700-nanosecond range, while 10GbE switching latency is in the range of 5 to 50 microseconds. The latency penalty is compounded if the user implements 10GBASE-T connections on the iSCSI adapters, which adds another significant hit to the latency equation for iSCSI.
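To see why iSCSI’s usable bandwidth trails its line rate, here is a rough, back-of-the-envelope Python sketch of per-frame protocol overhead for iSCSI on standard 1500-byte Ethernet frames, as referenced above. The header sizes are typical values (Ethernet framing plus preamble and inter-frame gap, IP, TCP and the 48-byte iSCSI basic header), and charging one iSCSI header to every frame is a simplifying assumption for illustration, not a measurement.

```python
# Back-of-the-envelope iSCSI header-overhead estimate (illustrative only).
# Assumes standard 1500-byte frames and, pessimistically, one iSCSI basic
# header per frame; real PDUs can span several frames, and real networks
# add TCP ACKs, retransmissions and host CPU overhead on top of this.

LINE_RATE_GBPS = 10.0        # 10GbE line rate
MTU            = 1500        # Ethernet payload per frame (bytes)
WIRE_OVERHEAD  = 38          # preamble+SFD 8, Ethernet header 14, FCS 4, IFG 12
IP_HDR, TCP_HDR, ISCSI_BHS = 20, 20, 48

payload_per_frame = MTU - IP_HDR - TCP_HDR - ISCSI_BHS   # 1412 bytes of data
bytes_on_wire     = MTU + WIRE_OVERHEAD                  # 1538 bytes per frame
efficiency        = payload_per_frame / bytes_on_wire

print(f"Header efficiency: {efficiency:.1%}")            # ~91.8%
print(f"Usable throughput: {LINE_RATE_GBPS * efficiency:.2f} Gbps "
      f"(~{LINE_RATE_GBPS * efficiency * 1000 / 8:.0f} MB/s) before TCP/CPU effects")
```

Even this optimistic view gives up roughly 8% of the line rate to headers alone; acknowledgements, retransmissions and host-side processing push real iSCSI throughput down further, whereas 16GFC leaves essentially all of its roughly 1,600 MB/s data rate available for block traffic.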

Figure 1: Cavium’s iSCSI Hardware Offload IOPS Performance

Figure 2: Cavium’s QLogic 16Gb FC IOPS performance

If we look at manageability, this is where things have probably changed the most. Keep in mind, Ethernet network management hasn’t really changed much. Network administrators create virtual LANs (vLANs) to separate network traffic and reduce congestion. These network administrators have a variety of tools and processes that allow them to monitor network traffic, run diagnostics and make changes on the fly when congestion starts to impact application performance. The same management approach applies to the iSCSI network and can be done by the same network administrators.

On the FC side, companies like Cavium and HPE have made significant improvements on the software side of things to simplify SAN deployment, orchestration and management. Technologies like fabric-assigned port worldwide name (FA-WWN) from Cavium and Brocade enable the SAN administrator to configure the SAN without having HBAs available, and allow a failed server to be replaced without having to reconfigure the SAN fabric. Cavium and Brocade have also teamed up to improve FC SAN diagnostics capability in Gen 5 (16Gb) Fibre Channel fabrics by implementing features such as Brocade ClearLink™ diagnostics, Fibre Channel ping (FC ping), Fibre Channel traceroute (FC traceroute), link cable beacon (LCB) technology and more. HPE’s Smart SAN for HPE 3PAR gives the storage administrator the ability to zone the fabric and map servers and LUNs to an HPE 3PAR StoreServ array from the HPE 3PAR StoreServ management console.

Another way to look at manageability is the number of administrators on staff. In many enterprise environments there are typically dozens of network administrators, while there may be less than a handful of SAN administrators. Yes, there are far more LAN-connected devices to manage and monitor than SAN-connected devices, but the point is that it doesn’t take an army to manage a SAN with today’s FC management software from vendors like Brocade.

So what is the right answer between FC and iSCSI? Well, it depends. If application performance is the biggest criterion, it’s hard to beat the combination of bandwidth, IOPS and latency of a 16GFC SAN. If compatibility and commonality with existing infrastructure is a critical requirement, 10GbE iSCSI is a good option (assuming the 10GbE infrastructure exists in the first place). If security is a key concern, FC is the best choice – when is the last time you heard of an FC network being hacked into? And if cost is the key criterion, iSCSI with a DAC or 10GBASE-T connection is a good choice, understanding the trade-off in latency and bandwidth performance.

So in very general terms, FC is the best choice for enterprise customers who need high-performance, mission-critical capability, high reliability and scalable shared storage connectivity. For smaller customers who are more cost sensitive, iSCSI is a great alternative. iSCSI is also a good protocol for pre-configured systems, like hyper-converged storage solutions, that need simple connectivity to existing infrastructure.

As a wise manager once told me many years ago, “If you start with the customer and work backwards, you can’t go wrong.” So the real answer is understand what the customer needs and design the best solution to meet those needs based on the information above.

January 11th, 2018

Storing the World’s Data

By Marvell, PR Team

Storage is the foundation for a data-centric world, but how tomorrow’s data will be stored is the subject of much debate. What is clear is that data growth will continue to rise significantly. According to a report compiled by IDC titled ‘Data Age 2025’, the amount of data created will grow at an almost exponential rate. This amount is predicted to surpass 163 Zettabytes by the middle of the next decade – almost 8 times what it is today, and nearly 100 times what it was back in 2010. Increasing use of cloud-based services, the widespread roll-out of Internet of Things (IoT) nodes, virtual/augmented reality applications, autonomous vehicles, machine learning and the whole ‘Big Data’ phenomenon will all play a part in the new data-driven era that lies ahead.

Further down the line, the building of smart cities will lead to an additional ramp up in data levels, with highly sophisticated infrastructure being deployed in order to alleviate traffic congestion, make utilities more efficient, and improve the environment, to name a few. A very large proportion of the data of the future will need to be accessed in real-time. This will have implications on the technology utilized and also where the stored data is situated within the network. Additionally, there are serious security considerations that need to be factored in, too.

So that data centers and commercial enterprises can keep overhead under control and make operations as efficient as possible, they will look to follow a tiered storage approach, using the most appropriate storage media so as to lower the related costs. Decisions on the media utilized will be based on how frequently the stored data needs to be accessed and the acceptable degree of latency. This will require the use of numerous different technologies to make it fully economically viable – with cost and performance being important factors.

There are now a wide variety of storage media options out there, some long established and others still emerging. Hard disk drives (HDDs) in certain applications are being replaced by solid-state drives (SSDs), and with the migration from SATA to NVMe in the SSD space, NVMe is enabling the full performance capabilities of SSD technology. HDD capacities continue to increase substantially, and their overall cost effectiveness adds to their appeal. The immense data storage requirements driven by the cloud mean that HDDs are seeing considerable traction in this space.

There are other forms of memory on the horizon that will help to address the challenges that increasing storage demands will set. These range from higher-capacity 3D stacked flash to completely new technologies such as phase-change memory, with its rapid write times and extensive operational lifespan. The advent of NVMe over fabrics (NVMf) based interfaces offers the prospect of high-bandwidth, ultra-low-latency SSD data storage that is at the same time extremely scalable.

Marvell was quick to recognize the ever-growing importance of data storage, has continued to make this sector a major focus, and has established itself as the industry’s leading supplier of both HDD controllers and merchant SSD controllers.

Within a period of only 18 months after its release, Marvell shipped over 50 million of its 88SS1074 SATA SSD controllers with NANDEdge™ error-correction technology. With its award-winning 88NV11xx series of small form factor DRAM-less SSD controllers (based on a 28nm CMOS semiconductor process), the company offers the market high-performance NVMe memory controller solutions optimized for incorporation into compact, streamlined handheld computing equipment, such as tablet PCs and ultra-books. These controllers are capable of supporting read speeds of 1600MB/s while drawing only minimal power from the available battery reserves. Marvell also offers solutions like its 88SS1092 NVMe SSD controller, designed for new compute models that enable the data center to share storage data to further maximize cost and performance efficiencies.

The unprecedented growth in data means that more storage will be required. Emerging applications and innovative technologies will drive new ways of increasing storage capacity, improving latency and ensuring security. Marvell is in a position to offer the industry a wide range of technologies to support data storage requirements, addressing both SSD and HDD implementations and covering all accompanying interface types, from SAS and SATA through to PCIe and NVMe.

Check out www.marvell.com to learn more about how Marvell is storing the world’s data.

December 13th, 2017

The Marvell NVMe DRAM-less SSD Controller Proves Victorious at the 2017 ACE Awards

By Sander Arts, Interim VP of Marketing

Key representatives of the global technology sector gathered at the San Jose Convention Center last week to hear the recipients of this year’s Annual Creativity in Electronics (ACE) Awards announced. This prestigious awards event, which is organized in conjunction with leading electronics engineering magazines EDN and EE Times, highlights the most innovative products announced in the last 12 months, as well as recognizing visionary executives and the most promising new start-ups. A panel made up of the editorial teams of these magazines, plus several highly respected independent judges, selected the winner in each category.

The 88NV1160 high performance controller for non-volatile memory express (NVMe), which was introduced by Marvell earlier this year, fought off tough competition from companies like Diodes Inc. and Cypress Semiconductor to win the coveted Logic/Interface/Memory category. Marvell gained two further nominations at the awards – the 98PX1012 Prestera PX Passive Intelligent Port Extender (PIPE) was also featured in the Logic/Interface/Memory category, while the 88W8987xA automotive wireless combo SoC was among those cited in the Automotive category.

Designed for inclusion in the next generation of streamlined portable computing devices (such as high-end tablets and ultra-books), the 88NV1160 NVMe solid-state drive (SSD) controller is able to deliver 1600MB/s read speeds while keeping the power consumption required for such operations extremely low (<1.2W). Based on a 28nm low-power CMOS process, each of these controller ICs has a dual-core 400MHz Arm® Cortex®-R5 processor embedded into it.

Through incorporation of a host memory buffer, the 88NV1160 exhibits far lower latency than competing devices. It is this that is responsible for accelerating the read speeds supported. By utilizing its embedded SRAM, the controller does not need to rely on an external DRAM memory – thereby simplifying the memory controller implementation. As a result, there is a significant reduction in the board space required, as well as a lowering of the overall bill-of-materials costs involved.

The 88NV1160’s proprietary NANDEdge™ low-density parity check error-correction functionality raises SSD endurance and ensures that long-term system reliability is upheld throughout the end product’s entire operational lifespan. The controller’s built-in 256-bit AES encryption engine ensures that stored metadata is safeguarded from potential security breaches. Furthermore, these DRAM-less ICs are very compact, enabling them to benefit from multi-chip package integration.

Consumers now expect their portable electronics equipment to possess far more computing resources, so that they can run the exciting array of new software apps now becoming available, make use of cloud-based services, and enjoy augmented reality and gaming. At the same time, such equipment needs to support longer periods between battery recharges to further enhance the user experience. This calls for advanced ICs combining strong processing capabilities with improved power efficiency, and that is where the 88NV1160 comes in.

“We’re excited to honor this robust group for their dedication to their craft and efforts in bettering the industry for years to come,” said Nina Brown, Vice President of Events at UBM Americas. “The judging panel was given the difficult task of selecting winners from an incredibly talented group of finalists and we’d like to thank all of those participants for their amazing work and also honor their achievements. These awards aim to shine a light on the best in today’s electronics realm and this group is the perfect example of excellence within both an important and complex industry.”

October 17th, 2017

Unleashing the Potential of Flash Storage with NVMe

By Jeroen Dorgelo, Director of Strategy, Marvell Storage Group

The dirty little secret of flash drives today is that many of them are running on yesterday’s interfaces. While SATA and SAS have undergone several iterations since they were first introduced, they are still based on decades-old concepts and were initially designed with rotating disks in mind. These legacy protocols are bottlenecking the potential speeds possible from today’s SSDs.

NVMe is the latest storage interface standard designed specifically for SSDs. With its massively parallel architecture, it enables the full performance capabilities of today’s SSDs to be realized. Because of price and compatibility, NVMe has taken a while to see uptake, but now it is finally coming into its own.

Serial Attached Legacy

Currently, SATA is the most common storage interface. Whether a hard drive or increasingly common flash storage, chances are it is running through a SATA interface. The latest generation of SATA – SATA III – has a 600 MB/s bandwidth limit. While this is adequate for day-to-day consumer applications, it is not enough for enterprise servers. Even I/O intensive consumer use cases, such as video editing, can run into this limit.

The SATA standard was originally released in 2000 as a serial-based successor to the older parallel PATA standard. SATA uses the Advanced Host Controller Interface (AHCI), which has a single command queue with a depth of 32 commands. This command queuing architecture is well suited to conventional rotating disk storage, but is more limiting when used with flash.

Whereas SATA is the standard storage interface for consumer drives, SAS is much more common in the enterprise world. Released originally in 2004, SAS is also a serial replacement for an older parallel standard, SCSI. Designed for enterprise applications, SAS storage is usually more expensive to implement than SATA, but it has significant advantages for data center use – such as longer cable lengths, multipath IO and better error reporting. SAS also has a higher bandwidth limit of 1200MB/s.

Just like SATA, SAS has a single command queue, although its queue depth goes to 254 commands instead of 32. While the larger command queue and higher bandwidth limit make it perform better than SATA, SAS is still far from being the ideal flash interface.

NVMe – Massive Parallelism

Introduced in 2011, NVMe was designed from the ground up to address the needs of flash storage. Developed by a consortium of storage companies, its key objective is specifically to overcome the bottlenecks on flash performance imposed by SATA and SAS.

Whereas SATA is restricted to 600MB/s and SAS to 1200MB/s (as mentioned above), NVMe runs over the PCIe bus and its bandwidth is theoretically limited only by the PCIe bus speed. With current PCIe standards providing 1GB/s or more per lane, and PCIe connections generally offering multiple lanes, bus speed almost never represents a bottleneck for NVMe-based SSDs.

NVMe is designed to deliver massive parallelism, offering 64,000 command queues, each with a queue depth of 64,000 commands. This parallelism fits in well with the random access nature of flash storage, as well as the multi-core, multi-threaded processors in today’s computers. NVMe’s protocol is streamlined, with an optimized command set that does more in fewer operations compared to AHCI. IO operations often need fewer commands than with SATA or SAS, allowing latency to be reduced. For enterprise customers, NVMe also supports many enterprise storage features, such as multi-path IO and robust error reporting and management.
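As a quick way to see the scale of the difference, the short Python sketch below tabulates the queue and bandwidth figures quoted in this post. The PCIe row assumes an illustrative PCIe 3.0 x4 NVMe SSD (8 GT/s per lane with 128b/130b encoding, roughly 985 MB/s of raw bandwidth per lane); that link width is an assumption for the example, not a limit of NVMe itself.

```python
# Rough comparison of the interface limits quoted in this post.
# The NVMe row assumes an illustrative PCIe 3.0 x4 link.

PCIE3_LANE_MBPS = 8e9 * (128 / 130) / 8 / 1e6   # ~984.6 MB/s per lane

interfaces = {
    #                    command queues, queue depth, bandwidth (MB/s)
    "SATA III (AHCI)":   (1,       32,     600),
    "SAS":               (1,      254,    1200),
    "NVMe, PCIe 3.0 x4": (64_000, 64_000, 4 * PCIE3_LANE_MBPS),
}

for name, (queues, depth, bandwidth) in interfaces.items():
    in_flight = queues * depth   # maximum outstanding commands
    print(f"{name:18s} {queues:>7,} queue(s) x {depth:>7,} deep "
          f"= {in_flight:>13,} outstanding commands, ~{bandwidth:,.0f} MB/s")
```

The roughly four billion commands NVMe can keep in flight, versus 32 for AHCI, is what allows it to keep highly parallel flash and multi-core, multi-threaded hosts busy.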

Pure speed and low latency, plus the ability to handle high IOPS, have made NVMe SSDs a hit in enterprise data centers. Companies that particularly value low latency and high IOPS, such as high-frequency trading firms and database and web application hosting companies, have been some of the first and most avid adopters of NVMe SSDs.

Barriers to Adoption

While NVMe is high performance, historically speaking it has also been considered relatively high cost. This cost has negatively affected its popularity in the consumer-class storage sector. Relatively few operating systems supported NVMe when it first came out, and its high price made it less attractive for ordinary consumers, many of whom could not fully take advantage of its faster speeds anyway.

However, all this is changing. NVMe prices are coming down and, in some cases, achieving price parity with SATA drives. This is due not only to market forces but also to new innovations, such as DRAM-less NVMe SSDs.

As DRAM is a significant bill of materials (BoM) cost for SSDs, DRAM-less SSDs are able to achieve lower, more attractive price points. Since NVMe 1.2, host memory buffer (HMB) support has allowed DRAM-less SSDs to borrow host system memory as the SSD’s DRAM buffer for better performance. DRAM-less SSDs that take advantage of HMB support can achieve performance similar to that of DRAM-based SSDs, while simultaneously saving cost, space and energy.

NVMe SSDs are also more power-efficient than ever. While the NVMe protocol itself is already efficient, the PCIe link it runs over can consume significant levels of idle power. Newer NVMe SSDs support highly efficient, autonomous sleep state transitions, which allow them to achieve energy consumption on par or lower than SATA SSDs.

All this means that NVMe is more viable than ever for a variety of use cases, from large data centers, which can save on capital expenditures through lower-cost SSDs and on operating expenditures through lower power consumption, to power-sensitive mobile and portable applications such as laptops, tablets and smartphones, which can now consider using NVMe.

Addressing the Need for Speed

While the need for speed is well recognized in enterprise applications, is the speed offered by NVMe actually needed in the consumer world? For anyone who has ever installed more memory, bought a larger hard drive (or SSD), or ordered a faster Internet connection, the answer is obvious.

Many of today’s consumer use cases do not yet test the limits of SATA drives, partly because SATA is still the most common interface for consumer storage. But video recording and editing, gaming and file server applications are already pushing the limits of consumer SSDs, and tomorrow’s use cases are only destined to push them further. With NVMe now achieving price points comparable with SATA, there is no reason not to build future-proof storage today.

August 31st, 2017

Securing Embedded Storage with Hardware Encryption

By Jeroen Dorgelo, Director of Strategy, Marvell Storage Group

For industrial, military and a multitude of modern business applications, data security is of course incredibly important. While software-based encryption often works well for consumer and some enterprise environments, the embedded systems used in industrial and military applications usually need something simpler and intrinsically more robust.

Self-encrypting drives utilize on-board cryptographic processors to secure data at the drive level. This not only increases drive security automatically, but does so transparently to the user and host operating system. By automatically encrypting data in the background, they provide the simple-to-use, resilient data security that embedded systems require.

Embedded vs Enterprise Data Security

Both embedded and enterprise storage often require strong data security. Depending on the industry sectors involved, this is often related to securing customer (or patient) privacy, military data or business data. However, that is where the similarities end. Embedded storage is often used in completely different ways from enterprise storage, leading to distinctly different approaches to how data security is addressed.

Enterprise storage usually consists of racks of networked disk arrays in a data center, while embedded storage is often simply a solid-state drive (SSD) installed in an embedded computer or device. The physical security of the data center can be controlled by the enterprise, and software access control to enterprise networks (or applications) is also usually implemented. Embedded devices, on the other hand – such as tablets, industrial computers, smartphones or medical devices – are often used in the field, in comparatively unsecured environments. Data security in this context has no choice but to be implemented down at the device level.

Hardware Based Full Disk Encryption

For embedded applications where access control is far from guaranteed, it is all about securing the data as automatically and transparently as possible. Full disk, hardware based encryption has shown itself to be the best way of achieving this goal.

Full disk encryption (FDE) achieves high degrees of both security and transparency by encrypting everything on a drive automatically. Whereas file based encryption requires users to choose files or folders to encrypt, and also calls for them to provide passwords or keys to decrypt them, FDE works completely transparently. All data written to the drive is encrypted, yet, once authenticated, a user can access the drive as easily as an unencrypted one. This not only makes FDE much easier to use, but also means that it is a more reliable method of encryption, as all data is automatically secured. Files that the user forgets to encrypt or doesn’t have access to (such as hidden files, temporary files and swap space) are all nonetheless automatically secured.

While FDE can be achieved through software techniques, hardware-based FDE performs better and is inherently more secure. Hardware-based FDE is implemented at the drive level, in the form of a self-encrypting SSD. The SSD controller contains a hardware cryptographic engine and also stores the private keys on the drive itself.
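As a conceptual illustration only (not the firmware of any particular drive), the Python sketch below mimics what a self-encrypting drive’s controller does in its hardware crypto engine: every sector written is encrypted with AES in XTS mode, using the sector number as the tweak, so reads and writes remain transparent once the drive is unlocked. The sector size, the in-memory “disk” dictionary and the use of the Python cryptography package are all assumptions made purely for demonstration.

```python
# Conceptual sketch of sector-level encryption as performed by a
# self-encrypting drive. In a real SED this runs in the controller's
# hardware crypto engine and the key never leaves the drive.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 4096
KEY = os.urandom(64)   # AES-256-XTS uses a 512-bit (two 256-bit) key


def _xts(sector_number):
    # The sector number serves as the XTS tweak, so identical plaintext in
    # different sectors produces different ciphertext.
    tweak = sector_number.to_bytes(16, "little")
    return Cipher(algorithms.AES(KEY), modes.XTS(tweak))


def write_sector(disk, sector_number, plaintext):
    enc = _xts(sector_number).encryptor()
    disk[sector_number] = enc.update(plaintext) + enc.finalize()


def read_sector(disk, sector_number):
    dec = _xts(sector_number).decryptor()
    return dec.update(disk[sector_number]) + dec.finalize()


disk = {}   # stand-in for the NAND media
write_sector(disk, 7, b"secret data".ljust(SECTOR_SIZE, b"\0"))
assert read_sector(disk, 7).rstrip(b"\0") == b"secret data"
print("Sector 7 stored as ciphertext:", disk[7][:16].hex(), "...")
```

Because every sector passes through the same engine, nothing the host forgets to encrypt ever reaches the media in the clear, and a cryptographic erase reduces to destroying the key.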

Because software-based FDE relies on the host processor to perform encryption, it is usually slower, whereas hardware-based FDE has much lower overhead because it can take advantage of the drive’s integrated crypto-processor. Hardware-based FDE can also encrypt the master boot record of the drive, which software-based encryption cannot do.

Hardware-based FDE is transparent not only to the user but also to the host operating system. It works transparently in the background, and no special software is needed to run it. Besides maximizing ease of use, this also means sensitive encryption keys are kept separate from the host operating system and memory, as all private keys are stored on the drive itself.

Improving Data Security

Besides providing the transparent, easy-to-use encryption that is now being sought, hardware-based FDE also has specific benefits for data security in modern SSDs. NAND cells have a finite service life, and modern SSDs use advanced wear-leveling algorithms to extend it as much as possible. Instead of overwriting NAND cells as data is updated, write operations are constantly moved around the drive, often resulting in multiple copies of a piece of data being spread across an SSD as a file is updated. This wear-leveling technique is extremely effective, but it makes file-based encryption and data erasure much more difficult to accomplish, as there are now multiple copies of the data to encrypt or erase.

FDE solves both of these encryption and erasure issues for SSDs. Since all data is encrypted, there are no concerns about unencrypted data remnants. In addition, since the encryption method used (generally 256-bit AES) is extremely secure, erasing the drive is as simple as erasing the private keys.

Solving Embedded Data Security

Embedded devices often present considerable security challenges to IT departments, as these devices are often used in uncontrolled environments, possibly by unauthorized personnel. Whereas enterprise IT has the authority to implement enterprise wide data security policies and access control, it is usually much harder to implement these techniques for embedded devices situated in industrial environments or used out in the field.

The simple solution for data security in embedded applications of this kind is hardware-based FDE. Self-encrypting drives with hardware crypto-processors have minimal processing overhead and operate completely in the background, transparent to both users and host operating systems. Their ease of use also translates into improved security, as administrators do not need to rely on users to implement security policies, and private keys are never exposed to software or operating systems.