Marvell Blog

Featuring technology ideas and solutions worth sharing


October 17th, 2017

Unleashing the Potential of Flash Storage with NVMe

By Jeroen Dorgelo, Director of Strategy, Marvell Storage Group

The dirty little secret of flash drives today is that many of them are running on yesterday’s interfaces. While SATA and SAS have undergone several iterations since they were first introduced, they are still based on decades-old concepts and were initially designed with rotating disks in mind. These legacy protocols are bottlenecking the potential speeds possible from today’s SSDs.

NVMe is the latest storage interface standard designed specifically for SSDs. With its massively parallel architecture, it enables the full performance capabilities of today’s SSDs to be realized. Because of price and compatibility, NVMe has taken a while to see uptake, but now it is finally coming into its own.

Serial Attached Legacy

Currently, SATA is the most common storage interface. Whether the drive is a hard disk or the increasingly common flash storage, chances are it is connected through a SATA interface. The latest generation of SATA – SATA III – has a 600 MB/s bandwidth limit. While this is adequate for day-to-day consumer applications, it is not enough for enterprise servers. Even I/O-intensive consumer use cases, such as video editing, can run into this limit.

The SATA standard was originally released in 2000 as a serial-based successor to the older PATA standard, a parallel interface. SATA uses the Advanced Host Controller Interface (AHCI), which has a single command queue with a depth of 32 commands. This command-queuing architecture is well suited to conventional rotating disk storage, but is far more limiting when used with flash.

Whereas SATA is the standard storage interface for consumer drives, SAS is much more common in the enterprise world. Released originally in 2004, SAS is also a serial replacement for the older parallel SCSI standard. Designed for enterprise applications, SAS storage is usually more expensive to implement than SATA, but it has significant advantages over SATA for data center use – such as longer cable lengths, multipath IO, and better error reporting. SAS also has a higher bandwidth limit of 1200 MB/s.

Just like SATA, SAS has a single command queue, although its queue depth extends to 254 commands instead of 32. While the larger command queue and higher bandwidth limit give it better performance than SATA, SAS is still far from being the ideal flash interface.

NVMe – Massive Parallelism

Introduced in 2011, NVMe was designed from the ground up to address the needs of flash storage. Developed by a consortium of storage companies, its key objective is specifically to overcome the bottlenecks that SATA and SAS impose on flash performance.

Whereas SATA is restricted to 600 MB/s and SAS to 1200 MB/s (as mentioned above), NVMe runs over the PCIe bus and its bandwidth is theoretically limited only by the PCIe bus speed. With current PCIe standards providing 1 GB/s or more per lane, and PCIe connections generally offering multiple lanes, bus speed almost never represents a bottleneck for NVMe-based SSDs.
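To put those numbers in perspective, here is a quick back-of-the-envelope comparison in Python. The ~985 MB/s per-lane figure is an assumption based on typical usable PCIe 3.0 bandwidth, not a value from the text, and real drive throughput also depends on the controller and NAND.

```python
# Back-of-the-envelope interface bandwidth comparison.
# SATA and SAS limits are the figures quoted above; the PCIe 3.0 per-lane
# value (~985 MB/s usable) is an assumption for illustration.

SATA_III_MB_S = 600        # SATA III interface limit
SAS_MB_S = 1200            # SAS limit quoted above
PCIE3_LANE_MB_S = 985      # approximate usable bandwidth per PCIe 3.0 lane

for lanes in (1, 2, 4):
    nvme_mb_s = lanes * PCIE3_LANE_MB_S
    print(f"NVMe x{lanes}: ~{nvme_mb_s} MB/s "
          f"({nvme_mb_s / SATA_III_MB_S:.1f}x SATA III, "
          f"{nvme_mb_s / SAS_MB_S:.1f}x SAS)")
```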

NVMe is designed to deliver massive parallelism, offering 64,000 command queues, each with a queue depth of 64,000 commands. This parallelism fits in well with the random access nature of flash storage, as well as the multi-core, multi-threaded processors in today’s computers. NVMe’s protocol is streamlined, with an optimized command set that does more in fewer operations compared to AHCI. IO operations often need fewer commands than with SATA or SAS, allowing latency to be reduced. For enterprise customers, NVMe also supports many enterprise storage features, such as multi-path IO and robust error reporting and management.
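As a rough illustration of what that parallelism means in terms of outstanding commands, the short sketch below simply multiplies the queue counts and depths quoted above; these are the nominal protocol limits rather than what any particular drive or driver exposes.

```python
# Maximum outstanding commands per interface, using the queue figures
# quoted in the text (AHCI/SATA, SAS and NVMe protocol limits).

interfaces = {
    "SATA (AHCI)": {"queues": 1,      "depth": 32},
    "SAS":         {"queues": 1,      "depth": 254},
    "NVMe":        {"queues": 64_000, "depth": 64_000},
}

for name, q in interfaces.items():
    outstanding = q["queues"] * q["depth"]
    print(f"{name:12s} {q['queues']:>6} queue(s) x {q['depth']:>6} deep "
          f"= {outstanding:,} outstanding commands")
```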

Pure speed and low latency, plus the ability to deal with high IOPS, have made NVMe SSDs a hit in enterprise data centers. Companies that particularly value low latency and high IOPS, such as high-frequency trading firms and database and web application hosting companies, have been some of the first and most avid endorsers of NVMe SSDs.

Barriers to Adoption

While NVMe is high performance, historically speaking it has also been considered relatively high cost. This cost has negatively affected its popularity in the consumer-class storage sector. Relatively few operating systems supported NVMe when it first came out, and its high price made it less attractive for ordinary consumers, many of whom could not fully take advantage of its faster speeds anyway.

However, all this is changing. NVMe prices are coming down and, in some cases, achieving price parity with SATA drives. This is due not only to market forces but also to new innovations, such as DRAM-less NVMe SSDs.

As DRAM is a significant bill of materials (BoM) cost for SSDs, DRAM-less SSDs are able to achieve lower, more attractive price points. Since NVMe 1.2, host memory buffer (HMB) support has allowed DRAM-less SSDs to borrow host system memory as the SSD’s DRAM buffer for better performance. DRAM-less SSDs that take advantage of HMB support can achieve performance similar to that of DRAM-based SSDs, while simultaneously saving cost, space and energy.

NVMe SSDs are also more power-efficient than ever. While the NVMe protocol itself is already efficient, the PCIe link it runs over can consume significant levels of idle power. Newer NVMe SSDs support highly efficient, autonomous sleep state transitions, which allow them to achieve energy consumption on par with, or lower than, SATA SSDs.

All this means that NVMe is more viable than ever for a variety of use cases: from large data centers, which can save on capital expenditure through lower cost SSDs and on operating expenditure through lower power consumption, to power-sensitive mobile and portable applications such as laptops, tablets and smartphones, which can now consider using NVMe.

Addressing the Need for Speed

While the need for speed is well recognized in enterprise applications, is the speed offered by NVMe actually needed in the consumer world? For anyone who has ever installed more memory, bought a larger hard drive (or SSD), or ordered a faster Internet connection, the answer is obvious.

Today’s consumer use cases generally do not yet test the limits of SATA drives, most likely because SATA is still the most common interface for consumer storage and software is written with that limit in mind. Even so, today’s video recording and editing, gaming and file server applications are already pushing the limits of consumer SSDs, and tomorrow’s use cases are only destined to push them further. With NVMe now achieving price points that are comparable with SATA, there is no reason not to build future-proof storage today.

October 11th, 2017

Bringing IoT Intelligence to the Enterprise Edge by Supporting Google Cloud IoT Core Public Beta on ESPRESSObin and MACCHIATObin Community Platforms

By Aviad Enav Zagha, Sr. Director Embedded Processors Product Line Manager, Networking Group at Marvell

Though market analysts’ projections still differ considerably, there is little doubt about the huge future potential of Internet of Things (IoT) technology in an enterprise context. It is destined to lead to billions of connected devices being in operation, all sending captured data back to the cloud, from which analysis can be undertaken or actions initiated. This will make existing business, industrial and metrology processes more streamlined and allow a variety of new services to be delivered.

With large numbers of IoT devices to deal with in any given enterprise network, the challenges of efficiently and economically managing them all without latency issues, while ensuring that elevated levels of security are upheld, are going to prove daunting. In order to put the least possible strain on cloud-based resources, we believe the best approach is to move some intelligence out of the core and place it at the enterprise edge, rather than following a purely centralized model. This arrangement places computing functionality much nearer to where the data is being acquired and makes responding to it considerably easier. IoT devices then have a local edge hub that can reduce the overhead of real-time communication over the network. Rather than relying on cloud servers far away from the connected devices to take care of the ‘heavy lifting’, these activities can be done closer to home. Deterministic operation is maintained due to lower latency, bandwidth is conserved (thus saving money), and the likelihood of data corruption or security breaches is dramatically reduced.

Sensors and data collectors in the enterprise, industrial and smart city segments are expected to generate more than 1 GB of information per day, some of it needing a response within a matter of seconds. Therefore, in order for the network to accommodate this large amount of data, computing functionality will migrate from the cloud to the network edge, forming a new market for edge computing.

In order to accelerate the widespread propagation of IoT technology within the enterprise environment, Marvell now supports the multifaceted Google Cloud IoT Core platform. Cloud IoT Core is a fully managed service mechanism through which the management and secure connection of devices can be accomplished on the large scales that will be characteristic of most IoT deployments.

Through its IoT enterprise edge gateway technology, Marvell is able to provide the networking and compute capabilities required (as well as the prospect of localized storage) to act as mediator between the connected devices within the network and the related cloud functions. By providing the control element needed, as well as collecting real-time data from IoT devices, the IoT enterprise gateway technology serves as a key consolidation point for interfacing with the cloud, and also has the ability to temporarily control managed devices if an event occurs that makes cloud services unavailable. In addition, the IoT enterprise gateway can perform the role of a proxy manager for lightweight, rudimentary IoT devices that (in order to keep power consumption and unit cost down) may not possess any intelligence of their own.

Through the introduction of advanced ARM®-based community platforms, Marvell is able to facilitate enterprise implementations using Cloud IoT Core. The recently announced Marvell MACCHIATObin™ and Marvell ESPRESSObin™ community boards support open source applications, local storage and networking facilities. At the heart of each of these boards is Marvell’s high performance ARMADA® system-on-chip (SoC), which supports the Google Cloud IoT Core Public Beta.

Via Cloud IoT Core, along with other related Google Cloud services (including Pub/Sub, Dataflow, Bigtable, BigQuery, Data Studio), enterprises can benefit from an all-encompassing IoT solution that addresses the collection, processing, evaluation and visualization of real-time data in a highly efficient manner. Cloud IoT Core features certificate-based authentication and transport layer security (TLS), plus an array of sophisticated analytical functions.
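As an illustration of how a gateway or edge device might push telemetry into Cloud IoT Core, the sketch below follows the standard MQTT-bridge pattern with JWT-based device authentication. The project, region, registry, device and key-file names are placeholders, and the exact connection details should be checked against the wikis referenced below.

```python
# Minimal sketch: publish one telemetry message to Google Cloud IoT Core's
# MQTT bridge using JWT-based device authentication.
# Placeholder project, registry, region, device and key-file names.
import datetime
import ssl

import jwt                      # PyJWT
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style client API

PROJECT_ID = "my-project"        # placeholder values
CLOUD_REGION = "us-central1"
REGISTRY_ID = "my-registry"
DEVICE_ID = "espressobin-01"
PRIVATE_KEY_FILE = "rsa_private.pem"

def create_jwt(project_id, private_key_file, algorithm="RS256"):
    """Create a short-lived JWT; Cloud IoT Core uses it as the MQTT password."""
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=20),
              "aud": project_id}
    with open(private_key_file, "r") as f:
        private_key = f.read()
    return jwt.encode(claims, private_key, algorithm=algorithm)

client_id = (f"projects/{PROJECT_ID}/locations/{CLOUD_REGION}"
             f"/registries/{REGISTRY_ID}/devices/{DEVICE_ID}")
client = mqtt.Client(client_id=client_id)
# The username is ignored by the bridge; the JWT carries the credentials.
client.username_pw_set(username="unused",
                       password=create_jwt(PROJECT_ID, PRIVATE_KEY_FILE))
client.tls_set(tls_version=ssl.PROTOCOL_TLSv1_2)

client.connect("mqtt.googleapis.com", 8883)
client.loop_start()
client.publish(f"/devices/{DEVICE_ID}/events", '{"temperature": 21.5}', qos=1)
client.loop_stop()
client.disconnect()
```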

Over time, the enterprise edge is going to become more intelligent. Consequently, mediation between IoT devices and the cloud will be needed, as will cost-effective processing and management. With the combination of Marvell’s proprietary IoT gateway technology and Google Cloud IoT Core, it is now possible to migrate a portion of network intelligence to the enterprise edge, leading to various major operational advantages.

Please visit MACCHIATObin Wiki and ESPRESSObin Wiki for instructions on how to connect to Google’s Cloud IoT Core Public Beta platform.

October 10th, 2017

Celebrating 20 Years of Wi-Fi – Part II

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

This is the second installment in a series of blogs covering the history of Wi-Fi®. While the first part looked at the origins of Wi-Fi, this part will look at how the technology has progressed to the high speed connection we know today.

Wireless Revolution

By the early years of the new millennium, Wi-Fi had quickly started to gain widespread popularity as the benefits of wireless connectivity became clear. Hotspots began popping up at coffee shops, airports and hotels as businesses and consumers started to realize the potential for Wi-Fi to enable early forms of what we now know as mobile computing. Home users, many of whom were starting to get broadband Internet, were able to easily share their connections throughout the house.

Thanks to the IEEE® 802.11 working group’s efforts, a proprietary wireless protocol that was originally designed simply for connecting cash registers (see previous blog) had become the basis for a wireless networking standard that was changing the whole fabric of society.

Improving Speeds

The advent of 802.11b, in 1999, set the stage for Wi-Fi mass adoption. Its cheaper price point made it accessible to consumers, and its 11 Mbit/s speeds made it fast enough to replace wired Ethernet connections for enterprise users. Driven by the broadband Internet explosion of the early 2000s, 802.11b became a great success. Both consumers and businesses found wireless was a great way to easily share the newfound high speed connections that DSL, cable and other broadband technologies gave them.

As broadband speeds became the norm, consumers’ computer usage habits changed accordingly. Higher bandwidth applications, such as music and movie sharing and streaming audio, started to see increasing popularity within the consumer space.

Meanwhile, in the enterprise market, wireless had even greater speed demands to contend with, as it was competing with fast local networking over Ethernet. Business use cases (such as VoIP, file sharing and printer sharing, as well as desktop virtualization) needed to work seamlessly if wireless was to be adopted.

Even in the early 2000s, the speed that 802.11b could support was far from cutting edge. On the wired side of things, 10/100 Ethernet was already a widespread standard. At 100 Mbit/s, it was almost 10 times faster than 802.11b’s nominal 11 Mbit/s speed. In fact, 802.11b’s protocol overhead meant that the maximum achievable throughput was around 5.9 Mbit/s. In practice, because 802.11b used the increasingly popular 2.4 GHz band, speeds proved to be lower still. Interference from microwave ovens, cordless phones and other consumer electronics meant that real world speeds often didn’t reach the 5.9 Mbit/s mark (sometimes not even close).

802.11g

To address speed concerns, in 2003 the IEEE 802.11 working group came out with 802.11g. Though 802.11g would use the 2.4 GHz frequency band just like 802.11b, it was able to achieve speeds of up to 54 Mbit/s. Even after speed decreases due to protocol overhead, its theoretical maximum of 31.4 Mbit/s was enough bandwidth to accommodate increasingly fast household broadband speeds.

802.11g was not actually the first 802.11 wireless standard to achieve 54 Mbit/s. That crown goes to 802.11a, which had done it back in 1999. However, 802.11a used the separate 5 GHz frequency band to achieve its fast speeds. While 5 GHz had the benefit of less radio interference from consumer electronics, it also meant incompatibility with 802.11b. That fact, along with more expensive equipment, meant that 802.11a was only ever popular within the business market segment and never saw proliferation into the higher volume domestic/consumer arena.

By using 2.4 GHz to reach 54 Mbit/s, 802.11g was able to achieve high speeds while retaining full backwards compatibility with 802.11b. This was crucial, as 802.11b had already established itself as the main wireless standard for consumer devices by this point. Its backwards compatibility, along with cheaper hardware compared to 802.11a, were big selling points, and 802.11g soon became the new, faster wireless standard for consumer and, increasingly, even business related applications.

802.11n

Introduced in 2009, 802.11n made further speed improvements upon 802.11g and 802.11a. Operating on either the 2.4 GHz or 5 GHz frequency band (though not both simultaneously), 802.11n improved transfer efficiency through frame aggregation, and also introduced optional MIMO and 40 MHz channels – double the channel width of 802.11g.

802.11n offered significantly faster network speeds. At the low end, operating in the same kind of single antenna, 20 MHz channel width configuration as an 802.11g network, an 802.11n network could achieve 72 Mbit/s. If, in addition, the double width 40 MHz channel was used with multiple antennas, then data rates could be much faster – up to 600 Mbit/s (for a four antenna configuration).
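For the curious, those headline figures can be reproduced from the underlying OFDM parameters. The sketch below assumes 64-QAM with rate-5/6 coding and the short 400 ns guard interval, which corresponds to the highest 802.11n modulation and coding scheme per stream.

```python
# Rough 802.11n PHY data-rate calculation from OFDM parameters
# (assumes 64-QAM, rate-5/6 coding and the short guard interval,
# giving a 3.6 microsecond symbol time).

def dot11n_rate_mbps(data_subcarriers, streams, bits_per_symbol=6,
                     coding_rate=5/6, symbol_time_us=3.6):
    """Coded bits per OFDM symbol across all streams, divided by symbol time."""
    return data_subcarriers * bits_per_symbol * coding_rate * streams / symbol_time_us

print(dot11n_rate_mbps(52, 1))    # 20 MHz, 1 stream  -> ~72 Mbit/s
print(dot11n_rate_mbps(108, 4))   # 40 MHz, 4 streams -> ~600 Mbit/s
```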

The third and final blog in this series will take us right up to the modern day and will also look at the potential of Wi-Fi in the future.

 

October 3rd, 2017

Celebrating 20 Years of Wi-Fi – Part I

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

You can’t see it, touch it, or hear it – yet Wi-Fi® has had a tremendous impact on the modern world, and will continue to do so. From our home wireless networks to offices and public spaces, the ubiquity of high speed connectivity without reliance on cables has radically changed the way computing happens. It would not be much of an exaggeration to say that, because of ready access to Wi-Fi, we are able to lead better lives – using our laptops, tablets and portable electronic goods in a far more straightforward manner, with a high degree of mobility, no longer having to worry about a complex tangle of wires tying us down.

Though it may be hard to believe, it is now two decades since the original 802.11 standard was ratified by the IEEE®. This first in a series of blogs will look at the history of Wi-Fi to see how it has overcome numerous technical challenges and evolved into the ultra-fast, highly convenient wireless standard that we know today. We will then go on to discuss what it may look like tomorrow.

Unlicensed Beginnings

While we now think of 802.11 wireless technology as predominantly connecting our personal computing devices and smartphones to the Internet, it was in fact initially invented as a means to connect up humble cash registers. In the late 1980s, NCR Corporation, a maker of retail hardware and point-of-sale (PoS) computer systems, had a big problem. Its customers – department stores and supermarkets – didn’t want to dig up their floors each time they changed their store layout.

A recent FCC ruling, which had opened up certain frequency bands for unlicensed use, inspired what would prove to be a game-changing idea. By using wireless connections in the unlicensed spectrum (rather than conventional wireline connections), electronic cash registers and PoS systems could be easily moved around a store without the retailer having to perform major renovation work.

Soon after this, NCR allocated the project to an engineering team out of its Netherlands office. They were set the challenge of creating a wireless communication protocol. These engineers succeeded in developing ‘WaveLAN’, which would be recognized as the precursor to Wi-Fi. Rather than preserving this as a purely proprietary protocol, NCR could see that by establishing it as a standard, the company would be able to position itself as a leader in the wireless connectivity market as it emerged. By 1990, the IEEE 802.11 working group had been formed, based on wireless communication in unlicensed spectra.

Using what were at the time innovative spread spectrum techniques to reduce interference and improve signal integrity in noisy environments, the original incarnation of Wi-Fi was finally formally standardized in 1997. It operated with a throughput of just 2 Mbit/s, but it set the foundations of what was to come.

Wireless Ethernet

Though the 802.11 wireless standard was released in 1997, it didn’t take off immediately. Slow speeds and expensive hardware hampered its mass market appeal for quite a while – but things were destined to change. 10 Mbit/s Ethernet was the networking standard of the day. The IEEE 802.11 working group knew that if they could equal that, they would have a worthy wireless competitor. In 1999, they succeeded, creating 802.11b. This used the same 2.4 GHz ISM frequency band as the original 802.11 wireless standard, but it raised the supported throughput considerably, reaching 11 Mbit/s. Wireless Ethernet was finally a reality.

Soon after 802.11b was established, the IEEE working group also released 802.11a, an even faster standard. Rather than using the increasingly crowded 2.4 GHz band, it ran on the 5 GHz band and offered speeds up to a lofty 54 Mbit/s.

Because it occupied the 5 GHz frequency band, away from the popular (and thus congested) 2.4 GHz band, it had better performance in noisy environments; however, the higher carrier frequency also meant it had reduced range compared to 2.4 GHz wireless connectivity. Thanks to cheaper equipment and better nominal range, 802.11b proved to be the most popular wireless standard by far. But, while it was more cost effective than 802.11a, 802.11b still wasn’t at a price point low enough for the average consumer. Routers and network adapters still cost hundreds of dollars.

That all changed following a phone call from Steve Jobs. Apple was launching a new line of computers at that time and wanted to make wireless networking functionality part of it. The terms set were tough – Apple expected to have the cards at a $99 price point, but of course the volumes involved could potentially be huge. Lucent Technologies, which by this stage had taken over NCR’s wireless LAN business, agreed.

While it was a difficult pill to swallow initially, the Apple deal finally put Wi-Fi in the hands of consumers and pushed it into the mainstream. PC makers saw Apple computers beating them to the punch and wanted wireless networking as well. Soon, key PC hardware makers including Dell, Toshiba, HP and IBM were all offering Wi-Fi.

Microsoft also got on the Wi-Fi bandwagon with Windows XP. Working with engineers from Lucent, Microsoft made Wi-Fi connectivity native to the operating system. Users could get wirelessly connected without having to install third party drivers or software. With the release of Windows XP, Wi-Fi was now natively supported on millions of computers worldwide – it had officially made it into the ‘big time’.

This blog post is the first in a series that charts the eventful history of Wi-Fi. The second part, which is coming soon, will bring things up to date and look at current Wi-Fi implementations.

 

September 18th, 2017

Modular Networks Drive Cost Efficiencies in Data Center Upgrades

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

Exponential growth in data center usage has been responsible for driving a huge amount of investment in the networking infrastructure used to connect virtualized servers to the multiple services they now need to accommodate. To support the server-to-server traffic that virtualized data centers require, the networking spine will generally rely on high capacity 40 Gbit/s and 100 Gbit/s switch fabrics with aggregate throughputs now hitting 12.8 Tbit/s. But the ‘one size fits all’ approach being employed to develop these switch fabrics quickly leads to a costly misalignment for data center owners. They need to find ways to match the interfaces on individual storage units and server blades that have already been installed with the switches they are buying to support their scale-out plans.

The top-of-rack (ToR) switch provides one way to match the demands of the server equipment and the network infrastructure. The switch can aggregate the data from lower speed network interfaces and so act as a front-end to the core network fabric. But such switches tend to be far more complex than is actually needed – often derived from older generations of core switch fabric. They perform a level of switching that is unnecessary and, as a result, are not cost effective when they are primarily aggregating traffic on its way to the core network’s 12.8 Tbit/s switching engines. The heightened expense manifests itself not only in terms of hardware complexity and the issues of managing an extra network tier, but also in relation to power and air-conditioning. It is not unusual to find five or more fans inside each unit being used to cool the silicon switch.

There is another way to support the requirements of data center operators, one which consumes far less power and money while also offering greater modularity and flexibility.

Providing a means by which to overcome the high power and cost associated with traditional ToR switch designs, the IEEE 802.1BR standard for port extenders makes it possible to implement a bridge between a core network interface and a number of port extenders that break out connections to individual edge devices. An attractive feature of this standard is the ability to allow port extenders to be cascaded, for even greater levels of modularity. As a result, many lower speed ports, of 1 Gbit/s and 10 Gbit/s, can be served by one core network port (supporting 40 Gbit/s or 100 Gbit/s operation) through a single controlling bridge device.
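As a simple illustration of the aggregation arithmetic involved, the sketch below works out the oversubscription ratio for a hypothetical port extender serving 48 edge ports of 10 Gbit/s through a single 100 Gbit/s uplink; the port counts are assumptions chosen to mirror the reference design mentioned below.

```python
# Illustrative oversubscription calculation for an 802.1BR-style setup:
# many low-speed edge ports aggregated through one high-speed uplink.
# Port counts and speeds are assumptions for the sake of the example.

edge_ports = 48
edge_port_gbps = 10
uplink_gbps = 100

downstream_capacity = edge_ports * edge_port_gbps        # 480 Gbit/s
oversubscription = downstream_capacity / uplink_gbps     # 4.8:1

print(f"Aggregate edge bandwidth: {downstream_capacity} Gbit/s")
print(f"Oversubscription toward the {uplink_gbps} Gbit/s uplink: "
      f"{oversubscription:.1f}:1")
```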

With a simpler, more modular approach, the passive intelligent port extender (PIPE) architecture that has been developed by Marvell leads to next generation rack units which no longer call for the inclusion of any fans for thermal management purposes. Reference designs have already been built that use a simple 65 W open-frame power supply to feed all the devices required, even in a high-capacity configuration of 48 ports at 10 Gbit/s. Furthermore, the equipment dispenses with the need for external management: the management requirements can move to the core 12.8 Tbit/s switch fabric, providing further savings in terms of operational expenditure. It is a demonstration of exactly how a more modular approach can greatly improve the efficiency of today’s and tomorrow’s data center implementations.

August 31st, 2017

Securing Embedded Storage with Hardware Encryption

By Jeroen Dorgelo, Director of Strategy, Marvell Storage Group

For industrial, military and a multitude of modern business applications, data security is of course incredibly important. While software based encryption often works well for consumer and some enterprise environments, the embedded systems used in industrial and military applications usually need something simpler and intrinsically more robust.

Self-encrypting drives utilize on-board cryptographic processors to secure data at the drive level. This not only increases drive security automatically, but does so transparently to both the user and the host operating system. By automatically encrypting data in the background, they provide the simple to use, resilient data security that embedded systems require.

Embedded vs Enterprise Data Security

Both embedded and enterprise storage often require strong data security. Depending on the industry sectors involved, this is often related to securing customer (or possibly patient) privacy, military data or business data. However, that is where the similarities end. Embedded storage is often used in completely different ways from enterprise storage, leading to distinctly different approaches to how data security is addressed.

Enterprise storage usually consists of racks of networked disk arrays in a data center, while embedded storage is often simply a solid state drive (SSD) installed into an embedded computer or device. The physical security of the data center can be controlled by the enterprise, and software access control to enterprise networks (or applications) is also usually implemented. Embedded devices, on the other hand – such as tablets, industrial computers, smartphones, or medical devices – are often used in the field, in what are comparatively unsecure environments. Data security in this context has no choice but to be implemented down at the device level.

Hardware Based Full Disk Encryption

For embedded applications where access control is far from guaranteed, it is all about securing the data as automatically and transparently as possible. Full disk, hardware based encryption has shown itself to be the best way of achieving this goal.

Full disk encryption (FDE) achieves high degrees of both security and transparency by encrypting everything on a drive automatically. Whereas file based encryption requires users to choose files or folders to encrypt, and also calls for them to provide passwords or keys to decrypt them, FDE works completely transparently. All data written to the drive is encrypted, yet, once authenticated, a user can access the drive as easily as an unencrypted one. This not only makes FDE much easier to use, but also means that it is a more reliable method of encryption, as all data is automatically secured. Files that the user forgets to encrypt or doesn’t have access to (such as hidden files, temporary files and swap space) are all nonetheless automatically secured.

While FDE can be achieved through software techniques, hardware based FDE performs better and is inherently more secure. Hardware based FDE is implemented at the drive level, in the form of a self-encrypting SSD. The SSD controller contains a hardware cryptographic engine and also stores private keys on the drive itself.

Because software based FDE relies on the host processor to perform encryption, it is usually slower – whereas hardware based FDE has much lower overhead as it can take advantage of the drive’s integrated crypto-processor. Hardware based FDE is also able to encrypt the master boot record of the drive, something software based encryption is unable to do.

Hardware centric FDE solutions are transparent not only to the user, but also to the host operating system. They work transparently in the background and no special software is needed to run them. Besides helping to maximize ease of use, this also means sensitive encryption keys are kept separate from the host operating system and memory, as all private keys are stored on the drive itself.

Improving Data Security

Besides providing the transparent, easy to use encryption that is now being sought, hardware based FDE also has specific benefits for data security in modern SSDs. NAND cells have a finite service life and modern SSDs use advanced wear leveling algorithms to extend this as much as possible. Instead of overwriting the NAND cells as data is updated, write operations are constantly moved around a drive, often resulting in multiple copies of a piece of data being spread across an SSD as a file is updated. This wear leveling technique is extremely effective, but it makes file based encryption and data erasure much more difficult to accomplish, as there are now multiple copies of data to encrypt or erase.

FDE solves both the encryption and erasure issues for SSDs. Since all data is encrypted, there are no concerns about the presence of unencrypted data remnants. In addition, since the encryption method used (generally 256-bit AES) is extremely secure, erasing the drive is as simple as erasing the private keys.
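The crypto-erase principle can be illustrated in software. The sketch below is a minimal host-side demonstration using AES-256-XTS (a mode commonly used for sector encryption) via the Python cryptography library; a real self-encrypting drive performs the equivalent operations inside its controller, and the sector size and tweak handling here are simplified assumptions.

```python
# Illustration of the crypto-erase idea: data encrypted with AES-256-XTS
# (a common sector-encryption mode) is unrecoverable once the key is gone.
# A real self-encrypting drive does this inside the SSD controller; this
# host-side sketch just demonstrates the principle.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                  # 512-bit key -> AES-256 in XTS mode
tweak = (0).to_bytes(16, "little")    # simplified per-sector tweak (sector 0)

sector = b"sensitive record".ljust(4096, b"\x00")   # one 4 KiB "sector"

encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = encryptor.update(sector) + encryptor.finalize()

# "Secure erase": discard the key. Without it, the ciphertext on the NAND
# (including any stale copies left behind by wear leveling) is just noise.
del key
print(ciphertext[:16].hex(), "... unreadable without the key")
```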

Solving Embedded Data Security

Embedded devices often present considerable security challenges to IT departments, as these devices are often used in uncontrolled environments, possibly by unauthorized personnel. Whereas enterprise IT has the authority to implement enterprise wide data security policies and access control, it is usually much harder to implement these techniques for embedded devices situated in industrial environments or used out in the field.

The simple solution for data security in embedded applications of this kind is hardware based FDE. Self-encrypting drives with hardware crypto-processors have minimal processing overhead and operate completely in the background, transparently to both users and host operating systems. Their ease of use also translates into improved security, as administrators do not need to rely on users to implement security policies, and private keys are never exposed to software or operating systems.

August 2nd, 2017

Wireless Technology Set to Enable an Automotive Revolution

By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group

The automotive industry has always been a keen user of wireless technology. In the early 1980s, Renault made it possible to lock and unlock the doors on its Fuego model utilizing a radio transmitter. Within a decade, other vehicle manufacturers embraced the idea of remote key-less entry and not long after that it became a standard feature. Now, wireless technology is about to reshape the world of driving.

The first key-less entry systems were based on infra-red (IR) signals, borrowing the technique from automatic garage door openers. But the industry swiftly moved to RF technology, in order to make it easier to use. Although each manufacturer favored its own protocol and coding system, they adopted standard low-power RF frequency bands, such as 315 MHz in the US and 433 MHz in Europe. As concerns about theft emerged, they incorporated encryption and other security features to fend off potential attacks. They have further refreshed this technology as new threats appeared, as well as adding features such as proximity detection to remove the need to even press the key-fob remote’s button.

The next step toward greater convenience was to employ Bluetooth, instead of custom radios in the sub-1 GHz bands, so as to dispense with the key fob altogether. With Bluetooth, an app on the user’s smartphone can not only unlock the car doors, but also handle tasks such as starting the heater or air-conditioning so that the vehicle is comfortable by the time the driver and passengers actually get in.

Bluetooth itself has become a key feature on many models over the past decade as automobile manufacturers have looked to open up their infotainment systems. Access to the functions on the dashboard through Bluetooth has made it possible for vehicle occupants to hook up their phone handsets easily. Initially, this was to support legally compliant, hands-free phone calls without forcing the owner to buy and install a permanent phone in the vehicle itself. But the wireless connection is just as good at relaying high-quality audio, so that passengers can listen to their favorite music (stored on portable devices). We have clearly moved a long way from the CD auto-changer located in the trunk.

Bluetooth is a prime example of the way in which RF technology, once in place, can support many different applications – with plenty of potential for use cases that have not yet been considered. Through use of a suitable relay device in the car, Bluetooth also provides the means by which to send vehicle diagnostics information to relevant smartphone apps. The use of the technology as a diagnostics gateway points to an emerging role for Bluetooth in improving the overall safety of car transportation.

But now Wi-Fi is also primed to become as ubiquitous in vehicles as Bluetooth. Wi-Fi is able to provide a more robust data pipe, thus enabling even richer applications and a tighter integration with smartphone handsets. One use case that seems destined to change the cockpit experience for users is the emergence of screen projection technologies. Through the introduction of such mechanisms it will be possible to create a seamless transition for drivers from their smartphones to their cars. This will not necessarily even need to be their own car; it could be any car that they rent, anywhere in the world.

One of the key enabling technologies for self-driving vehicles is communication. This can encompass vehicle-to-vehicle (V2V) links, vehicle-to-infrastructure (V2I) messages and, through technologies such as Bluetooth and Wi-Fi, vehicle-to-anything (V2X).

V2V provides the ability for vehicles on the road to signal their intentions to others and warn of hazards ahead. If a pothole opens up or cars have to brake suddenly to avoid an obstacle, they can send out wireless messages to nearby vehicles to let them know about the situation. Those other vehicles can then slow down or change lane accordingly.

The key enabling technology for V2V is a form of the IEEE 802.11 Wi-Fi protocol, re-engineered for much lower latency and better reliability. IEEE 802.11p Wireless Access in Vehicular Environments (WAVE) operates in the 5.9 GHz region of the RF spectrum and is capable of supporting data rates of up to 27 Mbit/s. One of the key additions for transportation is a scheduling feature that lets vehicles share access to wireless channels based on time. Each vehicle uses the Coordinated Universal Time (UTC) reading, usually provided by its GPS receiver, to help ensure all nearby transceivers are synchronized to the same schedule.

A key challenge for any transceiver is the Doppler effect. On a freeway, the relative velocity of an approaching transmitter can exceed 150 mph. Such a transmitter may be in range for only a few seconds at most, making ultra-low latency crucial. But, with the underlying RF technology for V2V in place, advanced navigation applications can be deployed relatively easily and extended to deal with many other objects and even people.
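To give a sense of the scale involved, the short calculation below estimates the carrier Doppler shift for the 150 mph closing speed mentioned above, assuming a 5.9 GHz WAVE carrier.

```python
# Worked example: Doppler shift seen by a 5.9 GHz WAVE receiver when an
# oncoming transmitter closes at ~150 mph (figures from the text).

CARRIER_HZ = 5.9e9
SPEED_OF_LIGHT = 3.0e8           # m/s
relative_speed = 150 * 0.44704   # mph -> m/s, roughly 67 m/s

doppler_shift_hz = CARRIER_HZ * relative_speed / SPEED_OF_LIGHT
print(f"Doppler shift: ~{doppler_shift_hz:.0f} Hz")   # roughly 1.3 kHz
```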

V2I transactions make it possible for roadside controllers to update vehicles on their status. Traffic signals, for example, can let vehicles know when they are likely to change state. Vehicles leaving the junction can relay that data to approaching cars, which may slow down in response. By slowing down, they avoid the need to stop at a red signal – and thereby cross just as it is turning to green. The overall effect is a significant saving in fuel, as well as less wear and tear on the brakes. In the future, such wireless-enabled signals will make it possible to improve the flow of autonomous vehicles considerably. The traffic signals will monitor the junction to check whether conditions are safe and usher the autonomous vehicle through to the other side, while other road users without the same level of computer control are held at a stop.

Although many V2X applications were conceived for use with a dedicated RF protocol, such as WAVE, there is a place for Bluetooth and, potentially, other wireless standards like conventional Wi-Fi. Pedestrians and cyclists may signal their presence on the road with the help of their own Bluetooth devices. The messages picked up by passing vehicles can be relayed using V2V communications over WAVE to extend the range of the warnings. Roadside beacons using Bluetooth technology can pass on information about local points of interest – and this can be provided to passengers, who can subsequently look up more details on the Internet using the vehicle’s built-in Wi-Fi hotspot.

One thing seems to be clear: the world of automotive design will be a heterogeneous RF environment that takes traditional Wi-Fi technology and brings it together with WAVE, Bluetooth and GPS. It clearly makes sense to incorporate the right set of radios onto a single chipset, which will ease the integration process and ensure optimal performance is achieved. This will not only be beneficial in terms of the design of new vehicles, but will also facilitate the introduction of aftermarket V2X modules. In this way, existing cars will be able to participate in the emerging information-rich superhighway.