Marvell Blog

Featuring technology ideas and solutions worth sharing


Latest Articles

October 10th, 2017

Celebrating 20 Years of Wi-Fi – Part II

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

This is the second installment in a series of blogs covering the history of Wi-Fi®. While the first part looked at the origins of Wi-Fi, this part looks at how the technology progressed to the high-speed connection we know today.

Wireless Revolution

By the early years of the new millennium, Wi-Fi had quickly started to gain widespread popularity, as the benefits of wireless connectivity became clear. Hotspots began popping up at coffee shops, airports and hotels as businesses and consumers started to realize the potential for Wi-Fi to enable early forms of what we now know as mobile computing. Home users, many of whom were starting to get broadband Internet, were able to easily share their connections throughout the house.

Thanks to the IEEE® 802.11 working group’s efforts, a proprietary wireless protocol that was originally designed simply for connecting cash registers (see previous blog) had become the basis for a wireless networking standard that was changing the whole fabric of society.

Improving Speeds

The advent of 802.11b, in 1999, set the stage for Wi-Fi mass adoption. Its lower price point made it accessible to consumers, and its 11 Mbit/s speeds made it fast enough to replace wired Ethernet connections for enterprise users. Driven by the broadband Internet explosion of the early 2000s, 802.11b became a great success. Both consumers and businesses found wireless was a great way to easily share the newfound high-speed connections that DSL, cable and other broadband technologies gave them.

As broadband speeds became the norm, consumers’ computer usage habits changed accordingly. Higher bandwidth applications such as music/movie sharing and streaming audio started to see increasing popularity within the consumer space.

Meanwhile, in the enterprise market, wireless had even greater speed demands to contend with, as it was competing with fast local networking over Ethernet. Business use cases (such as VoIP, file sharing and printer sharing, as well as desktop virtualization) needed to work seamlessly if wireless was to be adopted.

Even in the early 2000s, the speed that 802.11b could support was far from cutting edge. On the wired side of things, 10/100 Ethernet was already a widespread standard. At 100 Mbit/s, it was almost 10 times faster than 802.11b’s nominal 11 Mbit/s speed. 802.11b’s protocol overhead meant that, in fact, the maximum achievable throughput was around 5.9 Mbit/s. In practice, as 802.11b used the increasingly popular 2.4 GHz band, speeds proved lower still. Interference from microwave ovens, cordless phones and other consumer electronics meant that real-world speeds often fell well short of the 5.9 Mbit/s mark.

802.11g

To address speed concerns, in 2003 the IEEE 802.11 working group came out with 802.11g. Though 802.11g would use the 2.4 GHz frequency band just like 802.11b, it was able to achieve speeds of up to 54 Mbit/s. Even after speed decreases due to protocol overhead, its theoretical maximum of 31.4 Mbit/s was enough bandwidth to accommodate increasingly fast household broadband speeds.

Actually, 802.11g was not the first 802.11 wireless standard to achieve 54 Mbit/s. That crown goes to 802.11a, which had done it back in 1999. However, 802.11a used the separate 5 GHz frequency band to achieve its fast speeds. While 5 GHz had the benefit of less radio interference from consumer electronics, it also meant incompatibility with 802.11b. That fact, along with more expensive equipment, meant that 802.11a was only ever popular within the business market segment and never saw proliferation into the higher volume domestic/consumer arena.

By using 2.4 GHz to reach 54 Mbit/s, 802.11g was able to achieve high speeds while retaining full backwards compatibility with 802.11b. This was crucial, as 802.11b had already established itself as the main wireless standard for consumer devices by this point. This backwards compatibility, along with hardware that was cheaper than 802.11a equipment, was a big selling point, and 802.11g soon became the new, faster wireless standard for consumer and, increasingly, even business related applications.

802.11n

Introduced in 2009, 802.11n made further speed improvements upon 802.11g and 802.11a. Operating on either the 2.4 GHz or 5 GHz frequency band (though not both simultaneously), 802.11n improved transfer efficiency through frame aggregation, and also introduced optional MIMO and 40 MHz channels – double the channel width of 802.11g.

802.11n offered significantly faster network speeds. At the low end, if it was operating in the same kind of single-antenna, 20 MHz channel width configuration as an 802.11g network, an 802.11n network could achieve 72 Mbit/s. If, in addition, the double-width 40 MHz channel was used, with multiple antennas, then data rates could be much faster – up to 600 Mbit/s (for a four antenna configuration).
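
To see where those headline figures come from, here is a minimal Python sketch (illustrative only, not part of the original article) that derives the peak 802.11n PHY rate from channel width, number of spatial streams and guard interval, using the standard 802.11n OFDM parameters of 52 data subcarriers per 20 MHz channel and 108 per 40 MHz channel.

```python
# Rough 802.11n PHY data-rate calculation (a simplified sketch; real
# rates are defined per MCS index in the 802.11n specification).

DATA_SUBCARRIERS = {20: 52, 40: 108}   # usable OFDM data subcarriers per channel width (MHz)

def phy_rate_mbps(width_mhz, spatial_streams, bits_per_symbol=6,
                  coding_rate=5/6, short_gi=True):
    """Approximate peak 802.11n PHY rate in Mbit/s."""
    symbol_time_us = 3.6 if short_gi else 4.0   # OFDM symbol duration incl. guard interval
    bits_per_ofdm_symbol = (DATA_SUBCARRIERS[width_mhz] * bits_per_symbol
                            * coding_rate * spatial_streams)
    return bits_per_ofdm_symbol / symbol_time_us  # Mbit/s

print(phy_rate_mbps(20, 1))   # ~72 Mbit/s  (single stream, 20 MHz, short guard interval)
print(phy_rate_mbps(40, 4))   # ~600 Mbit/s (four streams, 40 MHz, short guard interval)
```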

The third and final blog in this series will take us right up to the modern day and will also look at the potential of Wi-Fi in the future.

 

October 3rd, 2017

Celebrating 20 Years of Wi-Fi – Part I

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

You can’t see it, touch it, or hear it – yet Wi-Fi® has had a tremendous impact on the modern world – and will continue to do so. From our home wireless networks, to offices and public spaces, the ubiquity of high speed connectivity without reliance on cables has radically changed the way computing happens. It would not be much of an exaggeration to say that because of ready access to Wi-Fi, we are able to lead better lives – using our laptops, tablets and portable electronic goods in a far more straightforward manner, with a high degree of mobility, no longer having to worry about a complex tangle of wires tying us down.

Though it may be hard to believe, it is now two decades since the original 802.11 standard was ratified by the IEEE®. This first in a series of blogs will look at the history of Wi-Fi to see how it has overcome numerous technical challenges and evolved into the ultra-fast, highly convenient wireless standard that we know today. We will then go on to discuss what it may look like tomorrow.

Unlicensed Beginnings
While we now think of 802.11 wireless technology as predominantly connecting our personal computing devices and smartphones to the Internet, it was in fact initially invented as a means to connect up humble cash registers. In the late 1980s, NCR Corporation, a maker of retail hardware and point-of-sale (PoS) computer systems, had a big problem. Its customers – department stores and supermarkets – didn’t want to dig up their floors each time they changed their store layout.

A recent ruling that had been made by the FCC, which opened up certain frequency bands as free to use, inspired what would be a game-changing idea. By using wireless connections in the unlicensed spectrum (rather than conventional wireline connections), electronic cash registers and PoS systems could be easily moved around a store without the retailer having to perform major renovation work.

Soon after this, NCR allocated the project to an engineering team out of its Netherlands office. They were set the challenge of creating a wireless communication protocol. These engineers succeeded in developing ‘WaveLAN’, which would be recognized as the precursor to Wi-Fi. Rather than preserving this as a purely proprietary protocol, NCR could see that by establishing it as a standard, the company would be able to position itself as a leader in the wireless connectivity market as it emerged. By 1990, the IEEE 802.11 working group had been formed, based on wireless communication in unlicensed spectra.

Using what were at the time innovative spread spectrum techniques to reduce interference and improve signal integrity in noisy environments, the original incarnation of Wi-Fi was finally formally standardized in 1997. It operated with a throughput of just 2 Mbits/s, but it set the foundations of what was to come.

Wireless Ethernet
Though the 802.11 wireless standard was released in 1997, it didn’t take off immediately. Slow speeds and expensive hardware hampered its mass market appeal for quite a while – but things were destined to change. 10 Mbit/s Ethernet was the networking standard of the day. The IEEE 802.11 working group knew that if they could equal that, they would have a worthy wireless competitor. In 1999, they succeeded, creating 802.11b. This used the same 2.4 GHz ISM frequency band as the original 802.11 wireless standard, but it raised the throughput supported considerably, reaching 11 Mbits/s. Wireless Ethernet was finally a reality.

Soon after 802.11b was established, the IEEE working group also released 802.11a, an even faster standard. Rather than using the increasingly crowded 2.4 GHz band, it ran on the 5 GHz band and offered speeds up to a lofty 54 Mbits/s.

Because it occupied the 5 GHz frequency band, away from the popular (and thus congested) 2.4 GHz band, it had better performance in noisy environments; however, the higher carrier frequency also meant it had reduced range compared to 2.4 GHz wireless connectivity. Thanks to cheaper equipment and better nominal range, 802.11b proved to be the most popular wireless standard by far. But, while it was more cost effective than 802.11a, 802.11b still wasn’t priced low enough for the average consumer. Routers and network adapters would still cost hundreds of dollars.

That all changed following a phone call from Steve Jobs. Apple was launching a new line of computers at that time and wanted to make wireless networking functionality part of it. The terms set were tough – Apple expected to have the cards at a $99 price point, but of course the volumes involved could potentially be huge. Lucent Technologies, which by this stage had taken over NCR’s WaveLAN wireless business, agreed.

While it was a difficult pill to swallow initially, the Apple deal finally put Wi-Fi in the hands of consumers and pushed it into the mainstream. PC makers saw Apple computers beating them to the punch and wanted wireless networking as well. Soon, key PC hardware makers including Dell, Toshiba, HP and IBM were all offering Wi-Fi.

Microsoft also got on the Wi-Fi bandwagon with Windows XP. Working with engineers from Lucent, Microsoft made Wi-Fi connectivity native to the operating system. Users could get wirelessly connected without having to install third party drivers or software. With the release of Windows XP, Wi-Fi was now natively supported on millions of computers worldwide – it had officially made it into the ‘big time’.

This blog post is the first in a series that charts the eventful history of Wi-Fi. The second part, which is coming soon, will bring things up to date and look at current Wi-Fi implementations.

 

September 18th, 2017

Modular Networks Drive Cost Efficiencies in Data Center Upgrades

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

Exponential growth in data center usage has been responsible for driving a huge amount of investment in the networking infrastructure used to connect virtualized servers to the multiple services they now need to accommodate. To support the server-to-server traffic that virtualized data centers require, the networking spine will generally rely on high capacity 40 Gbit/s and 100 Gbit/s switch fabrics with aggregate throughputs now hitting 12.8 Tbit/s. But the ‘one size fits all’ approach being employed to develop these switch fabrics quickly leads to a costly misalignment for data center owners. They need to find ways to match the interfaces on individual storage units and server blades that have already been installed with the switches they are buying to support their scale-out plans.

The top-of-rack (ToR) switch provides one way to match the demands of the server equipment and the network infrastructure. The switch can aggregate the data from lower speed network interfaces and so act as a front-end to the core network fabric. But such switches tend to be far more complex than is actually needed – often derived from older generations of core switch fabric. They perform a level of switching that is unnecessary and, as a result, are not cost effective when they are primarily aggregating traffic on its way to the core network’s 12.8 Tbit/s switching engines. The heightened expense manifests itself not only in terms of hardware complexity and the issues of managing an extra network tier, but also in relation to power and air-conditioning. It is not unusual to find five or more fans inside each unit being used to cool the silicon switch. There is another way to support the requirements of data center operators, one that consumes far less power and costs far less, while also offering greater modularity and flexibility.

Providing a means by which to overcome the high power and cost associated with traditional ToR switch designs, the IEEE 802.1BR standard for port extenders makes it possible to implement a bridge between a core network interface and a number of port extenders that break out connections to individual edge devices. An attractive feature of this standard is the ability to allow port extenders to be cascaded, for even greater levels of modularity. As a result, many lower speed ports, of 1 Gbit/s and 10 Gbit/s, can be served by one core network port (supporting 40 Gbit/s or 100 Gbit/s operation) through a single controlling bridge device.
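
As a rough illustration of how a single controlling bridge can serve many extended ports, consider the Python sketch below. It is a hedged, conceptual model rather than Marvell’s implementation: the class name, extender names and port counts are hypothetical, and the E-channel identifier bookkeeping is simplified.

```python
# Illustrative model of 802.1BR port extension (not a real implementation):
# a single controlling bridge assigns an E-channel identifier (E-CID) to every
# extended port, including ports on cascaded extenders, so traffic from many
# low-speed edge ports can share one high-speed uplink into the core switch.

class ControllingBridge:
    def __init__(self):
        self._next_ecid = 1
        self.channels = {}          # E-CID -> (extender name, port number)

    def attach_port(self, extender, port):
        ecid = self._next_ecid
        self._next_ecid += 1
        self.channels[ecid] = (extender, port)
        return ecid

bridge = ControllingBridge()

# Two cascaded port extenders, each breaking out 48 x 10 Gbit/s edge ports
# behind one high-speed uplink (figures chosen purely for illustration).
for extender in ("pe-rack1", "pe-rack2"):
    for port in range(48):
        bridge.attach_port(extender, port)

print(len(bridge.channels), "extended ports managed through one controlling bridge")
```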

With a simpler, more modular approach, the passive intelligent port extender (PIPE) architecture that has been developed by Marvell leads to next generation rack units which no longer call for the inclusion of any fans for thermal management purposes. Reference designs have already been built that use a simple 65 W open-frame power supply to feed all the devices required, even in a high-capacity configuration with 48 ports of 10 Gbit/s. Furthermore, the equipment dispenses with the need for external management. The management requirements can move to the core 12.8 Tbit/s switch fabric, providing further savings in terms of operational expenditure. It is a demonstration of exactly how a more modular approach can greatly improve the efficiency of today’s and tomorrow’s data center implementations.

August 31st, 2017

Securing Embedded Storage with Hardware Encryption

By Jeroen Dorgelo, Director of Strategy, Marvell Storage Group

For industrial, military and a multitude of modern business applications, data security is of course incredibly important. While software based encryption often works well for consumer and some enterprise environments, the embedded systems used in industrial and military applications usually need something simpler and intrinsically more robust.

Self encrypting drives utilize on-board cryptographic processors to secure data at the drive level. This not only increases drive security automatically, but does so transparently to the user and host operating system. By automatically encrypting data in the background, they thus provide the simple to use, resilient data security that is required by embedded systems.

Embedded vs Enterprise Data Security

Both embedded and enterprise storage often require strong data security. Depending on the industry sectors involved, this is often related to securing customer (or possibly patient) privacy, military data or business data. However, that is where the similarities end. Embedded storage is often used in completely different ways from enterprise storage, leading to distinctly different approaches to how data security is addressed.

Enterprise storage usually consists of racks of networked disk arrays in a data center, while embedded storage is often simply a solid state drive (SSD) installed into an embedded computer or device. The physical security of the data center can be controlled by the enterprise, and software access control to enterprise networks (or applications) is also usually implemented. Embedded devices, on the other hand – such as tablets, industrial computers, smartphones, or medical devices – are often used in the field, in what are comparatively unsecure environments. Data security in this context has no choice but to be implemented down at the device level.

Hardware Based Full Disk Encryption

For embedded applications where access control is far from guaranteed, it is all about securing the data as automatically and transparently as possible. Full disk, hardware based encryption has shown itself to be the best way of achieving this goal.

Full disk encryption (FDE) achieves high degrees of both security and transparency by encrypting everything on a drive automatically. Whereas file based encryption requires users to choose files or folders to encrypt, and also calls for them to provide passwords or keys to decrypt them, FDE works completely transparently. All data written to the drive is encrypted, yet, once authenticated, a user can access the drive as easily as an unencrypted one. This not only makes FDE much easier to use, but also means that it is a more reliable method of encryption, as all data is automatically secured. Files that the user forgets to encrypt or doesn’t have access to (such as hidden files, temporary files and swap space) are all nonetheless automatically secured.

While FDE can be achieved through software techniques, hardware based FDE performs better, and is inherently more secure. Hardware based FDE is implemented at the drive level, in the form of a self encrypting SSD. The SSD controller contains a hardware cryptographic engine, and also stores private keys on the drive itself.
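
Conceptually, what the controller does for each logical block can be pictured with the short Python sketch below. It is only a software model for illustration: a real self-encrypting drive performs this inside its hardware crypto engine and never exposes the key. The sketch uses AES-XTS, a mode commonly chosen for storage encryption, via the third-party cryptography package; the names and 512-byte sector size are assumptions for the example.

```python
# Conceptual model of the per-block encryption a self-encrypting drive performs.
# Real SEDs do this in the controller's hardware crypto engine and the key
# never leaves the drive; this sketch just illustrates the idea using AES-XTS.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

MEDIA_KEY = os.urandom(64)          # AES-256-XTS uses a 512-bit (two-key) secret

def encrypt_block(plaintext: bytes, lba: int) -> bytes:
    # The logical block address acts as the XTS tweak, so identical data
    # stored at different LBAs produces different ciphertext.
    tweak = lba.to_bytes(16, "little")
    enc = Cipher(algorithms.AES(MEDIA_KEY), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_block(ciphertext: bytes, lba: int) -> bytes:
    tweak = lba.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(MEDIA_KEY), modes.XTS(tweak)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

block = b"patient record 0042".ljust(512, b"\x00")   # one 512-byte sector
assert decrypt_block(encrypt_block(block, lba=7), lba=7) == block
```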

Because software based FDE relies on the host processor to perform encryption, it is usually slower – whereas hardware based FDE has much lower overhead, as it can take advantage of the drive’s integrated crypto-processor. Hardware based FDE is also able to encrypt the master boot record of the drive, something that software based encryption is unable to do.

Hardware centric FDE is transparent not only to the user, but also to the host operating system. It works transparently in the background and no special software is needed to run it. Besides helping to maximize ease of use, this also means sensitive encryption keys are kept separate from the host operating system and memory, as all private keys are stored on the drive itself.

Improving Data Security

Besides providing the transparent, easy to use encryption that is now being sought, hardware-based FDE also has specific benefits for data security in modern SSDs. NAND cells have a finite service life and modern SSDs use advanced wear leveling algorithms to extend this as much as possible. Instead of overwriting the NAND cells as data is updated, write operations are constantly moved around the drive, often resulting in multiple copies of a piece of data being spread across an SSD as a file is updated. This wear leveling technique is extremely effective, but it makes file based encryption and data erasure much more difficult to accomplish, as there are now multiple copies of data to encrypt or erase.

FDE solves both of these encryption and erasure issues for SSDs. Since all data is encrypted, there are no concerns about the presence of unencrypted data remnants. In addition, since the encryption method used (generally 256-bit AES) is extremely secure, erasing the drive is as simple as erasing the private keys.
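
The cryptographic-erase idea can be illustrated with a few lines of Python (again purely a model; key handling in a real drive happens entirely inside the controller): once the media key is destroyed, every wear-leveled copy of the ciphertext becomes unreadable, so there is no need to locate and overwrite each one.

```python
# Illustration of cryptographic erase (hypothetical model; sector contents,
# LBAs and key handling are made up for the example).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xts_encrypt(key: bytes, lba: int, data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.XTS(lba.to_bytes(16, "little"))).encryptor()
    return enc.update(data) + enc.finalize()

media_key = os.urandom(64)
sector = b"sensitive field data".ljust(512, b"\x00")

# Wear leveling leaves the same logical sector at several physical locations.
nand_copies = [xts_encrypt(media_key, lba, sector) for lba in (7, 1031, 88210)]

media_key = None   # cryptographic erase: only the 512-bit key is destroyed
# Without the key, all of the scattered copies are irrecoverable ciphertext.
```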

Solving Embedded Data Security

Embedded devices often present considerable security challenges to IT departments, as these devices are often used in uncontrolled environments, possibly by unauthorized personnel. Whereas enterprise IT has the authority to implement enterprise wide data security policies and access control, it is usually much harder to implement these techniques for embedded devices situated in industrial environments or used out in the field.

The simple solution for data security in embedded applications of this kind is hardware based FDE. Self encrypting drives with hardware crypto-processors have minimal processing overhead and operate completely in the background, transparent to both users and host operating systems. Their ease of use also translates into improved security, as administrators do not need to rely on users to implement security policies, and private keys are never exposed to software or operating systems.

August 2nd, 2017

Wireless Technology Set to Enable an Automotive Revolution

By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group

The automotive industry has always been a keen user of wireless technology. In the early 1980s, Renault made it possible to lock and unlock the doors on its Fuego model utilizing a radio transmitter. Within a decade, other vehicle manufacturers embraced the idea of remote key-less entry and not long after that it became a standard feature. Now, wireless technology is about to reshape the world of driving.

The first key-less entry systems were based on infra-red (IR) signals, borrowing the technique from automatic garage door openers. But the industry swiftly moved to RF technology, in order to make it easier to use. Although each manufacturer favored its own protocol and coding system, they adopted standard low-power RF frequency bands, such as 315 MHz in the US and 433 MHz in Europe. As concerns about theft emerged, they incorporated encryption and other security features to fend off potential attacks. They have further refreshed this technology as new threats appeared, as well as adding features such as proximity detection to remove the need to even press the key-fob remote’s button.

The next stage in favor of convenience was to employ Bluetooth instead of custom radios on the sub-1 GHz frequency bands, so as to dispense with the key fob altogether. With Bluetooth, an app on the user’s smartphone can not only unlock the car doors, but also handle tasks such as starting the heater or air-conditioning to make the vehicle comfortable and ready for when the driver and passengers actually get in.

Bluetooth itself has become a key feature on many models over the past decade as automobile manufacturers have looked to open up their infotainment systems. Access to the functions on the dashboard through Bluetooth has made it possible for vehicle occupants to hook up their phone handsets easily. Initially, it was to support legal phone calls through hands-free operation, without forcing the owner to buy and install a permanent phone in the vehicle itself. But the wireless connection is just as good at relaying high-quality audio, so that passengers can listen to their favorite music (stored on portable devices). We have clearly moved a long way from the CD auto-changer located in the trunk.

Bluetooth is a prime example of the way in which RF technology, once in place, can support many different applications – with plenty of potential for use cases that have not yet been considered. Through use of a suitable relay device in the car, Bluetooth also provides the means by which to send vehicle diagnostics information to relevant smartphone apps. The use of the technology as a diagnostics gateway points to an emerging role for Bluetooth in improving the overall safety of car transportation.

But now Wi-Fi is also primed to become as ubiquitous in vehicles as Bluetooth. Wi-Fi is able to provide a more robust data pipe, thus enabling even richer applications and a tighter integration with smartphone handsets. One use case that seems destined to change the cockpit experience for users is the emergence of screen projection technologies. Through the introduction of such mechanisms it will be possible to create a seamless transition for drivers from their smartphones to their cars. This will not necessarily even need to be their own car; it could be any car that they rent from anywhere in the world.

One of the key enabling technologies for self-driving vehicles is communication. This can encompass vehicle-to-vehicle (V2V) links, vehicle-to-infrastructure (V2I) messages and, through technologies such as Bluetooth and Wi-Fi, vehicle-to-anything (V2X).

V2V provides the ability for vehicles on the road to signal their intentions to others and warn of hazards ahead. If a pothole opens up or cars have to brake suddenly to avoid an obstacle, they can send out wireless messages to nearby vehicles to let them know about the situation. Those other vehicles can then slow down or change lanes accordingly.

The key enabling technology for V2V is a form of the IEEE 802.11 Wi-Fi protocol, re-engineered for much lower latency and better reliability. IEEE 802.11p Wireless Access in Vehicular Environments (WAVE) operates in the 5.9 GHz region of the RF spectrum, and is capable of supporting data rates of up to 27 Mbit/s. One of the key additions for transportation is a scheduling feature that lets vehicles share access to wireless channels based on time. Each vehicle uses the Coordinated Universal Time (UTC) reading, usually provided by its GPS receiver, to help ensure all nearby transceivers are synchronized to the same schedule.
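
A simplified Python sketch of this UTC-aligned slotting is shown below. The alternating 50 ms control/service intervals follow the multi-channel scheme of IEEE 1609.4, which sits on top of 802.11p; treat the interval lengths and the example timestamps as illustrative assumptions rather than a normative implementation.

```python
# Simplified sketch of UTC-aligned channel scheduling in the WAVE stack.
SYNC_INTERVAL_MS = 100      # one control interval plus one service interval
CCH_INTERVAL_MS = 50        # control channel (safety messages) comes first

def active_channel(utc_ms: int) -> str:
    """Which channel every synchronized radio should be on at a given UTC instant."""
    offset = utc_ms % SYNC_INTERVAL_MS
    return "control" if offset < CCH_INTERVAL_MS else "service"

# Because every vehicle derives the same UTC reading (typically from GPS),
# nearby transceivers land on the same channel at the same time.
print(active_channel(1_507_000_000_037))   # -> control
print(active_channel(1_507_000_000_087))   # -> service
```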

A key challenge for any transceiver is the Doppler effect. On a freeway, the relative velocity of an approaching transmitter can exceed 150 mph. Such a transmitter may be in range for only a few seconds at most, making ultra-low latency crucial. But, with the underlying RF technology for V2V in place, advanced navigation applications can be deployed relatively easily and extended to deal with many other objects and even people.
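
For a sense of scale, a quick back-of-the-envelope calculation (illustrative Python, not from the original article) shows the size of the Doppler shift a 5.9 GHz receiver has to cope with at freeway closing speeds.

```python
# Back-of-the-envelope Doppler shift for a 5.9 GHz V2V link. At freeway
# closing speeds the shift is on the order of a kilohertz, which the
# receiver must track while the transmitter is only briefly in range.
CARRIER_HZ = 5.9e9
SPEED_OF_LIGHT = 3.0e8            # m/s

def doppler_shift_hz(relative_speed_mph: float) -> float:
    speed_ms = relative_speed_mph * 0.44704     # mph -> m/s
    return CARRIER_HZ * speed_ms / SPEED_OF_LIGHT

print(round(doppler_shift_hz(150)))   # ~1319 Hz at a 150 mph closing speed
```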

V2I transactions make it possible for roadside controllers to update vehicles on their status. Traffic signals, for example, can let vehicles know when they are likely to change state. Vehicles leaving the junction can relay that data to approaching cars, which may slow down in response. By slowing down, they avoid the need to stop at a red signal – and instead cross just as it is turning green. The overall effect is a significant saving in fuel, as well as less wear and tear on the brakes. In the future, such wireless-enabled signals will make it possible to improve the flow of autonomous vehicles considerably. The traffic signals will monitor the junction to check whether conditions are safe and usher the autonomous vehicle through to the other side, while other road users without the same level of computer control are held at a stop.
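
A toy calculation (hypothetical Python, with made-up speed limits) shows the kind of speed advisory that such a signal-timing message enables.

```python
# Toy "green wave" advisory of the kind a V2I timing message enables:
# given the distance to the junction and the seconds left until the light
# turns green, pick a coasting speed that arrives just as it changes,
# instead of braking to a stop. Limits and figures are illustrative only.
def advisory_speed_kmh(distance_m: float, seconds_to_green: float,
                       speed_limit_kmh: float = 50.0) -> float:
    if seconds_to_green <= 0:
        return speed_limit_kmh                  # already green: carry on
    speed_kmh = (distance_m / seconds_to_green) * 3.6
    return min(speed_kmh, speed_limit_kmh)

# 200 m from a light that turns green in 20 s: ease off to ~36 km/h
print(advisory_speed_kmh(200, 20))              # -> 36.0
```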

Although many V2X applications were conceived for use with a dedicated RF protocol, such as WAVE, there is a place for Bluetooth and, potentially, other wireless standards like conventional Wi-Fi. Pedestrians and cyclists may signal their presence on the road with the help of their own Bluetooth devices. The messages picked up by passing vehicles can be relayed using V2V communications over WAVE to extend the range of the warnings. Roadside beacons using Bluetooth technology can pass on information about local points of interest – and this can be provided to passengers, who can subsequently look up more details on the Internet using the vehicle’s built-in Wi-Fi hotspot.

One thing seems to be clear: the world of automotive design will be a heterogeneous RF environment that takes traditional Wi-Fi technology and brings it together with WAVE, Bluetooth and GPS. It clearly makes sense to incorporate the right set of radios onto one single chipset, which will ease the integration process and also ensure optimal performance is achieved. This will not only be beneficial in terms of the design of new vehicles, but will also facilitate the introduction of aftermarket V2X modules. In this way, existing cars will be able to participate in the emerging information-rich superhighway.

August 1st, 2017

Connectivity Will Drive the Cars of the Future

By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group

The growth of electronics content inside the automobile has already had a dramatic effect on the way in which vehicle models are designed and built. As a direct consequence of this, the biggest technical change is now beginning to happen – one that overturns the traditional relationship between the car manufacturer and the car owner.

With many subsystems controlled by microprocessors running software, it is now possible to alter the behavior of the vehicle, and even introduce completely new features and functionality, simply by updating that software. Tesla, the high profile maker of high performance electric vehicles, has been one of the companies pioneering this approach, releasing software and firmware updates that give existing models the ability to drive themselves. Instead of buying a car with a specific, fixed set of features, owners see their vehicles upgraded via firmware over the air (FOTA) without the need to visit a dealership.

With so many electronic subsystems now in the vehicle, high data rates are essential. Without the ability to download and program devices quickly, the car could potentially become unusable for hours at a time. On the wireless side, this requires 802.11ac Wi-Fi speeds, and very soon this will be ramped up to 802.11ax speeds that can potentially exceed gigabit-per-second data rates.
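
To put those figures in perspective, here is a rough, illustrative calculation of how long a firmware image might take to transfer at different Wi-Fi generations. The image size and sustained throughputs are assumptions chosen for the example, not guaranteed real-world rates.

```python
# Why wireless link speed matters for FOTA: rough download times for a
# 1 GiB update image at sustained throughputs loosely representative of
# 802.11n, 802.11ac and 802.11ax links (illustrative figures only).
IMAGE_BYTES = 1 * 1024**3            # 1 GiB firmware bundle

for label, mbit_per_s in [("802.11n", 150), ("802.11ac", 433), ("802.11ax", 1200)]:
    seconds = IMAGE_BYTES * 8 / (mbit_per_s * 1e6)
    print(f"{label:9s} ~{seconds:5.1f} s")
```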

Automotive Ethernet that can support Gigabit speeds is also now being fitted, so that updates can be delivered as fast as possible to the many electronic control units (ECUs) around the car. The same Ethernet backbone is proving just as essential for day-to-day use. The network carries high resolution, real-time data from cameras, LiDAR, radar, tire pressure monitors and various other sensors fitted around the body, each of which is likely to have its own dedicated microprocessor. The result is a high performance computer based on distributed intelligence. And this, in turn, can tap into the distributed intelligence now being deployed in the cloud.

The beauty of distributed intelligence is that it is an architecture that can support applications that in many cases have not even been thought of yet. The same wireless communication networks that provide the over-the-air updates can relay real-time information on traffic patterns in the vicinity, weather data, disruptions due to accidents and many other pieces of data that the onboard computers can then use to plan the journey and make it safer. This rapid shift towards high speed intra- and inter-vehicle connectivity, and the resulting vehicle-to-anything (V2X) communication capabilities, will enable applications that would have been considered pure fantasy just a few years ago.

The V2X connectivity can stop traffic lights from being an apparent obstacle and turn them into devices that provide the vehicle with hints to save fuel. If the lights send out signals on their stop-go cycle, approaching vehicles can use them to determine whether it is better to decelerate and arrive just in time for them to turn green, instead of braking all the way to a stop. Sensors at the junction can also warn of hazards that the car then flags up to the driver. When the vehicle is able to run autonomously, it can take care of such actions itself. Similarly, cars can report to each other when they are planning to change lanes in order to leave the freeway, or when they see a slow-moving vehicle ahead and need to decelerate. The result is considerably smoother braking patterns that avoid the logjam effect we so often see on today’s crowded roads. The enablement of such applications will require multiple radios in the vehicle, which will need to work cooperatively in a fail-safe manner.

Such connectivity will also give OEMs unprecedented access to real-time diagnostic data, which a car could be uploading opportunistically to the cloud for analysis purposes. This will provide information that could lead to customized maintenance services that could be planned in advance, thereby cutting down diagnostic time at the workshop and meaning that technical problems are preemptively dealt with, rather than waiting for them to become more serious over time.

There is no need for automobile manufacturers to build any of these features into their vehicle models today. As many computations can be offloaded to servers in the cloud, the key to unlocking advanced functionality is not wholly dependent on what is present in the car itself. The fundamental requirement is access to an effective means of communications, and that is available right now through high speed Ethernet within the vehicle plus Wi-Fi and V2X-compatible wireless for transfers going beyond the chassis. Both can be supplied so that they are compliant with the AEC-Q100 automotive standard – thus ensuring quality and reliability. With those tools in place, we don’t need to see all the way ahead to the future. We just know we have the capability to get there.

July 17th, 2017

Rightsizing Ethernet

By George Hervey, Principal Architect, Marvell

Implementation of cloud infrastructure is occurring at a phenomenal rate, outpacing Moore’s Law. Annual growth is believed to be 30x, and as much as 100x in some cases. In order to keep up, cloud data centers are having to scale out massively, with hundreds, or even thousands, of servers becoming a common sight.

At this scale, networking becomes a serious challenge. More and more switches are required, thereby increasing capital costs, as well as management complexity. To tackle the rising expense issues, network disaggregation has become an increasingly popular approach. By separating the switch hardware from the software that runs on it, vendor lock-in is reduced or even eliminated. OEM hardware could be used with software developed in-house, or from third party vendors, so that cost savings can be realized.

Though network disaggregation has tackled the immediate problem of hefty capital expenditure, it must be recognized that operating expenditure is still high. The number of managed switches basically stays the same. To reduce operating costs, the issue of network complexity also has to be tackled.

Network Disaggregation
Almost every application we use today, whether at home or in the work environment, connects to the cloud in some way. Our email providers, mobile apps, company websites, virtualized desktops and servers, all run on servers in the cloud.

For these cloud service providers, this incredible growth has been both a blessing and a challenge. As demand increases, Moore’s law has struggled to keep up. Scaling data centers today involves scaling out – buying more compute and storage capacity, and subsequently investing in the networking to connect it all. The cost and complexity of managing everything can quickly add up.

Until recently, networking hardware and software had often been tied together. Buying a switch, router or firewall from one vendor would require you to run their software on it as well. Larger cloud service providers saw an opportunity. These players often had no shortage of skilled software engineers. At the massive scales they ran at, they found that buying commodity networking hardware and then running their own software on it would save them a great deal in terms of Capex.

This disaggregation of the software from the hardware may have been financially attractive, however it did nothing to address the complexity of the network infrastructure. There was still a great deal of room to optimize further.

802.1BR
Today’s cloud data centers rely on a layered architecture, often in a fat-tree or leaf-spine structural arrangement. Rows of racks, each with top-of-rack (ToR) switches, are then connected to upstream switches on the network spine. The ToR switches are, in fact, performing simple aggregation of network traffic. Using relatively complex, energy consuming switches for this task results in a significant capital expense, as well as management costs and no shortage of headaches.

Through the port extension approach, outlined within the IEEE 802.1BR standard, the aim has been to streamline this architecture. By replacing ToR switches with port extenders, port connectivity is extended directly from the rack to the upstream switch. Management is consolidated into the smaller number of switches located at the upper layer network spine, eliminating the dozens or possibly hundreds of switches at the rack level.
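
The scale of that consolidation can be shown with some simple, hypothetical arithmetic; the rack and spine counts below are made up purely for illustration.

```python
# Illustrative arithmetic (hypothetical data center) for what 802.1BR port
# extension does to the number of individually managed switches: the port
# extenders are managed through their controlling bridge, so only the spine
# layer remains as a management point.
racks = 200
spine_switches = 8

managed_before = racks + spine_switches      # one ToR switch per rack, plus the spine
managed_after = spine_switches               # port extenders need no separate management

print(f"managed switches: {managed_before} -> {managed_after}")
```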

The reduction in switch management complexity of the port extender approach has been widely recognized, and various network switches on the market now comply with the 802.1BR standard. However, not all the benefits of this standard have actually been realized.

The Next Step in Network Disaggregation
Though many of the port extenders on the market today fulfill 802.1BR functionality, they do so using legacy components. Instead of being optimized for 802.1BR itself, they rely on traditional switch silicon. This, as a consequence, limits the potential cost and power benefits that the new architecture offers.

Designed from the ground up for 802.1BR, Marvell’s Passive Intelligent Port Extender (PIPE) offering is specifically optimized for this architecture. PIPE is interoperable with 802.1BR compliant upstream bridge switches from all the industry’s leading OEMs. It enables fan-less, cost efficient port extenders to be deployed, which provide upfront savings as well as ongoing operational savings for cloud data centers. Power consumption is lowered and switch management complexity is reduced by an order of magnitude.

The first wave in network disaggregation was separating switch software from the hardware that it ran on. 802.1BR’s port extender architecture is bringing about the second wave, where ports are decoupled from the switches which manage them. The modular approach to networking discussed here will result in lower costs, reduced energy consumption and greatly simplified network management.