Marvell Blog

Featuring technology ideas and solutions worth sharing

Latest Articles

October 26th, 2017

Marvell Demonstrates Powerful Security Software & Implementation Support at OpenWrt Summit via Collaboration with Sentinel & Sartura

By Maen Suleiman, Senior Software Product Line Manager at Marvell

Thanks to its collaboration with leading players in the OpenWrt and security space, Marvell will be able to show attendees of the OpenWrt Summit (Prague, Czech Republic, 26-27th October) new developments around its Marvell ARMADA® multi-core processors. Together with contributors Sartura and Sentinel, these developments will be demonstrated on Marvell’s portfolio of networking community boards built around the 64-bit Arm® based Marvell ARMADA processors, running the increasingly popular and highly versatile OpenWrt operating system alongside the latest advances in security software. We expect these new offerings to help engineers mitigate the major challenges they face when constructing next-generation customer-premises equipment (CPE) and uCPE platforms.

On display at the event at both the Sentinel and Sartura booths will be examples of the Marvell MACCHIATObin™ board (with a quad-core ARMADA 8040 that can deliver up to 2GHz operation) and the Marvell ESPRESSObin™ board (with a dual-core ARMADA 3700 lower power processor running at 1.2GHz).

The boards located at the Sartura booth will demonstrate the open source OpenWrt offering for the Marvell MACCHIATObin/ESPRESSObin platforms and will show how engineers can benefit from the company’s OpenWrt integration capabilities. These capabilities have proven invaluable in helping engineers expedite their development projects and fully realize the goals originally set for them. The Sartura team can take engineers’ original CPE designs incorporating ARMADA and provide the production-level software needed for inclusion in end products.

MACCHIATObin and ESPRESSObin boards will also be demonstrated at the Sentinel booth, where they will feature highly optimized security software. Using this security software, companies looking to employ ARMADA based hardware in their designs will be able to ensure that they have ample protection against the threat posed by malware and harmful files, such as WannaCry and Nyetya ransomware, as well as the Petya malware. This protection relies upon Sentinel’s File Validation Service (FVS), which inspects all HTTP, POP and IMAP files as they pass through the device toward the client; any files deemed to be malicious are then blocked. This security technology is very well suited to CPE networking infrastructure and edge computing, as well as IoT deployments. Sentinel’s FVS technology can also be implemented on vCPE/uCPE as a security virtual network function (VNF), in addition to native implementation on physical CPEs, providing similar protection levels thanks to its extremely lightweight architecture and very low latency. FVS is responsible for identifying download requests and subsequently analyzing the data being downloaded. The software package can run on any Linux-based embedded operating system for CPE and NFV devices that meets the minimum hardware requirements and offers the necessary features.
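Sentinel’s FVS itself is proprietary, but the gateway-side pattern described above (intercept a download, inspect the payload before it reaches the client, and block anything flagged) can be illustrated with a minimal sketch. The Python below is purely conceptual; the function name and the static hash blocklist are hypothetical stand-ins, not Sentinel’s actual detection logic.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests for known-malicious files.
# A real service such as FVS uses far richer analysis than a static hash list.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder digest, not a real malware signature
}

def validate_download(payload: bytes) -> bool:
    """Return True if the payload may be forwarded to the client, False to block it."""
    digest = hashlib.sha256(payload).hexdigest()
    return digest not in KNOWN_BAD_SHA256

# A gateway proxy would run a check like this on each completed HTTP/POP/IMAP transfer.
sample = b"...downloaded file contents..."
print("forward" if validate_download(sample) else "block")
```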

Through collaborations such as those described above, Marvell is building an extensive ecosystem around its ARMADA products. As a result, Marvell will be able to support future development of secure, high performance CPE and uCPE/vCPE systems that exhibit much greater differentiation.

October 20th, 2017

Long-Term Prospects for Ethernet in the Automotive Sector

By Tim Lau, Senior Director Automotive Product Management, Marvell

The automobile is encountering possibly the biggest changes in its technological progression since the invention of the internal combustion engine nearly 150 years ago. Increasing levels of autonomy will reshape how we think about cars and car travel. It won’t be just a matter of getting from point A to point B while doing very little else — we will be able to keep on doing what we want while in the process of getting there.

As it is, the modern car already incorporates large quantities of complex electronics – making sure the ride is comfortable, the engine runs smoothly and efficiently, and providing infotainment for the driver and passengers. In addition, the features and functionality being incorporated into vehicles we are now starting to buy are no longer of a fixed nature. It is increasingly common for engine control and infotainment systems to require updates over the course of the vehicle’s operational lifespan.

It was, in fact, the software update that first brought Ethernet connectivity into the vehicle domain. Leading automotive brands, such as BMW and VW, found they could dramatically increase the speed of uploads performed by mechanics at service centers by installing small Ethernet networks into the chassis of their vehicle models instead of trying to use the established, but much slower, Controller Area Network (CAN) bus. As a result, transfer times were cut from hours to minutes.
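The scale of that improvement is easy to see with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions (a 1 GB software image, a 500 kbit/s CAN link and a 100 Mbit/s automotive Ethernet link), not measurements from any particular vehicle.

```python
# Rough comparison of software-update transfer times over CAN vs. automotive Ethernet.
# Image size and usable link rates are illustrative assumptions, not vehicle data.
IMAGE_BITS = 1e9 * 8        # 1 GB software image
CAN_BPS = 500e3             # classical high-speed CAN
ETHERNET_BPS = 100e6        # 100 Mbit/s automotive Ethernet

can_hours = IMAGE_BITS / CAN_BPS / 3600
eth_minutes = IMAGE_BITS / ETHERNET_BPS / 60

print(f"CAN:      ~{can_hours:.1f} hours")      # ~4.4 hours
print(f"Ethernet: ~{eth_minutes:.1f} minutes")  # ~1.3 minutes
```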

As an increasing number of upgradeable Electronic Control Units (ECUs) have appeared (thereby putting greater strain on existing in-vehicle networking technology), the Ethernet network has itself expanded. In response, the semiconductor industry has developed solutions that have made the networking standard, which was initially developed for the relatively electrically clean environment of the office, much more robust and suitable for the stringent requirements of automobile manufacturers. The CAN and Media Oriented Systems Transport (MOST) buses have persisted as the main carriers of real-time information for in-vehicle electronics – although, now, they are beginning to fade as Ethernet evolves into a role as the primary network inside the car, being used for both real-time communications and updating tasks.

In an environment where weight savings are crucial to improving fuel economy, the ability to have communications run over a single network (especially one that needs just a pair of relatively light copper cables) is a huge operational advantage. In addition, a small connector footprint is vital in the context of the increasing deployment of sensors (such as cameras, radar and LiDAR transceivers), which are now being mounted all around the car for driver assistance and semi-autonomous driving purposes. This is supported by the adoption of unshielded twisted-pair cabling.

Image sensing, radar and LiDAR functions will all produce copious amounts of data. So data-transfer capacity is going to be a critical element of in-vehicle Ethernet networks, now and into the future. The industry has responded quickly by first delivering 100 Mbit/s transceivers and following up with more capacious standards-compliant 1000 Mbit/s offerings.

But providing more bandwidth is simply not enough on its own. So that car manufacturers do not need to sacrifice the real-time behavior necessary for reliable control, the relevant international standards committees have developed protocols to guarantee the timely delivery of data. Time-Sensitive Networking (TSN) provides applications with the ability to use reserved bandwidth on virtual channels in order to ensure delivery within a predictable timeframe. Less important traffic can make use of the best-effort service of conventional Ethernet with the remaining unreserved bandwidth.
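One concrete mechanism TSN uses for bandwidth reservation is the credit-based shaper defined in IEEE 802.1Qav. The sketch below derives the shaper parameters for a single reserved traffic class on a 1 Gbit/s port using the commonly published CBS formulas; the port rate, reserved bandwidth and frame sizes are assumptions chosen purely for illustration.

```python
# Credit-based shaper (IEEE 802.1Qav) parameter sketch for one reserved traffic class.
# Port rate, reserved bandwidth and frame sizes are illustrative assumptions.
PORT_RATE = 1_000_000_000        # 1 Gbit/s link
reserved_bw = 100_000_000        # bandwidth reserved for the time-sensitive class (bit/s)
max_frame = 1522 * 8             # largest frame of the reserved class (bits)
max_interfering = 1522 * 8       # largest best-effort frame that can delay it (bits)

idle_slope = reserved_bw                                  # credit gained while waiting to send
send_slope = idle_slope - PORT_RATE                       # credit spent while transmitting
hi_credit = max_interfering * idle_slope / PORT_RATE      # worst-case credit build-up (bits)
lo_credit = max_frame * send_slope / PORT_RATE            # worst-case credit deficit (bits)

print(f"idleSlope = {idle_slope} bit/s, sendSlope = {send_slope} bit/s")
print(f"hiCredit  = {hi_credit:.0f} bits, loCredit = {lo_credit:.0f} bits")
```

Frames in the reserved class are paced at the idle slope (100 Mbit/s in this example), leaving roughly 900 Mbit/s of the port for best-effort traffic.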

The industry’s more forward-thinking semiconductor vendors, Marvell among them, have further enhanced real-time performance with features such as Deep Packet Inspection (DPI), employing Ternary Content-Addressable Memory (TCAM), in their automotive-optimized Ethernet switches. The DPI mechanism makes it possible for hardware to look deep into each packet as it arrives at a switch input and instantly decide exactly how the message should be handled. This packet inspection supports real-time debugging by trapping messages of a certain type, and markedly reduces application latency by avoiding processor intervention.
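Conceptually, a TCAM entry is a value plus a "don't care" mask that hardware compares against every arriving header in parallel, with the first (highest-priority) hit selecting an action. The Python below is only a sequential software model of that behaviour; the rule set, field layout and action names are hypothetical.

```python
# Software model of TCAM-style matching: each entry is (value, care_mask, action).
# A bit set in the mask must match exactly; cleared bits are "don't care".
# Hardware evaluates all entries in parallel; this loop is only a functional model.
RULES = [
    (0x88F70000, 0xFFFF0000, "trap_to_cpu"),     # e.g. trap one EtherType for debugging
    (0x00000800, 0x0000FFFF, "priority_queue"),  # e.g. steer one traffic type to a fast queue
]

def tcam_lookup(key: int, rules=RULES) -> str:
    for value, mask, action in rules:            # first (highest-priority) hit wins
        if (key & mask) == (value & mask):
            return action
    return "forward_normally"

print(tcam_lookup(0x88F71234))   # -> trap_to_cpu
print(tcam_lookup(0xABCD0800))   # -> priority_queue
```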

Support for remote management frames is another significant protocol innovation in automotive Ethernet. These frames make it possible for a system controller to control the switch state directly. For example, a system controller can automatically power down I/O ports when they are not needed – a feature that preserves precious battery life.

The result of these adaptations to the core Ethernet standard, together with the increased resilience it now delivers, is an expansive feature set that is well positioned for the ongoing transformation of the car from a mere mode of transportation into the data-rich, autonomous mobile platform it is envisaged to become.

October 19th, 2017

Celebrating 20 Years of Wi-Fi – Part III

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

Standardized in 1997, Wi-Fi has changed the way that we compute. Today, almost every one of us uses a Wi-Fi connection on a daily basis, whether it’s for watching a show on a tablet at home, using our laptops at work, or even transferring photos from a camera. Millions of Wi-Fi-enabled products are being shipped each week, and it seems this technology is constantly finding its way into new device categories.

Since its humble beginnings, Wi-Fi has progressed at a rapid pace. While the initial standard allowed for just 2 Mbit/s data rates, today’s Wi-Fi implementations support speeds on the order of gigabits per second. This last installment in our three-part blog series covering the history of Wi-Fi looks at what is next for the wireless standard.

Gigabit Wireless

The latest 802.11 wireless technology to be adopted at scale is 802.11ac. It extends 802.11n, enabling improvements specifically in the 5 GHz band, with 802.11n technology used in the 2.4 GHz band for backwards compatibility.

By sticking to the 5 GHz band, 802.11ac is able to benefit from a huge 160 MHz channel bandwidth, which would be impossible in the already crowded 2.4 GHz band. In addition, beamforming and support for up to eight MIMO spatial streams raise the speeds that can be supported. Depending on configuration, data rates can range from a minimum of 433 Mbit/s to multiple gigabits per second in cases where both the router and the end-user device have multiple antennas.
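Those headline figures follow directly from the 802.11ac PHY arithmetic: data subcarriers × bits per subcarrier × coding rate ÷ symbol time, multiplied by the number of spatial streams. The sketch below reproduces the commonly quoted numbers, assuming 256-QAM, rate-5/6 coding and a short guard interval.

```python
# 802.11ac (VHT) PHY data-rate arithmetic.
# Assumes 256-QAM (8 bits per subcarrier), rate-5/6 coding and a short guard interval.
SYMBOL_TIME = 3.6e-6                # seconds per OFDM symbol with short GI
BITS_PER_SUBCARRIER = 8 * 5 / 6     # 256-QAM with rate-5/6 coding

def vht_rate_mbps(data_subcarriers: int, spatial_streams: int) -> float:
    return data_subcarriers * BITS_PER_SUBCARRIER * spatial_streams / SYMBOL_TIME / 1e6

print(f"80 MHz, 1 stream:   {vht_rate_mbps(234, 1):.0f} Mbit/s")         # ~433 Mbit/s
print(f"160 MHz, 8 streams: {vht_rate_mbps(468, 8) / 1000:.2f} Gbit/s")  # ~6.93 Gbit/s
```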

If that’s not fast enough, the even more cutting-edge 802.11ad standard (which is now starting to appear on the market) uses 60 GHz ‘millimeter wave’ frequencies to achieve data rates of up to 7 Gbit/s, even without MIMO. The major catch is that at 60 GHz frequencies, wireless range and penetration are greatly reduced.

Looking Ahead

Now that we’ve achieved gigabit speeds, what’s next? Besides high speeds, the IEEE 802.11 working group has recognized that low-speed, power-efficient communication is in fact also an area with a great deal of potential for growth. While Wi-Fi has traditionally been a relatively power-hungry standard, upcoming protocols will have attributes that allow Wi-Fi to target areas like the Internet of Things (IoT) market with much more energy-efficient communication.

20 Years and Counting

Although it has been around for two whole decades as a standard, Wi-Fi has managed to constantly evolve and keep up with the times. From the dial-up era to broadband adoption, to smartphones and now as we enter the early stages of IoT, Wi-Fi has kept on developing new technologies to adapt to the needs of the market. If history can be used to give us any indication, then it seems certain that Wi-Fi will remain with us for many years to come.

October 17th, 2017

Unleashing the Potential of Flash Storage with NVMe

By Jeroen Dorgelo, Director of Strategy, Marvell Storage Group

The dirty little secret of flash drives today is that many of them are running on yesterday’s interfaces. While SATA and SAS have undergone several iterations since they were first introduced, they are still based on decades-old concepts and were initially designed with rotating disks in mind. These legacy protocols are bottlenecking the potential speeds possible from today’s SSDs.

NVMe is the latest storage interface standard designed specifically for SSDs. With its massively parallel architecture, it enables the full performance capabilities of today’s SSDs to be realized. Because of price and compatibility, NVMe has taken a while to see uptake, but now it is finally coming into its own.

Serial Attached Legacy

Currently, SATA is the most common storage interface. Whether it is a hard drive or the increasingly common flash storage, chances are it is connected through a SATA interface. The latest generation of SATA – SATA III – has a 600 MB/s bandwidth limit. While this is adequate for day-to-day consumer applications, it is not enough for enterprise servers. Even I/O-intensive consumer use cases, such as video editing, can run into this limit.

The SATA standard was originally released in 2000 as a serial-based successor to the older PATA standard, a parallel interface. SATA uses the advanced host controller interface (AHCI) which has a single command queue with a depth of 32 commands. This command queuing architecture is well-suited to conventional rotating disk storage, though more limiting when used with flash.

Whereas SATA is the standard storage interface for consumer drives, SAS is much more common in the enterprise world. Released originally in 2004, SAS is also a serial replacement for an older parallel standard, SCSI. Designed for enterprise applications, SAS storage is usually more expensive to implement than SATA, but it has significant advantages over SATA for data center use – such as longer cable lengths, multipath I/O and better error reporting. SAS also has a higher bandwidth limit of 1200 MB/s.

Just like SATA, SAS has a single command queue, although its queue depth reaches 254 commands instead of 32. While the larger command queue and higher bandwidth limit make it better performing than SATA, SAS is still far from being the ideal flash interface.

NVMe – Massive Parallelism

Introduced in 2011, NVMe was designed from the ground up to address the needs of flash storage. Developed by a consortium of storage companies, its key objective is specifically to overcome the bottlenecks on flash performance imposed by SATA and SAS.

Whereas SATA is restricted to 600 MB/s and SAS to 1200 MB/s (as mentioned above), NVMe runs over the PCIe bus and its bandwidth is theoretically limited only by the PCIe bus speed. With current PCIe standards providing 1 GB/s or more per lane, and PCIe connections generally offering multiple lanes, bus speed almost never represents a bottleneck for NVMe-based SSDs.
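To put numbers on that, the sketch below compares the SATA III and SAS limits with the approximate usable bandwidth of a PCIe 3.0 link after 128b/130b encoding; the x4 lane count is simply the typical configuration for an NVMe SSD.

```python
# Approximate usable bandwidth: SATA III and SAS vs. PCIe 3.0 links typical of NVMe SSDs.
SATA3_MBPS = 600
SAS_MBPS = 1200

PCIE3_GTPS = 8e9                                  # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130                              # 128b/130b line encoding
lane_mbps = PCIE3_GTPS * ENCODING / 8 / 1e6       # ~985 MB/s usable per lane

for lanes in (1, 4):
    total = lane_mbps * lanes
    print(f"PCIe 3.0 x{lanes}: ~{total:,.0f} MB/s "
          f"({total / SATA3_MBPS:.1f}x SATA III, {total / SAS_MBPS:.1f}x SAS)")
```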

NVMe is designed to deliver massive parallelism, offering 64,000 command queues, each with a queue depth of 64,000 commands. This parallelism fits in well with the random access nature of flash storage, as well as the multi-core, multi-threaded processors in today’s computers. NVMe’s protocol is streamlined, with an optimized command set that does more in fewer operations compared to AHCI. IO operations often need fewer commands than with SATA or SAS, allowing latency to be reduced. For enterprise customers, NVMe also supports many enterprise storage features, such as multi-path IO and robust error reporting and management.
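At the protocol level, each NVMe queue is a pair of ring buffers in host memory: a submission queue the driver writes commands into and a completion queue the controller posts results to, with doorbell registers signalling new entries. The toy model below is only a conceptual sketch of that structure (a real driver manages DMA, interrupts and many queue pairs in parallel).

```python
from collections import deque

# Toy model of one NVMe submission/completion queue pair. A real controller supports
# up to 64,000 such pairs, each up to 64,000 entries deep; AHCI offers one 32-entry queue.
class QueuePair:
    def __init__(self, depth: int = 64):
        self.sq = deque()            # submission queue: host -> controller
        self.cq = deque()            # completion queue: controller -> host
        self.depth = depth

    def submit(self, command: dict) -> None:
        if len(self.sq) >= self.depth:
            raise RuntimeError("submission queue full")
        self.sq.append(command)      # in hardware: write the entry, then ring the SQ doorbell

    def controller_process(self) -> None:
        while self.sq:               # controller consumes commands and posts completions
            cmd = self.sq.popleft()
            self.cq.append({"cid": cmd["cid"], "status": "success"})

    def reap_completions(self):
        while self.cq:
            yield self.cq.popleft()  # in hardware: read the entry, then ring the CQ doorbell

qp = QueuePair()
qp.submit({"cid": 1, "opcode": "read", "lba": 0, "nblocks": 8})
qp.controller_process()
print(list(qp.reap_completions()))
```

Because each CPU core can own its own queue pair, cores can submit and complete I/O without contending for a single shared queue, which is where much of NVMe's latency advantage comes from.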

Pure speed and low latency, plus the ability to deal with high IOPS, have made NVMe SSDs a hit in enterprise data centers. Companies that particularly value low latency and high IOPS, such as high-frequency trading firms and database and web application hosting companies, have been some of the first and most avid endorsers of NVMe SSDs.

Barriers to Adoption

While NVMe is high performance, historically speaking it has also been considered relatively high cost. This cost has negatively affected its popularity in the consumer-class storage sector. Relatively few operating systems supported NVMe when it first came out, and its high price made it less attractive for ordinary consumers, many of whom could not fully take advantage of its faster speeds anyway.

However, all this is changing. NVMe prices are coming down and, in some cases, achieving price parity with SATA drives. This is due not only to market forces but also to new innovations, such as DRAM-less NVMe SSDs.

As DRAM is a significant bill of materials (BoM) cost for SSDs, DRAM-less SSDs are able to achieve lower, more attractive price points. Since NVMe 1.2, host memory buffer (HMB) support has allowed DRAM-less SSDs to borrow host system memory as the SSD’s DRAM buffer for better performance. DRAM-less SSDs that take advantage of HMB support can achieve performance similar to that of DRAM-based SSDs, while simultaneously saving cost, space and energy.

NVMe SSDs are also more power-efficient than ever. While the NVMe protocol itself is already efficient, the PCIe link it runs over can consume significant levels of idle power. Newer NVMe SSDs support highly efficient, autonomous sleep state transitions, which allow them to achieve energy consumption on par with, or lower than, SATA SSDs.

All this means that NVMe is more viable than ever for a variety of use cases: large data centers can save on capital expenditures thanks to lower-cost SSDs and on operating expenditures as a result of lower power consumption, while power-sensitive mobile and portable applications such as laptops, tablets and smartphones can now consider using NVMe as well.

Addressing the Need for Speed

While the need for speed is well recognized in enterprise applications, is the speed offered by NVMe actually needed in the consumer world? For anyone who has ever installed more memory, bought a larger hard drive (or SSD), or ordered a faster Internet connection, the answer is obvious.

Today’s consumer use cases have generally not been designed to exceed the limits of SATA drives, in large part because SATA is still the most common interface for consumer storage. Even so, today’s video recording and editing, gaming and file server applications are already pushing the limits of consumer SSDs, and tomorrow’s use cases are only destined to push them further. With NVMe now achieving price points that are comparable with SATA, there is no reason not to build future-proof storage today.

October 11th, 2017

Bringing IoT intelligence to the enterprise edge by supporting Google Cloud IoT Core Public Beta on ESPRESSObin and MACCHIATObin community platforms

By Aviad Enav Zagha, Sr. Director Embedded Processors Product Line Manager, Networking Group at Marvell

Though the projections made by market analysts still differ to a considerable degree, there is little doubt about the huge future potential that implementation of Internet of Things (IoT) technology has within an enterprise context. It is destined to lead to billions of connected devices being in operation, all sending captured data back to the cloud, from which analysis can be undertaken or actions initiated. This will make existing business/industrial/metrology processes more streamlined and allow a variety of new services to be delivered.

With large numbers of IoT devices to deal with in any given enterprise network, the challenges of efficiently and economically managing them all without any latency issues, and ensuring that elevated levels of security are upheld, are going to prove daunting. In order to put the least possible strain on cloud-based resources, we believe the best approach is to move some intelligence out of the core and place it at the enterprise edge, rather than following a purely centralized model. This arrangement places computing functionality much nearer to where the data is being acquired and makes a response to it considerably easier. IoT devices will then have a local edge hub that can reduce the overhead of real-time communication over the network. Rather than relying on cloud servers far away from the connected devices to take care of the ‘heavy lifting’, these activities can be done closer to home. Deterministic operation is maintained due to lower latency, bandwidth is conserved (thus saving money), and the likelihood of data corruption or security breaches is dramatically reduced.

Sensors and data collectors in the enterprise, industrial and smart city segments are expected to generate more than 1 GB of information per day, some of it needing a response within a matter of seconds. Therefore, in order for the network to accommodate this large amount of data, computing functionality will migrate from the cloud to the network edge, forming a new edge computing market.

In order to accelerate the widespread propagation of IoT technology within the enterprise environment, Marvell now supports the multifaceted Google Cloud IoT Core platform. Cloud IoT Core is a fully managed service mechanism through which the management and secure connection of devices can be accomplished on the large scales that will be characteristic of most IoT deployments.

Through its IoT enterprise edge gateway technology, Marvell is able to provide the networking and compute capabilities required (as well as the prospect of localized storage) to act as a mediator between the connected devices in the network and the related cloud functions. By providing the control element needed, as well as collecting real-time data from IoT devices, the IoT enterprise gateway technology serves as a key consolidation point for interfacing with the cloud, and it also has the ability to temporarily control managed devices if an event occurs that makes cloud services unavailable. In addition, the IoT enterprise gateway can perform the role of a proxy manager for lightweight, rudimentary IoT devices that (in order to keep power consumption and unit cost down) may not possess any intelligence of their own.

Through the introduction of advanced ARM®-based community platforms, Marvell is able to facilitate enterprise implementations using Cloud IoT Core. The recently announced Marvell MACCHIATObin™ and Marvell ESPRESSObin™ community boards support open source applications, local storage and networking facilities. At the heart of each of these boards is Marvell’s high performance ARMADA® system-on-chip (SoC), which supports the Google Cloud IoT Core Public Beta.

Via Cloud IoT Core, along with other related Google Cloud services (including Pub/Sub, Dataflow, Bigtable, BigQuery, Data Studio), enterprises can benefit from an all-encompassing IoT solution that addresses the collection, processing, evaluation and visualization of real-time data in a highly efficient manner. Cloud IoT Core features certificate-based authentication and transport layer security (TLS), plus an array of sophisticated analytical functions.
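As a reference point for that certificate-based authentication, a device (or a gateway acting as its proxy) typically connects to Cloud IoT Core's MQTT bridge over TLS and presents a short-lived JWT signed with its private key instead of a password. The sketch below follows the pattern of Google's published samples, using the paho-mqtt and PyJWT libraries; the project, region, registry and device IDs and the key path are placeholders to substitute with your own registry's values.

```python
import datetime

import jwt                       # PyJWT
import paho.mqtt.client as mqtt

# Placeholder identifiers -- substitute the values from your own Cloud IoT Core registry.
PROJECT, REGION = "my-project", "us-central1"
REGISTRY, DEVICE = "my-registry", "espressobin-01"
PRIVATE_KEY_FILE, ALGORITHM = "rsa_private.pem", "RS256"

def create_jwt():
    """Build the short-lived JWT used as the MQTT password (audience is the project ID)."""
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": PROJECT}
    with open(PRIVATE_KEY_FILE, "r") as key_file:
        return jwt.encode(claims, key_file.read(), algorithm=ALGORITHM)

# Cloud IoT Core expects this exact client ID format; the username field is ignored.
client_id = (f"projects/{PROJECT}/locations/{REGION}"
             f"/registries/{REGISTRY}/devices/{DEVICE}")
client = mqtt.Client(client_id=client_id)
client.username_pw_set(username="unused", password=create_jwt())
client.tls_set()                                   # TLS is mandatory on the MQTT bridge
client.connect("mqtt.googleapis.com", 8883)

# Publish one telemetry reading to the device's default events topic.
client.publish(f"/devices/{DEVICE}/events", '{"temperature": 21.5}', qos=1)
client.loop(timeout=2.0)
```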

Over time, the enterprise edge is going to become more intelligent. Consequently, mediation between IoT devices and the cloud will be needed, as will cost-effective processing and management. With the combination of Marvell’s proprietary IoT gateway technology and Google Cloud IoT Core, it is now possible to migrate a portion of network intelligence to the enterprise edge, leading to various major operational advantages.

Please visit MACCHIATObin Wiki and ESPRESSObin Wiki for instructions on how to connect to Google’s Cloud IoT Core Public Beta platform.

October 10th, 2017

Celebrating 20 Years of Wi-Fi – Part II

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

This is the second installment in a series of blogs covering the history of Wi-Fi®. While the first part looked at the origins of Wi-Fi, this part will look at how the technology has progressed to the high speed connection we know today.

Wireless Revolution

By the early years of the new millennium, Wi-Fi had quickly started to gain widespread popularity, as the benefits of wireless connectivity became clear. Hotspots began popping up at coffee shops, airports and hotels as businesses and consumers started to realize the potential for Wi-Fi to enable early forms of what we now know as mobile computing. Home users, many of whom were starting to get broadband Internet, were able to easily share their connections throughout the house.

Thanks to the IEEE® 802.11 working group’s efforts, a proprietary wireless protocol that was originally designed simply for connecting cash registers (see previous blog) had become the basis for a wireless networking standard that was changing the whole fabric of society.

Improving Speeds

The advent of 802.11b, in 1999, set the stage for Wi-Fi mass adoption. Its cheaper price point made it accessible for consumers, and its 11 Mbit/s speeds made it fast enough to replace wired Ethernet connections for enterprise users. Driven by the broadband internet explosion of the early 2000s, 802.11b became a great success. Both consumers and businesses found wireless was a great way to easily share the newfound high speed connections that DSL, cable and other broadband technologies gave them.

As broadband speeds became the norm, consumers’ computer usage habits changed accordingly. Higher bandwidth applications such as music/movie sharing and streaming audio started to see increasing popularity within the consumer space.

Meanwhile, in the enterprise market, wireless had even greater speed demands to contend with, as it was competing with fast local networking over Ethernet. Business use cases (such as VoIP, file sharing and printer sharing, as well as desktop virtualization) needed to work seamlessly if wireless was to be adopted.

Even in the early 2000s, the speed that 802.11b could support was far from cutting edge. On the wired side of things, 10/100 Ethernet was already a widespread standard. At 100 Mbit/s, it was almost 10 times faster than 802.11b’s nominal 11 Mbit/s speed. 802.11b’s protocol overhead meant that, in fact, maximum achievable throughput was around 5.9 Mbit/s. In practice, as 802.11b used the increasingly popular 2.4 GHz band, speeds often proved lower still. Interference from microwave ovens, cordless phones and other consumer electronics meant that real-world speeds often didn’t reach the 5.9 Mbit/s mark (sometimes not even close).

802.11g

To address speed concerns, in 2003 the IEEE 802.11 working group came out with 802.11g. Though 802.11g used the 2.4 GHz frequency band just like 802.11b, it was able to achieve speeds of up to 54 Mbit/s. Even after allowing for protocol overhead, its effective maximum of around 31.4 Mbit/s was enough bandwidth to accommodate increasingly fast household broadband speeds.

In fact, 802.11g was not the first 802.11 wireless standard to achieve 54 Mbit/s. That crown goes to 802.11a, which had done it back in 1999. However, 802.11a used the separate 5 GHz band to achieve its fast speeds. While 5 GHz had the benefit of less radio interference from consumer electronics, it also meant incompatibility with 802.11b. That fact, along with more expensive equipment, meant that 802.11a was only ever popular within the business market segment and never saw proliferation into the higher volume domestic/consumer arena.

By using 2.4 GHz to reach 54 Mbit/s, 802.11g was able to achieve high speeds while retaining full backwards compatibility with 802.11b. This was crucial, as 802.11b had already established itself as the main wireless standard for consumer devices by this point. Its backwards compatibility, along with cheaper hardware compared to 802.11a, were big selling points, and 802.11g soon became the new, faster wireless standard for consumer and, increasingly, even business related applications.

802.11n

Introduced in 2009, 802.11n made further speed improvements upon 802.11g and 802.11a. Operating in either the 2.4 GHz or 5 GHz frequency band (though not both simultaneously), 802.11n improved transfer efficiency through frame aggregation, and also introduced optional MIMO and 40 MHz channels – double the channel width of 802.11g.

802.11n offered significantly faster network speeds. At the low end, operating in the same type of single-antenna, 20 MHz channel-width configuration as an 802.11g network, an 802.11n network could achieve 72 Mbit/s. If, in addition, the double-width 40 MHz channel was used, with multiple antennas, then data rates could be much faster – up to 600 Mbit/s (for a four-antenna configuration).

The third and final blog in this series will take us right up to the modern day and will also look at the potential of Wi-Fi in the future.

October 3rd, 2017

Celebrating 20 Years of Wi-Fi – Part I

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

You can’t see it, touch it, or hear it – yet Wi-Fi® has had a tremendous impact on the modern world – and will continue to do so. From our home wireless networks, to offices and public spaces, the ubiquity of high speed connectivity without reliance on cables has radically changed the way computing happens. It would not be much of an exaggeration to say that because of ready access to Wi-Fi, we are able to lead better lives – using our laptops, tablets and portable electronics in a far more straightforward manner, with a high degree of mobility, no longer having to worry about a complex tangle of wires tying us down.

Though it may be hard to believe, it is now two decades since the original 802.11 standard was ratified by the IEEE®. This first in a series of blogs will look at the history of Wi-Fi to see how it has overcome numerous technical challenges and evolved into the ultra-fast, highly convenient wireless standard that we know today. We will then go on to discuss what it may look like tomorrow.

Unlicensed Beginnings

While we now think of 802.11 wireless technology as predominantly connecting our personal computing devices and smartphones to the Internet, it was in fact initially invented as a means to connect up humble cash registers. In the late 1980s, NCR Corporation, a maker of retail hardware and point-of-sale (PoS) computer systems, had a big problem. Its customers – department stores and supermarkets – didn’t want to dig up their floors each time they changed their store layout.

A recent ruling that had been made by the FCC, which opened up certain frequency bands as free to use, inspired what would be a game-changing idea. By using wireless connections in the unlicensed spectrum (rather than conventional wireline connections), electronic cash registers and PoS systems could be easily moved around a store without the retailer having to perform major renovation work.

Soon after this, NCR allocated the project to an engineering team out of its Netherlands office. They were set the challenge of creating a wireless communication protocol. These engineers succeeded in developing ‘WaveLAN’, which would be recognized as the precursor to Wi-Fi. Rather than preserving this as a purely proprietary protocol, NCR could see that by establishing it as a standard, the company would be able to position itself as a leader in the wireless connectivity market as it emerged. By 1990, the IEEE 802.11 working group had been formed to standardize wireless communication in unlicensed spectrum.

Using what were at the time innovative spread spectrum techniques to reduce interference and improve signal integrity in noisy environments, the original incarnation of Wi-Fi was finally formally standardized in 1997. It operated with a throughput of just 2 Mbit/s, but it set the foundations of what was to come.

Wireless Ethernet

Though the 802.11 wireless standard was released in 1997, it didn’t take off immediately. Slow speeds and expensive hardware hampered its mass market appeal for quite a while – but things were destined to change. 10 Mbit/s Ethernet was the networking standard of the day, and the IEEE 802.11 working group knew that if they could equal that, they would have a worthy wireless competitor. In 1999, they succeeded, creating 802.11b. This used the same 2.4 GHz ISM frequency band as the original 802.11 wireless standard, but it raised the supported throughput considerably, to 11 Mbit/s. Wireless Ethernet was finally a reality.

Soon after 802.11b was established, the IEEE working group also released 802.11a, an even faster standard. Rather than using the increasingly crowded 2.4 GHz band, it ran on the 5 GHz band and offered speeds up to a lofty 54 Mbit/s.

Because it occupied the 5 GHz frequency band, away from the popular (and thus congested) 2.4 GHz band, it had better performance in noisy environments; however, the higher carrier frequency also meant it had reduced range compared to 2.4 GHz wireless connectivity. Thanks to cheaper equipment and better nominal ranges, 802.11b proved to be the most popular wireless standard by far. But, while it was more cost effective than 802.11a, 802.11b still wasn’t at a low enough price bracket for the average consumer. Routers and network adapters would still cost hundreds of dollars.

That all changed following a phone call from Steve Jobs. Apple was launching a new line of computers at that time and wanted to make wireless networking functionality part of it. The terms set were tough – Apple expected to have the cards at a $99 price point, but of course the volumes involved could potentially be huge. Lucent Technologies, which had acquired NCR by this stage, agreed.

While it was a difficult pill to swallow initially, the Apple deal finally put Wi-Fi in the hands of consumers and pushed it into the mainstream. PC makers saw Apple computers beating them to the punch and wanted wireless networking as well. Soon, key PC hardware makers including Dell, Toshiba, HP and IBM were all offering Wi-Fi.

Microsoft also got on the Wi-Fi bandwagon with Windows XP. Working with engineers from Lucent, Microsoft made Wi-Fi connectivity native to the operating system. Users could get wirelessly connected without having to install third party drivers or software. With the release of Windows XP, Wi-Fi was now natively supported on millions of computers worldwide – it had officially made it into the ‘big time’.

This blog post is the first in a series that charts the eventful history of Wi-Fi. The second part, which is coming soon, will bring things up to date and look at current Wi-Fi implementations.