Marvell Blog

Featuring technology ideas and solutions worth sharing


Archive for the ‘Wireless’ Category

October 10th, 2017

Celebrating 20 Years of Wi-Fi – Part II

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

This is the second installment in a series of blogs covering the history of Wi-Fi®. While the first part looked at the origins of Wi-Fi, this part will look at how the technology has progressed to the high speed connection we know today.

Wireless Revolution

By the early years of the new millennium, Wi-Fi had quickly started to gain widespread popularity, as the benefits of wireless connectivity became clear. Hotspots began popping up at coffee shops, airports and hotels as businesses and consumers started to realize the potential for Wi-Fi to enable early forms of what we now know as mobile computing. Home users, many of whom were starting to get broadband Internet, were able to easily share their connections throughout the house.

Thanks to the IEEE® 802.11 working group’s efforts, a proprietary wireless protocol that was originally designed simply for connecting cash registers (see previous blog) had become the basis for a wireless networking standard that was changing the whole fabric of society.

Improving Speeds

The advent of 802.11b, in 1999, set the stage for Wi-Fi mass adoption. Its cheaper price point made it accessible for consumers, and its 11 Mbit/s speeds made it fast enough to replace wired Ethernet connections for enterprise users. Driven by the broadband internet explosion in the early years post 2000, 802.11b became a great success. Both consumers and businesses found wireless was a great way to easily share the newfound high speed connections that DSL, cable and other broadband technologies gave them.

As broadband speeds became the norm, consumers’ computer usage habits changed accordingly. Higher bandwidth applications such as music/movie sharing and streaming audio started to see increasing popularity within the consumer space.

Meanwhile, in the enterprise market, wireless had even greater speed demands to contend with, as it was competing with fast local networking over Ethernet. Business use cases (such as VoIP, file sharing and printer sharing, as well as desktop virtualization) needed to work seamlessly if wireless was to be adopted.

Even in the early 2000s, the speed that 802.11b could support was far from cutting edge. On the wired side of things, 10/100 Ethernet was already a widespread standard. At 100 Mbit/s, it was almost 10 times faster than 802.11b’s nominal 11 Mbit/s speed. 802.11b’s protocol overhead meant that, in fact, the maximum theoretical speed was 5.9 Mbit/s. In practice, because 802.11b used the increasingly popular 2.4 GHz band, speeds proved lower still. Interference from microwave ovens, cordless phones and other consumer electronics meant that real world speeds often didn’t reach the 5.9 Mbit/s mark (sometimes not even close).

802.11g

To address speed concerns, in 2003 the IEEE 802.11 working group came out with 802.11g. Though 802.11g would use the 2.4 GHz frequency band just like 802.11b, it was able to achieve speeds of up to 54 Mbit/s. Even after speed decreases due to protocol overhead, its theoretical maximum of 31.4 Mbit/s was enough bandwidth to accommodate increasingly fast household broadband speeds.

Actually, 802.11g was not the first 802.11 wireless standard to achieve 54 Mbit/s. That crown goes to 802.11a, which had done it back in 1999. However, 802.11a used the separate 5 GHz frequency band to achieve its fast speeds. While 5 GHz had the benefit of less radio interference from consumer electronics, it also meant incompatibility with 802.11b. That fact, along with more expensive equipment, meant that 802.11a was only ever popular within the business market segment and never saw proliferation into the higher volume domestic/consumer arena.

By using 2.4 GHz to reach 54 Mbit/s, 802.11g was able to achieve high speeds while retaining full backwards compatibility with 802.11b. This was crucial, as 802.11b had already established itself as the main wireless standard for consumer devices by this point. Its backwards compatibility, along with cheaper hardware compared to 802.11a, were big selling points, and 802.11g soon became the new, faster wireless standard for consumer and, increasingly, even business related applications.

802.11n

Introduced in 2009, 802.11n made further speed improvements upon 802.11g and 802.11a. Operating on either the 2.4 GHz or 5 GHz frequency band (though not simultaneously), 802.11n improved transfer efficiency through frame aggregation, and also introduced optional MIMO and 40 MHz channels – double the channel width of 802.11g.

802.11n offered significantly faster network speeds. At the low end, operating in the same type of single antenna, 20 MHz channel width configuration as an 802.11g network, an 802.11n network could achieve 72 Mbit/s. If, in addition, the double width 40 MHz channel was used, with multiple antennas, then data rates could be much faster – up to 600 Mbit/s (for a four antenna configuration).
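To make the arithmetic behind those two figures concrete, here is a minimal sketch in Python, assuming the standard top 802.11n modulation parameters (64-QAM, rate-5/6 coding, short guard interval). The subcarrier counts and symbol time below are the usual high-throughput OFDM values; treat the snippet as an illustration rather than a full rate table.

```python
# Illustrative 802.11n PHY rate arithmetic:
# rate = data subcarriers x bits per subcarrier x coding rate x spatial streams / symbol time.
# 52 data subcarriers for a 20 MHz channel, 108 for 40 MHz; 3.6 us symbol with short guard interval.

def ht_phy_rate_mbps(data_subcarriers, bits_per_subcarrier, coding_rate,
                     spatial_streams, symbol_time_us=3.6):
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
    return bits_per_symbol * spatial_streams / symbol_time_us

# Single antenna, 20 MHz channel (the 802.11g-like configuration): ~72 Mbit/s
print(round(ht_phy_rate_mbps(52, 6, 5 / 6, 1), 1))   # 72.2

# Four antennas, 40 MHz channel: the 600 Mbit/s headline figure
print(round(ht_phy_rate_mbps(108, 6, 5 / 6, 4), 1))  # 600.0
```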

The third and final blog in this series will take us right up to the modern day and will also look at the potential of Wi-Fi in the future.

 

October 3rd, 2017

Celebrating 20 Years of Wi-Fi – Part I

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

You can’t see it, touch it, or hear it – yet Wi-Fi® has had a tremendous impact on the modern world – and will continue to do so. From our home wireless networks, to offices and public spaces, the ubiquity of high speed connectivity without reliance on cables has radically changed the way computing happens. It would not be much of an exaggeration to say that because of ready access to Wi-Fi, we are able to lead better lives – using our laptops, tablets and portable electronics in a far more straightforward manner, with a high degree of mobility, no longer having to worry about a complex tangle of wires tying us down.

Though it may be hard to believe, it is now two decades since the original 802.11 standard was ratified by the IEEE®. This first in a series of blogs will look at the history of Wi-Fi to see how it has overcome numerous technical challenges and evolved into the ultra-fast, highly convenient wireless standard that we know today. We will then go on to discuss what it may look like tomorrow.

Unlicensed Beginnings
While we now think of 802.11 wireless technology as predominantly connecting our personal computing devices and smartphones to the Internet, it was in fact initially invented as a means to connect up humble cash registers. In the late 1980s, NCR Corporation, a maker of retail hardware and point-of-sale (PoS) computer systems, had a big problem. Its customers – department stores and supermarkets – didn’t want to dig up their floors each time they changed their store layout.

A recent FCC ruling, which opened up certain frequency bands for unlicensed use, inspired what would be a game-changing idea. By using wireless connections in the unlicensed spectrum (rather than conventional wireline connections), electronic cash registers and PoS systems could be easily moved around a store without the retailer having to perform major renovation work.

Soon after this, NCR allocated the project to an engineering team out of its Netherlands office. They were set the challenge of creating a wireless communication protocol. These engineers succeeded in developing ‘WaveLAN’, which would be recognized as the precursor to Wi-Fi. Rather than preserving this as a purely proprietary protocol, NCR could see that by establishing it as a standard, the company would be able to position itself as a leader in the wireless connectivity market as it emerged. By 1990, the IEEE 802.11 working group had been formed, based on wireless communication in unlicensed spectra.

Using what were at the time innovative spread spectrum techniques to reduce interference and improve signal integrity in noisy environments, the original incarnation of Wi-Fi was finally formally standardized in 1997. It operated with a throughput of just 2 Mbit/s, but it set the foundations of what was to come.

Wireless Ethernet
Though the 802.11 wireless standard was released in 1997, it didn’t take off immediately. Slow speeds and expensive hardware hampered its mass market appeal for quite a while – but things were destined to change. 10 Mbit/s Ethernet was the networking standard of the day. The IEEE 802.11 working group knew that if they could equal that, they would have a worthy wireless competitor. In 1999, they succeeded, creating 802.11b. This used the same 2.4 GHz ISM frequency band as the original 802.11 wireless standard, but it raised the throughput supported considerably, reaching 11 Mbit/s. Wireless Ethernet was finally a reality.

Soon after 802.11b was established, the IEEE working group also released 802.11a, an even faster standard. Rather than using the increasingly crowded 2.4 GHz band, it ran on the 5 GHz band and offered speeds up to a lofty 54 Mbit/s.

Because it occupied the 5 GHz frequency band, away from the popular (and thus congested) 2.4 GHz band, it had better performance in noisy environments; however, the higher carrier frequency also meant it had reduced range compared to 2.4 GHz wireless connectivity. Thanks to cheaper equipment and better nominal ranges, 802.11b proved to be the most popular wireless standard by far. But, while it was more cost effective than 802.11a, 802.11b still wasn’t at a low enough price bracket for the average consumer. Routers and network adapters would still cost hundreds of dollars.

That all changed following a phone call from Steve Jobs. Apple was launching a new line of computers at that time and wanted to make wireless networking functionality part of it. The terms set were tough – Apple expected to have the cards at a $99 price point, but of course the volumes involved could potentially be huge. Lucent Technologies, which had acquired NCR by this stage, agreed.

While it was a difficult pill to swallow initially, the Apple deal finally put Wi-Fi in the hands of consumers and pushed it into the mainstream. PC makers saw Apple computers beating them to the punch and wanted wireless networking as well. Soon, key PC hardware makers including Dell, Toshiba, HP and IBM were all offering Wi-Fi.

Microsoft also got on the Wi-Fi bandwagon with Windows XP. Working with engineers from Lucent, Microsoft made Wi-Fi connectivity native to the operating system. Users could get wirelessly connected without having to install third party drivers or software. With the release of Windows XP, Wi-Fi was now natively supported on millions of computers worldwide – it had officially made it into the ‘big time’.

This blog post is the first in a series that charts the eventful history of Wi-Fi. The second part, which is coming soon, will bring things up to date and look at current Wi-Fi implementations.

 

August 2nd, 2017

Wireless Technology Set to Enable an Automotive Revolution

By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group

The automotive industry has always been a keen user of wireless technology. In the early 1980s, Renault made it possible to lock and unlock the doors on its Fuego model utilizing a radio transmitter. Within a decade, other vehicle manufacturers embraced the idea of remote key-less entry and not long after that it became a standard feature. Now, wireless technology is about to reshape the world of driving.

The first key-less entry systems were based on infra-red (IR) signals, borrowing the technique from automatic garage door openers. But the industry swiftly moved to RF technology to make the systems easier to use. Although each manufacturer favored its own protocol and coding system, they adopted standard low-power RF frequency bands, such as 315 MHz in the US and 433 MHz in Europe. As concerns about theft emerged, they incorporated encryption and other security features to fend off potential attacks. They have further refreshed this technology as new threats have appeared, as well as adding features such as proximity detection to remove the need to even press the key fob’s button.

The next stage in favor of convenience was to employ Bluetooth instead of custom radios in the sub-1 GHz frequency bands, so as to dispense with the key fob altogether. With Bluetooth, an app on the user’s smartphone can not only unlock the car doors, but also handle tasks such as starting the heater or air-conditioning to make the vehicle comfortable and ready for when the driver and passengers actually get in.

Bluetooth itself has become a key feature on many models over the past decade as automobile manufacturers have looked to open up their infotainment systems. Access to the functions on the dashboard through Bluetooth has made it possible for vehicle occupants to hook up their phone handsets easily. Initially, the aim was to support hands-free phone calls (keeping drivers within the law) without forcing the owner to buy and install a permanent phone in the vehicle itself. But the wireless connection is just as good at relaying high-quality audio, so that passengers can listen to their favorite music stored on portable devices. We have clearly moved a long way from the CD auto-changer located in the trunk.

Bluetooth is a prime example of the way in which RF technology, once in place, can support many different applications – with plenty of potential for use cases that have not yet been considered. Through use of a suitable relay device in the car, Bluetooth also provides the means by which to send vehicle diagnostics information to relevant smartphone apps. The use of the technology as a diagnostics gateway points to an emerging role for Bluetooth in improving the overall safety of car transportation.

But now Wi-Fi is also primed to become as ubiquitous in vehicles as Bluetooth. Wi-Fi is able to provide a more robust data pipe, thus enabling even richer applications and tighter integration with smartphone handsets. One use case that seems destined to change the cockpit experience for users is the emergence of screen projection technologies. Through the introduction of such mechanisms it will be possible to create a seamless transition for drivers from their smartphones to their cars. This will not necessarily even need to be their own car; it could be any car they rent, anywhere in the world.

One of the key enabling technologies for self-driving vehicles is communication. This can encompass vehicle-to-vehicle (V2V) links, vehicle-to-infrastructure (V2I) messages and, through technologies such as Bluetooth and Wi-Fi, vehicle-to-anything (V2X).

V2V provides the ability for vehicles on the road to signal their intentions to others and warn of hazards ahead. If a pothole opens up or cars have to brake suddenly to avoid an obstacle, they can send out wireless messages to nearby vehicles to let them know about the situation. Those other vehicles can then slow down or change lanes accordingly.

The key enabling technology for V2V is a form of the IEEE 802.11 Wi-Fi protocol, re-engineered for much lower latency and better reliability. IEEE 802.11p Wireless Access in Vehicular Environments (WAVE) operates in the 5.9 GHz region of the RF spectrum and is capable of supporting data rates of up to 27 Mbit/s. One of the key additions for transportation is a scheduling feature that lets vehicles share access to wireless channels based on time. Each vehicle uses the Coordinated Universal Time (UTC) reading, usually provided by its GPS receiver, to help ensure all nearby transceivers are synchronized to the same schedule.
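As an illustration of what UTC-based channel scheduling looks like, here is a minimal sketch. The 100 ms sync interval split into 50 ms control-channel and service-channel slots follows the companion IEEE 1609.4 multi-channel scheme; treat those exact figures as assumptions for illustration, not a description of any particular WAVE implementation.

```python
# Hedged illustration of time-slotted channel access keyed to UTC.
# Every radio that knows the current UTC time (e.g., from GPS) lands in the
# same slot, so vehicles can rendezvous on the control channel without any
# prior coordination.

SYNC_INTERVAL_MS = 100   # repeating schedule, aligned to the UTC second
CCH_INTERVAL_MS = 50     # first half: control channel (safety messages)

def current_slot(utc_ms: int) -> str:
    """Return which slot a transceiver should be in at a given UTC time in ms."""
    position = utc_ms % SYNC_INTERVAL_MS
    return "CCH" if position < CCH_INTERVAL_MS else "SCH"

# Two vehicles reading the same GPS-derived UTC time agree on the slot.
print(current_slot(1_697_040_000_030))  # CCH
print(current_slot(1_697_040_000_080))  # SCH
```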

A key challenge for any transceiver is the Doppler effect. On a freeway, the relative velocity of an approaching transmitter can exceed 150 mph. Such a transmitter may be in range for only a few seconds at most, making ultra-low latency crucial. But, with the underlying RF technology for V2V in place, advanced navigation applications can be deployed relatively easily and extended to deal with many other objects and even people.
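A back-of-the-envelope sketch of the size of that Doppler shift, using the 150 mph closing speed from the paragraph above and the 5.9 GHz WAVE carrier, shows why the receiver has to track a moving carrier offset on top of everything else:

```python
# Doppler shift = carrier frequency x relative speed / speed of light.

C = 3.0e8            # speed of light, m/s
CARRIER_HZ = 5.9e9   # WAVE carrier frequency

def doppler_shift_hz(relative_speed_mph: float) -> float:
    speed_ms = relative_speed_mph * 0.44704   # mph -> m/s
    return CARRIER_HZ * speed_ms / C

print(round(doppler_shift_hz(150)))  # ~1319 Hz of carrier offset to correct for
```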

V2I transactions make it possible for roadside controllers to update vehicles on their status. Traffic signals, for example, can let vehicles know when they are likely to change state. Vehicles leaving the junction can relay that data to approaching cars, which may slow down in response. By slowing down, they avoid the need to stop at a red signal, and instead cross just as it is turning green. The overall effect is a significant saving in fuel, as well as less wear and tear on the brakes. In the future, such wireless-enabled signals will make it possible to improve the flow of autonomous vehicles considerably. The traffic signals will monitor the junction to check whether conditions are safe and usher the autonomous vehicle through to the other side, while other road users without the same level of computer control are held at a stop.
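As a simple illustration of the reasoning a vehicle might apply to one of those signals, the hypothetical sketch below (the function name, speed limit and figures are all assumptions for illustration, not a real V2I stack) computes the steady speed that lets a car arrive at the junction just as the light turns green rather than braking to a stop.

```python
# Illustrative "arrive on green" calculation: given the distance to the
# junction and the signal's advertised time until green, find the cruising
# speed (capped at the limit) that avoids stopping altogether.

def glide_speed_kmh(distance_m: float, seconds_to_green: float,
                    speed_limit_kmh: float = 50.0) -> float:
    if seconds_to_green <= 0:
        return speed_limit_kmh                   # already green, carry on
    speed = (distance_m / seconds_to_green) * 3.6  # m/s -> km/h
    return min(speed, speed_limit_kmh)

# 200 m from the junction, light turns green in 20 s: ease off to ~36 km/h
# instead of braking to a halt and pulling away again.
print(round(glide_speed_kmh(200, 20)))   # 36
```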

Although many V2X applications were conceived for use with a dedicated RF protocol such as WAVE, there is a place for Bluetooth and, potentially, other wireless standards like conventional Wi-Fi. Pedestrians and cyclists may signal their presence on the road with the help of their own Bluetooth devices. The messages picked up by passing vehicles can be relayed using V2V communications over WAVE to extend the range of the warnings. Roadside beacons using Bluetooth technology can pass on information about local points of interest, and this can be provided to passengers, who can subsequently look up more details on the Internet using the vehicle’s built-in Wi-Fi hotspot.

One thing seems clear: the world of automotive design will be a heterogeneous RF environment that brings traditional Wi-Fi technology together with WAVE, Bluetooth and GPS. It clearly makes sense to incorporate the right set of radios onto a single chipset, which eases the integration process and ensures optimal performance. This will not only be beneficial for the design of new vehicles, but will also facilitate the introduction of aftermarket V2X modules. In this way, existing cars will be able to participate in the emerging information-rich superhighway.

August 1st, 2017

Connectivity Will Drive the Cars of the Future

By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group

The growth of electronics content inside the automobile has already had a dramatic effect on the way in which vehicle models are designed and built. As a direct consequence of this, the biggest technical change is now beginning to happen – one that overturns the traditional relationship between the car manufacturer and the car owner.

With many subsystems now controlled by microprocessors running software, it is possible to alter the behavior of the vehicle and introduce completely new features and functionality merely by updating that software. Tesla, the high profile brand of high performance electric vehicles, has been one of the pioneers of this approach, releasing software and firmware updates that give existing models the ability to drive themselves. Instead of buying a car with a specific, fixed set of features, owners are seeing their vehicles upgraded via firmware over the air (FOTA) without the need to visit a dealership.

With so many electronic subsystems now in the vehicle, high data rates are essential. Without the ability to download and program devices quickly, the car could potentially become unusable for hours at a time. On the wireless side, this requires 802.11ac Wi-Fi speeds, and very soon it will call for 802.11ax speeds that can potentially exceed gigabit-per-second data rates.

Automotive Ethernet that can support Gigabit speeds is also now being fitted so that updates can be delivered as fast as possible to the many electronic control units (ECUs) around the car. The same Ethernet backbone is proving just as essential for day-to-day use. The network provides high resolution, real-time data from cameras, LiDAR, radar, tire pressure monitors and various other sensors fitted around the body, each of which is likely to have its own dedicated microprocessor. The result is a high performance computer based on distributed intelligence. And this, in turn, can tap into the distributed intelligence now being deployed in the cloud.

The beauty of distributed intelligence is that it is an architecture that can support applications that in many cases have not even been thought of yet. The same wireless communication networks that provide the over-the-air updates can relay real-time information on traffic patterns in the vicinity, weather data, disruptions due to accidents and many other pieces of data that the onboard computers can then use to plan the journey and make it safer. This rapid shift towards high speed intra- and inter-vehicle connectivity, and the vehicle-to-anything (V2X) communication capabilities that result, will enable applications that would have been considered pure fantasy just a few years ago.

V2X connectivity can stop traffic lights from being an apparent obstacle and turn them into devices that provide the vehicle with hints to save fuel. If the lights send out signals on their stop-go cycle, approaching vehicles can use them to determine whether it is better to decelerate and arrive just in time for them to turn green, instead of braking all the way to a stop. Sensors at the junction can also warn of hazards that the car then flags up to the driver. When the vehicle is able to run autonomously, it can take care of such actions itself. Similarly, cars can report to each other when they are planning to change lanes in order to leave the freeway, or when they see a slow-moving vehicle ahead and need to decelerate. The result is considerably smoother braking patterns that avoid the logjam effect we so often see on today’s crowded roads. Enabling such applications will require multiple radios in the vehicle, which will need to work cooperatively in a fail-safe manner.

Such connectivity will also give OEMs unprecedented access to real-time diagnostic data, which a car could upload opportunistically to the cloud for analysis. This will provide information that could lead to customized maintenance services planned in advance, thereby cutting down diagnostic time at the workshop and ensuring that technical problems are dealt with preemptively, rather than being left to become more serious over time.

There is no need for automobile manufacturers to build any of these features into their vehicle models today. As many computations can be offloaded to servers in the cloud, the key to unlocking advanced functionality is not wholly dependent on what is present in the car itself. The fundamental requirement is access to an effective means of communications, and that is available right now through high speed Ethernet within the vehicle plus Wi-Fi and V2X-compatible wireless for transfers going beyond the chassis. Both can be supplied so that they are compliant with the AEC-Q100 automotive standard – thus ensuring quality and reliability. With those tools in place, we don’t need to see all the way ahead to the future. We just know we have the capability to get there.

June 21st, 2017

Making Better Use of Legacy Infrastructure

By Ron Cates, Senior Director, Product Marketing, Networking Business Unit

The flexibility offered by wireless networking is revolutionizing the enterprise space. High-speed Wi-Fi®, provided by standards such as IEEE 802.11ac and 802.11ax, makes it possible to deliver next-generation services and applications to users in the office, no matter where they are working.

However, the higher wireless speeds involved are putting pressure on the cabling infrastructure that supports the Wi-Fi access points around an office environment. 1 Gbit/s Ethernet was more than adequate for older wireless standards and applications. Now, with greater reliance on the new generation of Wi-Fi access points and their higher uplink rates, the older infrastructure is starting to show strain. At the same time, in the server room itself, demand for high-speed storage and faster virtualized servers is placing pressure on the performance levels offered by the core Ethernet cabling that connects these systems together and to the wider enterprise infrastructure.

One option is to upgrade to a 10 Gbit/s Ethernet infrastructure. But this is a migration that can be prohibitively expensive. The Cat 5e cabling that exists in many office and industrial environments is not designed to cope with such elevated speeds. To make use of 10 Gbit/s equipment, that old cabling needs to come out and be replaced by a new copper infrastructure based on Cat 6a standards. Cat 6a cabling can support 10 Gbit/s Ethernet at the full range of 100 meters, and you would be lucky to run 10 Gbit/s at half that distance over a Cat 5e cable.

In contrast to data-center environments that are designed to cope easily with both server and networking infrastructure upgrades, enterprise cabling lying in ducts, in ceilings and below floors is hard to reach and swap out. This is especially true if you want to keep the business running while the switchover takes place.

Help is at hand with the emergence of the IEEE 802.3bz™ and NBASE-T® set of standards and the transceiver technology that goes with them. 802.3bz and NBASE-T make it possible to transmit at speeds of 2.5 Gbit/s or 5 Gbit/s across conventional Cat 5e or Cat 6 cabling at distances up to the full 100 meters. The transceiver technology leverages advances in digital signal processing (DSP) to make these higher speeds possible without demanding a change in the cabling infrastructure.

The NBASE-T technology, a companion to the IEEE 802.3bz standard, incorporates novel features such as downshift, which responds dynamically to interference from other sources in the cable bundle. The result is a lower speed, but the downshift technology has the advantage that it does not cut off communication unexpectedly, providing time to diagnose the problem interferer in the bundle and perhaps reroute it to sit alongside less sensitive cables that may carry lower-speed signals. This is where the new generation of high-density transceivers comes in.

There are now transceivers coming onto the market that support data rates all the way from legacy 10 Mbit/s Ethernet up to the full 5 Gbit/s of 802.3bz/NBASE-T – and will auto-negotiate the most appropriate data rate with the downstream device. This makes it easy for enterprise users to upgrade the routers and switches that support their core network without demanding upgrades to all the client devices. Further features, such as Virtual Cable Tester® functionality, make it easier to diagnose faults in the cabling infrastructure without resorting to specialized network instrumentation.
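To picture the general idea, here is a hedged sketch of multi-rate negotiation and downshift. It is illustrative only (the names, rate list and logic are assumptions, not Marvell firmware or the actual 802.3bz auto-negotiation state machine): advertise every rate the port supports, pick the fastest one the link partner also supports, and fall back a step if the cable cannot sustain it.

```python
# Illustrative multi-rate link negotiation and interference downshift.

SUPPORTED_MBPS = [5000, 2500, 1000, 100, 10]   # 802.3bz/NBASE-T down to legacy

def negotiate(local, partner):
    """Highest data rate (Mbit/s) advertised by both ends of the link."""
    common = sorted(set(local) & set(partner), reverse=True)
    if not common:
        raise RuntimeError("no common rate - link cannot be established")
    return common[0]

def downshift(current, supported=SUPPORTED_MBPS):
    """Drop to the next lower rate when interference degrades the channel."""
    lower = [r for r in supported if r < current]
    return lower[0] if lower else current

rate = negotiate(SUPPORTED_MBPS, [2500, 1000, 100])
print(rate)              # 2500 - best rate both ends support
print(downshift(rate))   # 1000 - keep the link up rather than dropping it
```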

Transceivers and PHYs designed for switches can now support eight 802.3bz/NBASE-T ports in one chip, thanks to the integration made possible by leading-edge processes. These transceivers are designed not only to be more cost-effective but also to consume far less power and PCB real estate than PHYs designed for 10 Gbit/s networks. This means they present a much more optimized solution, with benefits from a financial, thermal and logistical perspective.

The result is a networking standard that meshes well with the needs of modern enterprise networks – and lets that network and the equipment evolve at its own pace.

May 31st, 2017

Further Empowerment of the Wireless Office

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

In order to benefit from greater convenience for employees and more straightforward implementation, office environments are steadily migrating towards wholesale wireless connectivity. Thanks to this, office staff will no longer be limited by where cables and ports are available, resulting in a much higher degree of mobility. It means they can remain constantly connected and their work activities won’t be hindered – whether they are at their desk, in a meeting or even in the cafeteria. This makes enterprises much better aligned with our modern working culture, where hot desking and bring your own device (BYOD) are becoming increasingly commonplace.

The main dynamic responsible for accelerating this trend will be the emergence of 802.11ac Wave 2 Wi-Fi technology. With the prospect of exploiting Gigabit data rates (thereby enabling the streaming of video content, faster download speeds, higher quality video conferencing, etc.), it is clearly going to have considerable appeal. In addition, this protocol offers extended range and greater bandwidth through multi-user MIMO operation, so that a larger number of users can be supported simultaneously. This will be advantageous to the enterprise, as fewer access points per user will be required.


An example of the office floor plan for an enterprise/campus is shown in Figure 1 (with a large number of cubicles and some meeting rooms). Though scenarios vary, generally speaking an enterprise/campus is likely to occupy a total floor space of between 20,000 and 45,000 square feet. With one 802.11ac access point able to cover an area of 3,000 to 4,000 square feet, a wireless office would need a total of about 8 to 12 access points to be fully effective. This density should be more than acceptable for average voice and data needs. Supporting these access points will be a high capacity wireline backbone.
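A quick back-of-the-envelope check of that access-point count, using the floor-space and coverage ranges quoted above (the specific mid-range figures in the sketch are illustrative assumptions):

```python
# Access points needed = ceiling of (floor space / coverage per access point).
import math

def access_points_needed(floor_sqft: float, coverage_sqft: float) -> int:
    return math.ceil(floor_sqft / coverage_sqft)

# Extremes of the ranges above bracket the estimate...
print(access_points_needed(20_000, 4_000))   # 5  (small office, generous coverage)
print(access_points_needed(45_000, 3_000))   # 15 (large office, conservative coverage)
# ...while a typical mid-range deployment lands around the 8 to 12 cited above.
print(access_points_needed(30_000, 3_500))   # 9
```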

Increasingly, rather than employing traditional 10 Gigabit Ethernet infrastructure, the enterprise/campus backbone is going to be based on 25 Gigabit Ethernet technology. It is expected that this will see widespread uptake in newly constructed office buildings over the next two to three years as the related optics continue to become more affordable. Clearly enterprises want to tap into the enhanced performance offered by 802.11ac, but they have to do this while also adhering to stringent budgetary constraints. As the data capacity of the backbone is raised, so will the complexity of the hierarchical structure that needs to be placed underneath it, consisting of extensive intermediary switching technology. Well, that’s what conventional thinking would tell us.

Before embarking on a 25 Gigabit Ethernet/802.11ac implementation, enterprises have to be fully aware of what it entails. As well as the initial investment associated with the hardware heavy arrangement just outlined, there are also ongoing operational costs to consider. By aggregating the access points into a port extender that then connects directly over the 25 Gigabit Ethernet backbone to the central control bridge switch, it is possible to significantly simplify the hierarchical structure – effectively eliminating a layer of unneeded complexity from the system.

Through its Passive Intelligent Port Extender (PIPE) technology, Marvell is doing just that. This product offering is unique in the market, as other port extenders currently available were not originally designed for that purpose and therefore exhibit compromises in their performance, price and power. PIPE is, in contrast, an optimized solution that fully leverages the IEEE 802.1BR bridge port extension standard – dispensing with the need for expensive intermediary switches between the control bridge and the access point level, and reducing roll-out costs as a result. It delivers markedly higher throughput, as the aggregation of multiple 802.11ac access points through 10 Gigabit Ethernet switches is avoided. With fewer network elements to manage, there is some reduction in ongoing running costs too.

PIPE means that enterprises can future proof their office data communication infrastructure – starting with 10 Gigabit Ethernet, then upgrading to 25 Gigabit Ethernet when it is needed. The number of ports it incorporates is a good match for the number of access points that an enterprise/campus will need to address the wireless connectivity demands of its workforce. It enables dual homing functionality, so that elevated service reliability and resiliency are both assured through system redundancy. In addition, support for Power-over-Ethernet (PoE) allows access points to connect to both a power supply and the data network through a single cable, further facilitating the deployment process.

April 27th, 2017

The Challenges Of 11ac Wave 2 and 11ax in Wi-Fi Deployments: How to Cost-Effectively Upgrade to 2.5GBASE-T and 5GBASE-T

By Nick Ilyadis, VP of Portfolio Technology, Marvell

The Insatiable Need for Bandwidth: Standards Trying to Keep Up

With the push for more and more Wi-Fi bandwidth, the WLAN industry, its standards committees and the Ethernet switch manufacturers are having a hard time keeping up with the need for more speed. As the industry prepares for upgrading to 802.11ac Wave 2 and the promise of 11ax, the ability of Ethernet over existing copper wiring to meet the increased transfer speeds is being challenged. And what really can’t keep up are the budgets that would be needed to physically rewire the millions of miles of cabling in the world today.

The Latest on the Latest Wireless Networking Standards: IEEE 802.11ac Wave 2 and 802.11ax

The latest IEEE 802.11ac standard is now in Wave 2. According to Webopedia’s definition, the 802.11ac-2013 update, or 802.11ac Wave 2, is an addendum to the original 802.11ac wireless specification that utilizes Multi-User, Multiple-Input, Multiple-Output (MU-MIMO) technology and other advancements to help increase theoretical maximum wireless speeds from 3.47 gigabits-per-second (Gbps), in the original spec, to 6.93 Gbps in 802.11ac Wave 2. The original 802.11ac spec itself served as a performance boost over the 802.11n specification that preceded it, increasing wireless speeds by up to 3x. As with the initial specification, 802.11ac Wave 2 also provides backward compatibility with previous 802.11 specs, including 802.11n.
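For readers who want to see where those two headline numbers come from, here is a minimal sketch of the arithmetic, assuming the top 802.11ac modulation and coding set (256-QAM, rate-5/6 coding, short guard interval) on a 160 MHz channel with its 468 data subcarriers; the difference between 3.47 and 6.93 Gbps is simply four versus eight spatial streams.

```python
# 802.11ac PHY rate per configuration:
# data subcarriers x bits per subcarrier x coding rate x spatial streams / symbol time.

def vht_phy_rate_gbps(spatial_streams: int,
                      data_subcarriers: int = 468,    # 160 MHz channel
                      bits_per_subcarrier: int = 8,   # 256-QAM
                      coding_rate: float = 5 / 6,
                      symbol_time_us: float = 3.6) -> float:
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
    return bits_per_symbol * spatial_streams / symbol_time_us / 1000

print(round(vht_phy_rate_gbps(4), 2))  # 3.47 Gbps - original 802.11ac maximum
print(round(vht_phy_rate_gbps(8), 2))  # 6.93 Gbps - 802.11ac Wave 2 maximum
```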

The IEEE has also noted that in the past two decades, IEEE 802.11 wireless local area networks (WLANs) have experienced tremendous growth with the proliferation of IEEE 802.11 devices, becoming a major means of Internet access for mobile computing. Therefore, the IEEE 802.11ax specification is under development as well. Giving equal time to Wikipedia, its definition of 802.11ax is: a type of WLAN designed to improve overall spectral efficiency in dense deployment scenarios, with a predicted top speed of around 10 Gbps. It works in the 2.4 GHz or 5 GHz bands and, in addition to MIMO and MU-MIMO, it introduces the Orthogonal Frequency-Division Multiple Access (OFDMA) technique to improve spectral efficiency, as well as higher order 1024-QAM (Quadrature Amplitude Modulation) support for better throughput. Though the nominal data rate is just 37 percent higher compared to 802.11ac, the new amendment will allow a 4X increase in user throughput. This new specification is due to be publicly released in 2019.

Faster “Cats”: Cat 5, 5e, 6, 6e and on

And yes, even cabling is improving to keep up. You’ve got Cat 5, 5e, 6, 6e and 7 (search: Differences between CAT5, CAT5e, CAT6 and CAT6e Cables for specifics), but suffice it to say, each iteration is capable of moving more data faster, starting with the ubiquitous Cat 5 at 100 Mbps and 100 MHz over 100 meters of cabling, up to Cat 6e reaching 10,000 Mbps at 500 MHz over 100 meters. Cat 7 can operate at 600 MHz over 100 meters, with more “Cats” on the way. All of this, of course, is to keep up with streaming, communications, big data or anything else being thrown at the network.

How to Keep Up Cost-Effectively with 2.5GBASE-T and 5GBASE-T

What this all boils down to is this: no matter how fast the network standards or cables get, the migration to new technologies will always be balanced against the cost of attaining those speeds in the physical realm. In other words, it means balancing the physical labor costs associated with upgrading the millions of miles of cabling in buildings throughout the world, as well as the switches and other access points. The labor costs alone are a reason why companies often seek to stay in the wiring closet as long as possible, where physical layer (PHY) devices, such as access points and switches, remain easier and more cost-effective to swap out than the existing cabling.

This is where Marvell steps in with a whole solution. Marvell’s products, including the Avastar wireless products, Alaska PHYs and Prestera switches, provide an optimized solution that supports speeds of up to 2.5 and 5.0 Gbps using existing cabling. For example, the Marvell Avastar 88W8997 wireless processor was the industry’s first 28nm, 11ac (Wave 2), 2×2 MU-MIMO combo with full support for Bluetooth 4.2 and future Bluetooth 5.0. To address switching, Marvell created the Marvell® Prestera® DX family of packet processors, which enables secure, high-density and intelligent 10GbE/2.5GbE/1GbE switching solutions at the access/edge and aggregation layers of Campus, Industrial, Small Medium Business (SMB) and Service Provider networks. And finally, the Marvell Alaska family of Ethernet transceivers are PHY devices that feature the industry’s lowest power, highest performance and smallest form factor.

These transceivers help optimize form factors and support multiple port and cable options, with efficient power consumption and simple plug-and-play functionality, offering advanced and complete PHY products to the broadband market that support 2.5G and 5G data rates over Cat 5e and Cat 6 cables.

You mean, I don’t have to leave the wiring closet?

The longer upgrades can be confined to the wiring closet, rather than calling in electricians and new cabling to rewire the building, the better companies can balance faster throughput against cost. The Marvell Avastar, Prestera and Alaska product families help address the upgrade to 2.5G- and 5GBASE-T over existing copper wire to keep up with that insatiable demand for throughput, without taking you out of the wiring closet. See you inside!

# # #