Author Archive


The Challenges Of 11ac Wave 2 and 11ax in Wi-Fi Deployments: How to Cost-Effectively Upgrade to 2.5GBASE-T and 5GBASE-T

By Nick Ilyadis, VP of Portfolio Technology, Marvell

The Insatiable Need for Bandwidth: Standards Trying to Keep Up

With the push for more and more Wi-Fi bandwidth, the WLAN industry, its standards committees and the Ethernet switch manufacturers are having a hard time keeping up with the need for more speed. As the industry prepares for upgrading to 802.11ac Wave 2 and the promise of 11ax, the ability of Ethernet over existing copper wiring to meet the increased transfer speeds is being challenged. And what really can’t keep up are the budgets that would be needed to physically rewire the millions of miles of cabling in the world today.

The Latest on the Latest Wireless Networking Standards: IEEE 802.11ac Wave 2 and 802.11ax

The latest 802.11ac IEEE standard is now in Wave 2. According to Webopedia’s definition, the 802.11ac-2013 update, or 802.11ac Wave 2, is an addendum to the original 802.11ac wireless specification that utilizes Multi-User Multiple-Input, Multiple-Output (MU-MIMO) technology and other advancements to help increase theoretical maximum wireless speeds from 3.47 gigabits per second (Gbps) in the original spec to 6.93 Gbps in 802.11ac Wave 2. The original 802.11ac spec itself served as a performance boost over the 802.11n specification that preceded it, increasing wireless speeds by up to 3x. As with the initial specification, 802.11ac Wave 2 also provides backward compatibility with previous 802.11 specs, including 802.11n.

IEEE has also noted that over the past two decades, IEEE 802.11 wireless local area networks (WLANs) have experienced tremendous growth, with the proliferation of IEEE 802.11 devices as a major means of Internet access for mobile computing. Therefore, the IEEE 802.11ax specification is under development as well. Giving equal time to Wikipedia, its definition of 802.11ax is: a type of WLAN designed to improve overall spectral efficiency in dense deployment scenarios, with a predicted top speed of around 10 Gbps. It operates in the 2.4GHz and 5GHz bands and, in addition to MIMO and MU-MIMO, introduces the Orthogonal Frequency-Division Multiple Access (OFDMA) technique to improve spectral efficiency, along with higher-order 1024 Quadrature Amplitude Modulation (1024-QAM) support for better throughput. Though the nominal data rate is only about 37 percent higher than 802.11ac, the new amendment will allow a 4x increase in user throughput. The specification is due to be publicly released in 2019.
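To put those headline numbers in perspective, a quick back-of-the-envelope PHY rate calculation shows where the per-stream gains come from. A minimal sketch, assuming the commonly cited data-subcarrier counts, guard intervals and symbol durations for 80MHz and 160MHz channels (illustrative values, not a substitute for the standards themselves):

```python
# Back-of-the-envelope per-stream PHY rates: illustrative figures only.
# rate (Mbps) = data_subcarriers * bits_per_symbol * coding_rate / symbol_duration_us

def phy_rate_mbps(data_subcarriers, bits_per_symbol, coding_rate, symbol_us):
    """Per-spatial-stream data rate in Mbps for one OFDM configuration."""
    return data_subcarriers * bits_per_symbol * coding_rate / symbol_us

# 802.11ac, 80 MHz channel, 256-QAM (8 bits), rate 5/6, 3.6 us symbol (short GI)
ac_80 = phy_rate_mbps(234, 8, 5 / 6, 3.6)        # ~433 Mbps per stream

# 802.11ac, 160 MHz channel: twice the data subcarriers
ac_160 = phy_rate_mbps(468, 8, 5 / 6, 3.6)       # ~867 Mbps per stream
print(f"11ac, 160 MHz, 8 streams: {8 * ac_160 / 1000:.2f} Gbps")   # ~6.93 Gbps

# 802.11ax, 80 MHz channel, 1024-QAM (10 bits), rate 5/6, 13.6 us symbol (0.8 us GI)
ax_80 = phy_rate_mbps(980, 10, 5 / 6, 13.6)      # ~600 Mbps per stream
print(f"11ax vs 11ac per-stream gain at 80 MHz: {ax_80 / ac_80 - 1:.0%}")
```

Eight such 160MHz streams reproduce the 6.93 Gbps Wave 2 figure, and the per-stream 11ax gain lands in the same ballpark as the roughly 37 percent nominal increase cited above.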

Faster “Cats”: Cat 5, 5e, 6, 6e and On

And yes, even the cabling is moving up to keep pace. You’ve got Cat 5, 5e, 6, 6e and 7 (search: Differences between CAT5, CAT5e, CAT6 and CAT6e Cables for specifics), but suffice it to say, each iteration can move more data faster, from the ubiquitous Cat 5 at 100Mbps at 100MHz over 100 meters of cabling up to Cat 6e reaching 10,000Mbps at 500MHz over 100 meters. Cat 7 can operate at 600MHz over 100 meters, with more “Cats” on the way. All of this, of course, is to keep up with streaming, communications, big data or anything else being thrown at the network.

How to Keep Up Cost-Effectively with 2.5GBASE-T and 5GBASE-T

What this all boils down to is this: no matter how fast the network standards or cables get, the migration to new technologies will always be balanced against the cost of attaining those speeds and technologies in the physical realm. In other words, it means weighing the labor costs of upgrading all those millions of miles of cabling in buildings throughout the world, as well as the switches and other access points. The labor costs alone are a reason why companies often try to stay in the wiring closet as long as possible, where physical layer (PHY) devices, such as access points and switches, are easier and more cost-effective to swap out than the existing cabling is to replace.

This is where Marvell steps in with a whole solution. Marvell’s products, including the Avastar wireless products, Alaska PHYs and Prestera switches, provide an optimized solution that helps support speeds of up to 2.5 and 5.0 Gbps using existing cabling. For example, the Marvell Avastar 88W8997 wireless processor was the industry’s first 28nm, 11ac (Wave 2), 2x2 MU-MIMO combo with full support for Bluetooth 4.2 and future Bluetooth 5.0. To address switching, Marvell created the Marvell® Prestera® DX family of packet processors, which enables secure, high-density and intelligent 10GbE/2.5GbE/1GbE switching solutions at the access/edge and aggregation layers of Campus, Industrial, Small Medium Business (SMB) and Service Provider networks. And finally, the Marvell Alaska family of Ethernet transceivers comprises PHY devices featuring the industry’s lowest power, highest performance and smallest form factor.

These transceivers offer multiple port and cable options, efficient power consumption and simple plug-and-play functionality, delivering advanced, complete PHY products to the broadband market with support for 2.5G and 5G data rates over Cat5e and Cat6 cables.

You mean, I don’t have to leave the wiring closet?

The longer upgrades can be handled in the wiring closet, rather than calling in electricians to pull new cabling, the better companies can balance faster throughput against cost. The Marvell Avastar, Prestera and Alaska product families help address the upgrade to 2.5GBASE-T and 5GBASE-T over existing copper wire, keeping up with that insatiable demand for throughput without taking you out of the wiring closet. See you inside!

# # #


Challenges of Autonomous Vehicles: How Ethernet in Automobiles Can Overcome Bandwidth Issues in Self-Driving Vehicles

By Nick Ilyadis, VP of Portfolio Technology, Marvell

Drivers are already getting used to what used to be “cool new features” that have now become “can’t live without” technologies, such as the backup camera, blind spot alert or parking assist. Each of these technologies streams information, or data, within the car, and as automotive technology evolves, more and more features will be added. But when it comes to autonomous vehicles, the amount of technology and the number of data streams coming into the car to be processed increase exponentially. Autonomous vehicles gather multiple streams of information/data from sensors, radar, radios, IR sensors and cameras. This goes beyond the current Advanced Driver Assist Systems (ADAS) or In-Vehicle Infotainment (IVI). The autonomous car will be acutely aware of its surroundings, running sophisticated algorithms that make decisions in order to drive the vehicle. However, self-driving cars will also be processing vehicle-to-vehicle communications, as well as connecting to a number of external devices that will be installed in the highway of the future as automotive communication infrastructures develop. All of these features and processes require bandwidth, and a lot of it: start the car; drive; turn; red light, stop; PEDESTRIAN, BRAKE! This would be a very bad time for the internal vehicle networks to run out of bandwidth.

Add to the driving functions the simultaneous infotainment streams for each passenger, vehicle Internet capabilities and so on, and the 100 megabits-per-second (Mbps) 100BASE-T1 Ethernet bandwidth currently used in automotive is quickly strained. This is paving the way (pun intended) for 1000BASE-T1 Gigabit Ethernet (GbE) for automotive networks. Ethernet has long been the economical volume workhorse, with millions of miles of cabling in buildings the world over. Therefore, the IEEE 802.3 Ethernet Working Group has endorsed GbE as the next network bandwidth standard in automotive.

From Car-jacking to Car-hacking—Security Critical

Another major factor for automotive networking is security. In addition to the many technology features and processes needed for driving and entertainment, security is a major concern for cars, especially autonomous cars. Science fiction movies where cars are hacked and the driver’s controls overridden are scary enough, but in real life it would be beyond a nightmare. Automotive security that prevents spyware, whether planted by a rogue mechanic or a roving hacker, will require strong authentication to protect privacy and passenger safety. Cars of the future will be able to reject any added devices that aren’t authenticated, as well as any external intrusion through the open communication channels of the vehicle.

This is why companies like Marvell have taken a leadership role with organizations like IEEE to help create open standards, such as GbE for automotive, to keep moving automotive technologies forward. (See the IEEE 2014 Automotive Day presentation by Alex Tan on the benefits of designing 1000BASE-T1 into automotive architectures: http://standards.ieee.org/events/automotive/2014/02_Designing_1000BASE-T1_Into_Automotive_Architectures.pdf.)

Technology to Drive Next-Generation Automotive Networking

Marvell’s Automotive Ethernet Networking technology is capable of taking what used to be the separate domains of the car — infotainment, driver assist, body electronics and control — and connecting them together to provide a high-bandwidth, standards-based data backbone for the vehicle. For example, the Marvell 88Q2112 is the industry’s first 1000BASE-T1 automotive Ethernet PHY transceiver compliant with the IEEE 802.3bp 1000BASE-T1 standard. The Marvell 88Q2112 supports the market’s highest in-vehicle connectivity bandwidth and is designed to meet the rigorous EMI requirements of an automotive system. The 1000BASE-T1 standard allows high-speed, bi-directional data traffic, including uncompressed in-vehicle 720p30 camera video and multiple HD video streams up to 4K resolution, all over a lightweight, low-cost single-pair cable. The Marvell 88Q1010 low-power PHY device supports 100BASE-T1 and compressed 1080p60 video for infotainment, data transport and camera systems. And finally, to round out its automotive networking solutions, Marvell also offers a series of 7-port Ethernet switches.
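To see why the jump from 100BASE-T1 to 1000BASE-T1 matters, here is a rough calculation of what a single uncompressed 720p30 camera stream consumes. A minimal sketch, assuming a YUV 4:2:2 pixel format at 16 bits per pixel (an illustrative assumption; real in-vehicle camera formats and overheads vary):

```python
# Rough bit-rate check for one uncompressed in-vehicle camera stream.
# Pixel format (YUV 4:2:2, 16 bits/pixel) is an illustrative assumption;
# blanking intervals and protocol overhead are ignored.

def raw_stream_mbps(width, height, fps, bits_per_pixel):
    """Uncompressed video bit rate in Mbps."""
    return width * height * fps * bits_per_pixel / 1e6

cam_720p30 = raw_stream_mbps(1280, 720, 30, 16)      # ~442 Mbps per camera

for link_name, capacity_mbps in (("100BASE-T1", 100), ("1000BASE-T1", 1000)):
    streams = int(capacity_mbps // cam_720p30)
    print(f"{link_name}: fits {streams} uncompressed 720p30 stream(s) "
          f"of ~{cam_720p30:.0f} Mbps each")

# 100BASE-T1 cannot carry even one such stream uncompressed;
# 1000BASE-T1 carries two, with headroom left for sensor and control traffic.
```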

Harnessing the low cost and high bandwidth of Ethernet brings many advantages to next-generation automotive architecture, including the flexibility to add new applications. In other words, it leaves room to build in features that haven’t even been thought up yet. Because while the car of the future may drive itself, it takes a consortium of technology leaders to pave the way.

# # #

Top Eight Data Center Trends For Keeping up with High Data Bandwidth Demand

By Nick Ilyadis, VP of Portfolio Technology, Marvell

IoT devices, online video streaming, increased throughput for servers and storage solutions – all have contributed to the massive explosion of data circulating through data centers and the increasing need for greater bandwidth. IT teams have been chartered with finding solutions that support higher bandwidth and faster data speeds, yet must do so in the most cost-efficient way – a formidable task indeed. Marvell recently shared with eWeek what it sees as the top trends in data centers as they try to keep up with the unprecedented demand for higher and higher bandwidth. Below are the top eight data center trends Marvell has identified as IT teams develop the blueprint for achieving high-bandwidth, cost-effective solutions to keep up with explosive data growth.

1.) Higher Adoption of 25GbE

To support this increased need for high bandwidth, companies are evaluating whether to adopt 40GbE to the server as an upgrade from 10GbE. 25GbE provides more cost-effective throughput than 40GbE, since 40GbE requires more power and costlier cables. Therefore, 25GbE is becoming acknowledged as an optimal next-generation Ethernet speed for connecting servers as data centers seek to balance cost/performance tradeoffs.

2.) The Ability to Bundle and Unbundle Channels

Historically, data centers have upgraded to higher link speeds by aggregating multiple single-lane 10GbE network physical layers. Today, 100Gbps can be achieved by bundling four 25Gbps links together or alternatively, 100GbE can also be unbundled into four independent 25GbE channels. The ability to bundle and unbundle 100GbE gives IT teams wider flexibility in moving data across their network and in adapting to changing customer needs.

3.)  Big Data Analytics

Increased data means increased traffic. Real-time analytics allow organizations to monitor and make adjustments as needed to effectively allocate precious network bandwidth and resources. Leveraging analytics has become a key tool for data center operators to maximize their investment.


4.) Growing Demand for Higher-Density Switches

Advances in semiconductor processes to 28nm and 16nm have allowed network switches to become smaller and smaller. In the past, a 48-port switch required two chips with advanced port configurations. But today, the same result can be achieved on a single chip, which not only keeps costs down, but improves power efficiency.

5.) Power Efficiency Needed to Keep Costs Down

Energy costs are often among the highest costs incurred by data centers.  Ethernet solutions designed with greater power efficiency help data centers transition to the higher GbE rates needed to keep up with the higher bandwidth demands, while keeping energy costs in check.


6.) More Outsourcing of IT to the Cloud

IT organizations are not only adopting 25GbE to address increasing bandwidth demands, they are also turning to the cloud. By outsourcing IT to the cloud, organizations can free up capacity on their own networks while maintaining bandwidth speeds.

7.) Using NVM Express-based Storage to Maximize Performance

NVM Express® (NVMe™) is a scalable host controller interface designed to address the needs of enterprise, data center and client systems that utilize PCIe-based solid-state drives (SSDs). By using the NVMe protocol, data centers can exploit the full performance of SSDs, creating new compute models that no longer have the limitations of legacy rotational media. SSD performance can be maximized, while server clusters can pool storage and share data access throughout the network.


8.) Transition from Servers to Network Storage

With the growing amount of data transferred across networks, more data centers are deploying storage on the network rather than in individual servers. Ethernet technologies are being leveraged to attach storage to the network instead of legacy storage interconnects as the data center transitions from a traditional server model to networked storage.

As shown above, IT teams are using a variety of technologies and methods to keep up with the explosive increase in data and higher needs for data center bandwidth. What methods are you employing to keep pace with the ever-increasing demands on the data center, and how do you try to keep energy usage and costs down?

# # #


NVMe-based Network Fabrics Blow Through Legacy Rotational Media Limitations in the Data Center: Speed and Cost Benefits of NVMe SSD Shared Storage Now in Its Second Generation

By Nick Ilyadis, VP of Portfolio Technology, Marvell

Marvell Debuts 88SS1092 Second-Gen NVM Express SSD Controller at OCP Summit  

SSDs in the Data Center: NVMe and Where We’ve Been

When solid-state drives (SSDs) were first introduced into the data center, the infrastructure mandated that they work within the confines of the then-current bus technologies, such as Serial ATA (SATA) and Serial Attached SCSI (SAS), which were developed for rotational media. Even the fastest hard disk drives (HDDs), of course, couldn’t keep up with an SSD, but neither could these legacy pipelines, which created a bottleneck that hampered full exploitation of SSD technology. PCI Express (PCIe), a high-bandwidth bus technology already in place as a transport layer for networking, graphics and other add-in cards, became the next viable option, but the PCIe interface still relied on old HDD-based SCSI or SATA protocols. Thus the NVM Express (NVMe) industry working group was formed to create a standardized set of protocols and commands developed for the PCIe bus, in order to allow multiple paths that could take advantage of the full benefits of SSDs in the data center. The NVMe specification was designed from the ground up to deliver high-bandwidth, low-latency storage access for current and future NVM technologies.

The NVMe interface provides an optimized command issue and completion path. It supports parallel operation, with up to 64K commands within a single I/O queue to the device. Additionally, support was added for many enterprise capabilities, such as end-to-end data protection (compatible with the T10 DIF and DIX standards), enhanced error reporting and virtualization. All in all, NVMe is a scalable host controller interface designed to address the needs of enterprise, data center and client systems that utilize PCIe-based solid-state drives, helping to maximize SSD performance.
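The paired submission/completion queue model is what makes that parallelism possible. Below is a toy sketch of the idea in Python (not the actual NVMe register interface or a driver API), with the queue count, depth and command tuple chosen purely for illustration:

```python
from collections import deque

# Toy model of NVMe's paired submission/completion queues. This is NOT the
# real register-level interface; queue count, depth and the command tuple
# are illustrative. The spec allows up to 64K I/O queues, each up to 64K
# commands deep.

QUEUE_PAIRS = 4
QUEUE_DEPTH = 64

submission = [deque() for _ in range(QUEUE_PAIRS)]
completion = [deque() for _ in range(QUEUE_PAIRS)]

# Host side: spread 4 KiB read commands round-robin across the submission
# queues, keeping many commands in flight per queue.
for cmd_id in range(256):
    sq = submission[cmd_id % QUEUE_PAIRS]
    if len(sq) < QUEUE_DEPTH:
        sq.append(("READ", cmd_id, 4096))        # (opcode, command id, bytes)

# Device side: each queue is serviced independently, and a completion entry
# is posted to the matching completion queue.
for qid, sq in enumerate(submission):
    while sq:
        opcode, cmd_id, nbytes = sq.popleft()
        completion[qid].append((cmd_id, "SUCCESS"))

print([len(cq) for cq in completion])            # 64 completions per queue pair
```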

SSD Network Fabrics

New NVMe controllers from companies like Marvell allow the data center to share storage data to further maximize cost and performance efficiencies. By creating SSD network fabrics, a cluster of SSDs can be formed that pools storage from individual servers and maximizes overall data center storage. In addition, by connecting additional servers through a common enclosure, data can be transported for shared access. These new compute models therefore allow data centers not only to fully exploit the fast performance of SSDs, but also to deploy those SSDs more economically throughout the data center, lowering overall cost and streamlining maintenance. Instead of adding more SSDs to individual servers, under-utilized SSDs can be tapped and redeployed for use by over-allocated servers.

Here’s a simple example of how these network fabrics work: if a system has ten servers, each with an SSD sitting on the PCIe bus, an SSD cluster can be formed from those SSDs to provide not only additional storage but also a way to pool and share data access. If, say, one server is only 10 percent utilized while another is over-allocated, the SSD cluster can give the over-allocated server more storage without adding SSDs to the individual servers. Multiply the example by hundreds of servers, and the cost, maintenance and performance efficiencies skyrocket.
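A small sketch makes the arithmetic behind that example concrete. The server names, capacities and utilization figures below are hypothetical; the point is simply that an over-allocated server can borrow from an under-utilized one instead of gaining a new drive:

```python
# Hypothetical illustration of pooling SSD capacity across servers instead of
# adding drives to the busiest ones. Server names and figures are made up.

SSD_GB = 1000    # one 1 TB SSD per server, sitting on that server's PCIe bus

utilization = {   # fraction of the local SSD each server actually uses
    "server-01": 0.10,   # nearly idle
    "server-02": 0.95,   # over-allocated; standalone, it would need another drive
    "server-03": 0.40,
    "server-04": 0.55,
}

pool_capacity = SSD_GB * len(utilization)
pool_used = sum(SSD_GB * frac for frac in utilization.values())

print(f"Pool capacity: {pool_capacity} GB")
print(f"Pool used    : {pool_used:.0f} GB")
print(f"Pool free    : {pool_capacity - pool_used:.0f} GB available to any server")

# Standalone, server-02 is nearly full; in the pooled cluster it can draw on
# the ~900 GB sitting unused in server-01 without any new hardware.
```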

Marvell helped pave the way for these new compute models in the data center when it introduced its first NVMe SSD controller. That product supported up to four lanes of PCIe 3.0 and was suitable for full 4GB/s or 2GB/s end points, depending on host system customization. It enabled unparalleled IOPS performance using NVMe’s advanced command handling. In order to fully utilize the high-speed PCIe connection, Marvell’s NVMe design facilitated PCIe link data flows by deploying massive hardware automation, which helped alleviate legacy host-control bottlenecks and unleash true flash performance.

Second-Generation NVMe Controllers are Here!

That first product has now been followed by the Marvell 88SS1092 second-generation NVMe SSD controller, which has passed in-house SSD validation and third-party OS/platform compatibility testing. The Marvell® 88SS1092 is therefore ready to boost next-generation storage and data center systems, and is being debuted at the Open Compute Project (OCP) Summit March 8 and 9 in San Jose, Calif.

The Marvell 88SS1092 is Marvell’s second-generation NVMe SSD controller, capable of PCIe 3.0 x4 end points to provide a full 4GB/s interface to the host and help remove performance bottlenecks. While the new controller advances a solid-state storage system to a more fully flash-optimized architecture for greater performance, it also includes Marvell’s third-generation error-correcting low-density parity check (LDPC) technology for additional reliability enhancement, an endurance boost and TLC NAND device support on top of MLC NAND.
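The 4GB/s figure follows from the PCIe 3.0 link parameters themselves. A quick check, using the standard 8 GT/s per-lane signaling rate and 128b/130b line encoding:

```python
# Why a PCIe 3.0 x4 end point works out to roughly 4 GB/s of host bandwidth.

SIGNALING_GT_S = 8.0      # PCIe 3.0 per-lane signaling rate, gigatransfers/s
ENCODING = 128 / 130      # 128b/130b line encoding efficiency
LANES = 4

gb_per_s_per_lane = SIGNALING_GT_S * ENCODING / 8   # payload gigabytes/s per lane
link_gb_per_s = gb_per_s_per_lane * LANES

print(f"PCIe 3.0 x{LANES}: ~{link_gb_per_s:.2f} GB/s")   # ~3.94 GB/s, i.e. "full 4GB/s"
# Transaction- and data-link-layer overhead trims this slightly in practice.
```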

Today, the speed and cost benefits of NVMe SSD shared storage are not only a reality, but are now in their second generation. The network paradigm has shifted. By using the NVMe protocol, designed from the ground up to exploit the full performance of SSDs, new compute models are being created without the limitations of legacy rotational media. SSD performance can be maximized, while SSD clusters and new network fabrics enable pooled storage and shared data access. The hard work of the NVMe working group is becoming a reality for today’s data center, as new controllers and technology help optimize the performance and cost efficiencies of SSD technology.

Marvell 88SS1092 Second-Generation NVMe SSD Controller: new process and advanced NAND controller design.