

Modular Networks Drive Cost Efficiencies in Data Center Upgrades

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

Exponential growth in data center usage has driven huge investment in the networking infrastructure used to connect virtualized servers to the multiple services they now need to accommodate. To support the server-to-server traffic that virtualized data centers require, the networking spine will generally rely on high-capacity 40 Gbit/s and 100 Gbit/s switch fabrics, with aggregate throughputs now hitting 12.8 Tbit/s. But the ‘one size fits all’ approach employed to develop these switch fabrics quickly leads to a costly misalignment: data center owners need to match the interfaces on the storage units and server blades they have already installed with the switches they are buying to support their scale-out plans.

The top-of-rack (ToR) switch provides one way to match the demands of the server equipment and the network infrastructure. It can aggregate the data from lower-speed network interfaces and so act as a front end to the core network fabric. But such switches tend to be far more complex than is actually needed, as they are often derived from older generations of core switch fabric. They perform a level of switching that is unnecessary and, as a result, are not cost effective when they are primarily aggregating traffic on its way to the core network’s 12.8 Tbit/s switching engines. The heightened expense manifests itself not only in hardware complexity and the burden of managing an extra network tier, but also in power and air conditioning: it is not unusual to find five or more fans inside each unit cooling the silicon switch. There is another way to support the requirements of data center operators, one which consumes far less power and money while offering greater modularity and flexibility.

The IEEE 802.1BR standard for bridge port extension provides a means of overcoming the high power and cost associated with traditional ToR switch designs. It makes it possible to implement a bridge between a core network interface and a number of port extenders that break out connections to individual edge devices. An attractive feature of the standard is that port extenders can be cascaded, for even greater modularity. As a result, many lower-speed ports of 1 Gbit/s and 10 Gbit/s can be served by one core network port (supporting 40 Gbit/s or 100 Gbit/s operation) through a single controlling bridge device.
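As a rough illustration of that fan-in, the Python sketch below models a controlling bridge uplink serving a small cascade of port extenders and totals the edge bandwidth behind it. The port counts and speeds are invented for the example, not taken from any product specification.

```python
# Minimal sketch of the cascaded 802.1BR topology arithmetic described above.
# All device capacities here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PortExtender:
    edge_ports: int          # lower-speed ports facing servers/storage
    edge_speed_gbps: int     # e.g. 1 or 10 Gbit/s
    children: list = field(default_factory=list)  # cascaded extenders

    def aggregate_gbps(self) -> int:
        """Total edge bandwidth funnelled towards the controlling bridge."""
        own = self.edge_ports * self.edge_speed_gbps
        return own + sum(c.aggregate_gbps() for c in self.children)

# One controlling-bridge uplink at 100 Gbit/s serving a small cascade:
leaf = PortExtender(edge_ports=8, edge_speed_gbps=1)
pe = PortExtender(edge_ports=8, edge_speed_gbps=10, children=[leaf])

uplink_gbps = 100
demand = pe.aggregate_gbps()  # 8*10 + 8*1 = 88 Gbit/s
print(f"Edge demand {demand} Gbit/s on a {uplink_gbps} Gbit/s bridge port "
      f"-> load {demand / uplink_gbps:.2f}x")
```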

Providing a simpler, more modular approach, the passive intelligent port extender (PIPE) architecture developed by Marvell leads to next-generation rack units that no longer require any fans for thermal management. Reference designs have already been built that use a simple 65 W open-frame power supply to feed all the devices required, even in a high-capacity configuration with 48 ports of 10 Gbit/s. Furthermore, the equipment dispenses with the need for external management: the management function can move to the core 12.8 Tbit/s switch fabric, providing further savings in operational expenditure. It is a demonstration of exactly how a more modular approach can greatly improve the efficiency of today’s and tomorrow’s data center implementations.
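The per-port power implication is easy to check. In the sketch below, the 65 W and 48-port figures come from the reference design mentioned above, while the conventional ToR draw is a purely hypothetical placeholder for comparison.

```python
# Back-of-the-envelope power comparison. The 65 W / 48-port figures are from
# the reference design described above; the ToR figure is a hypothetical
# placeholder for a conventional fan-cooled switch, not a measured value.
PIPE_SUPPLY_W = 65
PORTS = 48
ASSUMED_TOR_W = 200  # hypothetical conventional ToR draw, for illustration only

print(f"PIPE: {PIPE_SUPPLY_W / PORTS:.2f} W per 10 Gbit/s port")
print(f"Assumed ToR: {ASSUMED_TOR_W / PORTS:.2f} W per port")
```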


Extending the Lifecycle of 3.2T Switch-Based Architecture

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

and Yaniv Kopelman, Networking and Connectivity CTO, Marvell

The growth in the expanse of data centers has been unprecedented, driven by exponential increases in demand for cloud computing and cloud storage. While Gigabit switches proved more than sufficient just a few years ago, today even 3.2 Terabit (3.2T) switches, which currently serve as the fundamental building blocks of data center infrastructure, are being pushed to full capacity.

While network demands have increased, Moore’s Law (which effectively defines the pace of the semiconductor industry) has not been able to keep up. Instead of scaling at the silicon level, data centers have had to scale out. This has come at a cost, though: ever-increasing capital and operational expenditure, and greater latency. Facing this challenging environment, a different approach will have to be taken. To accommodate current expectations economically, while retaining capacity for future growth, data centers (as we will see) need to move towards a modularized approach.


Scaling Out the Data Center

Data centers will have to contend with demands for substantially greater network capacity as more services and more data storage migrate to the cloud. This increase in network capacity, in turn, results in demand for more silicon to support it.

To meet increasing networking capacity, data centers are buying ever more powerful top-of-rack (ToR) leaf switches. These, in turn, consume more power, which eats into the overall power budget and leaves less power available for the data center servers. Not only is power wasted unnecessarily; thermal management costs and overall operational expenditure are pushed upwards as well. And as these data centers scale out to meet demand, they often have to add more complex hierarchical structures to their architecture, increasing latencies for both north-south and east-west traffic in the process.

The price of silicon per gate is not going down either. We used to enjoy cost reductions as process sizes decreased from 90 nm to 65 nm to 40 nm, but that is no longer strictly true. As process sizes shrink below 28 nm, yields are decreasing and prices are consequently going up. To address the problems of cloud-scale data centers, traditional methods will not suffice. Instead, we need to take a modularized approach to networking.

PIPEs and Bridges

Today’s data centers often run on a multi-tiered leaf-and-spine hierarchy. Racks with ToR switches connect to the network’s spine switches; these, in turn, connect to core switches, which subsequently connect to the Internet. Both the spine and the ToR layers contain full managed switches.

By following a modularized approach, it is possible to remove the ToR switches and replace them with simple IO devices, specifically port extenders. This effectively extends the IO ports of the spine switch all the way down to the ToR. What results is a passive, unmanaged ToR that simply passes packets to the spine switch. Furthermore, by taking a whole layer out of the management hierarchy, the network becomes flatter and considerably easier to manage.

The spine switch now acts as the controlling bridge, managing the layer that was previously taken care of by the ToR switch. Through such an arrangement, it is possible to disaggregate the IO ports of the network, previously located at the ToR switch, from the logic at the spine switch which manages them. This modularized approach is being facilitated by the growing number of port extenders and control bridges from Marvell that are compatible with the IEEE 802.1BR bridge port extension standard.
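A toy comparison makes the effect on management scope and path length visible. The rack counts and hop accounting below are schematic assumptions, simply contrasting managed leaf/spine switches with unmanaged port extenders under a single controlling bridge.

```python
# Illustrative comparison of the two hierarchies described above: a managed
# leaf/spine network versus a flattened controlling-bridge + port-extender
# design. Counts and hop maths are schematic assumptions, not measurements.
RACKS = 20
SPINES = 2

# Conventional: every ToR leaf is a managed switch, in addition to the spines.
managed_conventional = RACKS + SPINES
hops_conventional = 4  # server -> ToR -> spine -> ToR -> server

# Modularized: ToRs become unmanaged port extenders; only spines are managed.
managed_modular = SPINES
# The port extender forwards without making a switching decision, so
# logically the path is server -> controlling bridge -> server.
hops_modular = 2

print(f"Managed devices: {managed_conventional} vs {managed_modular}")
print(f"Logical switching hops (east-west): {hops_conventional} vs {hops_modular}")
```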

Solving Data Center Scaling Challenges

The modularized port-extender and control-bridge approach allows data centers to address the full breadth of their scaling challenges. Port extenders solve the latency problem by flattening the hierarchy: instead of having conventional ‘leaf’ and ‘spine’ tiers, the port extender simply extends the IO ports of the spine switch to the ToR. Each server in the rack has a near-direct connection to the managing switch, which improves latency for north-south traffic.

The port extender also aggregates traffic from 10 Gbit Ethernet ports into higher-throughput outputs, allowing terabit switches that only have 25, 40 or 100 Gbit Ethernet ports to communicate directly with 10 Gbit Ethernet edge devices. The passive port extender is a greatly simplified device compared to a managed switch, which means lower up-front costs, lower power consumption and a simpler network management scheme. Rather than dealing with both leaf and spine switches, network administration simply needs to focus on the managed switches at the spine layer.
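To put numbers to that fan-in, here is a small sketch of how many 10 Gbit Ethernet edge ports a single higher-speed uplink can carry, both at line rate and at an assumed 3:1 oversubscription ratio (the ratio is illustrative, not prescribed by the standard).

```python
# Rough aggregation arithmetic for the fan-in described above: how many
# 10 Gbit Ethernet edge ports one higher-speed uplink can carry at a given
# oversubscription ratio. The 3:1 ratio is an illustrative assumption.
def edge_ports_per_uplink(uplink_gbps: int, edge_gbps: int = 10,
                          oversubscription: float = 1.0) -> int:
    return int(uplink_gbps * oversubscription // edge_gbps)

for uplink in (25, 40, 100):
    line_rate = edge_ports_per_uplink(uplink)
    over = edge_ports_per_uplink(uplink, oversubscription=3.0)
    print(f"{uplink}G uplink: {line_rate} edge ports at line rate, "
          f"{over} at 3:1 oversubscription")
```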

With no end in sight to the ongoing progression of network capacity, cloud-scale data centers will always have ever-increasing scaling challenges to attend to. The modularized approach described here makes those challenges solvable.


Further Empowerment of the Wireless Office

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

Office environments are steadily migrating towards wholesale wireless connectivity, in order to benefit from the greater convenience it offers employees and its more straightforward implementation. Thanks to this, office staff will no longer be limited by where cables and ports are available, giving them a much higher degree of mobility. They can remain constantly connected, and their work activities won’t be hindered, whether they are at their desk, in a meeting or even in the cafeteria. This makes enterprises much better aligned with our modern working culture, where hot desking and bring-your-own-device (BYOD) policies are becoming increasingly commonplace.

The main dynamic accelerating this trend will be the emergence of 802.11ac Wave 2 Wi-Fi technology. With the prospect of Gigabit data rates (enabling the streaming of video content, faster download speeds, higher-quality video conferencing, and so on), it clearly has considerable appeal. In addition, the protocol offers extended range and greater bandwidth through multi-user MIMO operation, so that a larger number of users can be supported simultaneously. This is advantageous to the enterprise, as fewer access points per user will be required.

[Figure 1: Example enterprise/campus office floorplan with cubicles and meeting rooms]

An example office floorplan for an enterprise/campus is shown in Figure 1 (a large number of cubicles plus some meeting rooms). Though scenarios vary, an enterprise/campus is generally likely to occupy a total floor space of between 20,000 and 45,000 square feet. With one 802.11ac access point able to cover an area of 3,000 to 4,000 square feet, a wireless office would need a total of about 8 to 12 access points to be fully effective. This density should be more than acceptable for average voice and data needs. Supporting these access points will be a high-capacity wireline backbone.
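The sizing arithmetic above is simple enough to capture in a few lines. The coverage and floor-space figures below are the ones quoted in the text; the ceiling division is simply our framing of the same estimate.

```python
# The access-point sizing arithmetic from the paragraph above, made explicit.
# Coverage figures are those quoted in the text.
import math

def access_points_needed(floor_sqft: int, coverage_sqft: int) -> int:
    return math.ceil(floor_sqft / coverage_sqft)

for floor in (20_000, 45_000):
    low = access_points_needed(floor, 4_000)   # best-case coverage per AP
    high = access_points_needed(floor, 3_000)  # worst-case coverage per AP
    print(f"{floor} sq ft office: {low}-{high} access points")
```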

Increasingly, rather than employing traditional 10 Gigabit Ethernet infrastructure, the enterprise/campus backbone is going to be based on 25 Gigabit Ethernet technology. This is expected to see widespread uptake in newly constructed office buildings over the next two to three years, as the related optics continue to become more affordable. Clearly enterprises want to tap into the enhanced performance offered by 802.11ac, but they have to do so while adhering to stringent budgetary constraints. As the data capacity of the backbone rises, so will the complexity of the hierarchical structure that needs to be placed underneath it, consisting of extensive intermediary switching technology. Well, that’s what conventional thinking would tell us.

Before embarking on a 25 Gigabit Ethernet/802.11ac implementation, enterprises have to be fully aware of what it entails. As well as the initial investment associated with the hardware-heavy arrangement just outlined, there are also ongoing operational costs to consider. By aggregating the access points into a port extender that connects directly over the 25 Gigabit Ethernet backbone to a central control bridge switch, it is possible to significantly simplify the hierarchical structure, effectively eliminating a layer of unneeded complexity from the system.

Through its Passive Intelligent Port Extender (PIPE) technology, Marvell is doing just that. The product offering is unique in the market: other port extenders currently available were not originally designed for this purpose and therefore exhibit compromises in performance, price and power. PIPE, in contrast, is an optimized solution that fully leverages the IEEE 802.1BR bridge port extension standard, dispensing with the need for expensive intermediary switches between the control bridge and the access point level and reducing roll-out costs as a result. It delivers markedly higher throughput, since aggregating multiple 802.11ac access points into intermediate 10 Gigabit Ethernet switches is avoided. And with fewer network elements to manage, there is some reduction in ongoing running costs too.

PIPE means that enterprises can future-proof their office data communication infrastructure, starting with 10 Gigabit Ethernet and upgrading to 25 Gigabit Ethernet when it is needed. The number of ports it incorporates is a good match for the number of access points an enterprise/campus will need to address the wireless connectivity demands of its workforce. It enables dual-homing functionality, so that elevated service reliability and resiliency are both assured through system redundancy. In addition, support for Power over Ethernet (PoE) allows access points to connect to both a power supply and the data network through a single cable, further simplifying deployment.