Marvell Blog

Featuring technology ideas and solutions worth sharing

Latest Articles

July 17th, 2017

Rightsizing Ethernet

By George Hervey, Principal Architect, Marvell

Implementation of cloud infrastructure is occurring at a phenomenal rate, outpacing Moore’s Law. Annual growth is believed to be 30x, and as much as 100x in some cases. In order to keep up, cloud data centers are having to scale out massively, with hundreds or even thousands of servers becoming a common sight.

At this scale, networking becomes a serious challenge. More and more switches are required, increasing capital costs as well as management complexity. To tackle the rising expense, network disaggregation has become an increasingly popular approach. By separating the switch hardware from the software that runs on it, vendor lock-in is reduced or even eliminated. OEM hardware can be used with software developed in-house or sourced from third-party vendors, so that cost savings can be realized.

Though network disaggregation has tackled the immediate problem of hefty capital expenditure, operating expenditure remains high: the number of managed switches stays essentially the same. To reduce operating costs, the issue of network complexity also has to be tackled.

Network Disaggregation
Almost every application we use today, whether at home or in the work environment, connects to the cloud in some way. Our email providers, mobile apps, company websites, virtualized desktops and servers all run on servers in the cloud.

For these cloud service providers, this incredible growth has been both a blessing and a challenge. As demand increases, Moore’s law has struggled to keep up. Scaling data centers today involves scaling out – buying more compute and storage capacity, and subsequently investing in the networking to connect it all. The cost and complexity of managing everything can quickly add up.

Until recently, networking hardware and software were often tied together. Buying a switch, router or firewall from one vendor would require you to run their software on it as well. Larger cloud service providers saw an opportunity. These players often had no shortage of skilled software engineers, and at the massive scale they operated at, they found that buying commodity networking hardware and running their own software on it would save them a great deal in terms of Capex.

This disaggregation of the software from the hardware was financially attractive, but it did nothing to address the complexity of the network infrastructure. There was still a great deal of room to optimize further.

802.1BR
Today’s cloud data centers rely on a layered architecture, often in a fat-tree or leaf-spine arrangement. Rows of racks, each with top-of-rack (ToR) switches, are connected to upstream switches on the network spine. The ToR switches are, in fact, performing simple aggregation of network traffic. Using relatively complex, energy-consuming switches for this task results in significant capital expense, as well as management costs and no shortage of headaches.

The port extension approach, outlined in the IEEE 802.1BR standard, aims to streamline this architecture. By replacing ToR switches with port extenders, port connectivity is extended directly from the rack to the upstream switch. Management is consolidated into the smaller number of switches located at the network spine, eliminating the dozens or possibly hundreds of switches at the rack level.
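
To see why this matters for operating costs, a rough back-of-the-envelope sketch helps. The Python below compares the number of managed network elements in a conventional leaf-spine design with an 802.1BR port-extender design; the rack and spine counts are illustrative assumptions, not figures from this article.

def managed_elements(racks, spine_switches, use_port_extenders):
    # In the conventional design every ToR switch is a managed element.
    # With 802.1BR the ToR devices become unmanaged port extenders, so only
    # the upstream controlling bridges still need to be managed.
    tor_managed = 0 if use_port_extenders else racks
    return tor_managed + spine_switches

racks = 200    # assumed number of racks, one ToR device per rack
spines = 8     # assumed number of spine / controlling-bridge switches

print(managed_elements(racks, spines, use_port_extenders=False))   # 208 managed switches
print(managed_elements(racks, spines, use_port_extenders=True))    # 8 managed switches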

The reduction in switch management complexity of the port extender approach has been widely recognized, and various network switches on the market now comply with the 802.1BR standard. However, not all the benefits of this standard have actually been realized.

The Next Step in Network Disaggregation
Though many of the port extenders on the market today fulfill 802.1BR functionality, they do so using legacy components. Instead of being optimized for 802.1BR itself, they rely on traditional switches. As a consequence, this limits the potential cost and power benefits that the new architecture offers.

Designed from the ground up for 802.1BR, Marvell’s Passive Intelligent Port Extender (PIPE) offering is specifically optimized for this architecture. PIPE is interoperable with 802.1BR-compliant upstream bridge switches from all the industry’s leading OEMs. It enables fanless, cost-efficient port extenders to be deployed, providing upfront savings as well as ongoing operational savings for cloud data centers. Power consumption is lowered and switch management complexity is reduced by an order of magnitude.

The first wave in network disaggregation was separating switch software from the hardware that it ran on. 802.1BR’s port extender architecture is bringing about the second wave, where ports are decoupled from the switches which manage them. The modular approach to networking discussed here will result in lower costs, reduced energy consumption and greatly simplified network management.

July 7th, 2017

Extending the Lifecycle of 3.2T Switch-Based Architecture

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

and Yaniv Kopelman, Networking and Connectivity CTO, Marvell

The growth in the expanse of data centers has been unprecedented, driven by the exponential increase in demand for cloud computing and cloud storage. While Gigabit switches proved more than sufficient just a few years ago, today even 3.2 Terabit (3.2T) switches, which currently serve as the fundamental building blocks upon which data center infrastructure is constructed, are being pushed to their full capacity.

While network demands have increased, Moore’s law (which effectively defines the semiconductor industry) has not been able to keep up. Instead of scaling at the silicon level, data centers have had to scale out. This has come at a cost, though, with ever-increasing capital and operational expenditure and greater latency all resulting. Facing this challenging environment, a different approach is going to have to be taken. In order to accommodate current expectations economically, while still having the capacity for future growth, data centers (as we will see) need to move towards a modularized approach.

Scaling out the datacenter

Data centers are destined to have to contend with demands for substantially heightened network capacity – as a greater number of services, plus more data storage, start migrating to the cloud. This increase in network capacity, in turn, results in demand for more silicon to support it.

To meet increasing networking capacity, data centers are buying ever more powerful Top-of-Rack (ToR) leaf switches. These in turn consume more power, which eats into the overall power budget and means that less power is available for the data center servers. Not only is power being unnecessarily wasted, the associated thermal management costs and the overall Opex are pushed upwards. As these data centers scale out to meet demand, they often have to add more complex hierarchical structures to their architecture as well, thereby increasing latencies for both north-south and east-west traffic in the process.

Nor is the price of silicon per gate going down. We used to enjoy cost reductions as process sizes decreased from 90 nm to 65 nm to 40 nm. That is no longer strictly true, however. As process sizes shrink below 28 nm, yields are decreasing and prices are consequently going up. To address the problems of cloud-scale data centers, traditional methods will not suffice. Instead, we need to take a modularized approach to networking.

PIPEs and Bridges

Today’s data centers often run on a multi-tiered leaf-and-spine hierarchy. Racks with ToR switches connect to the network spine switches. These, in turn, connect to core switches, which subsequently connect to the Internet. Both the spine and the top-of-rack layers contain full, managed switches.

By following a modularized approach, it is possible to remove the ToR switches and replace them with simple IO devices – port extenders specifically. This effectively extends the IO ports of the spine switch all the way down to the ToR. What results is a passive ToR that is unmanaged. It simply passes the packets to the spine switch. Furthermore, by taking a whole layer out of the management hierarchy, the network becomes flatter and is thus considerably easier to manage.

The spine switch now acts as the controlling bridge. It is able to manage the layer which was previously taken care of by the ToR switch. This means that, through such an arrangement, it is possible to disaggregate the IO ports of the network that were previously located at the ToR switch, from the logic at the spine switch which manages them. This innovative modularized approach is being facilitated by the increasing number of Port Extenders and Control Bridges now being made available from Marvell that are compatible with the IEEE 802.1BR bridge port extension standard.
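
As a rough illustration of that split, the sketch below models a controlling bridge that owns all port state and a port extender that merely tags traffic with an E-channel identifier before passing it upstream. The class names and data structures are illustrative only; they are not taken from the 802.1BR specification or from any Marvell product.

class ControllingBridge:
    def __init__(self):
        self.ecid_table = {}   # (extender_id, port) -> E-channel identifier
        self.next_ecid = 1

    def register_extended_port(self, extender_id, port):
        # The bridge, not the extender, owns all port configuration and state.
        self.ecid_table[(extender_id, port)] = self.next_ecid
        self.next_ecid += 1

class PortExtender:
    def __init__(self, extender_id, bridge):
        self.extender_id = extender_id
        self.bridge = bridge

    def uplink(self, port, frame):
        # The extender makes no forwarding decision of its own; it simply
        # tags the frame with its E-channel identifier and passes it upstream.
        ecid = self.bridge.ecid_table[(self.extender_id, port)]
        return {"ecid": ecid, "frame": frame}

bridge = ControllingBridge()
extender = PortExtender("rack-17", bridge)
bridge.register_extended_port("rack-17", port=3)
print(extender.uplink(3, b"example frame"))   # {'ecid': 1, 'frame': b'example frame'}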

Solving Data Center Scaling Challenges

The modularized port-extender and control-bridge approach allows data centers to address the full length and breadth of their scaling challenges. Port extenders solve the latency problem by flattening the hierarchy. Instead of having conventional ‘leaf’ and ‘spine’ tiers, the port extender simply extends the IO ports of the spine switch to the ToR. Each server in the rack has a near-direct connection to the managing switch, which improves latency for north-south traffic.

The port extender also aggregates traffic from 10 Gbit Ethernet ports into higher-throughput uplinks, allowing terabit switches, which only have 25, 40 or 100 Gbit Ethernet ports, to communicate directly with 10 Gbit Ethernet edge devices. The passive port extender is a greatly simplified device compared to a managed switch, which means lower up-front costs, lower power consumption and a simpler network management scheme. Rather than dealing with both leaf and spine switches, network administration simply needs to focus on the managed switches at the spine layer.
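
The aggregation arithmetic is straightforward; the short sketch below computes the oversubscription ratio for a port extender with a given mix of 10 Gbit/s downlinks and faster uplinks. The port counts are illustrative assumptions rather than a specific product configuration.

def oversubscription_ratio(downlinks_10g, uplinks, uplink_speed_gbps):
    downstream_capacity = downlinks_10g * 10          # total Gbit/s towards servers
    upstream_capacity = uplinks * uplink_speed_gbps   # total Gbit/s towards the spine
    return downstream_capacity / upstream_capacity

# Assumed configuration: 48 x 10G server ports aggregated into 4 x 100G uplinks.
print(f"{oversubscription_ratio(48, 4, 100):.2f}:1")   # 1.20:1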

With no end in sight to the ongoing progression of network capacity, cloud-scale data centers will always have ever-increasing scaling challenges to attend to. The modularized approach described here makes those challenges solvable.

June 21st, 2017

Making Better Use of Legacy Infrastructure

By Ron Cates, Senior Director, Product Marketing, Networking Business Unit

The flexibility offered by wireless networking is revolutionizing the enterprise space. High-speed Wi-Fi®, provided by standards such as IEEE 802.11ac and 802.11ax, makes it possible to deliver next-generation services and applications to users in the office, no matter where they are working.

However, the higher wireless speeds involved are putting pressure on the cabling infrastructure that supports the Wi-Fi access points around an office environment. 1 Gbit/s Ethernet was more than adequate for older wireless standards and applications; now, with greater reliance on the new generation of Wi-Fi access points and their higher uplink speeds, the older infrastructure is starting to show strain. At the same time, in the server room itself, demand for high-speed storage and faster virtualized servers is placing pressure on the performance levels offered by the core Ethernet cabling that connects these systems together and to the wider enterprise infrastructure.

One option is to upgrade to a 10 Gbit/s Ethernet infrastructure, but this is a migration that can be prohibitively expensive. The Cat 5e cabling that exists in many office and industrial environments was not designed to cope with such elevated speeds. To make use of 10 Gbit/s equipment, that old cabling needs to come out and be replaced by a new copper infrastructure based on the Cat 6a standard. Cat 6a cabling can support 10 Gbit/s Ethernet over the full 100-meter range, whereas you would be lucky to run 10 Gbit/s at half that distance over a Cat 5e cable.

In contrast to data-center environments that are designed to cope easily with both server and networking infrastructure upgrades, enterprise cabling lying in ducts, in ceilings and below floors is hard to reach and swap out. This is especially true if you want to keep the business running while the switchover takes place.

Help is at hand with the emergence of the IEEE 802.3bz™ and NBASE-T® set of standards and the transceiver technology that goes with them. 802.3bz and NBASE-T make it possible to transmit at speeds of 2.5 Gbit/s or 5 Gbit/s across conventional Cat 5e or Cat 6 cabling at distances up to the full 100 meters. The transceiver technology leverages advances in digital signal processing (DSP) to make these higher speeds possible without demanding a change in the cabling infrastructure.
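
The practical upshot can be summarized as a simple lookup of which rate each cable category can carry over a full 100-meter run. The figures below are the headline numbers from the standards; actual reach depends on installation quality and alien crosstalk, so treat the sketch as indicative only.

# Headline rates over a full 100 m run for each cable category; real-world
# reach depends on installation quality and alien crosstalk.
MAX_RATE_GBPS_AT_100M = {
    "cat5e": 5.0,    # 2.5G/5GBASE-T per 802.3bz / NBASE-T
    "cat6":  5.0,    # 5GBASE-T at 100 m; 10GBASE-T only over shorter runs
    "cat6a": 10.0,   # 10GBASE-T at the full 100 m
}

def achievable_rate(cable_category, required_gbps):
    ceiling = MAX_RATE_GBPS_AT_100M[cable_category]
    return min(required_gbps, ceiling)

print(achievable_rate("cat5e", 10))   # 5.0 - the cabling, not the switch, is the limit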

The NBASE-T technology, a companion to the IEEE 802.3bz standard, incorporates novel features such as downshift, which responds dynamically to interference from other sources in the cable bundle. The result is a lower speed, but the downshift technology has the advantage that it does not cut off communication unexpectedly, providing time to diagnose the problem interferer in the bundle and perhaps reroute it to sit alongside less sensitive cables that may carry lower-speed signals. This is where the new generation of high-density transceivers comes in.

There are now transceivers coming onto the market that support data rates all the way from legacy 10 Mbit/s Ethernet up to the full 5 Gbit/s of 802.3bz/NBASE-T, and that will auto-negotiate the most appropriate data rate with the downstream device. This makes it easy for enterprise users to upgrade the routers and switches that support their core network without demanding upgrades to all the client devices. Further features, such as Virtual Cable Tester® functionality, make it easier to diagnose faults in the cabling infrastructure without resorting to specialized network instrumentation.
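
Conceptually, that auto-negotiation step boils down to both link partners advertising the rates they support and the highest common rate winning. The sketch below shows only that resolution logic; the real negotiation protocol involves considerably more signalling.

def resolve_link_rate(local_rates_mbps, partner_rates_mbps):
    # Both link partners advertise the rates they support; the highest rate
    # they have in common is the one the link trains to.
    common = set(local_rates_mbps) & set(partner_rates_mbps)
    return max(common) if common else None

multigig_switch_port = [10, 100, 1000, 2500, 5000]   # 10 Mbit/s up to 5 Gbit/s
legacy_client        = [10, 100, 1000]               # older Gigabit-only device
print(resolve_link_rate(multigig_switch_port, legacy_client))   # 1000 (1 Gbit/s)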

Transceivers and PHYs designed for switches can now support eight 802.3bz/NBASE-T ports in one chip, thanks to the integration made possible by leading-edge processes. Not only are these transceivers more cost-effective, they also consume far less power and PCB real estate than PHYs that were designed for 10 Gbit/s networks. This makes them a much more optimized solution, with benefits from a financial, thermal and logistical perspective.

The result is a networking standard that meshes well with the needs of modern enterprise networks – and lets that network and the equipment evolve at its own pace.

June 20th, 2017

Autonomous Vehicles and Digital Features Make the Car of the Future a “Data Center on Wheels”

By Donna Yasay, VP of Worldwide Business Development

Advanced digital features, autonomous vehicles and new auto safety legislation are all amongst the many “drivers” escalating the number of chips and technologies found in next-generation automobiles. The wireless, connectivity, storage and security technologies needed for the internal and external vehicle communications in cars today and in the future leverage technologies used in a data center; in fact, you could say the automobile is becoming a Data Center on Wheels.

Here are some interesting data points supporting the evolution of the Data Center on Wheels:

  • The National Highway Traffic Safety Administration (NHTSA) mandates that by May 2018, all new cars in the U.S. must have backup cameras. The agency reports that half of all new vehicles sold today already have backup cameras, showing widespread acceptance even without the NHTSA mandate.
  • Some luxury brands provide panoramic 360-degree surround views using multiple cameras. NVIDIA, which made its claim to fame in graphics processing chips for computers and video games, is a leading provider in the backup and surround view digital platforms, translating its digital expertise into the hottest of new vehicle trends. At the latest 2017 International CES, NVIDIA showcased its latest NVIDIA PX2, an Artificial Intelligence (AI) Car Computer for Self-Driving Vehicles, which enables automakers and their tier 1 suppliers to accelerate production of automated and autonomous vehicles.
  • According to an Intel presentation at CES reported in Network World, just one autonomous car will use 4,000GB (or 4 Terabytes) of data per day.
  • A January study by Strategy Analytics reported that by 2020, new cars are expected to have approximately 1,000 chips per vehicle.

Advanced Driver Assist Systems (ADAS), In-Vehicle Infotainment (IVI) and autonomous vehicles will all rely on digital information streamed internally within the vehicle and externally from the vehicle to other vehicles or third-party services via chips, sensors, network and wireless connectivity. All of this data will need to be processed, stored or transmitted seamlessly and securely, because a LoJack® isn’t necessarily going to help with a car hack.
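
To put the Intel figure quoted above in perspective, the quick arithmetic below converts 4,000 GB per day into an average sustained data rate.

gb_per_day = 4000                      # the Intel estimate quoted above
seconds_per_day = 24 * 60 * 60
mb_per_second = gb_per_day * 1000 / seconds_per_day
gbit_per_second = mb_per_second * 8 / 1000
print(f"{mb_per_second:.1f} MB/s average, {gbit_per_second:.2f} Gbit/s sustained")
# -> roughly 46 MB/s, or about 0.37 Gbit/s, before allowing for traffic peaks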

This is why auto makers are turning to the high tech and semiconductor industries to support the move to more digitized, automated cars. Semiconductor leaders in wireless, connectivity, storage and networking are all being tapped to design and manage the Data Center on Wheels. For example, Marvell recently announced the first automotive-grade system-on-chip (SoC) that integrates the latest Wi-Fi, Bluetooth, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) capabilities. Another technology product being offered for automotive use is the InnoDisk SATA 3ME4 Solid-State Drive (SSD) series. Originally designed for industrial system integrations, these storage drives can withstand the varied temperature ranges of a car, as well as shock and vibration under rugged conditions. Both of these products integrate state-of-the-art encryption, not only to store the information needed for data-driven vehicles but also to keep that information secure from unwanted intrusion.

Marvell and others are working to form standards and adapt secure digital solutions in wireless, connectivity, networking and storage specifically for the automobile, which is all the more important in self-driving vehicles. Data center standards such as Gigabit Ethernet are being developed for automobiles, and the industry is stepping up to help make sure that these Data Centers on Wheels are not only safe, but secure.

June 17th, 2017

Marvell Technology Instrumental in Ground-Breaking New Open Source NAS Solution

By Maen Suleiman, Software Product Line Manager, Marvell Semiconductor Inc.

The quantity of data storage that each individual now expects to have access to has ramped up dramatically over the course of the last few years. This has been predominantly fueled by society’s ravenous hunger for various forms of multimedia entertainment and more immersive gaming, plus our growing obsession with taking photos or videos of all manner of things that we experience during an average day.

The emergence of the ‘connected home’ phenomenon, along with greater use of wearable technology and the enhanced functionality being incorporated into each new generation of smartphone handset, have all contributed to our increasingly data-oriented lives. As a result, each of us is generating, downloading and transferring larger amounts of data-heavy content than would have been conceivable even a short while ago. For example, market research firm InfoTrends has estimated that consumers worldwide will take over 1.2 trillion new photos during 2017 (more than double the figure from 5 years ago). Furthermore, there are no indications that the dynamics driving this will weaken and that things will start to slow down. On the contrary, it is likely that the pace will only continue to accelerate.

If individuals are to keep amassing personal data at current rates, then it is clear that they will need access to a new form of flexible storage solution that is up to the job. In a report compiled by industry analysts at Technavio, the global consumer network attached storage (NAS) market is predicted to grow accordingly, witnessing an impressive 11% compound annual growth between now and the end of this decade.

Though it must be acknowledged that we are shifting an increasing proportion of our overall data storage needs to the cloud, synching large media files for use in the home environment can often prove impractical because of the latency issues that arise. There are also serious security issues associated with relying on cloud-based storage for certain personal data, and these need to be given due consideration.

Start-up company Kobol has recently initiated a crowdfunding campaign to garner financial backing for its Helios4 offering. The first of its kind, this is an open source, open hardware NAS solution that allows the storing and sharing of music, photos and movies through a connection to the user’s home network. It presents consumers with a secure, flexible and rapidly accessible data storage reserve with a capacity of up to 40 terabytes (which equates to around 700,000 hours of music, 20,000 hours of movies or 12 million photos).

Helios4 is compact. Built-in RAID redundancy is included so that ongoing reliability is assured. This means that even if one of the 4 hard drives (each delivering 10 terabytes) were to crash, the user’s content would remain safely stored, as the data is mirrored onto another of its drives. The result is a compact, cost-effective and energy-saving storage solution which acts like a ‘personal cloud’.
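
As a rough illustration of the trade-off between raw capacity and redundancy in a 4-bay, 10 TB-per-drive configuration, the sketch below applies the standard formulas for the common RAID levels; the level actually used, and hence the usable figure, is a choice left to the user rather than a Helios4-specific number.

def usable_capacity_tb(drives, drive_tb, raid_level):
    if raid_level == "raid0":               # striping only, no redundancy
        return drives * drive_tb
    if raid_level in ("raid1", "raid10"):   # mirroring
        return drives * drive_tb / 2
    if raid_level == "raid5":               # one drive's worth of parity
        return (drives - 1) * drive_tb
    if raid_level == "raid6":               # two drives' worth of parity
        return (drives - 2) * drive_tb
    raise ValueError(raid_level)

for level in ("raid0", "raid10", "raid5", "raid6"):
    print(level, usable_capacity_tb(4, 10, level), "TB usable")   # 40, 20, 30, 20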

 

Figure 1: Schematic showing the interface structure of Helios4 powered by ARMADA 388 SoC

 

Figure 2: The component parts that make up the Helios4 kit

Inspired by the open hardware, collaborative philosophy, Helios4 can be supplied as a simple kit that engineers can assemble themselves. Alternatively, for those with less engineering experience, it comes as a straightforward out-of-the-box solution. It offers a high degree of flexibility and a broad array of different connectivity options.

At the heart of the Helios4’s design is a sophisticated ARMADA 388 32-bit ARM-based system-on-chip (SoC) from Marvell, which combines high performance with power-frugal operation. Based on a 28 nm low-power semiconductor process, its dual-core ARM Cortex-A9 processing resource is capable of running at speeds of up to 1.8 GHz. USB 3.0 SuperSpeed and SATA 3.0 ports are included so that elevated connectivity levels can be supported. Cryptographic mechanisms are also integrated to maintain superior system security.

By clicking on the following link you can learn more about the Helios4 Kickstarter campaign. For those interested in getting involved, the deadline to make a contribution is 19th June.

 

June 7th, 2017

Community Platform Allows Easy Adoption of ARM 64-bit in Data Center, Networking and Storage Ecosystems

By Maen Suleiman, Software Product Line Manager, Marvell Semiconductor Inc.

The Marvell MACCHIATObin community board is a first-of-its-kind, high-end ARM 64-bit networking and storage community board

The increasing availability of high-speed internet services is connecting people in novel and often surprising ways, and creating a raft of applications for data centers. Cloud computing, Big Data and the Internet of Things (IoT) are all starting to play a major role within the industry.

These opportunities call for innovative solutions to handle the challenges they present, many of which have not been encountered before in IT. The industry is answering that call through technologies and concepts such as software defined networking (SDN), network function virtualization (NFV) and distributed storage. Making the most of these technologies and unleashing the potential of the new applications requires a collaborative approach. The distributed nature and complexity of the solutions calls for input from many different market participants.

A key way to foster such collaboration is through open-source ecosystems. The rise of Linux has demonstrated the effectiveness of such ecosystems and has helped steer the industry towards adopting open-source solutions. (Examples: AT&T Runs Open Source White Box Switch in its Live Network, SnapRoute and Dell EMC to Help Advance Linux Foundation’s OpenSwitch Project, Nokia launches AirFrame Data Center for the Open Platform NFV community)

Communities have come together through Linux to provide additional value for the ecosystem. One example is the Linux Foundation, which currently sponsors more than 50 open source projects. Its activities cover various parts of the industry, from IoT (IoTivity, EdgeX Foundry) to full NFV solutions such as the Open Platform for NFV (OPNFV). This is something that would have been hard to conceive of even a couple of years ago without the wide market acceptance of open-source communities and solutions.

Although there are numerous important open-source software projects for data-center applications, the hardware on which to run and evaluate them has been in short supply. Many ARM® development boards have been developed and manufactured, but they primarily focus on simple applications.

All these open source software ecosystems require a development platform that can provide a high-performance central processing unit (CPU), high-speed network connectivity and large memory support. But they also need to be accessible and affordable to ARM developers. Marvell MACCHIATObin® is the first ARM 64-bit community platform for open-source software communities that provides solutions for, among others, SDN, NFV and Distributed Storage.

A high-performance ARM 64-bit community platform

The Marvell MACCHIATObin community board is a mini-ITX form-factor ARM 64-bit network- and storage-oriented community platform. It is based on the Marvell hyperscale, SBSA-compliant ARMADA® 8040 system-on-chip (SoC), which features four high-performance Cortex®-A72 ARM 64-bit CPUs. The ARM Cortex-A72 is the latest and most powerful ARM 64-bit CPU available and supports virtualization, an increasingly important aspect for data center applications.

Alongside its quad-core CPU complex, the ARMADA 8040 SoC provides two 10G Ethernet interfaces, three SATA 3.0 interfaces and support for up to 16GB of DDR4 memory to handle highly complex applications. This power does not come at the cost of affordability: the Marvell MACCHIATObin community board is priced at $349. As a result, it is the first affordable high-performance ARM 64-bit networking and storage community platform of its kind.

SolidRun (https://www.solid-run.com/) started shipping the Marvell MACCHIATObin community board in March 2017, providing early access to the hardware for open-source communities.

The Marvell MACCHIATObin community board is easy to deploy. It uses the compact mini-ITX form factor, enabling developers to choose from the many cases available for this popular standard to meet their requirements. The ARMADA 8040 SoC itself is SBSA-compliant (http://infocenter.arm.com/help/topic/com.arm.doc.den0029/) and offers unified extensible firmware interface (UEFI) support.

The ARMADA 8040 SoC includes an advanced network packet processor that supports features such as parsing, classification, QoS mapping, shaping and metering. In addition, the SoC provides two security engines that can perform full IPsec, DTLS and other protocol-offload functions at 10G rates. To handle high-performance RAID 5/6, the ARMADA 8040 SoC employs high-speed DMA and XOR engines.
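
To see why a dedicated XOR engine matters for RAID 5/6, recall that parity is simply the byte-wise XOR of the data blocks, and a lost block is rebuilt by XOR-ing the surviving blocks with the parity. The toy sketch below shows the principle; in the SoC this work runs in hardware at line rate.

def xor_blocks(*blocks):
    # Byte-wise XOR of equally sized blocks.
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)           # written alongside the data stripes
recovered = xor_blocks(d0, d2, parity)    # rebuild d1 after a drive failure
assert recovered == d1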

For hardware expansion, the Marvell MACCHIATObin community board provides one PCIe 3.0 x4 slot and a USB 3.0 host connector. For non-volatile storage, options include a built-in eMMC device and a micro-SD card connector. Mass storage is available through three SATA 3.0 connectors. For debug, developers can access the board’s processors through a virtual UART running over the micro-USB connector, a 20-pin connector for JTAG access or two UART headers. The Marvell MACCHIATObin community board technical specifications can be found here: MACCHIATObin Specification.

Open source software enables advanced applications

The Marvell MACCHIATObin community board comes with rich open source software that includes ARM Trusted Firmware (ATF), U-Boot, UEFI, Linux kernel, Yocto, OpenWrt, OpenDataPlane (ODP), Data Plane Development Kit (DPDK), netmap and others; many of the Marvell MACCHIATObin open source software core components are available at: https://github.com/orgs/MarvellEmbeddedProcessors/.

To give the Marvell MACCHIATObin community board ready-made support for the open-source platforms used at the edge and in data centers for SDN, NFV and similar applications, standard operating systems such as SUSE Linux Enterprise, CentOS and Ubuntu should boot and run seamlessly on the board.

Because the ARMADA 8040 SoC is SBSA compliant and supports UEFI with ACPI, and because Marvell is upstreaming Linux kernel support, standard operating systems can be enabled on the Marvell MACCHIATObin community board without the need for special porting.

On top of this core software, a wide variety of the ecosystem applications needed for data center and edge use cases can be assembled.

For example, the ARMADA 8040 SoC’s high-speed networking and security engines will enable the kernel netdev community to develop and maintain features such as XDP and other kernel network features on ARM 64-bit platforms. The security engine will likewise enable many other Linux kernel open-source communities to implement new offloads.

Thanks to the virtualization support available on the ARM Cortex-A72 processors, virtualization technology projects such as KVM and Xen can be enabled on the platform; container technologies like LXC and Docker can also be enabled to maximize data center flexibility and enable a virtual CPE ecosystem where the Marvell MACCHIATObin community board can be used to develop edge applications on a 64-bit ARM platform.

In addition to the mainline Linux kernel, Marvell is upstreaming U-Boot and UEFI, and is set to upstream and open the Marvell MACCHIATObin ODP and DPDK support. This makes the Marvell MACCHIATObin board an ideal community platform for both communities, and will open the door to related communities who have based their ecosystems on ODP or DPDK. These may be user-space network-stack communities such as OpenFastPath and FD.io, or virtual switching technologies that can make use of both the ARMADA 8040 SoC’s virtualization support and its networking capabilities, such as Open vSwitch (OVS) or Vector Packet Processing (VPP). Similarly, Marvell MACCHIATObin netmap support can enable the VALE virtual switching technology or security ecosystems such as pfSense.


Thanks to its hardware features and upstreamed software support, the Marvell MACCHIATObin community board is not limited to data center SDN and NFV applications. It is highly suited as a development platform for network and security products and applications such as network routers, security appliances, IoT gateways, industrial computing, home customer premises equipment (CPE) platforms and wireless backhaul controllers; a new level of scalable and modular solutions can be achieved by combining the Marvell MACCHIATObin community board with Marvell switch and PHY products.

Summary

The Marvell MACCHIATObin is the first of its kind: a high-performance, cost-effective networking community platform. The board supports a rich software ecosystem and makes a high-performance, high-speed-networking ARM 64-bit community platform available at a price that is affordable for the majority of ARM developers, software vendors and other interested companies. It makes ARM 64-bit far more accessible than ever before for developers of solutions for use in data centers, networking and storage.

 

May 31st, 2017

Further Empowerment of the Wireless Office

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

In order to benefit from greater convenience for employees and more straightforward implementation, office environments are steadily migrating towards wholesale wireless connectivity. Thanks to this, office staff are no longer limited by where cables and ports are available, resulting in a much higher degree of mobility. It means that they can remain constantly connected and their work activities won’t be hindered, whether they are at their desk, in a meeting or even in the cafeteria. This makes enterprises much better aligned with our modern working culture, where hot desking and bring your own device (BYOD) are becoming increasingly commonplace.

The main dynamic responsible for accelerating this trend will be the emergence of 802.11ac Wave 2 Wi-Fi technology. With the prospect of exploiting Gigabit data rates (thereby enabling the streaming of video content, faster download speeds, higher quality video conferencing, etc.), it clearly has considerable appeal. In addition, this protocol offers extended range and greater bandwidth through multi-user MIMO operation, so that a larger number of users can be supported simultaneously. This will be advantageous to the enterprise, as fewer access points per user will be required.

Figure 1: Example enterprise/campus office floorplan

An example of the office floorplan for an enterprise/campus is shown in Figure 1 (with a large number of cubicles and some meeting rooms). Though scenarios vary, generally speaking an enterprise/campus is likely to occupy a total floor space of between 20,000 and 45,000 square feet. With one 802.11ac access point able to cover an area of 3,000 to 4,000 square feet, a wireless office would need a total of about 8 to 12 access points to be fully effective. This density should be more than acceptable for average voice and data needs. Supporting these access points will be a high-capacity wireline backbone.
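
That access-point estimate is simple division: floor area over the coverage of a single access point, rounded up. The short sketch below reproduces it using the ranges quoted in the text.

import math

def access_points_needed(floor_area_sqft, coverage_per_ap_sqft):
    return math.ceil(floor_area_sqft / coverage_per_ap_sqft)

print(access_points_needed(20_000, 3_000))   # 7
print(access_points_needed(45_000, 4_000))   # 12, bracketing the 8-to-12 figure above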

Increasingly, rather than employing traditional 10 Gigabit Ethernet infrastructure, the enterprise/campus backbone is going to be based on 25 Gigabit Ethernet technology. It is expected that this will see widespread uptake in newly constructed office buildings over the next 2-3 years as the related optics continue to become more affordable. Clearly enterprises want to tap into the enhanced performance offered by 802.11ac, but they have to do this while adhering to stringent budgetary constraints. As the data capacity at the backbone is raised, so is the complexity of the hierarchical structure that needs to be placed underneath it, consisting of extensive intermediary switching technology. At least, that is what conventional thinking would tell us.

Before embarking on a 25 Gigabit Ethernet/802.11ac implementation, enterprises have to be fully aware of what all this entails. As well as the initial investment associated with the hardware-heavy arrangement just outlined, there are also ongoing operational costs to consider. By aggregating the access points into a port extender that connects directly over the 25 Gigabit Ethernet backbone to a central control bridge switch, it is possible to significantly simplify the hierarchical structure, effectively eliminating a layer of unneeded complexity from the system.

Through its Passive Intelligent Port Extender (PIPE) technology, Marvell is doing just that. This product offering is unique in the market, as other port extenders currently available were not originally designed for that purpose and therefore exhibit compromises in performance, price and power. PIPE is, in contrast, an optimized solution that fully leverages the IEEE 802.1BR bridge port extension standard, dispensing with the need for expensive intermediary switches between the control bridge and the access point level and reducing roll-out costs as a result. It delivers markedly higher throughput, as the aggregation of multiple 802.11ac access points into 10 Gigabit Ethernet switches is avoided. With fewer network elements to manage, there is some reduction in ongoing running costs too.

PIPE means that enterprises can future-proof their office data communication infrastructure, starting with 10 Gigabit Ethernet and then upgrading to 25 Gigabit Ethernet when it is needed. The number of ports it incorporates is a good match for the number of access points that an enterprise/campus will need to address the wireless connectivity demands of its workforce. It enables dual-homing functionality, so that elevated service reliability and resiliency are both assured through system redundancy. In addition, support for Power-over-Ethernet (PoE) allows access points to connect to both a power supply and the data network through a single cable, further facilitating the deployment process.