Marvell Blog

Featuring technology ideas and solutions worth sharing


Latest Articles

August 2nd, 2017

Wireless Technology Set to Enable an Automotive Revolution

By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group

The automotive industry has always been a keen user of wireless technology. In the early 1980s, Renault made it possible to lock and unlock the doors on its Fuego model utilizing a radio transmitter. Within a decade, other vehicle manufacturers embraced the idea of remote key-less entry and not long after that it became a standard feature. Now, wireless technology is about to reshape the world of driving.

The first key-less entry systems were based on infra-red (IR) signals, borrowing the technique from automatic garage door openers. But the industry swiftly moved to RF technology, in order to make it easier to use. Although each manufacturer favored its own protocol and coding system, they adopted standard low-power RF frequency bands, such as 315 MHz in the US and 433 MHz in Europe. As concerns about theft emerged, they incorporated encryption and other security features to fend off potential attacks. They have further refreshed this technology as new threats appeared, as well as adding features such as proximity detection to remove the need to even press the key-fob remote’s button.

The next stage in favor of convenience was to employ Bluetooth instead of custom radios in the sub-1 GHz frequency bands, so as to dispense with the key fob altogether. With Bluetooth, an app on the user’s smartphone can not only unlock the car doors, but also handle tasks such as starting the heater or air-conditioning so that the vehicle is comfortable by the time the driver and passengers actually get in.

Bluetooth itself has become a key feature on many models over the past decade as automobile manufacturers have looked to open up their infotainment systems. Access to the functions on the dashboard through Bluetooth has made it possible for vehicle occupants to hook up their phone handsets easily. Initially, the aim was to support legally compliant, hands-free phone calls without forcing the owner to buy and install a permanent phone in the vehicle itself. But the wireless connection is just as good at relaying high-quality audio, so that passengers can listen to their favorite music stored on portable devices. We have clearly moved a long way from the CD auto-changer located in the trunk.

Bluetooth is a prime example of the way in which RF technology, once in place, can support many different applications – with plenty of potential for use cases that have not yet been considered. Through use of a suitable relay device in the car, Bluetooth also provides the means by which to send vehicle diagnostics information to relevant smartphone apps. The use of the technology as a diagnostics gateway points to an emerging role for Bluetooth in improving the overall safety of car transportation.

But now Wi-Fi is also primed to become as ubiquitous in vehicles as Bluetooth. Wi-Fi is able to provide a more robust data pipe, thus enabling even richer applications and a tighter integration with smartphone handsets. One use case that seems destined to change the cockpit experience for users is the emergence of screen projection technologies. Through the introduction of such mechanisms it will be possible to create a seamless transition for drivers from their smartphones to their cars. This will not necessarily even need to be their own car; it could be any car that they rent from anywhere in the world.

One of the key enabling technologies for self-driving vehicles is communication. This can encompass vehicle-to-vehicle (V2V) links, vehicle-to-infrastructure (V2I) messages and, through technologies such as Bluetooth and Wi-Fi, vehicle-to-anything (V2X).

V2V provides the ability for vehicles on the road to signal their intentions to others and warn of hazards ahead. If a pothole opens up or cars have to brake suddenly to avoid an obstacle, they can send out wireless messages to nearby vehicles to let them know about the situation. Those other vehicles can then slow down or change lanes accordingly.

The key enabling technology for V2V is a form of the IEEE 802.11 Wi-Fi protocol, re-engineered for much lower latency and better reliability. IEEE 802.11p Wireless Access in Vehicular Environments (WAVE) operates in the 5.9 GHz region of the RF spectrum, and is capable of supporting data rates of up to 27 Mbit/s. One of the key additions for transportation is a scheduling feature that lets vehicles share access to wireless channels based on time. Each vehicle uses the Coordinated Universal Time (UTC) reading, usually provided by its GPS receiver, to help ensure all nearby transceivers are synchronized to the same schedule.
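The UTC-aligned time slotting described above can be sketched in a few lines. This is an illustrative model only: the 100 ms sync interval, split evenly between a control channel (CCH) and a service channel (SCH), follows the defaults in the companion IEEE 1609.4 standard, and real radios add guard intervals that are omitted here.

```python
# Illustrative model of WAVE's UTC-synchronized channel schedule.
# Slot lengths follow IEEE 1609.4 defaults; guard intervals omitted.

SYNC_INTERVAL_MS = 100   # one control slot + one service slot
CCH_MS = 50              # control channel occupies the first half

def active_channel(utc_ms: int) -> str:
    """Which channel a radio should be tuned to at a given UTC time (ms).

    Because every vehicle derives the same UTC reading (typically from
    GPS), all nearby transceivers land on the same slot simultaneously.
    """
    offset = utc_ms % SYNC_INTERVAL_MS
    return "CCH" if offset < CCH_MS else "SCH"
```

Since the schedule depends only on the shared UTC clock, two vehicles that have never communicated before will still agree on which channel to listen on at any instant.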

A key challenge for any transceiver is the Doppler Effect. On a freeway, the relative velocity of an approaching transmitter can exceed 150 mph. Such a transmitter may be in range for only a few seconds at most, making ultra-low latency crucial. But, with the underlying RF technology for V2V in place, advanced navigation applications can be deployed relatively easily and extended to deal with many other objects and even people.
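To put a number on the Doppler challenge, the classic non-relativistic approximation f_d = f_c · v / c gives the frequency shift the receiver must track. The figures below come straight from the text (5.9 GHz carrier, 150 mph closing speed); the calculation itself is a back-of-the-envelope estimate, not a claim about any particular receiver design.

```python
# Rough Doppler-shift estimate for a 5.9 GHz V2V link.
# Uses the simple approximation f_d = f_c * v / c.

C = 299_792_458.0        # speed of light, m/s
F_CARRIER = 5.9e9        # WAVE carrier frequency, Hz

def doppler_shift_hz(relative_speed_mph: float) -> float:
    v = relative_speed_mph * 0.44704   # mph -> m/s
    return F_CARRIER * v / C

# At 150 mph relative velocity the shift works out to roughly 1.3 kHz,
# which the receiver must compensate for while the transmitter is in
# range for only a few seconds.
```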

V2I transactions make it possible for roadside controllers to update vehicles on their status. Traffic signals, for example, can let vehicles know when they are likely to change state. Vehicles leaving the junction can relay that data to approaching cars, which may slow down in response. By slowing down, they avoid the need to stop at a red signal – and thereby cross just as it is turning to green. The overall effect is a significant saving in fuel, as well as less wear and tear on the brakes. In the future, such wireless-enabled signals will make it possible to improve the flow of autonomous vehicles considerably. The traffic signals will monitor the junction to check whether conditions are safe and usher the autonomous vehicle through to the other side, while other road users without the same level of computer control are held at a stop.
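The "arrive as the light turns green" idea above reduces to simple arithmetic once the signal has broadcast its timing. The helper below is a hypothetical sketch of that logic (the function name and interface are invented for illustration); a production system would also respect speed limits, following distances and comfort constraints.

```python
# Hypothetical green-wave helper: given the distance to a signal and the
# time until it turns green (from a V2I broadcast), pick a cruising speed
# that lets the car roll through without stopping. Purely illustrative.

def approach_speed_mps(distance_m: float, seconds_to_green: float,
                       current_speed_mps: float) -> float:
    """Speed that makes the car arrive just as the light turns green.

    Never suggests accelerating beyond the current speed; if the light
    is already green, the car simply maintains its speed.
    """
    if seconds_to_green <= 0:
        return current_speed_mps
    target = distance_m / seconds_to_green
    return min(target, current_speed_mps)

# Example: 200 m from the junction, green in 20 s, currently doing 15 m/s.
# Easing off to 10 m/s means arriving exactly at the change - no stop,
# no wasted fuel on braking and re-accelerating.
```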

Although many V2X applications were conceived for use with a dedicated RF protocol, such as WAVE, there is a place for Bluetooth and, potentially, other wireless standards like conventional Wi-Fi. Pedestrians and cyclists may signal their presence on the road with the help of their own Bluetooth devices. The messages picked up by passing vehicles can be relayed using V2V communications over WAVE to extend the range of the warnings. Roadside beacons using Bluetooth technology can pass on information about local points of interest – and this information can be provided to passengers, who can subsequently look up more details on the Internet using the vehicle’s built-in Wi-Fi hotspot.

One thing seems clear: the world of automotive design will be a heterogeneous RF environment that takes traditional Wi-Fi technology and brings it together with WAVE, Bluetooth and GPS. It clearly makes sense to incorporate the right set of radios onto a single chipset, easing the integration process and ensuring optimal performance. This will not only be beneficial in terms of the design of new vehicles, but will also facilitate the introduction of aftermarket V2X modules. In this way, existing cars will be able to participate in the emerging information-rich superhighway.

August 1st, 2017

Connectivity Will Drive the Cars of the Future

By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group

The growth of electronics content inside the automobile has already had a dramatic effect on the way in which vehicle models are designed and built. As a direct consequence of this, the biggest technical change is now beginning to happen – one that overturns the traditional relationship between the car manufacturer and the car owner.

With many subsystems now controlled by microprocessors running software, it is possible to alter the behavior of the vehicle and introduce completely new features and functionality merely by updating that software. Tesla, the high-profile maker of high-performance electric vehicles, has been one of the companies pioneering this approach, releasing software and firmware updates that give existing models the ability to drive themselves. Instead of shipping with a specific, fixed set of features, vehicles are being upgraded via firmware over the air (FOTA) without the need to visit a dealership.

With so many electronic subsystems now in the vehicle, high data rates are essential. Without the ability to download and program devices quickly, the car could potentially become unusable for hours at a time. On the wireless side, this requires 802.11ac Wi-Fi speeds, and very soon it will be ramped up to 802.11ax speeds that can potentially exceed gigabit-per-second data rates.

Automotive Ethernet that can support Gigabit speeds is also now being fitted so that updates can be delivered as fast as possible to the many electronic control units (ECUs) around the car. The same Ethernet backbone is proving just as essential for day-to-day use. The network provides high resolution, real-time data from cameras, LiDAR, radar, tire pressure monitors and various other sensors fitted around the body, each of which is likely to have its own dedicated microprocessor. The result is a high performance computer based on distributed intelligence. And this, in turn, can tap into the distributed intelligence now being deployed in the cloud.

The beauty of distributed intelligence is that it is an architecture that can support applications that in many cases have not even been thought of yet. The same wireless communication networks that provide the over-the-air updates can relay real-time information on traffic patterns in the vicinity, weather data, disruptions due to accidents and many other pieces of data that the onboard computers can then use to plan the journey and make it safer. This rapid shift towards high speed intra- and inter-vehicle connectivity, and the vehicle-to-anything (V2X) communication capabilities that result, will enable applications that would have been considered pure fantasy just a few years ago.

The V2X connectivity can stop traffic lights from being an apparent obstacle and turn them into devices that provide the vehicle with hints to save fuel. If the lights send out signals on their stop-go cycle, approaching vehicles can use them to determine whether it is better to decelerate and arrive just in time for the lights to turn green, instead of braking all the way to a stop. Sensors at the junction can also warn of hazards that the car then flags up to the driver. When the vehicle is able to run autonomously, it can take care of such actions itself. Similarly, cars can report to each other when they are planning to change lanes in order to leave the freeway, or when they see a slow-moving vehicle ahead and need to decelerate. The result is considerably smoother braking patterns that avoid the logjam effect we so often see on today’s crowded roads. Enabling such applications will require multiple radios in the vehicle, which will need to work cooperatively in a fail-safe manner.

Such connectivity will also give OEMs unprecedented access to real-time diagnostic data, which a car could be uploading opportunistically to the cloud for analysis purposes. This will provide information that could lead to customized maintenance services that could be planned in advance, thereby cutting down diagnostic time at the workshop and meaning that technical problems are preemptively dealt with, rather than waiting for them to become more serious over time.

There is no need for automobile manufacturers to build any of these features into their vehicle models today. As many computations can be offloaded to servers in the cloud, the key to unlocking advanced functionality is not wholly dependent on what is present in the car itself. The fundamental requirement is access to an effective means of communications, and that is available right now through high speed Ethernet within the vehicle plus Wi-Fi and V2X-compatible wireless for transfers going beyond the chassis. Both can be supplied so that they are compliant with the AEC-Q100 automotive standard – thus ensuring quality and reliability. With those tools in place, we don’t need to see all the way ahead to the future. We just know we have the capability to get there.

July 17th, 2017

Rightsizing Ethernet

By George Hervey, Principal Architect, Marvell

Implementation of cloud infrastructure is occurring at a phenomenal rate, outpacing Moore’s Law. Annual growth is believed to be 30x, and as much as 100x in some cases. In order to keep up, cloud data centers are having to scale out massively, with hundreds, or even thousands, of servers becoming a common sight.

At this scale, networking becomes a serious challenge. More and more switches are required, thereby increasing capital costs, as well as management complexity. To tackle the rising expense issues, network disaggregation has become an increasingly popular approach. By separating the switch hardware from the software that runs on it, vendor lock-in is reduced or even eliminated. OEM hardware could be used with software developed in-house, or from third party vendors, so that cost savings can be realized.

Though network disaggregation has tackled the immediate problem of hefty capital expenditures, it must be recognized that operating expenditures are still high. The number of managed switches basically stays the same. To reduce operating costs, the issue of network complexity has to also be tackled.

Network Disaggregation
Almost every application we use today, whether at home or in the work environment, connects to the cloud in some way. Our email providers, mobile apps, company websites, virtualized desktops and servers, all run on servers in the cloud.

For these cloud service providers, this incredible growth has been both a blessing and a challenge. As demand increases, Moore’s law has struggled to keep up. Scaling data centers today involves scaling out – buying more compute and storage capacity, and subsequently investing in the networking to connect it all. The cost and complexity of managing everything can quickly add up.

Until recently, networking hardware and software had often been tied together. Buying a switch, router or firewall from one vendor would require you to run their software on it as well. Larger cloud service providers saw an opportunity. These players often had no shortage of skilled software engineers. At the massive scales they ran at, they found that buying commodity networking hardware and then running their own software on it would save them a great deal in terms of Capex.

This disaggregation of the software from the hardware may have been financially attractive, however it did nothing to address the complexity of the network infrastructure. There was still a great deal of room to optimize further.

802.1BR
Today’s cloud data centers rely on a layered architecture, often in a fat-tree or leaf-spine structural arrangement. Rows of racks, each with top-of-rack (ToR) switches, are then connected to upstream switches on the network spine. The ToR switches are, in fact, performing simple aggregation of network traffic. Using relatively complex, energy consuming switches for this task results in a significant capital expense, as well as management costs and no shortage of headaches.

Through the port extension approach, outlined within the IEEE 802.1BR standard, the aim has been to streamline this architecture. By replacing ToR switches with port extenders, port connectivity is extended directly from the rack to the upstream switches. Management is consolidated to the smaller number of switches located at the upper-layer network spine, eliminating the dozens or possibly hundreds of switches at the rack level.

The reduction in switch management complexity of the port extender approach has been widely recognized, and various network switches on the market now comply with the 802.1BR standard. However, not all the benefits of this standard have actually been realized.

The Next Step in Network Disaggregation
Though many of the port extenders on the market today fulfill 802.1BR functionality, they do so using legacy components. Instead of being optimized for 802.1BR itself, they rely on traditional switch silicon. This, as a consequence, limits the potential cost and power benefits that the new architecture offers.

Designed from the ground up for 802.1BR, Marvell’s Passive Intelligent Port Extender (PIPE) offering is specifically optimized for this architecture. PIPE is interoperable with 802.1BR compliant upstream bridge switches from all the industry’s leading OEMs. It enables fan-less, cost efficient port extenders to be deployed, which thereby provide upfront savings as well as ongoing operational savings for cloud data centers. Power consumption is lowered and switch management complexity is reduced by an order of magnitude.

The first wave in network disaggregation was separating switch software from the hardware that it ran on. 802.1BR’s port extender architecture is bringing about the second wave, where ports are decoupled from the switches which manage them. The modular approach to networking discussed here will result in lower costs, reduced energy consumption and greatly simplified network management.

July 7th, 2017

Extending the Lifecycle of 3.2T Switch-Based Architecture

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

and Yaniv Kopelman, Networking and Connectivity CTO, Marvell

The growth witnessed in the expansion of data centers has been completely unprecedented, driven by the exponential increases in cloud computing and cloud storage demand now being seen. While Gigabit switches proved more than sufficient just a few years ago, today even 3.2 Terabit (3.2T) switches, which currently serve as the fundamental building blocks upon which data center infrastructure is constructed, are being pushed to their full capacity.

While network demands have increased, Moore’s law (which effectively defines the semiconductor industry) has not been able to keep up. Instead of scaling at the silicon level, data centers have had to scale out. This has come at a cost, though, with ever-increasing capital and operational expenditure, and greater latency, all resulting. Facing this challenging environment, a different approach is going to have to be taken. In order to accommodate current expectations economically, while still also having the capacity for future growth, data centers (as we will see) need to move towards a modularized approach.


Scaling out the data center

Data centers are destined to have to contend with demands for substantially heightened network capacity – as a greater number of services, plus more data storage, start migrating to the cloud. This increase in network capacity, in turn, results in demand for more silicon to support it.

To meet increasing networking capacity, data centers are buying ever more powerful Top-of-Rack (ToR) leaf switches. In turn these consume more power, which impacts the overall power budget and means that less power is available for the data center servers. Not only does this lead to power being unnecessarily wasted, it also pushes the associated thermal management costs and the overall Opex upwards. As these data centers scale out to meet demand, they’re often having to add more complex hierarchical structures to their architecture as well – thereby increasing latencies for both north-south and east-west traffic in the process.

The price of silicon per gate is not going down either. We used to enjoy cost reductions as process sizes decreased from 90 nm to 65 nm to 40 nm. That is no longer strictly true, however. As process sizes shrink below 28 nm, yields are decreasing and prices are consequently going up. To address the problems of cloud-scale data centers, traditional methods will not be applicable. Instead, we need to take a modularized approach to networking.

PIPEs and Bridges

Today’s data centers often run on a multi-tiered leaf and spine hierarchy. Racks with ToR switches connect to the network spine switches. These, in turn, connect to core switches, which subsequently connect to the Internet. Both the spine and the top of the rack layer elements contain full, managed switches.

By following a modularized approach, it is possible to remove the ToR switches and replace them with simple IO devices – port extenders specifically. This effectively extends the IO ports of the spine switch all the way down to the ToR. What results is a passive ToR that is unmanaged. It simply passes the packets to the spine switch. Furthermore, by taking a whole layer out of the management hierarchy, the network becomes flatter and is thus considerably easier to manage.

The spine switch now acts as the controlling bridge. It is able to manage the layer which was previously taken care of by the ToR switch. This means that, through such an arrangement, it is possible to disaggregate the IO ports of the network that were previously located at the ToR switch, from the logic at the spine switch which manages them. This innovative modularized approach is being facilitated by the increasing number of Port Extenders and Control Bridges now being made available from Marvell that are compatible with the IEEE 802.1BR bridge port extension standard.

Solving Data Center Scaling Challenges

The modularized port-extender and control bridge approach allows data centers to address the full length and breadth of scaling challenges. Port extenders solve the latency problem by flattening the hierarchy. Instead of having conventional ‘leaf’ and ‘spine’ tiers, the port extender acts to simply extend the IO ports of the spine switch to the ToR. Each server in the rack has a near-direct connection to the managing switch. This improves latency for north-south bound traffic.

The port extender also aggregates traffic from 10 Gbit Ethernet ports into higher-throughput uplinks, allowing terabit switches that only have 25, 40 or 100 Gbit Ethernet ports to communicate directly with 10 Gbit Ethernet edge devices. The passive port extender is a greatly simplified device compared to a managed switch, which means lower up-front costs, lower power consumption and a simpler network management scheme. Rather than dealing with both leaf and spine switches, network administrators simply need to focus on the managed switches at the spine layer.

With no end in sight to the ongoing progression of network capacity, cloud-scale data centers will always have ever-increasing scaling challenges to attend to. The modularized approach described here makes those challenges solvable.

June 21st, 2017

Making Better Use of Legacy Infrastructure

By Ron Cates, Senior Director, Product Marketing, Networking Business Unit

The flexibility offered by wireless networking is revolutionizing the enterprise space. High-speed Wi-Fi®, provided by standards such as IEEE 802.11ac and 802.11ax, makes it possible to deliver next-generation services and applications to users in the office, no matter where they are working.

However, the higher wireless speeds involved are putting pressure on the cabling infrastructure that supports the Wi-Fi access points around an office environment. 1 Gbit/s Ethernet was more than adequate for older wireless standards and applications. Now, with greater reliance on the new generation of Wi-Fi access points and their higher uplink speeds, the older infrastructure is starting to show strain. At the same time, in the server room itself, demand for high-speed storage and faster virtualized servers is placing pressure on the performance levels offered by the core Ethernet cabling that connects these systems together and to the wider enterprise infrastructure.

One option is to upgrade to a 10 Gbit/s Ethernet infrastructure. But this is a migration that can be prohibitively expensive. The Cat 5e cabling that exists in many office and industrial environments is not designed to cope with such elevated speeds. To make use of 10 Gbit/s equipment, that old cabling needs to come out and be replaced by a new copper infrastructure based on Cat 6a standards. Cat 6a cabling can support 10 Gbit/s Ethernet at the full range of 100 meters, and you would be lucky to run 10 Gbit/s at half that distance over a Cat 5e cable.

In contrast to data-center environments that are designed to cope easily with both server and networking infrastructure upgrades, enterprise cabling lying in ducts, in ceilings and below floors is hard to reach and swap out. This is especially true if you want to keep the business running while the switchover takes place.

Help is at hand with the emergence of the IEEE 802.3bz™ and NBASE-T® set of standards and the transceiver technology that goes with them. 802.3bz and NBASE-T make it possible to transmit at speeds of 2.5 Gbit/s or 5 Gbit/s across conventional Cat 5e or Cat 6 at distances up to the full 100 meters. The transceiver technology leverages advances in digital signal processing (DSP) to make these higher speeds possible without demanding a change in the cabling infrastructure.

The NBASE-T technology, a companion to the IEEE 802.3bz standard, incorporates novel features such as downshift, which responds dynamically to interference from other sources in the cable bundle. The result is a lower speed, but the downshift technology has the advantage that it does not cut off communication unexpectedly, providing time to diagnose the problem interferer in the bundle and perhaps reroute it to sit alongside less sensitive cables carrying lower-speed signals. This is where the new generation of high-density transceivers comes in.

There are now transceivers coming onto the market that support data rates all the way from legacy 10 Mbit/s Ethernet up to the full 5 Gbit/s of 802.3bz/NBASE-T – and will auto-negotiate the most appropriate data rate with the downstream device. This makes it easy for enterprise users to upgrade the routers and switches that support their core network without demanding upgrades to all the client devices. Further features, such as Virtual Cable Tester® functionality, make it easier to diagnose faults in the cabling infrastructure without resorting to specialized network instrumentation.
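The essence of that auto-negotiation is that both link partners advertise the rates they support and the link trains at the fastest rate common to both. The sketch below is a deliberately simplified model of that selection step (real negotiation also exchanges duplex, energy-efficiency and master/slave parameters), with rates in Mbit/s.

```python
# Simplified model of the rate-selection step in Ethernet
# auto-negotiation: pick the fastest rate both ends advertise.
# Rates are in Mbit/s; this ignores duplex and other exchanged
# parameters for clarity.

def negotiate_rate(local: set, partner: set) -> int:
    common = local & partner
    if not common:
        raise ValueError("no common rate; link cannot be established")
    return max(common)

# A multi-gig 802.3bz/NBASE-T switch port against a legacy gigabit NIC:
MULTI_GIG_PORT = {10, 100, 1000, 2500, 5000}
LEGACY_NIC = {10, 100, 1000}
# negotiate_rate(MULTI_GIG_PORT, LEGACY_NIC) settles on 1000 Mbit/s,
# which is why the core can be upgraded without touching every client.
```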

Transceivers and PHYs designed for switches can now support eight 802.3bz/NBASE-T ports in one chip, thanks to the integration made possible by leading-edge processes. These transceivers are not only more cost-effective, they also consume far less power and PCB real estate than PHYs designed for 10 Gbit/s networks. This means they present a much more optimized solution, with numerous benefits from a financial, thermal and logistical perspective.

The result is a networking standard that meshes well with the needs of modern enterprise networks – and lets that network and the equipment evolve at its own pace.

June 20th, 2017

Autonomous Vehicles and Digital Features Make the Car of the Future a “Data Center on Wheels”

By Donna Yasay, VP of Worldwide Business Development

Advanced digital features, autonomous vehicles and new auto safety legislation are all amongst the many “drivers” escalating the number of chips and technologies found in next-generation automobiles. The wireless, connectivity, storage and security technologies needed for internal and external vehicle communications in cars today and in the future leverage technologies used in a data center. In fact, you could say the automobile is becoming a Data Center on Wheels.

Here are some interesting data points supporting the evolution of the Data Center on Wheels:

  • The National Highway Traffic Safety Administration (NHTSA) mandates that by May 2018, all new cars in the U.S. must have backup cameras. The agency reports that half of all new vehicles sold today already have backup cameras, showing widespread acceptance even without the NHTSA mandate.
  • Some luxury brands provide panoramic 360-degree surround views using multiple cameras. NVIDIA, which made its claim to fame in graphics processing chips for computers and video games, is a leading provider in the backup and surround view digital platforms, translating its digital expertise into the hottest of new vehicle trends. At the latest 2017 International CES, NVIDIA showcased its latest NVIDIA PX2, an Artificial Intelligence (AI) Car Computer for Self-Driving Vehicles, which enables automakers and their tier 1 suppliers to accelerate production of automated and autonomous vehicles.
  • According to an Intel presentation at CES reported in Network World, just one autonomous car will use 4,000GB (or 4 Terabytes) of data per day.
  • A January study by Strategy Analytics reported that by 2020, new cars are expected to have approximately 1,000 chips per vehicle.
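The Intel figure quoted above is easier to appreciate once converted into a sustained data rate. The back-of-the-envelope calculation below uses decimal gigabytes and assumes the traffic is spread evenly over a 24-hour day, which is a simplification but shows why gigabit-class in-vehicle networking is needed.

```python
# Back-of-the-envelope check on the "4,000 GB per day" figure: what
# continuous data rate does that correspond to? Assumes decimal
# gigabytes and traffic spread evenly over 24 hours.

def sustained_mbit_per_s(gb_per_day: float) -> float:
    bits_per_day = gb_per_day * 1e9 * 8     # GB/day -> bits/day
    return bits_per_day / (24 * 3600) / 1e6  # -> Mbit/s

# 4,000 GB/day works out to roughly 370 Mbit/s of continuous traffic,
# comfortably in gigabit-Ethernet territory for the vehicle backbone
# even before allowing for bursts.
```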

Advanced Driver Assist Systems (ADAS), In-Vehicle Infotainment (IVI) and autonomous vehicles will rely on digital information streamed internally within the vehicle, and externally from the vehicle to other vehicles or third-party services, via chips, sensors, network and wireless connectivity. All of this data will need to be processed, stored or transmitted seamlessly and securely, because a LoJack® isn’t necessarily going to help with a car hack.

This is why auto makers are turning to the high tech and semiconductor industries to support the move to more digitized, automated cars. Semiconductor leaders in wireless, connectivity, storage, and networking are all being tapped to design and manage the Data Center on Wheels.  For example, Marvell recently announced the first automotive grade system-on-chip (SoC) that integrates the latest Wi-Fi, Bluetooth, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) capabilities.  Another technology product being offered for automotive use is the InnoDisk SATA 3ME4 Solid-State Drive (SSD) series. Originally designed for industrial systems integrations, these storage drives can withstand the varied temperature ranges of a car, as well as shock and vibration under rugged conditions. Both of these products integrate state-of-the-art encryption to not only keep and store information needed for data-driven vehicles, but keep that information secure from unwanted intrusion.

Marvell and others are working to form standards and adapt secure digital solutions in wireless, connectivity, networking and storage specifically for the automobile, which is all the more paramount in self-driving vehicles. Current data center standards, such as Gigabit Ethernet, are being adapted for automobiles, and the industry is stepping up to help make sure that these Data Centers on Wheels are not only safe, but secure.

June 17th, 2017

Marvell Technology Instrumental in Ground-Breaking New Open Source NAS Solution

By Maen Suleiman, Software Product Line Manager, Marvell Semiconductor Inc.

The quantity of data storage that each individual now expects to be able to have access to has ramped up dramatically over the course of the last few years. This has been predominantly fueled by society’s ravenous hunger for various forms of multimedia entertainment and more immersive gaming, plus our growing obsession with taking photos or videos of all manner of things that we experience during an average day.

The emergence of the ‘connected home’ phenomenon, along with greater use of wearable technology and the enhanced functionality being incorporated into each new generation of smartphone handset, have all contributed to our increasingly data oriented lives. As a result, each of us is generating, downloading and transferring larger amounts of data-heavy content than would have been conceivable just a short time ago. For example, market research firm InfoTrends has estimated that consumers worldwide will take over 1.2 trillion new photos during 2017 (more than double the figure from five years ago). Furthermore, there are no indications that the dynamics driving this will weaken; on the contrary, it is likely that the pace will only continue to accelerate.

If individuals are to keep on amassing personal data at current rates, then it is clear that they will need access to a new form of flexible storage solution that is up to the job. In a report compiled by industry analysts at Technavio, the global consumer network attached storage (NAS) market is predicted to grow accordingly – witnessing an impressive 11% compound annual growth between now and the end of this decade.

Though, it must be acknowledged, we are shifting an increasing proportion of our overall data storage needs to the cloud, the synching of large media files for use in the home environment can often prove impractical because of latency issues. There are also serious security issues associated with relying on cloud-based storage for certain personal data, and these need to be given due consideration.

Start-up company Kobol has recently initiated a crowdfunding campaign to garner financial backing for its Helios4 offering. The first of its kind – this is an open source, open hardware NAS solution that will allow the storing and sharing of music, photos and movies through connection to the user’s home network. It presents consumers with a secure, flexible and rapidly accessible data storage reserve with a capacity of up to 40 TeraBytes (which equates to around 700,000 hours of music, 20,000 hours of movies or 12 million photos).

Helios4 has small dimensions, and built-in RAID redundancy is included so that ongoing reliability is assured. This means that even if one of its four hard drives (each delivering 10 Terabytes) were to crash, the user’s content would remain safely stored, as the data is mirrored onto another drive. The result is a compact, cost-effective and energy-saving storage solution that acts like a ‘personal cloud’.
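The trade-off between raw capacity and redundancy in a four-bay unit like this reduces to simple arithmetic. The helper below uses textbook RAID capacity formulas and is purely illustrative; it makes no claim about which levels the Helios4 software actually exposes.

```python
# Illustrative usable-capacity math for a 4-bay NAS. Drive size taken
# from the text; RAID formulas are the textbook ones, not a claim about
# any particular product's firmware.

def usable_tb(drives: int, size_tb: float, level: str) -> float:
    if level == "raid0":
        return drives * size_tb          # striping, no redundancy
    if level == "raid1":
        return (drives // 2) * size_tb   # mirrored pairs: half the raw space
    if level == "raid5":
        return (drives - 1) * size_tb    # one drive's worth of parity
    raise ValueError(f"unknown RAID level: {level}")

# Four 10 TB drives: 40 TB raw with no redundancy, 20 TB fully mirrored,
# or 30 TB with single-parity protection - the price of surviving a
# drive failure is paid in capacity.
```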

 

Figure 1: Schematic showing the interface structure of Helios4 powered by ARMADA 388 SoC


 

Figure 2: The component parts that make up the Helios4 kit


Inspired by the open hardware, collaborative philosophy, Helios4 can be supplied as a simple kit that engineers assemble themselves. Alternatively, for those with less engineering experience, it comes as a straightforward out-of-the-box solution. It offers a high degree of flexibility and a broad array of connectivity options.

At the heart of the Helios4 design is a sophisticated ARMADA 388 32-bit ARM-based system-on-chip (SoC) from Marvell, which combines high performance with power-frugal operation. Based on low-power 28 nm semiconductor technology, its dual-core ARM Cortex-A9 processor runs at speeds of up to 1.8 GHz. USB 3.0 SuperSpeed and SATA 3.0 ports are included so that elevated connectivity levels can be supported. Cryptographic mechanisms are also integrated to maintain superior system security.

You can learn more about Helios4 on its Kickstarter campaign page. For those interested in getting involved, the deadline to make a contribution is June 19th.