The concept of Supercomputing at the Edge is driven by the desire to improve the management of Big Data. In Supercomputing at the Edge, we push some data center content to the edge of the access network, caching locally pertinent content close to users. It can be viewed as distributing data center functions to local nodes, enabling further revenue-generating services and applications.
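The caching idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and function names are invented for this example, not part of any Renesas product): an edge node answers requests from its local cache and falls back to the data center only on a miss, so popular content is served close to users.

```python
# Hypothetical sketch of edge caching: serve locally when possible,
# traverse the middle mile to the data center only on a cache miss.

class EdgeNode:
    def __init__(self, origin_fetch):
        self.cache = {}                   # content cached at the edge
        self.origin_fetch = origin_fetch  # callback into the data center

    def get(self, key):
        if key in self.cache:             # cache hit: answered at the edge
            return self.cache[key], "edge"
        value = self.origin_fetch(key)    # cache miss: fetch from the origin
        self.cache[key] = value           # keep a local copy for nearby users
        return value, "origin"


def data_center(key):
    """Stand-in for an application running in a remote cloud data center."""
    return f"content-for-{key}"


node = EdgeNode(data_center)
print(node.get("video42"))  # first request travels to the origin
print(node.get("video42"))  # repeat request is served from the edge
```

The second request never leaves the edge node, which is the effect the architecture is after: fewer round trips across the core for content that many local users want.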

The move to computing at the edge of the wireless network is taking shape across the industry. According to the Linley Group, over 2 million new base stations are rolled out globally every year. If intelligent computing were co-located with them, many applications could be distributed closer to the users. Those applications would react faster and provide users with opt-in services matched to their interests in a more time-sensitive way.



  • Reduce Network Middle-Mile Bottlenecks
  • Real-Time, Application-Driven Analytics
  • Geo-Specific Services
  • Search Engine Optimization
  • Data Center Offload to the Network
  • Distributed, Intelligent Revenue-Generating Services for Operators in Partnership with Application Providers

Today many applications run in the cloud. Personal computers may access the Internet via a landline connection, while mobile phones may access it over a 3G or 4G network. Once on the Internet, data travels from the access network over the core all the way to a data center where the applications run in the cloud. Some computing is done there, and the results are then returned to our client hardware, be it a phone, tablet, or computer.

The heavy computing is not done on end-user hardware but in cloud data centers, which consume massive quantities of energy and require enormous computing capacity; entire municipal grids must be re-architected to handle the electrical load. Once computed, application results are pushed out to end users over the network, which takes time and congests the network along the “middle miles” between the data center and users.
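A back-of-envelope calculation makes the middle-mile cost concrete. The numbers below are illustrative assumptions chosen for this sketch, not measurements: an assumed one-way access-network latency, an assumed one-way middle-mile latency to a distant data center, and a fixed compute time.

```python
# Illustrative (not measured) latency figures, in milliseconds.
ACCESS_MS = 10       # assumed one-way latency across the access network
MIDDLE_MILE_MS = 40  # assumed one-way latency from the edge to a far data center
COMPUTE_MS = 5       # assumed time to compute the result

def round_trip(one_way_network_ms, compute_ms=COMPUTE_MS):
    """Request travels out, is computed, and the result travels back."""
    return 2 * one_way_network_ms + compute_ms

# Client -> access network -> core -> data center, and back.
cloud_ms = round_trip(ACCESS_MS + MIDDLE_MILE_MS)
# Client -> edge node co-located with the base station, and back.
edge_ms = round_trip(ACCESS_MS)

print(cloud_ms, edge_ms)  # 105 25
```

With these assumed numbers, removing the middle mile cuts the round trip from 105 ms to 25 ms; the exact ratio depends entirely on the real network, but the structural point stands: the middle-mile traversal is paid twice per request.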

A New Architecture for a Better Way

Renesas has introduced an architecture that brings low-latency, supercomputing-caliber heterogeneous computing and interconnect to the edge of the wireless network. Renesas and its partners announced the creation of a GPU cluster on which a variety of applications can run. It combines Renesas’ RapidIO interconnect technology, Renesas timing technology, and NVIDIA’s mobile Tegra K1 GPU technology, using Prodrive Technologies’ RapidIO-enabled servers and Concurrent Technologies’ GPU cards with embedded RapidIO interconnect.

This GPU cluster with low-latency interconnect can be deployed in the field at the macro base station, the central office and the cloud radio access network (C-RAN), a new deployment paradigm emerging in some Asian markets. Now instead of running software applications in the data center, the platform can host the same software in the field and offload the backhaul. The infrastructure innovation that allows for co-location is underway.

Across the Industry

Using the phrase Mobile Edge Computing, ETSI describes the technology this way:

“Mobile-edge Computing provides IT and cloud-computing capabilities within the Radio Access Network (RAN) in close proximity to mobile subscribers. Located at the base station or at the Radio Network Controller, it also provides access to real-time radio and network information such as subscriber location or cell load that can be exploited by applications and services to offer context-related services. For application developers and content providers, the RAN edge offers a service environment characterized by proximity, ultra-low latency, high-bandwidth, as well as real-time access to radio network information and location awareness. Mobile-Edge Computing allows content, services and applications to be accelerated, maintaining a customer’s experience across different radio and network conditions.”

The Potential for Supercomputing at the Edge

The combination of low-latency interconnect, timing, and mobile, low-power GPU technology makes this possible. Network operators and application service providers can now collaborate on the services end users want most, then co-locate those services in a geographically distributed manner, delivering the most value both to end users and to those currently running application services in the data center.
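One small piece of such geographically distributed co-location is deciding which edge site should serve a given user. The sketch below is purely hypothetical (the site names and coordinates are invented for illustration): it picks the edge site nearest the user by straight-line distance between coordinates.

```python
import math

# Hypothetical edge sites: name -> (latitude, longitude).
EDGE_SITES = {
    "base-station-A": (52.37, 4.90),
    "central-office-B": (48.86, 2.35),
}

def nearest_site(user_lat, user_lon):
    """Return the edge site closest to the user (simple Euclidean distance)."""
    def dist(site):
        lat, lon = EDGE_SITES[site]
        return math.hypot(lat - user_lat, lon - user_lon)
    return min(EDGE_SITES, key=dist)

print(nearest_site(52.0, 4.5))  # a user near site A is routed to base-station-A
```

A production system would use network topology and load rather than raw geography, but the principle is the same: the service runs at whichever distributed node is closest, in network terms, to the users who want it.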
