Advancements in Edge AI are more important than what will come with 5G
By Pavel Konečný, CEO & Co-Founder of Neuron soundware.
We are only beginning to understand the advantages that edge computing can bring.
Back in 2016, I visited the CEBIT conference in Hannover. It was full of so-called “smart” things that I did not find smart at all. In fact, much of the “smart” hype covered merely “connected” devices that in most cases delivered a single-purpose, narrowly defined benefit to the user. A few examples I still remember:
- A pipe valve whose position (open/closed) could be monitored remotely
- A gas volume measurement device that, if secretly installed in a fuel tank, could identify a truck driver stealing fuel
- An electric plug that could be switched on and off via Wi-Fi
However, there was one very special presentation at CEBIT that influenced my views on how AI might be delivered in the future. IBM presented its research project SyNAPSE – an AI chip called “TrueNorth” that could deliver computing power roughly equivalent to an ant’s brain while consuming just 73 mW of energy. The only clear disadvantage was its price: about USD 1 million per piece at that time.
This example proved that bringing AI to the edge of the network would be possible. It was also obvious that within a few years Moore’s law would drive the price down. The questions were how quickly that would happen and how many similar solutions would emerge on the market. Already at that time, Neuron soundware started to pursue exactly this IoT strategy – running AI algorithms at the edge of the network – and decided to develop its own IoT edge devices with audio recording and AI processing capabilities.
A few months later, I created a graph showing the relationship between energy consumption and intelligence as a function of the computing power a piece of hardware can deliver:
- At a few mW, no intelligence could be achieved for a reasonable price at that time
- Smartphones consume several watts and provided enough computing for basic AI object recognition in images roughly once per second
- Narrow AI, such as the capability to drive a car, would need hardware consuming tens or a few hundred watts. Analyzing camera inputs about ten times per second required about 4 TFLOPS (4 trillion floating-point operations per second; FLOPS is a measure of computer performance). Translated to what we do at Neuron soundware: the same computing performance lets you either drive a car or analyze the sound of machines to detect an upcoming mechanical failure. Doing both would require computing power equivalent to an ant’s brain, and IBM made me see that power coming in a single ultra-low-energy chip. The sketch below runs these numbers.
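As a quick sanity check, here is a minimal Python sketch of this back-of-the-envelope arithmetic. The FLOP-per-input costs are illustrative assumptions derived from the figures above, not measurements:

```python
# Back-of-the-envelope compute budgets for real-time AI workloads.
# The FLOP-per-input costs below are illustrative assumptions, not measurements.

def required_tflops(inputs_per_second: float, flop_per_input: float) -> float:
    """Throughput (in TFLOPS) needed to process an input stream in real time."""
    return inputs_per_second * flop_per_input / 1e12

# Driving a car: analyzing camera inputs ~10 times per second, assuming
# ~0.4 trillion FLOP per analysis, gives the ~4 TFLOPS figure cited above.
print(required_tflops(10, 0.4e12))   # -> 4.0

# Machine-sound diagnostics (hypothetical cost): scoring one second of audio
# with a network that needs ~0.05 trillion FLOP per pass.
print(required_tflops(1, 0.05e12))   # -> 0.05
```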
The recent rise of edge computing
Edge computing capability has been on the rise ever since. I have kept an eye on several other AI hardware acceleration projects, too.
In 2017, the Movidius Neural Compute Stick, priced under USD 100, provided 0.1 TFLOPS with about 0.5 W power demand. It was designed to extend less capable boards such as the Raspberry Pi, providing roughly a 10x computing power boost.
In 2018, Huawei introduced its Kirin 980 processor with 0.1 W power demand and almost 0.5 TFLOPS. Other vendors did not stay behind either: Google announced its Edge TPU units, and Rockchip demonstrated the RK3399 equipped with a Neural Processing Unit, both delivering about 3 TFLOPS at a cost of around USD 100.
In 2019, microcomputers with hardware accelerators for AI technologies (specifically neural networks) became generally available. All key hardware players have released edge-optimized versions of their AI software stacks, which further increases performance. Generally available AI boards include Google’s Edge TPU, a purpose-built ASIC designed to run inference; the Nvidia Jetson Nano, which brings 128 CUDA cores into action for less than USD 100; and the ToyBrick RK3399Pro, one of the first developer boards with a Neural Processing Unit (it slightly outperforms even the Jetson Nano).
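To make this concrete, here is a minimal sketch of running inference on one of these boards, Google’s Edge TPU, via the TensorFlow Lite runtime. It assumes the Edge TPU runtime library is installed and that `model_edgetpu.tflite` is a model already compiled for the Edge TPU; both the file name and the random input are placeholders:

```python
# Minimal sketch: inference on a Google Edge TPU with TensorFlow Lite.
# Assumes the Edge TPU runtime (libedgetpu) is installed and that
# "model_edgetpu.tflite" is a model compiled for the Edge TPU (placeholders).
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # Edge TPU delegate
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed one input tensor of the shape the model expects (random data here).
dummy_input = np.random.randint(0, 256, size=input_details["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details["index"], dummy_input)
interpreter.invoke()  # inference runs on the local accelerator, not in the cloud

result = interpreter.get_tensor(output_details["index"])
print(result.shape)
```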
This fast advancement of IoT technology allowed us at Neuron soundware to develop the nBox – an edge computing device capable not only of recording high-quality audio on up to 12 channels but also of delivering AI through edge computing. By edge computing, we mean running only a few processes in the cloud or on a central platform and running the majority of processes locally on the device instead.
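The split between the device and the platform can be pictured with a short conceptual sketch. All names here (`read_audio_block`, `score_anomaly`, `send_to_platform`) are hypothetical stand-ins, not the actual nBox software:

```python
# Conceptual sketch of the edge/cloud split: heavy signal processing and AI
# inference stay on the device; only compact results go to a central platform.
# Every function below is a hypothetical placeholder, not the nBox API.
import numpy as np

def read_audio_block(channels: int = 12, samples: int = 16000) -> np.ndarray:
    """Stand-in for grabbing one second of multi-channel audio from sensors."""
    return np.random.randn(channels, samples).astype(np.float32)

def score_anomaly(block: np.ndarray) -> float:
    """Stand-in for the on-device AI model; here just a crude RMS energy score."""
    return float(np.sqrt(np.mean(block ** 2)))

def send_to_platform(message: dict) -> None:
    """Stand-in for uploading a small message to the central platform."""
    print("uploading:", message)

THRESHOLD = 1.5  # assumed alert level

for _ in range(3):  # a real device would loop indefinitely
    block = read_audio_block()
    score = score_anomaly(block)      # raw audio never leaves the device
    if score > THRESHOLD:             # only compact results are uploaded
        send_to_platform({"anomaly_score": round(score, 3)})
```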
The importance of edge computing became obvious with Intel’s acquisition of Movidius for an estimated USD 400 million and of Mobileye, an autonomous-car chip maker, for more than USD 15.3 billion. I was thrilled to watch online Tesla Motors’ presentation of the purpose-built, AI-enhanced computer for its self-driving cars, delivering 36 TFLOPS. That is enough computing to process more than 2,000 high-resolution images from the car’s cameras per second (a budget of roughly 18 GFLOP per image), and Tesla claims it is sufficient performance to achieve autonomous driving.
Overall, I see four key advantages of edge computing:
- Safety: All processed data can be stored locally with tight control.
- Speed: AI inference can process inputs in milliseconds, meaning minimal latency.
- Efficiency: Embedded micro-computers are low power at affordable prices.
- Offline: The AI algorithm is deployed in the field, where connectivity might be limited.
Advantages of edge computing over 5G
You may be asking: why all this hardware fuss and effort, why not just wait for 5G networks and leverage abundant cloud computing power and infrastructure? Here are a few reasons why such “waiting” might not be the best strategy.
- Imagine sitting in a self-driving car that has lost 5G connectivity. The car would not only go blind; it would literally lose its brain and decision-making power. Why take this risk when the hardware required for high-bandwidth, low-latency communication might cost about the same as an extra neural processing unit? In addition, the overall energy demand of streaming to the cloud would be higher than that of AI inference on dedicated hardware
- Mobile internet providers will want to cash in on their investment in developing and deploying the 5G network. Although unlimited data plans might be technically possible, they might not be commercially available any time soon. For example, our nBox with 12 acoustic sensors can produce up to 1 TB of audio data per month (see the sketch after this list). At current LTE prices per GB, transferring this amount of data to the cloud would cost a fortune and
- Finally, network coverage will be built primarily in cities, leaving large parts of the country without 5G. In contrast, edge computing devices can be deployed immediately in the right places at a clear one-off cost, which usually does not dramatically increase the cost of an IoT solution.
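For the data volume claim above, a rough calculation shows how 12 uncompressed channels add up to about 1 TB a month. The sample rate and bit depth are assumptions chosen for illustration; actual nBox settings may differ:

```python
# Rough monthly data volume for 12 uncompressed audio channels.
# Sample rate and bit depth are assumptions for illustration.
channels = 12
sample_rate_hz = 16_000      # assumed sample rate
bytes_per_sample = 2         # assumed 16-bit PCM
seconds_per_month = 30 * 24 * 3600

bytes_per_month = channels * sample_rate_hz * bytes_per_sample * seconds_per_month
print(f"{bytes_per_month / 1e12:.2f} TB per month")  # ~1.00 TB
```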
Edge computing combined with AI will allow enormous amounts of data to be processed locally. The additional cost of hardware accelerators is marginal. Computing performance for neural networks is boosting roughly 10x every year, and this trend does not seem to be slowing down, because neural network workloads can be processed in parallel and thus outperform traditional CPU designs.
The future is coming faster
The use of edge computing in applications such as self-driving cars, facial recognition, or predictive maintenance is just the beginning. We will soon have enough computing power to build truly independently operating machines. They will be able to move safely in cities and factories and be almost as competent in their work duties as humans. It is incredible that somebody envisioned this almost a century ago. In 2020, it will be 100 years since the word “robot” was introduced in the science-fiction play R.U.R. by the Czech writer Karel Čapek. His vision of humanoid robots quickly spread around the world. In his drama, robots become self-aware and can even develop emotions such as love. Seeing the pace of growth in computing power and other IoT advancements, I think Čapek’s vision might come true much sooner than we expect.