Imagination Technologies' complete, standalone hardware IP neural network accelerator delivers high efficiency through a specialized PowerVR architecture implementation for neural networks (NNs). Companies building SoCs for mobile, surveillance, automotive and consumer systems can integrate the new PowerVR Series2NX Neural Network Accelerator (NNA) for high-performance computation of neural networks at very low power consumption in minimal silicon area.
Neural networks such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks are enabling an explosion in technological progress across industries. NNAs are a fundamental class of processors, likely to be as significant as CPUs and GPUs, both of which Imagination already delivers. Potential applications for NNAs are innumerable, but include: photography enhancement and predictive text enhancement in mobile devices; feature detection and eye tracking in AR/VR headsets; pedestrian detection and driver alertness monitoring in automotive safety systems; facial recognition and crowd behavior analysis in smart surveillance; online fraud detection, content recommendation, and predictive UX; speech recognition and response in virtual assistants; and collision avoidance and subject tracking in drones.
According to the January 2017 Embedded Vision Developer Survey conducted by the Embedded Vision Alliance, 79% of respondents said they were already using or were planning to use neural networks to perform computer vision functions in their products or services. As technologies continue to advance at a rapid rate, a broader range of companies will be able to develop products and services with neural networks. Imagination customers are already developing and deploying NN-based systems into markets including security, mobile, automotive and set-top box.
Jeff Bier, founder of the Embedded Vision Alliance, says: “Numerous system and application developers are adopting deep neural network algorithms to bring new perceptual capabilities to their products. In many cases, a key challenge is providing sufficient processing performance for these demanding algorithms while meeting strict product cost and power consumption constraints. Specialized processors like the PowerVR 2NX NNA, designed specifically for neural network algorithms, will enable deployment of these powerful algorithms in many new applications.”
As neural networks become increasingly common, dedicated hardware solutions like the 2NX NNA – which provides an 8x performance density improvement versus DSP-only solutions – will be required to achieve the highest possible performance with the lowest possible power and cost. In addition, neural networks are notoriously bandwidth-hungry, and memory bandwidth requirements grow with the size of neural network models. This introduces significant challenges for SoC designers and OEMs in designing a system that can provide the required bandwidth to the NNA. The PowerVR 2NX can minimize bandwidth requirements for the external DDR memory to ensure a system is not bandwidth limited in terms of performance. Widespread availability of dedicated hardware like the PowerVR 2NX NNA will allow for further development of applications based on these neural network technologies.
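To make the bandwidth pressure concrete, the back-of-the-envelope sketch below estimates the DDR traffic generated by weight fetches at several bit depths. The 25M-parameter model size and 30 inferences/s rate are illustrative assumptions, not Series2NX figures, and the no-reuse model is a deliberate simplification.

```python
# Hypothetical illustration of how weight bit depth affects external
# memory traffic. Model size and inference rate are assumed figures.

def weight_traffic_mb_per_s(num_weights, bits_per_weight, inferences_per_s):
    """DDR traffic from weight fetches, in MB/s, assuming every weight
    is read once per inference (i.e. no on-chip weight reuse)."""
    bytes_per_inference = num_weights * bits_per_weight / 8
    return bytes_per_inference * inferences_per_s / 1e6

# Example: a 25M-parameter network run at 30 inferences/s.
for bits in (32, 16, 8, 4):
    mb_s = weight_traffic_mb_per_s(25_000_000, bits, 30)
    print(f"{bits:2d}-bit weights: {mb_s:7.1f} MB/s")
```

Under these assumptions, dropping from 32-bit to 4-bit weights cuts weight traffic by 8x, which is why flexible bit depth matters so much for system-level bandwidth budgets.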
PowerVR 2NX NNA enables the most efficient solutions
PowerVR 2NX is a completely new architecture designed from the ground up to provide:
- The industry’s highest inference/mW IP cores to deliver the lowest power consumption*
- The industry’s highest inference/mm2 IP cores to enable the most cost-effective solutions*
- The industry’s lowest bandwidth solution* – with support for fully flexible bit depth for weights and data including low bandwidth modes down to 4-bit
- Industry-leading performance of 2048 MACs/cycle in a single core, with the ability to go to higher levels with multi-core
Chris Longstaff, senior director of product and technology marketing, PowerVR, at Imagination, says: “Dedicated hardware for neural network acceleration will become a standard IP block on future SoCs just as CPUs and GPUs have done. We are excited to bring to market the first full hardware accelerator to completely support a flexible approach to precision, enabling neural networks to be executed in the lowest power and bandwidth, whilst offering absolute performance and performance per mm2 that outstrips competing solutions. The tools we provide will enable developers to get their networks up and running very quickly for a fast path to revenue.”
The 2NX includes hardware IP, software and tools to provide a complete neural network solution for SoCs. It efficiently runs all common neural network computational layers. Depending on the computation requirements of the inference tasks, it can be used standalone – with no additional hardware required – or in combination with other processors such as CPUs and GPUs.
Neural networks everywhere
The PowerVR 2NX NNA is designed to power inference engines across a range of markets, with a highly scalable architecture designed to power future solutions across many others.
Mobile: With the upcoming release of TensorFlow Lite and an API for Android, as well as the momentum of the Caffe2Go framework, we will see an explosion in the number of AI-enabled smartphone applications. Companies need a highly efficient way to perform inference tasks for functions such as image recognition, speech recognition, computational photography and more. PowerVR 2NX is the only IP solution today that can deliver against all of the requirements for a deployable mobile solution with its low power, low area, MMU and planned support for Android. In mobile devices where a GPU is mandated, companies can pair a new PowerVR Series9XE or 9XM GPU with the 2NX NNA in the same silicon footprint as a competing standalone GPU.
Smart surveillance: Massive growth in the number of cameras, in both home and commercial installations, is driving the need for vision processing, including neural networks. Smart cameras based on these technologies can be used for decision-making based on security alerts, retail analytics, demographics and engagement data. Taking into account bandwidth requirements, data confidentiality and other issues, cameras must be designed for some amount of ‘edge’ video analytics processing within the camera. Since these cameras typically have either no GPU or a very small GPU, and lower-performance CPUs, what’s needed is an efficient, high-performance standalone neural network accelerator. The 2NX NNA is ideal, and is highly scalable to address both consumer and commercial implementations.
Automotive: Applications for neural networks in vehicles include driver alertness monitoring, driver gaze tracking, seat occupancy, road sign detection, drivable path analysis, road user detection, driver recognition and others. As the number of autonomous vehicles and smart transportation systems increases over the next several years, these applications will continue to expand. Within automotive systems, a full hardware solution like the 2NX NNA is required to meet the associated performance points.
Home entertainment: Devices such as set-top boxes and televisions will increasingly provide solutions based on neural networks, for example the ability to adapt preferences to certain users, provide automated child locks, and automatically pause and record programs based on user behavior. With such features, companies can increase their differentiation and revenues. Key to implementing neural networks on these devices will be highly efficient bandwidth and low cost as well as support for NN APIs – features at the heart of the 2NX NNA. There are numerous other emerging entertainment applications for NNAs, including AR/VR.
Making it easy for developers
Imagination is providing everything needed for developers to get their networks up and running quickly and easily, ensuring that compute and bandwidth can be well balanced against accuracy. PowerVR 2NX development resources include mapping and tuning tools, sample networks, evaluation tools and documentation. The comprehensive PowerVR NX Mapping Tool enables easy porting from industry-standard machine learning frameworks such as Caffe and TensorFlow. Advanced network designers will be able to design and implement networks on the 2NX NNA that exploit all of its hardware features.
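Frameworks such as Caffe describe networks declaratively, and it is this kind of network description that a mapping tool consumes. As a generic example (standard Caffe prototxt syntax, not output from Imagination's tooling), a single convolutional layer is expressed like this:

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"      # input blob
  top: "conv1"        # output blob
  convolution_param {
    num_output: 16    # number of filters
    kernel_size: 3
    stride: 1
    pad: 1
  }
}
```

A mapping tool walks layer definitions like this one and translates each into the accelerator's native operations.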
Imagination is also making available the common Imagination DNN (Deep Neural Network) API to enable easy transition between CPU, GPU and NNA. The single API works across multiple SoC configurations for easy prototyping on existing devices.
More information: Imagination Technologies, Imagination House, Home Park Estate, Kings Langley, Hertfordshire, WD4 8LZ United Kingdom. Tel: +44 (0)1923 260511; email firstname.lastname@example.org.