The microcontroller, the most widely deployed computing platform, is ready to welcome the artificial intelligence (AI) revolution. Really? For a start, MCUs lack the storage space needed to hold trained neural network models, and they lack the RAM needed to run inference on those models.
Microcontrollers offer limited compute and memory resources compared to processors such as CPUs, GPUs, and NPUs. That’s why implementing AI workloads in edge and other embedded systems is considered far more challenging than in compute-rich data center environments.
So, how can embedded system designers perform AI inference on MCUs and extend the reach of Internet of Things (IoT) use cases in areas such as object detection, language processing, facial recognition, and more? The answer has arrived, and it involves AI development kits that automate the tedious, time-consuming task of optimizing and compressing a trained model to fit an MCU footprint.
Neural network models can be trained on powerful CPUs, GPUs, and FPGAs and then deployed on an MCU with enough flash storage to hold them. A small model can run in as little as 2 KB of RAM, while flash or ROM requirements start at around 128 KB and scale upward with the demands of the AI application.
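To make those numbers concrete, here is a minimal, self-contained sketch of how a tiny fully connected layer fits such a budget. All names and dimensions are illustrative, not taken from any vendor kit: declaring the 8-bit weights const lets the linker place them in flash, while only the small activation buffers occupy RAM.

/* Minimal sketch of a tiny fully connected layer sized for an MCU.
 * All names and dimensions are illustrative, not from any vendor kit. */
#include <stdint.h>

#define IN_DIM  64
#define OUT_DIM 10

/* 8-bit weights and 32-bit biases are declared const so the linker can
 * place them in flash/ROM: 640 + 40 bytes of the ~128 KB budget. */
static const int8_t  weights[OUT_DIM][IN_DIM] = {0}; /* filled in by an export tool */
static const int32_t biases[OUT_DIM]          = {0};

/* Activations live in static RAM: 64 + 40 bytes, well under 2 KB. */
static int8_t  input[IN_DIM];
static int32_t output[OUT_DIM];

void dense_forward(void)
{
    for (int o = 0; o < OUT_DIM; o++) {
        int32_t acc = biases[o];
        for (int i = 0; i < IN_DIM; i++) {
            acc += (int32_t)weights[o][i] * input[i];
        }
        output[o] = acc; /* a real kit would rescale and saturate to int8 here */
    }
}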
AI development kits employ software tools to craft AI models that run efficiently on MCUs. Below is an overview of how this works.
How AI dev kits work
What AI kits generally do is take pre-trained neural networks that classify data signals, convert them into C code, and run that code on MCUs. In other words, AI development kits import neural networks trained with popular libraries such as Keras, TensorFlow, Caffe, Lasagne, and ConvnetJS and then map those networks for use on MCUs.
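As an illustration, one common way to embed a trained model is to serialize it into a C byte array (for example, with `xxd -i model.tflite`); vendor kits generate equivalent arrays automatically, along with the C inference code that walks the graph. The array name below is hypothetical and its contents are placeholder bytes, not a real serialized model.

/* Hypothetical output of a model-export step (e.g., `xxd -i model.tflite`).
 * The kit-generated runtime parses this array at startup and executes the
 * network layer by layer. Placeholder bytes only, not a real model. */
#include <stddef.h>

const unsigned char g_model_data[] = {
    0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33
    /* a real export would continue with serialized weights and graph data */
};
const size_t g_model_data_len = sizeof(g_model_data);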
While optimizing models built with AI frameworks like Caffe and TensorFlow, these kits fold some of the model layers together to reduce the memory footprint. Besides supporting AI frameworks, the dev kits incorporate neural network compilers such as Glow and XLA. As a result, embedded developers don’t have to be experts in data mining or neural network topologies, and they no longer have to hand-code time-consuming libraries.
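One common form of layer folding is fusing a batch normalization layer into the weights and biases of the dense or convolution layer that feeds it, so the batch-norm layer and its per-channel parameters vanish from the deployed model. Here is a minimal sketch of that transform; the function and parameter names are illustrative, not from any particular kit.

/* Sketch of batch-norm folding into a preceding dense layer:
 *   w' = w * gamma / sqrt(var + eps)
 *   b' = (b - mean) * gamma / sqrt(var + eps) + beta
 * After this runs, the batch-norm layer can be dropped from the graph. */
#include <math.h>

void fold_batchnorm(float *w, float *b, int out_dim, int in_dim,
                    const float *gamma, const float *beta,
                    const float *mean, const float *var, float eps)
{
    for (int o = 0; o < out_dim; o++) {
        float scale = gamma[o] / sqrtf(var[o] + eps);
        for (int i = 0; i < in_dim; i++) {
            w[o * in_dim + i] *= scale;             /* rescale each weight row */
        }
        b[o] = (b[o] - mean[o]) * scale + beta[o];  /* fold shift into bias */
    }
}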
That’s how development kits allow embedded system designers to move quickly from a development environment to a working AI application. And, perhaps more importantly, it’s how these kits demystify otherwise highly sophisticated AI technology and significantly lower the barrier to entry.
Leading MCU suppliers, as well as embedded system design houses, are starting to launch AI development kits that enable MCUs to perform inference in resource-constrained embedded designs. In some cases, MCU vendors are adding AI function packs to their existing design ecosystems.