Texas Instruments has introduced two new MCU families at Embedded World 2026. The MSPM0G5187 and the AM13Ex both integrate TI’s TinyEngine NPU, a dedicated hardware accelerator that runs deep learning inference with up to 90x lower latency and more than 120x lower energy per inference compared to MCUs without an accelerator.
The MSPM0G5187 is an 80 MHz Arm Cortex-M0+ device with 128 KB flash and 32 KB SRAM, priced under $1 in 1,000-unit quantities and targeting cost- and power-constrained applications such as wearables and home appliances. The AM13Ex pairs a high-performance Arm Cortex-M33 core with the TinyEngine NPU and an integrated trigonometric math accelerator that performs trig calculations 10x faster than CORDIC implementations. This combination enables simultaneous real-time control of up to four motors alongside adaptive AI algorithms, with bill-of-materials reductions of up to 30% versus multi-chip approaches.
Both families are supported by TI’s CCStudio Edge AI Studio, which includes more than 60 pre-built models and application examples, and by generative AI features within the CCStudio IDE for code development, configuration, and debugging. The MSPM0G5187 is in production; the AM13Ex is available in preproduction quantities, with additional package and memory variants expected by end of 2026.