Experiential Computing Blog
Mar 09, 2025
By Todd Dust
The rise of AI-driven IoT devices is pushing the limits of today’s microcontroller unit (MCU) landscape. While AI-powered perception applications—such as voice, facial recognition, object detection, and gesture control—are becoming essential in everything from smart home devices to industrial automation, the hardware available to support them is not keeping pace. The challenge? The broad install base of traditional 32-bit MCUs cannot handle the demands imposed by AI-ready workloads.
While semiconductor vendors are launching new solutions, such as AI MCUs, the overall experience is still less than ideal: architectures are too rigid, software and tooling remain proprietary, and solutions are overly complex.
In addition, many AI-enabled devices on the market today are repurposed from silicon originally designed for other applications—mobile, cloud, automotive or general-purpose embedded computing. Such architectures, while powerful, are not optimized for the ultra-low-power, always-on operation required by IoT devices. The result is a fragmented AI ecosystem, where designers must choose between low, medium, or high AI processing capabilities—often leading to trade-offs in performance, power efficiency, and cost.
Compounding the issue is the inefficiency of existing system architectures. Many AI MCUs rely on rigid, inflexible designs that are unable to balance compute power with energy efficiency.
In an IoT environment where devices must operate for months on a single battery charge, this mismatch leads to unnecessary power consumption and limits the potential for AI at the edge.
To overcome these challenges, the industry needs a new class of MCUs that blend intelligent sensing with AI-accelerated compute. These next-generation MCUs must deliver high-efficiency AI processing, scalable performance, and energy optimization tailored for always-on, low-power applications.
Synaptics is leading the industry with the Synaptics Astra™ SR-Series platform of context-aware AI MCUs for IoT devices. The Astra™ platform integrates hardware, open software, dev kits and ecosystem partnerships.
The first-generation SR100 Series of high-performance MCUs introduces a novel three-tiered architecture designed to optimize AI processing for IoT devices. Unlike traditional MCUs, which either remain fully powered or sit completely idle, the SR110 dynamically adjusts its compute power based on real-time system demands. This context-aware computing enables ultra-low-power (ULP) operation while maintaining high-performance AI capabilities when needed.
At the core of this design are three distinct compute domains, or “gears,” that operate at different power levels to balance energy efficiency and AI processing performance.
ULP Always-On Domain: Continuous activity monitoring at ultra-low power
The always-on (AON) domain is responsible for continuously monitoring the environment for variations, even when the primary CPUs are in sleep mode. This ensures the system remains responsive without draining battery life.
This domain is designed to detect both vision- and audio-based events, such as motion, sound patterns, or changes in lighting. It can generate wake-up triggers based on pre-programmed detection parameters.
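Synaptics does not publish the AON domain’s trigger logic, but the idea of pre-programmed detection parameters can be sketched with a minimal example: a cheap activity metric (here, summed per-pixel frame difference from a low-resolution vision sensor) compared against a configured threshold. All names, the `PIXELS` size, and the threshold value are hypothetical illustrations, not the actual SR100 firmware interface:

```c
#include <stdint.h>
#include <stdlib.h>

#define PIXELS 256               /* hypothetical low-res AON sensor size */

/* Hypothetical detection parameters programmed into the AON domain. */
typedef struct {
    uint32_t motion_threshold;   /* summed per-pixel change that trips a wake-up */
} aon_params_t;

/* Cheap motion check: sum of absolute per-pixel differences between the
   current and previous frame, compared to the programmed threshold.
   Returns 1 to raise a wake-up trigger, 0 to stay asleep. */
static int aon_motion_trigger(const aon_params_t *p,
                              const uint8_t *prev, const uint8_t *cur)
{
    uint32_t diff = 0;
    for (int i = 0; i < PIXELS; i++)
        diff += (uint32_t)abs((int)cur[i] - (int)prev[i]);
    return diff >= p->motion_threshold;
}
```

A real AON block would run logic like this in dedicated low-power hardware rather than on the main CPUs, which is what lets the device monitor continuously without draining the battery.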
Another feature of the SR100 series is low-power pre-roll: as events are monitored, they are buffered on-device, and when a trigger occurs, both the trigger event and the events leading up to it can be sent to the next domain for deeper analysis.
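A pre-roll buffer of this kind is commonly built as a fixed-size circular buffer that always holds the most recent frames, which are drained oldest-first when a trigger fires. The sketch below is a generic illustration under assumed sizes (`PREROLL_FRAMES`, `FRAME_BYTES`), not Synaptics’ actual implementation:

```c
#include <stdint.h>
#include <string.h>

#define PREROLL_FRAMES 8          /* hypothetical buffer depth */
#define FRAME_BYTES    64         /* hypothetical frame size   */

typedef struct {
    uint8_t  frames[PREROLL_FRAMES][FRAME_BYTES];
    uint32_t head;                /* index of the next write slot */
    uint32_t count;               /* frames currently stored      */
} preroll_buf_t;

/* Store each monitored frame, silently overwriting the oldest when full. */
static void preroll_push(preroll_buf_t *b, const uint8_t *frame)
{
    memcpy(b->frames[b->head], frame, FRAME_BYTES);
    b->head = (b->head + 1) % PREROLL_FRAMES;
    if (b->count < PREROLL_FRAMES)
        b->count++;
}

/* On a trigger, copy buffered frames oldest-first into `out` so the next
   domain sees the lead-up to the event. Returns the frame count. */
static uint32_t preroll_drain(preroll_buf_t *b, uint8_t out[][FRAME_BYTES])
{
    uint32_t start = (b->head + PREROLL_FRAMES - b->count) % PREROLL_FRAMES;
    for (uint32_t i = 0; i < b->count; i++)
        memcpy(out[i], b->frames[(start + i) % PREROLL_FRAMES], FRAME_BYTES);
    uint32_t n = b->count;
    b->count = 0;
    return n;
}
```

The key property is that steady-state cost is a single low-power write per frame; the more expensive drain-and-forward path only runs when a trigger actually occurs.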
Efficiency Domain: Low-power AI processing for real-time event detection
The efficiency domain is responsible for handling initial AI inferencing after an event is detected. It consists of an Arm® Cortex®-M4 MCU running at 100 MHz and a custom micro-NPU (Neural Processing Unit) from Synaptics, delivering up to 10 GOPS of AI inferencing power.
When a wake-up event is triggered (such as a detected object or audio cue), the compute elements in the efficiency domain process the data with lightweight AI models to determine the nature of the event. This enables real-time object detection, sound event detection, and other basic AI tasks, while maintaining low power consumption.
If additional processing is required—such as higher-resolution facial recognition or complex AI inferencing—the system escalates to the performance domain.
Performance Domain: High-compute AI acceleration for advanced processing
For more demanding AI tasks, the performance domain is activated. This domain provides significantly higher processing power, making it suitable for computationally intensive applications such as facial recognition, body pose estimation and advanced object detection and classification.
The performance domain consists of an Arm Cortex-M55 MCU with Arm® Helium™ vector extensions running at 400 MHz, providing high-speed AI execution within an MCU framework, paired with a high-performance Arm Ethos™-U55 NPU, also operating at 400 MHz, that delivers up to 100 GOPS of AI inferencing capability.
This novel, tiered processing structure ensures that only the necessary compute power is used at any given time, dramatically improving energy efficiency without compromising on performance.
The Future of Context-Aware AI Computing
The SR100 series’ intelligent gearing algorithms dynamically shift between these compute domains based on the system’s needs. This context-aware AI computing improves scalability and energy efficiency, and the platform approach is flexible enough to support more standardized development practices, essential in a rapidly evolving IoT AI space.
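The actual gearing algorithms are proprietary, but the escalation pattern described above (always-on, then efficiency, then performance) can be modeled as a small state machine: each tier either dismisses the event, handles it, or hands it up to the next tier. The state names and decision inputs below are hypothetical simplifications:

```c
typedef enum { GEAR_AON, GEAR_EFFICIENCY, GEAR_PERFORMANCE } gear_t;

typedef struct {
    int wake_trigger;       /* AON domain flagged activity              */
    int event_confirmed;    /* efficiency-domain model saw a real event */
    int needs_heavy_model;  /* task needs the performance-domain NPU    */
} system_state_t;

/* One step of a hypothetical gearing policy: climb a tier only when the
   current tier's result demands it, otherwise fall back to always-on so
   the minimum necessary compute is powered at any time. */
static gear_t gear_step(gear_t current, const system_state_t *s)
{
    switch (current) {
    case GEAR_AON:
        return s->wake_trigger ? GEAR_EFFICIENCY : GEAR_AON;
    case GEAR_EFFICIENCY:
        if (!s->event_confirmed)
            return GEAR_AON;            /* false alarm: back to sleep  */
        return s->needs_heavy_model ? GEAR_PERFORMANCE : GEAR_EFFICIENCY;
    case GEAR_PERFORMANCE:
        return GEAR_AON;                /* heavy inference done: idle  */
    }
    return GEAR_AON;
}
```

The design choice this illustrates is that escalation is one-way per event: false alarms are rejected cheaply in the efficiency domain, and the power-hungry performance domain only wakes for the small fraction of events that need it.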
With a rich set of I/O and peripherals, including MIPI-CSI, lightweight ISP, USB, serial interfaces and security features, the SR100 MCU delivers a versatile, power-efficient, and highly programmable solution for multimodal, AI-enabled IoT devices.
Learn more about the Synaptics Astra SR Series of context-aware AI MCUs for the Internet of Things.
Join the interest list for early access to the Synaptics Astra Machina™ Micro dev kit.