
Experiential Computing Blog

Forbes Highlights Synaptics' Vision of Perceptive Intelligence in IoT

May 21, 2020

By Saleel Awsare


Since its inception, Synaptics has been a leader in how people interact with their machines – we are widely known as the HMI leader. Our lasting legacy will probably always be our expertise in touch technology – we’re the company that brought you the original TouchPad on laptops. But we are also behind many of the capacitive sensing, display and identification technologies that are second nature to consumers today.

Our vision and strategy have expanded over the years, and today our strengths in machine interaction reach far beyond touch – into video, speech, audio, image and even biometric-based identification. Building on our expertise in areas like far-field voice, wake word triggering, ambient noise cancellation, video streaming and image processing, our HMI solutions are breaking new ground in enhancing the user experience across all kinds of devices and systems.

The progression of interface methods parallels the type of interaction we enable for electronic systems, be they mobile devices, computers, cars, smart home devices, home entertainment systems or appliances. The first level of interface is direct interaction with devices to allow input and output - control and feedback (e.g. “Hey Google”). The second level is identification of users to detect who is using the device or is present - personalization/customization (e.g. your smart security system recognizes it’s you and unlocks the door). The third level is contextual awareness, when the system knows who you are and can perceive intent or preference (e.g. your TV’s set-top box recognizes you and offers content options based on your previous viewing habits).
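To make that progression concrete, here is a minimal, purely illustrative Python sketch of the three levels. The class, function and responses are hypothetical examples of device behavior, not a Synaptics API.

```python
from enum import IntEnum
from typing import Optional

class InterfaceLevel(IntEnum):
    """The three levels of interaction described above (illustrative only)."""
    CONTROL = 1         # direct input/output: touch, a button, a wake word
    IDENTIFICATION = 2  # the device recognizes who is present
    CONTEXT = 3         # the device infers intent or preference for that person

def respond(level: InterfaceLevel, user: Optional[str] = None) -> str:
    # Hypothetical device behavior at each level of perceptiveness.
    if level is InterfaceLevel.CONTROL:
        return "Listening for your command."
    if level is InterfaceLevel.IDENTIFICATION and user:
        return f"Recognized {user}; unlocking the door."
    return "Welcome back; here are viewing options based on your history."
```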

We’re building this future on a strong foundation of AI and machine learning techniques that we’ve optimized over the years. For IoT in particular, AI is enabling this move toward more perceptive intelligence.

We’ve just written about this topic for Forbes Magazine, highlighting the move toward perceptive intelligence in IoT devices.

In the article, we note:

A new generation — indeed, ecosystem — of devices will be driven by interfaces that perceive your wants and needs. Welcome to the future of IoT and perceptive intelligence, where user interaction is optional and contextual awareness is machine learning enabled. When devices transition from collecting and transferring information to using that information intelligently on their own, computing has become ambient.

Quite simply, the Synaptics vision for human machine interaction is an interface so ubiquitous, intuitive and natural that we lose conscious awareness of our interactions.

All this is set against the backdrop of the increasing move toward edge-based devices, which, of course, adds another wrinkle for technology developers, who must consider the size, power and performance requirements of locally based devices that contain new levels of automation to power perceptive intelligence and ambient computing.

We call this move toward a more sophisticated, user-friendly and safer IoT experience Smart Edge AI. By definition, Edge AI implies that the AI processing is running within the end product itself (a set-top box or smart display, for example) and not in a local server or in the cloud.
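As a rough illustration of what “running within the end product itself” means, the sketch below performs inference with a small model that lives on the device, using the generic TensorFlow Lite Python API. The model file name and input data are hypothetical placeholders; this is a generic example, not the VS680 SDK.

```python
import numpy as np
import tensorflow as tf  # tflite_runtime can stand in on constrained devices

# Load a small model that ships with the device itself, so no network
# connection is involved anywhere in the inference path.
interpreter = tf.lite.Interpreter(model_path="wake_word.tflite")  # hypothetical file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One dummy input frame, shaped and typed to whatever the model expects.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("wake-word score:", scores.ravel()[0])
```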

Until now, smart edge processing has been reserved for expensive devices like smartphones, as it requires an amount of computation that has been out of reach for low-cost devices and appliances. Thanks to solutions like our VideoSmart VS680 integrated SoCs, we can offer secure neural network acceleration, running at the edge, at price points targeting mainstream consumer devices.

Now, cost-effective AI-based edge solutions can be used to improve performance and create a more human-like experience in the products that run our homes and lives. This enables devices to make use of implicit communication instead of relying only on the explicit communication that today’s devices depend on. Most notably, enabling devices with local intelligence allows them to respond with near human-like speed. The reduction in latency achieved by not making a cloud call feels almost immediate to the user.
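To see why skipping the cloud round trip matters, here is a hedged timing sketch. The endpoint URL is a placeholder and the actual numbers depend on the network and the model, but the structure of the comparison holds: on-device inference avoids the network entirely.

```python
import time
import requests  # only needed for the cloud-path comparison

def time_local_ms(run_inference, frame):
    # Measure a single on-device inference call.
    start = time.perf_counter()
    run_inference(frame)
    return (time.perf_counter() - start) * 1000

def time_cloud_ms(frame, url="https://example.com/infer"):  # placeholder endpoint
    # Measure a full network round trip to a remote inference service;
    # `frame` is assumed to be a JSON-serializable list of features.
    start = time.perf_counter()
    requests.post(url, json={"frame": frame}, timeout=5)
    return (time.perf_counter() - start) * 1000

# Example usage with a hypothetical model callable and feature list:
# print(f"local: {time_local_ms(my_model, features):.1f} ms")
# print(f"cloud: {time_cloud_ms(features):.1f} ms")
```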

By implementing these networks at the edge, systems operate with greater security, lower latency and reduced processing requirements. While this is great for consumers, our customers benefit as well: our high-performance, multi-processor SoCs that support multi-modal interface solutions - and are available at consumer-market price points - will help developers more easily differentiate their products.

About the Author


Saleel Awsare
Senior Vice President and General Manager, PC & Peripherals Division
