CIO Influence

TinyML Computer Vision Is Turning Into Reality With microNPUs (µNPUs)


Ubiquitous ML-based vision processing at the edge is advancing as hardware costs decrease, computation capability increases significantly, and new methodologies make it easier to train and deploy models. This leads to fewer barriers to adoption and increased use of computer vision AI at the edge.


Computer vision (CV) technology today is at an inflection point, with major trends converging to enable what has been a cloud technology to become ubiquitous in tiny edge AI devices. Technology advancements are enabling this cloud-centric AI technology to extend to the edge, and new developments will make AI vision at the edge pervasive.

There are three major technological trends enabling this evolution. New, lean neural network algorithms fit the memory space and compute power of tiny devices. New silicon architectures are offering orders of magnitude more efficiency for neural network processing than conventional microcontrollers (MCUs). And AI frameworks for smaller microprocessors are maturing, reducing barriers to developing tiny machine learning (ML) implementations at the edge (tinyML).

As these elements come together, tiny milliwatt-scale processors can incorporate powerful neural processing units that execute highly efficient convolutional neural networks (CNNs), the ML architecture most common for vision processing, using a mature, easy-to-use development toolchain. This will enable exciting new use cases across just about every aspect of our lives.
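What makes CNNs a good fit for such tiny memory budgets is weight sharing: one small kernel is slid across the entire image, so the parameter count is independent of image resolution. The sketch below (stdlib-only Python, with an illustrative edge-detection-style kernel rather than trained weights) shows the core convolution operation that a microNPU accelerates.

```python
# Minimal sketch of the 2D convolution at the heart of a CNN.
# Weight sharing is what lets CNNs fit tiny devices: the same 3x3 kernel
# (9 weights) is reused at every image position, so memory for parameters
# does not grow with image size. Kernel values are illustrative only.

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a square kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# An edge-detection-style kernel: 9 shared weights cover any image size.
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]

feature_map = conv2d(image, kernel)
print(feature_map)  # responds strongly at the edges of the bright patch
```

A real microNPU performs the same multiply-accumulate loops in parallel hardware, typically on quantized 8-bit weights, which is where the orders-of-magnitude efficiency gain over a general-purpose MCU comes from.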

The promise of CV at the edge

Digital image processing, as it used to be called, is used for applications ranging from semiconductor manufacturing and inspection, to advanced driver assistance system (ADAS) features such as lane-departure warning and blind-spot detection, to image beautification and manipulation on mobile devices. Looking ahead, CV technology at the edge will enable the next level of human-machine interfaces (HMIs).

HMIs have evolved significantly in the last decade. On top of traditional interfaces like the keyboard and mouse, we now have touch displays, fingerprint readers, facial recognition systems, and voice command capabilities. While these methods clearly improve the user experience, they share one attribute: they all react to user actions. The next level of HMI will be devices that understand users and their environment through contextual awareness.


Context-aware devices sense not only their users but also the environment in which they operate, in order to make better decisions and enable more useful automated interactions. For example, a laptop can visually sense when a user is attentive and adapt its behavior and power policy accordingly. This is already enabled by Synaptics' Emza Visual Sense technology, which OEMs can use to optimize power by adaptively dimming the display when the user is not watching it, reducing display energy consumption (figure 1). By detecting onlookers' gaze (onlooker detection), the technology can also enhance security, alerting the user and hiding on-screen content until the coast is clear.
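The attention-based dimming behavior described above can be sketched as a simple policy that consumes per-frame verdicts from a vision model. This is a hypothetical illustration under assumed parameters (grace period, brightness levels), not Synaptics' actual Emza implementation.

```python
# Hypothetical sketch of a presence-aware display power policy. Assumes a
# vision model reports, per camera frame, whether an attentive user face is
# detected. Thresholds and brightness levels are illustrative assumptions;
# this is not the actual Emza Visual Sense implementation.

import time

class DisplayPowerPolicy:
    """Dim the display after a grace period with no attentive user."""

    def __init__(self, dim_after_s=5.0, full=100, dimmed=10):
        self.dim_after_s = dim_after_s  # seconds of inattention before dimming
        self.full = full                # brightness % when a user is watching
        self.dimmed = dimmed            # brightness % when nobody is watching
        self.last_attentive = time.monotonic()
        self.brightness = full

    def on_frame(self, user_attentive, now=None):
        """Called once per camera frame with the model's attention verdict."""
        now = time.monotonic() if now is None else now
        if user_attentive:
            self.last_attentive = now
            self.brightness = self.full
        elif now - self.last_attentive >= self.dim_after_s:
            self.brightness = self.dimmed
        return self.brightness

policy = DisplayPowerPolicy(dim_after_s=5.0)
print(policy.on_frame(True, now=0.0))    # user watching: full brightness
print(policy.on_frame(False, now=3.0))   # brief glance away: still full
print(policy.on_frame(False, now=6.0))   # away past threshold: dimmed
```

The grace period matters in practice: dimming on every glance away would be irritating, so the policy only acts once inattention persists, which is why the model's per-frame output is filtered through time rather than acted on directly.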

