Tiny machine learning (TinyML) is an industry-wide effort to bring the power of AI to extremely low-power, always-on, battery-operated IoT devices for on-device sensor data analytics in areas such as audio, voice, image and motion. CEVA's holistic approach to AI at the edge ensures that customers using TensorFlow Lite for Microcontrollers can utilize a unified processor architecture to run both the framework and the associated neural network workloads required to build these intelligent connected products. CEVA's WhisPro speech recognition software and custom command models are integrated with the TensorFlow Lite framework, further accelerating the development of small-footprint voice assistants and other voice-controlled IoT devices.
"CEVA has been at the forefront of machine learning (ML) and neural network inferencing for embedded systems and understands that the future of ML is tiny, extending into extremely power- and cost-constrained devices. Their continued investment in powerful architectures, tools and software that support TensorFlow models provides a compelling offering for a new generation of intelligent embedded devices to harness the power of AI," comments Pete Warden, Technical Lead of TensorFlow at Google.
"The increasing demand for on-device AI to augment contextual awareness and conversational AI workloads poses new challenges to the cost, performance and power efficiency of intelligent devices. TensorFlow Lite for Microcontrollers dramatically simplifies the development of these devices by providing a lean framework to deploy machine learning models on resource-constrained processors. With full optimization of this framework for our CEVA-BX DSPs and our WhisPro speech recognition models, we are lowering the entry barrier for SoC companies and OEMs to add intelligent sensing to their devices," adds Erez Bar-Niv, Chief Technology Officer at CEVA.
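To fit the memory and compute budgets of resource-constrained processors, TensorFlow Lite for Microcontrollers typically executes models quantized to 8-bit integers, using TensorFlow Lite's affine quantization scheme in which a scale and a zero point map real values to int8. A minimal Python sketch of that mapping (the scale and zero-point values below are illustrative, not taken from any CEVA or TensorFlow model):

```python
def quantize(x: float, scale: float, zero_point: int) -> int:
    """Map a real value to int8 via TFLite's affine scheme:
    q = round(x / scale) + zero_point, clamped to [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Recover an approximation of the original real value."""
    return scale * (q - zero_point)

# Illustrative tensor parameters (assumed, not from a real model).
scale, zero_point = 0.02, -10
q = quantize(0.5, scale, zero_point)    # 0.5 / 0.02 = 25, shifted to 15
x = dequantize(q, scale, zero_point)    # recovers 0.5
```

On the target hardware, runtimes like TensorFlow Lite for Microcontrollers perform this arithmetic in integer-only kernels; the floating-point reference above only illustrates the mapping the model's tensors carry.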
The CEVA-BX DSP family is a highly programmable hybrid DSP/controller offering high efficiency for a broad range of real-time signal processing and control workloads. Using an 11-stage pipeline and a 5-way VLIW micro-architecture, it offers parallel processing with dual scalar compute engines, load/store units and program control, reaching a CoreMark/MHz score of 5.5 that makes it well suited to real-time signal control. Its support for SIMD instructions makes it ideal for a wide variety of signal processing applications, and its double-precision floating-point units efficiently handle contextual-awareness and sensor-fusion algorithms with a wide dynamic range. It facilitates simultaneous processing of front-end voice, sensor fusion, audio processing and general DSP workloads in addition to AI runtime inferencing. This also allows brands and algorithm developers to take advantage of CEVA's extensive audio, voice and speech machine learning software and libraries to accelerate their product designs. For more information, visit CEVA-bx2-sound.