The Sensory and Syntiant collaboration enables manufacturers to implement seamless voice commands in dozens of languages. Working together, the two technologies could also support additional features, such as voice-based user identification.
“From a busy mom in Korea setting a house alarm to a teenager in Barcelona raising the volume on his smart speaker, voice commands are becoming ubiquitous, driven by worldwide consumer demand,” says Kurt Busch, CEO of Syntiant. “Collaborating with Sensory allows us to combine their AI with our silicon technology, providing customers a large multi-language library of local commands for just about any application.”
Syntiant and Sensory are accelerating the delivery of a fast, efficient, cloud-free multi-language voice interface in devices such as earbuds, smart speakers and smartphones, at a power level orders of magnitude lower than typical MCU offerings.
Custom-built to run neural workloads, the Syntiant NDP100 and NDP101 Neural Decision Processors can support dozens of local voice commands while consuming less than 140 microwatts, processing audio events locally to increase privacy, reliability and responsiveness. In addition to voice triggers, other device capabilities include audio event and environment classification, as well as sensor analytics.
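Neither company's SDK appears in this announcement, so the following is only a minimal sketch, in C, of the pattern described above: an always-on detector such as the NDP10x listens continuously at microwatt power, and the host processor wakes only when a trained keyword class is matched. All identifiers are invented for illustration, and the NDP's hardware interrupt is simulated so the sketch runs standalone.

    #include <stdint.h>
    #include <stdio.h>

    #define KEYWORD_VOLUME_UP 0u   /* hypothetical class IDs for */
    #define KEYWORD_ALARM_ON  1u   /* two trained voice commands */
    #define KEYWORD_NONE      0xFFu

    /* Set from the NDP's interrupt handler when a keyword is matched. */
    static volatile uint8_t g_match = KEYWORD_NONE;

    /* Stub: on real hardware the host MCU sleeps here while the NDP
       keeps listening, and an interrupt fires on a match. We simulate
       one detection after a few idle cycles so the sketch is runnable. */
    static void mcu_deep_sleep(void)
    {
        static int cycles = 0;
        if (++cycles == 3)
            g_match = KEYWORD_VOLUME_UP;  /* simulated NDP match event */
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++) {
            mcu_deep_sleep();             /* host idles between events */
            if (g_match != KEYWORD_NONE) {
                switch (g_match) {
                case KEYWORD_VOLUME_UP: puts("command: volume up"); break;
                case KEYWORD_ALARM_ON:  puts("command: arm alarm"); break;
                }
                g_match = KEYWORD_NONE;   /* re-arm for the next event */
            }
        }
        return 0;
    }

The design point this illustrates is that no audio ever leaves the device and the host spends nearly all of its time asleep, which is what makes a cloud-free, always-listening interface possible within a microwatt power budget.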
Sensory is focused on improving user experiences through embedded machine learning technologies such as voice, vision and natural language processing. The company pioneered neural-network approaches to embedded speech recognition in consumer electronics, with a well-engineered, patented codebase that has shipped in over 2 billion consumer products.
www.syntiant.com | www.sensory.com