Audioscenic and AIStorm Collaboration Delivers Cost-Effective Position-Sensing 3D Sound

May 29, 2025, 01:10
Audioscenic is gaining recognition in multiple markets and application segments with its technology that delivers effective spatial audio from speakers, using position sensing to optimize audio beamforming and crosstalk cancellation for the listener. Now the British company has announced a collaboration with AIStorm, a pioneer in AI-in-sensor solutions that offers a lower-power, lower-cost device with privacy-preserving safeguards.
 

In the multiple implementations of its Amphi position-adaptive 3D sound technology, including laptops, soundbars, and computer monitors, Audioscenic uses different types of position sensors to track the user's head, each optimized for the design and delivering the best possible spatial audio experience every time. As Audioscenic has proven, the solution works with multiple types of presence-detection sensors or with image tracking from built-in webcams. Nevertheless, for lower-cost designs, ODMs have requested solutions that are simpler to implement and don't raise privacy concerns among users. AIStorm sensors provide exactly that alternative, requiring neither a dedicated camera nor any external connectivity.

The strategic technology collaboration with AIStorm enables Audioscenic to offer a position-adaptive 3D sound solution based on AIStorm's efficient Cheetah image sensor, at significantly lower cost. AIStorm is a pioneer in AI-in-sensor processing, which eliminates the latency, power, and cost associated with competing edge AI solutions.

AIStorm's Cheetah high-speed charge-domain image sensor powers efficient facial landmark tracking and works with Audioscenic's AI position sensing to triangulate ear location in real time. This enables precise steering of Amphi 2-CH and Amphi Hi-D multichannel beamforming and crosstalk cancellation, allowing manufacturers to design products whose 3D sound rendering adapts to listener position.

Biometric keypoint tracking in real-world environments is challenging. Conventional imagers are too slow to keep up with real-world movements, and when they are combined with an AI processor of reasonable cost, the result is unacceptable lag. By combining sensing and processing in a single module, AIStorm's Cheetah overcomes the issues of cost, latency, and processing. For audio beam-steering applications, it requires up to 85% less processing than conventional methods. All of this is done at a cost 3x lower than existing solutions, which makes it compatible with high-volume consumer applications.
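How such position data drives the audio side can be pictured with a simplified sketch. The Python snippet below is not Audioscenic's or AIStorm's code; it only illustrates, under assumed speaker positions, sample rate, and ear spacing, how a tracked head position and orientation could be converted into the speaker-to-ear path delays that a two-speaker crosstalk canceller or beamformer would use to update its filters.

```python
# Illustrative sketch only, not Audioscenic's or AIStorm's implementation.
# It converts a tracked head position/orientation into speaker-to-ear path
# delays, the geometric input a crosstalk canceller or beamformer steers by.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature
SAMPLE_RATE = 48_000     # Hz, assumed audio sample rate
EAR_SPACING = 0.17       # m, assumed distance between the listener's ears

# Assumed stereo speaker positions (x, y) in metres, e.g., under a monitor
SPEAKERS = np.array([[-0.15, 0.0],   # left speaker
                     [ 0.15, 0.0]])  # right speaker

def ear_positions(head_xy, yaw_rad):
    """Estimate left/right ear coordinates from head centre and rotation.
    At yaw 0 the listener faces the speakers, ears along the x-axis."""
    axis = np.array([np.cos(yaw_rad), np.sin(yaw_rad)])  # inter-ear axis
    return np.array([head_xy - axis * EAR_SPACING / 2,   # left ear
                     head_xy + axis * EAR_SPACING / 2])  # right ear

def path_delays(head_xy, yaw_rad):
    """Return a 2x2 matrix of speaker-to-ear delays, in samples."""
    ears = ear_positions(np.asarray(head_xy, dtype=float), yaw_rad)
    # distances[s, e] = distance from speaker s to ear e
    distances = np.linalg.norm(SPEAKERS[:, None, :] - ears[None, :, :], axis=-1)
    return distances / SPEED_OF_SOUND * SAMPLE_RATE

if __name__ == "__main__":
    # Listener 0.6 m in front of the speakers, slightly off-centre, facing them
    print(np.round(path_delays(head_xy=(0.05, 0.6), yaw_rad=0.0), 1))
```

In a real product these delays would feed continuously updated cancellation filters; the point of the sketch is simply that each new head position reported by the sensor maps directly to new steering parameters for the audio pipeline.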

More importantly, since only position data is processed on-device, privacy is preserved: the system does not recognize faces, determine identity, or transmit any data externally. Performance is also enhanced in low-light conditions, and the solution maintains responsive, accurate tracking even during rapid user movement.

"We’re solving a long-standing bottleneck in edge-based human interaction,” said David Schie, CEO of AIStorm. "By effectively eliminating latency while slashing power and processing costs, we make real-time biometric tracking a really satisfying user experience for the first time."
 
At the core of the system are proprietary AI models coupled with AIStorm's Cheetah high-speed charge-domain imager, which can capture up to 40,000 frames per second. The integration time and frame rate are programmable, and each pixel is converted to an 8-bit digital representation and output via proprietary parallel interfaces. Cheetah also includes an LED driver capable of up to 40 mA continuous current (programmable) and PWM control, and four configurable GPIOs capable of up to 100 MHz operation. This ensures accurate tracking even during fast motion, such as head turning or gaming gestures. The high-speed capture leaves more time for processing each frame, thereby reducing peak workloads and computing costs.
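To make that last point concrete, here is a back-of-the-envelope timing sketch. The tracking update rate used below is an assumption for illustration; the article only specifies the 40,000 frames-per-second capture capability.

```python
# Rough timing budget; TRACKING_FPS is an assumed figure, not one quoted
# by AIStorm or Audioscenic.
CAPTURE_FPS = 40_000    # Cheetah's maximum capture rate (per the article)
TRACKING_FPS = 500      # assumed rate at which listener position is updated

capture_time_ms = 1_000 / CAPTURE_FPS      # ~0.025 ms to expose/read a frame
frame_period_ms = 1_000 / TRACKING_FPS     # 2.0 ms between position updates
processing_budget_ms = frame_period_ms - capture_time_ms

print(f"Capture: ~{capture_time_ms:.3f} ms per frame")
print(f"Processing budget: ~{processing_budget_ms:.3f} ms of every "
      f"{frame_period_ms:.1f} ms update period")
```

Because the exposure occupies only a tiny slice of each update period, nearly the whole period remains available for landmark processing, which is what keeps peak compute requirements modest.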

"Achieving spatial accuracy with freedom of movement has long been the missing link for immersive 3D sound," adds Marcos Simón, CEO of Audioscenic. "AIStorm's tracking solution overcomes the latency barrier and opens the door to adaptive beamforming solutions while reducing costs and integration complexity for product makers."

The combined Audioscenic Amphi and AIStorm solution is ideal for integration into laptops, displays, gaming soundbars, and other products, including personal electronics. The integration modules should be available for ODM evaluation in Q3 2025.
www.audioscenic.com | www.aistorm.ai
 