Treble Technologies Offers Software Development Kit (SDK) for Sound Simulation in the Cloud

October 12, 2023, 00:35
Treble Technologies, the Icelandic startup enabling acoustic engineers, designers, and developers to model sound with accuracy and efficiency within virtual environments, announced its new software development kit designed to generate synthetic audio data for AI-based audio software and enable the virtual prototyping of acoustics in audio hardware. The SDK will allow users to utilize Treble’s unique wave-based acoustic simulation technology in their own solutions, seamlessly integrating the offering into existing workflows at any scale.

Following the launch of its software platform, which leverages scientific breakthroughs in acoustics and simulation to provide hyper-accurate virtual prototyping, Treble Technologies is now expanding its offerings with a cloud SDK.

With the ability to integrate with a variety of platforms and tools, Treble's new SDK has many use cases, notably in the audio industry for testing and designing audio devices such as headphones, speakers, and microphones. It will also extend to the architectural and engineering industries, where the SDK can be used to seamlessly analyze and shape acoustics in automotive and building designs.

According to the company, beyond architectural or hardware design, Treble's SDK will also enable developers to generate synthetic audio data for training advanced AI software such as voice recognition, spatial audio, and VR-based audio technologies. The SDK allows developers to synthesize hyper-realistic datasets encompassing thousands of customized audio scenes by configuring sound sources, receivers, environments, and materials. As a result, users will be able to train and test their AI algorithms and products in thousands of different meeting rooms, open-plan offices, restaurants, and living rooms, all with varying furnishings and details.

"The high fidelity acoustic data generated within these virtual environments will allow for the training, testing and validation of audio machine learning models," the company states. Such algorithms are becoming increasingly popular as developers seek to integrate artificial intelligence (AI) into the development of their audio devices. "Through its plethora of virtual environments and ability to replicate real-world acoustic behavior, Treble’s platform is capable of generating significant volumes of data on sound which proves invaluable when training AI to recognize potential problems with designs. Specifically, the synthetic audio data can be used to train algorithms such as speech recognition, speech enhancement, source localization, echo cancellation, beamforming, noise suppression, de-reverberation, and blind room estimation," they add. 
 
Treble’s new Python-based interface will enable audio equipment designers to effortlessly simulate and test their products in thousands of different scenarios while extracting high-fidelity acoustic data for machine learning (ML) models.

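To illustrate how such a Python interface might organize this work, the sketch below enumerates scene configurations from a small parameter grid. The class and parameter names are purely illustrative assumptions, not Treble's actual SDK API; they only show how varying rooms, materials, sources, and receivers quickly multiplies into thousands of distinct scenes.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class SceneConfig:
    """One virtual acoustic scene to be simulated (illustrative only)."""
    room: str        # environment geometry
    material: str    # dominant surface treatment
    source: str      # sound source placed in the scene
    receiver: str    # listener or microphone configuration

# Hypothetical parameter sweeps; a real dataset would also vary room
# dimensions, source/receiver positions, and absorption coefficients.
ROOMS = ["meeting_room", "open_plan_office", "restaurant", "living_room"]
MATERIALS = ["carpet", "hardwood", "acoustic_panel", "glass", "concrete"]
SOURCES = ["human_speech", "loudspeaker", "hvac_noise"]
RECEIVERS = ["human_listener", "mic_array_2ch", "mic_array_8ch"]

def build_scenes():
    """Enumerate the full grid of scene configurations."""
    return [SceneConfig(r, m, s, rc)
            for r, m, s, rc in product(ROOMS, MATERIALS, SOURCES, RECEIVERS)]

scenes = build_scenes()
print(len(scenes))  # 4 * 5 * 3 * 3 = 180 scenes from even this small grid
```

Even this toy grid yields 180 unique scenes; adding a few more swept parameters is what pushes the count into the thousands the company describes.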
"AI was always destined to revolutionize the audio industry. At Treble, we're proud to be empowering developers and facilitating this technological shift. By allowing our wave-based technology to be combined with all manner of platforms and tools, we aim to revolutionize what can be done with AI and how audio equipment manufacturers and other industries conceptualize, design, and optimize their products. Not only will our SDK grant them the freedom to test and iterate with virtual soundscapes, but it will also provide invaluable insights and data for AI to optimize acoustic properties, fine-tune audio equipment, and deliver a truly immersive listening experience for customers," comments Finnur Pind, CEO and co-founder of Treble.

"Our goal with this product is to enable the next generation of AI-based audio through better training data. Imagine being able to easily set up thousands of virtual scenarios: homes, offices, restaurants, meeting rooms, etc., all with detailed geometric features and realistic material properties. Have them populated with realistic sound sources such as humans, loudspeakers, and noise sources, as well as real receivers such as human listeners and multi-microphone audio devices. Then being able to simulate all these virtual sound scenarios in parallel in the cloud with ultra-high precision, thanks to Treble’s proprietary wave-based sound simulation technology. The data can then be used to train various audio machine learning models," Finnur Pind adds.
www.treble.tech


 
About Joao Martins
Since 2013, Joao Martins has led audioXpress as editor-in-chief of the US-based magazine and website, the leading audio electronics, audio product development, and design publication, also working as international editor for Voice Coil, the leading periodical for...
