Following the launch of its software platform, which leverages scientific breakthroughs in acoustics and simulation to provide hyper-accurate virtual prototyping, Treble Technologies is now expanding its offerings with a cloud SDK.
Able to integrate with a variety of platforms and tools, Treble's new SDK has many use cases, most notably in the audio industry for testing and designing audio devices such as headphones, speakers, and microphones. It will also extend to the architectural and engineering industries, where the SDK can be used to analyze and shape acoustics in automotive and building designs.
According to the company, beyond architectural or hardware design, Treble's SDK will also enable developers to generate synthetic audio data for training advanced AI software such as voice recognition, spatial audio, and VR-based audio technologies. The SDK allows developers to synthesize hyper-realistic datasets encompassing thousands of customized audio scenes by configuring sound sources, receivers, environments, and materials. The end result is that users will be able to train and test their AI algorithms and products in thousands of different meeting rooms, open-plan offices, restaurants, and living rooms, all with varying furnishings and details.
"The high fidelity acoustic data generated within these virtual environments will allow for the training, testing and validation of audio machine learning models," the company states. Such algorithms are becoming increasingly popular as developers seek to integrate artificial intelligence (AI) into the development of their audio devices. "Through its plethora of virtual environments and ability to replicate real-world acoustic behavior, Treble’s platform is capable of generating significant volumes of data on sound which proves invaluable when training AI to recognize potential problems with designs. Specifically, the synthetic audio data can be used to train algorithms such as speech recognition, speech enhancement, source localization, echo cancellation, beamforming, noise suppression, de-reverberation, and blind room estimation," they add.
"Our goal with this product is to enable the next generation of AI-based audio through better training data. Imagine being able to easily set up thousands of virtual scenarios: homes, offices, restaurants, meeting rooms, etc, all with detailed geometric features and realistic material properties. Have them populated with realistic sound sources such as humans, loudspeakers and noise sources, as well as real receivers such as human listeners and multi-microphone audio devices. Then being able to simulate all these virtual sound scenarios in parallel in the cloud with ultra high precision, thanks to Treble’s proprietary wave-based sound simulation technology. The data can then be used to train various audio machine learning models," Finnur Pind adds.
www.treble.tech