The Future of Seamless Audio According to Qualcomm

March 20, 2025, 18:10
In 2024, during the Mobile World Congress in Barcelona, Qualcomm made a series of spectacular platform announcements, including a major upgrade to its audio products, and in particular to its Snapdragon Sound range.

Those included the Qualcomm S3 Gen 3 and Qualcomm S5 Gen 3 SoCs, designed to allow OEMs to deliver new sound experiences in the mid and upper tiers. All S5 and S3 Gen 3 devices support Bluetooth LE Audio and Auracast, and even the mid-tier S3 Gen 3 SoCs allow customizations with third-party solutions from the Qualcomm Voice & Music Extension Program. Justifiably, this caused excitement among OEM/ODMs and developers.

The tremendous boost that the Qualcomm S5 and S3 Gen 3 SoCs are bringing to the mobile and consumer segments was clearly on display in the impressive examples of TWS earbuds, wireless headphones, and even AR glasses launched recently, all showcased at the Qualcomm booth at MWC 2025. The first report from the show was included in The Audio Voice newsletter and is now available here.

MWC 2024 was also where Qualcomm showcased for the first time its completely new premium tier, with the cutting-edge S7 and S7 Pro Gen 1 Snapdragon Sound platforms. Together, these updated sound platforms delivered the most powerful architecture and resources yet to elevate true wireless earbuds, headphones, and speakers.
 
Qualcomm booth at MWC 2025.
The Qualcomm TWS earbuds showcase at MWC 2025 was packed with products that were just recently launched and others we had only seen at trade shows. All making use of the latest S3 and S5 Gen 3 platforms.

Just the Beginning
One year later, audioXpress returned to Barcelona for MWC 2025, where there were actual product announcements based on the S7 Pro platform, with the Xiaomi Buds 5 Pro announcement effectively bringing to market the whole series of technology breakthroughs enabled by the platform. audioXpress readers have all the details available here.

In my 2024 interview with John Turner, Senior Director, Product Management Voice and Music at Qualcomm Technologies, we discussed the roadmap for the new S7 Pro and the fact that these new Snapdragon platforms broke away from Qualcomm's Kalimba DSP to embrace the Tensilica HiFi architecture, another major factor in many developers embracing these latest Qualcomm solutions. But the major story was the introduction of the Qualcomm XPAN (Expanded Personal Area Network) Micro Power Wi-Fi technology, which promised to enable many exciting new possibilities, including finally enabling lossless high-resolution audio streaming.

Qualcomm XPAN is a completely new Snapdragon connectivity technology that unlocks breakthrough range and audio quality, bringing together the best of Bluetooth and Wi-Fi. It essentially optimizes the connectivity experience as the user transitions between home, car, work, and out and about, staying connected to their devices directly over Bluetooth or via Wi-Fi. XPAN technology enhances audio streaming quality and seamlessly scales the codec rate as the Bluetooth link changes to ensure no audio dropouts or glitching.
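To make the codec-scaling idea concrete, here is a minimal sketch of how an adaptive link manager could step quality up or down with available link bandwidth. This is purely illustrative, not Qualcomm's implementation; the tier names, bit rates, and headroom factor are hypothetical.

```python
# Hypothetical sketch of adaptive codec-tier selection: step quality
# up or down with link capacity so the stream never drops out.
# Tiers, rates (kbps), and the 20% headroom factor are assumptions.

TIERS = [
    ("lossless 24-bit/96kHz over Wi-Fi", 4200),
    ("aptX Adaptive high quality", 860),
    ("aptX Adaptive low latency", 420),
    ("standard Bluetooth fallback", 280),
]

def pick_tier(available_kbps: int) -> str:
    """Return the highest-quality tier the link can sustain,
    keeping ~20% headroom so brief dips don't cause glitches."""
    for name, rate in TIERS:
        if available_kbps >= rate * 1.2:
            return name
    # Worst case: stay on the lowest tier rather than go silent.
    return TIERS[-1][0]

print(pick_tier(6000))  # strong Wi-Fi link: lossless tier
print(pick_tier(500))   # congested Bluetooth link: fallback tier
```

The key design point the paragraph describes is that the transition is continuous from the listener's perspective: the selector always returns some playable tier, so quality degrades gracefully instead of the stream stopping.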

For now available on audio devices using the Snapdragon S7+ Gen 1 Sound platform, and on smartphones based on the Snapdragon 8 Gen 3 and Snapdragon 8 Elite, XPAN maintains the same low-power performance users have come to expect from Bluetooth earbuds. In fact, the Xiaomi Buds 5 Pro launched at MWC 2025 are said to be able to stream 24-bit/96kHz lossless audio over Wi-Fi at lower power than would be possible using Bluetooth. Even better, XPAN is designed to work with existing Wi-Fi infrastructure without special software or features, supporting the 2.4GHz, 5GHz, and 6GHz bands.
 
A year after Qualcomm announced its S7 Pro Snapdragon Sound platform with micro-power Wi-Fi, Chinese manufacturer Xiaomi unveiled the new Xiaomi Buds 5 Pro Wi-Fi earbuds at MWC 2025.
This premium model with a triple coaxial driver system offers hybrid active noise cancellation (ANC) up to 55dB and a built-in voice recorder with support for AI-powered transcription. The highlight was obviously the first availability of Qualcomm XPAN technology, enabling 24-bit/96kHz lossless audio with an expanded 4.2Mbps bandwidth over Wi-Fi.
Exciting Features to Come
These exciting prospects for the audio industry encouraged us to reach out to Qualcomm once again to discuss wireless product development with these new Snapdragon Sound platforms, as well as the state of Bluetooth LE Audio and Auracast adoption and the progress of the development tools and frameworks.

An accomplished semiconductor industry technologist with an extensive perspective on all these topics, Dino Bekis offered fascinating insights in our conversation into the roadmap for the audio industry, and we were even able to learn about Qualcomm's perspective on future developments such as Bluetooth High Data Throughput (HDT) and support for wireless multichannel audio for immersive and spatial applications.

During MWC 2025, Qualcomm also demonstrated its direct-to-cloud concept, whereby TWS earbuds and wireless headphones will be able to have audio streamed directly from the cloud, and users can interact with AI assistants without the need for a connected smartphone. And with the company's vision for hearables, there was also the first opportunity to see a joint AI headphones demo with Bragi, expanding on the strategic collaboration announced in 2024. This collaboration brings support for Bragi’s AI User Interface and integrated apps, as part of the Qualcomm Voice and Music Extension Program.

As Dino Bekis wrote on social media about these initiatives and the company’s presentations at MWC 2025, "This is just the beginning of how we're changing the way the world thinks about audio. While others iterate, Qualcomm innovates... making the impossible inevitable!"
 
Dino Bekis, Qualcomm's Vice President and General Manager, Wearables & Mixed Signal Solutions Business Unit.

J. Martins: A year ago at MWC 2024 I was here interviewing John Turner, and we had a great conversation covering a lot of ground on the new solutions and technologies being introduced. At MWC 2025, we now have actual products being launched that use those solutions. Can you share how developers have responded over the past 12 months?

Dino Bekis: Absolutely. As a company we wanted to make sure that we were focused on outcomes, and we were focused on being able to deliver a user experience that really was exceptional. And so - beyond the internal development work that we did, both on the hardware and the software - what we did was to look at what companies and most consumers were looking for. We've been talking about XPAN for about a year, and one of the discussions around that was... Really, what's the experience you want to deliver?

So that's where we started.

And I think there were a couple of key pillars in that. The first was to make sure that we don't detract or remove anything from what people already experience today in terms of ease of use, connecting to the phone... There's always room for improvement there, of course, and we're trying to work through a number of channels on that.

In addition to audio, I also have a responsibility for our Snapdragon Seamless initiatives. And there, we're really working with ecosystem folks like Microsoft, Google and others - in addition to OEMs - to try to come together with an industry standard approach to enable a lot of these experiences across vendors, across OSs or across ecosystems.

As part of that, we thought XPAN was one additional vehicle to try to augment. Like, for example, being able to move away from your source device to an almost unlimited distance - whatever the distance where you still have Wi-Fi available. And the reason I say "almost unlimited" is because in some cases, it could be your home, but in some cases, it could be a whole building, campus-wide, or even Wi-Fi over an area of the entire city. So, in those scenarios, your phone could be at home, and you could be walking around, go to a restaurant, go shopping, and your device could still be connected, right? In principle.

So, we started with that and we said, okay, as you look through that journey, can we make sure that it is both a high-quality and an uninterrupted audio experience?
 
Qualcomm_Snapdragon-Seamless-Web
Qualcomm Snapdragon Seamless is a device-to-device connectivity ecosystem that mirrors Apple's Continuity, allowing products running Android or Windows on Snapdragon processors to drag and drop files, have earbuds intelligently switch sources, and project images and content from a phone to augmented-reality glasses. This is becoming reality with support from Google and Microsoft, as well as Honor, Lenovo, Oppo, and Xiaomi.
In other words, there's a lot of complexity and technology working with Bluetooth, working with peer-to-peer Wi-Fi, working with roaming over access points, etc. But from a user perspective, you don't have to worry about any of that. All of that is pushed to the background, and all you hear is uninterrupted music or audio as you're going. And depending on conditions, you can scale up to very high audio quality, 24-bit/96kHz, and all the way down to standard Bluetooth.

Then if you have a Qualcomm solution today in the phone or the source device, you can actually run lossless. Eventually, as we look at Bluetooth 7.0 - and we're looking forward to that - we're looking to take advantage of HDT as a standardized approach to support higher bit rates over Bluetooth.

We look at this XPAN technology as something for longer term, and we are looking at how we could open it up to the broader market, making it very much a standard.

We know that there are Bluetooth initiatives for the 5GHz and 6GHz bands, but we think that taking advantage of Wi-Fi and all the advantages it brings in some cases over Bluetooth, and making that more of a de facto or open standard, is something that would benefit the entire industry.

So those are the core pieces.

Last, but not least, I would say, is battery life. When you put Wi-Fi in a headset with a 60mAh or 80mAh battery, or maybe less, people are still expecting eight hours of always-on audio. For the product that was announced, Xiaomi was saying that they could actually get more battery life with (Micro Power) Wi-Fi than what they can do with a Bluetooth-only solution.

So, we're very proud of that.

This micro-power Wi-Fi architecture is something very innovative that we've spent about six years developing inside Qualcomm. And we will continue to evolve it as we go forward. Today in 2.4GHz and 5GHz, tomorrow adding 6GHz. And then addressing additional features like supporting the latest Wi-Fi standards because, for example, if you use Wi-Fi 6 or even Wi-Fi 7, you expand the number of clients you can support, and you have built-in power-saving features like TWT (Target Wake Time). There are many advantages.
 
XPAN leverages both Bluetooth and micro-power Wi-Fi so users can benefit from the best audio streaming and connectivity experience when transitioning between home, work, and on the move. This is a significant development, given that micro-power Wi-Fi not only avoids compromising battery life, but also enables a seamless streaming transition from the limited range and data rates of Bluetooth to the expanded range and bandwidth of Wi-Fi networks, ideal for leveraging Qualcomm's aptX Adaptive codecs at up to 4.2Mbps.

JM: It seems things are happening faster on that front. There has also been a lot of talk about Bluetooth HDT and potential support for multichannel audio applications. Could Micro Power Wi-Fi enable a home multichannel use case? And will the development tools also support wireless speakers, for example?

Dino Bekis: Oh, absolutely. I mean, that's in fact our vision.

If you're going to do something brand new, it's got to be an exceptional user experience. It can't be anything less than that.

That's one of the reasons why we're so maniacally focused on delivering an end-to-end solution, right? But once you've done that, once you've defined the parameters and sorted the limits, then you can start very quickly iterating on new form factors and, as I mentioned earlier, opening up the standard to enable more and more devices.

So, we started with earbuds; headphones are definitely there (it's not even a question); and smart speakers or general Bluetooth speakers are going to be there. And as you expand into Wi-Fi speakers, you could imagine the ability to do almost a self-configuration of multichannel audio, or spatial audio kinds of arrangements. Running some things on Bluetooth, and running some things over Wi-Fi.

So, that's where we see this going.

The tools are built in a way that we enable some of the early use cases and it's going to be kind of a roadmap of continuous improvement. Right now, with the same tool chain, same chipset, same SDK or ADK... a customer can come in, write software, and simultaneously deliver earbuds, headphones, and smart speakers. That is the initial vision.

Moving forward - and something we've demoed here and I'm very excited about - is to really get rid of the leash. You know, we had a short leash before, now we have a long leash to some other device, but can we get rid of the leash? Can we completely unshackle the earbuds? And really, what that translates to - and it's a lot more complicated than just saying, oh, we'll just disconnect it - it's really about turning what people consider today to be an accessory device into a standalone platform. 

And there we have the same architecture, the same approach that we're taking with XPAN technology, and we can apply it in what we're calling a direct-to-cloud model. Which for us is quite simply a service-driven model: How do we deliver services directly to the endpoint?

The complication there is you need to build effectively your own OS on top of an RTOS, right? You need to develop a concept of virtual machines or containers. How do you segregate data? And how do you get these clients ported into that environment?
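The segregation problem Bekis raises, namely running multiple clients on top of an RTOS so that one app cannot read another's data, can be sketched in miniature. The following Python toy model is not Qualcomm's implementation; the class and app names are hypothetical, and it only illustrates the per-container data isolation concept:

```python
# Toy model (hypothetical, not Qualcomm code): each installed client
# gets its own container with a private key-value store, so data is
# segregated between apps running on the same small runtime.

class Container:
    def __init__(self, app_name: str):
        self.app_name = app_name
        self._store = {}  # private to this container only

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)  # returns None if absent


class Runtime:
    """Minimal owner of containers, one per installed client."""
    def __init__(self):
        self._containers = {}

    def install(self, app_name: str) -> Container:
        container = Container(app_name)
        self._containers[app_name] = container
        return container


rt = Runtime()
music = rt.install("music_client")  # hypothetical app names
agent = rt.install("voice_agent")
music.put("token", "abc123")
print(agent.get("token"))  # prints None: stores are segregated
```

A real hearable runtime would enforce this boundary in hardware or via the RTOS memory protection rather than by object scoping, but the contract is the same: a client ported into the environment sees only its own state.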
 
Direct-to-cloud demonstration using headphones at MWC 2025. Qualcomm is promoting its Snapdragon S7+ Gen 1 Sound Platform to bring ultra-low-power compute and Wi-Fi connectivity to hearables, enabling smart applications that connect directly to cloud services.

With S7 Pro, we've built significant capability in both compute and memory, and an embedded NPU, in addition to the connectivity we've just been talking about and the great audio capability that's always been in our products. And with that, we think we've got a unique capability to deliver a direct-to-cloud model. You've got your IP address in your earbud, and the benefit of that is that for us, it's the same thing whether you're using agents to activate applications, or whether you just want the applications themselves.

So, for example, putting an agent-client interface in your earbud that can run in low-power mode, so you can just activate it by voice and then have it go up and do whatever queries. Or the ability, for example, to run something like a Spotify client or any other music client in the earbud, so that to the service it doesn't matter whether I'm talking to an earbud, or whether I'm interfacing with a phone or a tablet or a PC. Right?

If I'm just using Spotify, as an example, the service just knows there's a Spotify client there. So that's the same concept we're trying to drive. Eventually we're looking to actually have a proper product SDK that we can make available to our customers. And with that, to open up a bit of the universe. You can have hybrid models where you're still maybe connected to devices, and you have some standalone functions. For example, for always-on AI assistance: you still have your device-to-device connection, but there's always an AI assistant that you can access to get some information.

Sure, there's always a benefit to having a screen, but I think that in itself is a powerful model, because as LLMs get better, as voice authentication and natural speech recognition become commonplace, the most natural way to interact with our devices is by touching things, maybe doing gestures, but certainly using voice.
 
Development platforms and examples of available products leveraging the latest S7 Pro Snapdragon Sound solution.
JM: In that context, you've partnered with companies such as Bragi for these demonstrations. How do you see the level of connection? Bragi is promoting an App Store that hosts software from other developers. There's also the example of Sonical, which is trying to create a whole "OS for hearables." How do you see the partnership with those companies? Do you see them as developers that can be part of the core system?

Dino Bekis: I think so. Currently our products run on our RTOS, right? So, really, what we're talking about is what do we layer on top. Some companies, like Sonical, bring some aspects of those upper layer controls. I guess in the limit you could also think of it as a shrunk version of Android, that would also offer the system resources you need. Of course, there are practical limitations of how much memory, how much compute power you want to run. 

When you look at our demos - which are really an attempt to showcase the art of the possible - for example the Bragi demo with an App Store, there's going to be a number of different clients you're going to want to download, right? How are we going to manage those? Bragi is an example of a storefront. Any OEM can "skin" that storefront to offer their own apps. It can be a generic app store, and with the ability to run applications in containers, you have the ability to take advantage of an app store as these clients grow. That's one approach...

The other approach is, "I want to build my own bespoke apps." For that we're working with another company called Microedge, which is really about enabling those virtual machines and containers. These are just examples of partners we already work with. But we would like to expand that ecosystem of partners or independent software vendors (ISVs).

Eventually, the ability to have multiple applications or multiple agents that are running can be done through a collection of different ISVs. We'll also work directly with OEMs, because some are a little larger and they want to drive a complete vertical implementation. 

We're looking to offer the most optionality and the most flexibility to developers and customers. Do you want to port specific apps, or do you want to have a storefront? We can accommodate anything in between or both of those extremes. It's really all about what business model do you want to run on. We view it as an open platform, and we're just trying to show the spectrum of what's possible. That's how I look at our demos.


JM: Last year you introduced the S3 Gen 3, S5 Gen 3, and S7, and we saw different reactions from companies and developers. Can you give us a sense of adoption, particularly for DSP applications?

Dino Bekis: If you rewind the clock before S7, before the new S5, we had a very long history of a proprietary DSP architecture that we had done (Kalimba). And that was serving us very well in terms of performance, capability, all that.

But we saw a couple of drivers to dramatically scale performance and the power/performance equation, and to make sure we can take advantage of the latest innovations happening in the tool chain. Also, how can we make the developer's experience easier?

When I look at S3 Gen 3, that's more about, "How do we improve on the low end, get a very optimized solution, and continue driving the performance/cost leadership equation?" That's actually been doing quite well, and it helped us gain share, especially in markets where over the years people assumed Qualcomm could exit. Actually, the opposite has been true. We've doubled down and our share has grown in a very big way.

When we looked at S7, there was an opportunity there to allow customers to offer some really innovative experiences, including more adaptive ANC, and significant embedded NPU capability for interesting use cases. But knowing that's going to be a premium solution. So, the S7 was going to be our flagship capability in the sound space. It was about setting a new tier.

It definitely added many more capabilities than prior generations of S5, so we felt it deserved something different as a way to differentiate us long term.

So, in terms of roadmaps, we have a set of plans that'll continue making a distinction between what S7 means to customers, as a brand, or as a tier of product, versus S5. But we're not lowering our capability on S5, we're just dramatically increasing it on S7.

With the latest generation S5 we launched, again, that was about how do we quickly waterfall the same architectural approach? How do we leverage the tools and everything? Because if developers build something for a premium-tier solution, our customers want to be able to quickly offer multiple SKUs of that product. Whether it's for different price points, different markets, or different use cases. S5 is a much more optimized solution that still offers all the benefits of portability and reuse.

And so yes, that's a shift for our customers, but as part of our ADK development tools, I think we're trying to make that a simpler shift and make that transition more seamless. In the long term, we intend to leverage these industry standard tool chains and cores that we can scale very easily and eventually take that down to the S3 tier. We still don't have a clear date for that, but it makes it easier for people to develop across the entire lineup of products.
 
In this exclusive interview with audioXpress, Dino Bekis provides a unique perspective of Qualcomm's vision to create new higher-quality wireless experiences, where the technology is pushed to the background from the user perspective.

JM: Talking about product tiers, unlike what happened at CES, among the mobile companies exhibiting here at MWC 2025, Auracast was not yet visible...

Dino Bekis: Actually, I feel that Auracast is mainstream already... I understand that it may not be percolating down in the consumer space, but LE Audio is already here, right? Auracast is already here.

The harder question is more along the lines of Bluetooth dual mode. From our perspective, there's a need to continue supporting legacy devices. For device-to-device connections you can go LE Audio only. But if you need to interface with a spectrum of products that may be in the market for 10 years or more, I think LE Audio and Classic are going to be required.

I do expect that to continue for a while. Our intention is to continue to support dual mode. But of course, the shift is happening, right?


JM: Auracast is actually building strong momentum with solution providers meeting the early deployments in public spaces, but those companies are predominantly in the commercial audio space. Is that something that Qualcomm is also going to offer?

Dino Bekis: Yeah, that's a good question. We have been sticking to enabling it all on our source platforms. So, when you're looking at our PCs and our smartphones, that's all part of the equation. We are actually developing dedicated dongle solutions, and we have solutions in the market today. For example, we have our very low latency dongle, which uses our QCC3086 and QCC3083 products, announced in the summer of 2022.

We've already taken to market dongles that use both Bluetooth Classic and LE, and we use LE for low-latency gaming. And our intention is to continue doing that. Are we thinking about delivering more purpose-built solutions for professional audio equipment? I think we're delivering the technology so that our customers may choose to build that. It hasn't been a core focus for us, and we're seeing it happen. People are surprising us in how they're taking our dongle solution and putting it into devices, an example being in-flight entertainment on a plane.

Our view is that if we can make it flexible, everyone can build it, right? aX

This article was originally published in The Audio Voice newsletter, (#508), March 20, 2025.
About Joao Martins
Since 2013, Joao Martins has led audioXpress as editor-in-chief of the US-based magazine and website, the leading audio electronics, audio product development, and design publication, working also as international editor for Voice Coil, the leading periodical for...
