This article explains how recent engineering efforts are giving musicians cost-effective, ultra-low latency wireless connectivity, which, by making in-ear monitoring practical, ultimately helps protect the performer’s hearing.

When music is transmitted over real-time wireless digital links, as in live shows, music production, virtual choirs, and similar scenarios, the performer’s experience is shaped by latency.
From the perspective of the music consumer, we are entering what may be viewed as the golden era for Bluetooth. Use of this technology to consume and enjoy media is ubiquitous, and while the underlying protocols have not materially changed in many years, the gradual adoption of Bluetooth LE Audio, and its new features, should extend the dominance of this ecosystem well into the future.
While the Bluetooth Classic protocols have been relatively stable in recent years, improvements in user experience have still arrived in the form of new compression algorithms and codecs, which have substantially improved the quality of the delivered audio.
Unfortunately, latency had been the poor cousin until the launch of Bluetooth Low Energy (LE) Audio, which, through various link-layer techniques and the adoption of a new codec, the Low Complexity Communication Codec (LC3), significantly reduces single-link latency from the roughly 100ms typical of Bluetooth Classic to between 20ms and 30ms on an optimized Bluetooth LE Audio link.
While this is a step-change in the right direction, it still leaves off-the-shelf Bluetooth-based solutions unsuitable for many live performance use cases. Latency performance is especially critical in those common scenarios where audio must traverse multiple links and be subject to other sources of delay (e.g., digital effects) prior to its real-time feedback to the performer. While amateur/solo musicians may tolerate a single 20ms lag (which is, after all, the time it would take sound to propagate naturally across a medium-sized stage), by the time this compounds across multiple links it is almost inevitably unacceptable.
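To illustrate how quickly this compounds, the short C sketch below totals the delay a performer might experience when a wireless microphone link, digital mixing and effects, and a wireless IEM link are chained together. The per-stage figures are illustrative assumptions only: two LE Audio links at roughly 25ms each (the 20ms to 30ms range quoted above) plus a few milliseconds of processing.

#include <stdio.h>

/* Hypothetical latency budget for a chained monitoring path.
 * All per-stage figures are assumptions for illustration. */
int main(void)
{
    const double mic_link_ms = 25.0;  /* wireless mic to mixer (LE Audio link) */
    const double dsp_ms      = 4.0;   /* mixing console and digital effects (assumed) */
    const double iem_link_ms = 25.0;  /* mixer to in-ear monitor (LE Audio link) */

    double total_ms = mic_link_ms + dsp_ms + iem_link_ms;
    printf("Delay back to the performer's ears: %.1f ms\n", total_ms); /* ~54 ms */
    return 0;
}

At roughly 54ms, the result sits well outside the tolerances reported in the research discussed next.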
How Low Is Low Enough?
Even as Bluetooth LE becomes more ubiquitous, the same argument applies. While Bluetooth technology does not provide a complete solution for live performance audio, it still has a key role to play. In particular, the value of a standards-based approach shows in the availability of low-cost, functionally comparable Bluetooth LE-compatible transceivers from a variety of SoC vendors. Building on these readily available transceivers, we can engineer systems that are strongly optimized for live performance applications. But what latency performance targets do we actually need to meet?
Given that latency, unlike audio quality, can be measured empirically, let’s begin by outlining where delays are introduced in a high-quality wireless audio link. It is worth acknowledging first, however, that there are schools of thought in which 10ms here or there doesn’t really matter, along with others at the opposite end of the spectrum who consider anything over zero milliseconds intolerable.
An Audio Engineering Society (AES) paper reveals how the type of musical instrument and the monitoring environment affect the perception of latency. The authors conducted a subjective listening test to identify how intolerable various amounts of latency are for performers in live monitoring scenarios. They found that the perception of latency depends on the type of musical instrument and the monitoring environment (wedges vs. in-ear monitors, or IEMs). The experiment showed that acceptable latency ranges from less than 1.4ms up to 42ms, depending on the instrument and monitoring setup.
Vocalists can tolerate the least amount of latency (less than 3ms), followed by drummers (less than 6ms), pianists (less than 10ms), guitarists (less than 12ms), and keyboardists (less than 20ms).
What else do we need to consider? Even without a wireless link, a one-way audio chain can have a total latency of 10ms to 20ms.
Why does this happen? The latency is largely due to buffering, which arises at several points in the chain (a worked example follows this list):
- Conversions: Typically between the analog and digital domains, or between the time and frequency domains.
- Inconsistent block size across a chain of processing modules: Various digital signal processing algorithms work on blocks of samples, rather than one sample at a time, which requires buffers processed as a single unit. If the block size isn’t consistent across a chain of processing modules, further buffering is needed.
- The challenges of transferring between systems: Wired or wireless transfer of data between systems — such as between different software algorithms, different chips within a single product, different products, or different locations over a network—requires buffers. Transfers typically happen in blocks and can require retransmissions. This can be further compounded when the data size of the radio transfers doesn’t match the typical size of the audio blocks—requiring further buffering.
- Mismatched clocks: Some systems require crossing clock domains to transfer data, which can need additional buffers to handle the slightly different rates, and the processing required. Fortunately, some professional systems are designed to run from a common clock to avoid this problem.
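To make these contributions concrete, the C sketch below adds up a hypothetical one-way chain at a 48kHz sample rate. Every block size and converter delay here is an assumption chosen for illustration rather than a figure from any particular product.

#include <stdio.h>

/* Hypothetical one-way audio-chain latency budget (no wireless link).
 * All figures are assumptions for illustration only. */
int main(void)
{
    const double fs_khz = 48.0;                  /* 48 kHz sample rate */
    const double adc_dac_ms = 1.5;               /* converter group delay (assumed) */
    const double dsp_block_samples = 256.0;      /* DSP works on 256-sample blocks */
    const double transfer_block_samples = 128.0; /* inter-chip transfer block size */
    const double clock_fifo_samples = 64.0;      /* FIFO bridging mismatched clocks */

    double dsp_ms      = dsp_block_samples / fs_khz;      /* ~5.3 ms */
    double transfer_ms = transfer_block_samples / fs_khz; /* ~2.7 ms */
    double fifo_ms     = clock_fifo_samples / fs_khz;     /* ~1.3 ms */

    double total_ms = adc_dac_ms + dsp_ms + transfer_ms + fifo_ms;
    printf("One-way chain latency: %.1f ms\n", total_ms); /* ~10.8 ms */
    return 0;
}

Even these modest, single-buffer assumptions land at the bottom of the 10ms to 20ms range quoted above, and deeper buffers or cascaded processing push the figure up quickly.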

New Technology Can Lower Analog-to-Analog Latency to Below 3.5ms
The good news is that we can achieve ultra-low latency digital wireless audio using standards-based transceivers and low-power links, thanks to innovations such as Virscient’s LiveOnAir, a technology introduced in 2023 for wireless microphones and IEMs. LiveOnAir is a proprietary protocol stack that fine-tunes the relationship between buffer size, packet size, and over-the-air transport. A handshaking mechanism hands the connection over from Bluetooth LE to LiveOnAir; once enabled, the protocol can reduce audio latency by more than 80%, depending on the choice of codec.
Already, LiveOnAir can support various proprietary and well-known codecs, allowing systems to find their sweet spot among the engineering trade-offs between latency, reliability, bandwidth, and concurrency. For example, with the ultra-low latency Skylark codec, LiveOnAir can deliver an analog-to-analog latency below 3.5ms using a Bluetooth LE-compatible physical layer operating in the 2.4GHz spectrum.
Skylark is an excellent option for applications needing the very lowest latency: its encode/decode processing delay of just 1.8ms leaves maximum room in the budget for retransmissions to account for packet errors. Furthermore, because Skylark is designed specifically for such applications, it is intrinsically quite tolerant of bit errors and the occasional lost packet.
As an alternative, selecting the LC3plus codec within LiveOnAir trades some latency for significantly reduced MIPS and increased bandwidth flexibility. With this option, links with analog-to-analog latency down to 10ms are achievable, and multi-channel or bidirectional audio solutions can be delivered using low-cost hardware.
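As a rough comparison, the C sketch below lays out the two options as simple latency budgets. The Skylark encode/decode delay of 1.8ms and the overall targets of below 3.5ms and around 10ms analog-to-analog come from the figures above; how the remaining budget splits between converters and over-the-air transport (including retransmission slots), and the LC3plus codec delay, are assumptions for illustration.

#include <stdio.h>

/* Illustrative analog-to-analog latency budgets for the two codec options.
 * Codec delay for Skylark and the overall targets follow the article;
 * the converter/air splits and the LC3plus codec delay are assumed. */
struct budget {
    const char *codec;
    double codec_ms;       /* encode plus decode processing delay */
    double converters_ms;  /* ADC and DAC group delay (assumed) */
    double air_ms;         /* radio transport incl. retransmission slots (assumed) */
};

int main(void)
{
    const struct budget options[] = {
        { "Skylark", 1.8, 0.5, 1.1 },  /* targets < 3.5 ms analog-to-analog */
        { "LC3plus", 5.0, 0.5, 4.5 },  /* targets ~10 ms analog-to-analog */
    };

    for (unsigned i = 0; i < sizeof options / sizeof options[0]; i++) {
        double total_ms = options[i].codec_ms + options[i].converters_ms + options[i].air_ms;
        printf("%-8s total: %.1f ms\n", options[i].codec, total_ms);
    }
    return 0;
}

The split shown is only one plausible allocation; the point is that moving from roughly 3.5ms to roughly 10ms frees considerable headroom for retransmissions, additional channels, or bidirectional traffic.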
But the real beauty of the LiveOnAir solution is the combination of a flexible ultra-low latency wireless audio protocol and a suite of codecs that can operate on off-the-shelf SoCs and enjoy the cost benefits and reliability offered by proven technology. For example, its wireless microphone reference design is based entirely on Nordic Semiconductor’s nRF5340 BLE SoC, which runs both the protocol stack and codecs.
The nRF5340 is an all-in-one SoC that offers a superset of the most prominent nRF52 Series features. Bluetooth 5.2, high-speed SPI, QSPI, USB, and operation at temperatures up to 105°C are combined with more performance, memory, and integration, while minimizing current consumption.
The application processor is optimized for performance and can be clocked at either 128MHz or 64MHz, using voltage-frequency scaling. It has 1MB Flash, 512KB RAM, a floating-point unit (FPU), an 8KB two-way associative cache and DSP instruction capabilities. The network processor is clocked at 64MHz and is optimized for low power and efficiency (101 CoreMark/mA). It has 256KB Flash and 64KB RAM.
In the wireless microphone reference design, LiveOnAir makes efficient use of the nRF5340’s dual-core architecture by deploying the optimized low-layer protocol stack on the network processor, and running the Skylark or LC3plus codec and upper application layers on the application processor. The integrated USB 2.0 interface supports device firmware update and USB audio scenarios, while I2S/I2C interfaces provide flexible connectivity to an ADC and/or DAC for high-quality audio reproduction. This system is a turnkey reference solution for vendors looking to design audio transport products with analog-to-analog latency down to 3.5ms.
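As a schematic view of that partitioning, the C pseudocode below shows how a receive path might be split across the two cores: the network processor handles the radio protocol and forwards packets over inter-processor communication (IPC), while the application processor decodes each frame and streams samples out over I2S. None of the function names here are real SDK or LiveOnAir APIs; they are hypothetical placeholders for illustration.

#include <stddef.h>
#include <stdint.h>

#define FRAME_SAMPLES 96  /* assumed codec frame size in samples */

/* Hypothetical placeholders provided elsewhere in the firmware: */
extern size_t ipc_receive_packet(uint8_t *buf, size_t max_len);           /* from the network core */
extern size_t codec_decode(const uint8_t *pkt, size_t len, int16_t *pcm); /* Skylark or LC3plus */
extern void   i2s_write_block(const int16_t *pcm, size_t samples);        /* out to the external DAC */

/* Application-core audio task: radio packet -> decoded PCM -> I2S output. */
void audio_rx_task(void)
{
    uint8_t pkt[256];
    int16_t pcm[FRAME_SAMPLES];

    for (;;) {
        size_t len = ipc_receive_packet(pkt, sizeof pkt); /* blocks until the network core forwards a packet */
        size_t n = codec_decode(pkt, len, pcm);           /* decode exactly one codec frame */
        i2s_write_block(pcm, n);                          /* keep the DAC fed with minimal extra buffering */
    }
}

In a real system, keeping this loop tight, with as little as a single frame of buffering per stage, is central to holding the end-to-end figure in the low single-digit milliseconds.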

What Does This Mean for End Users?
While it’s not up to me to say if this level of latency is enough — only performers can make that call — these changes make it more appealing for them to remove cables for mics and protect their hearing with IEMs.
What’s equally exciting is that, if we have cracked one of the most demanding use cases in live music, gamers and DJs can also enjoy wireless audio connectivity without being held back by latency. We’ve come a long way.

Resources
“LiveOnAir, <3.5ms Latency for Wireless Audio,” Virscient, www.virscient.com/solutions/liveonair
“New High Quality, Low Latency Digital Wireless Transmission Solution Available from Audio Codecs,” audioXpress September 2022
“Virscient Announces LiveOnAir Ultra-Low Latency Wireless Audio Technology,” audioXpress May 2023
About Virscient
Virscient pushes the boundaries of technology innovation for embedded systems and wireless connectivity in challenging environments, including automotive, professional audio, and marine. With five locations around the world, Virscient works with the world’s leading semiconductor and product companies, which choose the firm for its deep expertise in wireless and connectivity. In every strategic engineering project, Virscient develops secure embedded software for connected systems, designs hardware from silicon to PCB/product level, and supports all other aspects of the connectivity journey, from technology selection through to product RF and interoperability certification. www.virscient.com
This article was originally published in audioXpress, December 2024