OTC Hearing Aids: Is Innovation Leaving the Rule Behind?

February 15, 2023, 14:10
A wave of new developments promises to solve the most pressing need for people with milder hearing loss, allowing mass-market hearables to supplant over-the-counter hearing aids with less stigma.
 

To much fanfare in the US, the Food and Drug Administration’s (FDA) over-the-counter (OTC) hearing aid rule took effect in October 2022. Devices are now available both in stores and online. This is the culmination of a process that began in earnest five years earlier with the passage of the law requiring the FDA to create such a category for self-perceived mild to moderate hearing loss. Five years is an eternity in the wearables world, and in a March 2022 article [1] I suggested that the assumptions used to write the rule were becoming obsolete even as the process was concluding. A fresh look at developments illustrates how quickly this may come about. CES will be a showcase for the initial suite of OTC hearing aids, while at the same time providing clues to the coming post-OTC revolution.

Ground Shifting Under Our Feet
What we typically think of as a “hearing aid” is a device meant to compensate for damage to, or deterioration of, the cells of the inner ear that convert sound into impulses sent to the brain. Damage is usually selective, causing the ear to be more or less sensitive to sound at different frequencies. To compensate, a hearing aid can be programmed by a hearing care professional to provide varying levels of amplification at different frequencies according to an individual’s hearing loss profile. A modern hearing aid is a sophisticated device, providing more control than just the amount of amplification, but selective amplification remains the core feature (Figure 1).
 
Figure 1: Block diagram of a typical hearing aid processor [2]
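As a rough illustration of what frequency-dependent gain means in practice, the following Python sketch applies different amounts of boost to different frequency bands using a short-time FFT. The band edges and gain values are invented for illustration; a real hearing aid derives them from an individual’s hearing loss profile and adds compression and other processing on top.

```python
import numpy as np

# Hypothetical band edges (Hz) and gains (dB) for a sloping high-frequency loss.
BANDS_HZ = [(0, 500), (500, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]
GAINS_DB = [0, 3, 6, 12, 18]

def selective_amplify(x, fs, frame=512, hop=256):
    """Apply frequency-dependent gain with a simple STFT overlap-add scheme."""
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)

    # Build a per-bin gain curve from the band table.
    gain = np.ones_like(freqs)
    for (lo, hi), g_db in zip(BANDS_HZ, GAINS_DB):
        gain[(freqs >= lo) & (freqs < hi)] = 10 ** (g_db / 20)

    y = np.zeros(len(x) + frame)
    for start in range(0, len(x) - frame, hop):
        spec = np.fft.rfft(x[start:start + frame] * window)
        y[start:start + frame] += np.fft.irfft(spec * gain)
    return y[:len(x)]
```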

The new FDA rule defines an OTC hearing aid in the same way, specifically as a device that “must allow the user to cause frequency-dependent changes based on the user’s preference” [3]. An OTC hearing aid also provides control over the amplification profile but with settings determined by the end user rather than a hearing care professional, either through a hearing self-test or choice of profiles.

An unstated premise is that a device’s core functionality is determined by design and is therefore readily identifiable as a hearing aid out of the box. The FDA barely recognized the possibility that additional software might change the nature of a device, considering mainly that a traditional hearing aid might be given self-fitting capability later on by unlocking or downloading a program. Two basic principles therefore undergird the OTC hearing aid rule:
  • Selective amplification is needed to ameliorate mild to moderate hearing loss.
  • Such amplification must be provided for at the time of manufacture, even if a self-fitting component is activated by software afterward.

Both of these will prove untrue, bypassing the OTC rule in different ways and opening the door to increased innovation in hearing devices.

Amplification and the Cocktail Party Problem
People with more severe hearing loss require selective amplification to understand other people even in quiet settings. Others whose hearing loss is milder, or even nonexistent according to a standard hearing test, often hear fine in quiet settings but have difficulty in loud restaurants or pubs. This “cocktail party problem” is one of the most difficult to address because the interfering noise consists of other voices. The ability to discriminate between the voice one wants to hear and all the others is one of the most sophisticated functions of the entire auditory system, from ear to brain. It doesn’t take much to impair this ability, which is why a significant number of people without measurable hearing loss report difficulty hearing in noise (Figure 2) [4],[5].
 
Figure 2: The estimated US adult population with self-reported hearing difficulty and no audiometric hearing loss (box C) is shown here.

That leaves open the question of whether amplification is even necessary for people who are challenged only in loud situations. All that is lacking is an effective way to separate nearby speech from the din. The company that solves this problem will bypass hearing aid regulations entirely, opening up the possibility of offering a “restaurant mode” in mass-market true wireless stereo (TWS) earphones. Even Apple has moved in this direction, adding what it calls “conversation boost” (directional microphones) and ambient noise reduction to AirPods Pro via software update. With up to 7 dB of signal-to-noise ratio (SNR) improvement [6], these modes offer meaningful benefit in a popular consumer device (Figure 3). And that is just the beginning.
 
Figure 3: Hearing SNR improvement is illustrated with the AirPods Pro hearing features turned on.
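For readers curious how directional microphones raise SNR, here is a generic two-microphone delay-and-sum sketch in Python. It is not Apple’s implementation; the mic spacing is hypothetical, and real products use fractional-sample, frequency-dependent delays rather than the whole-sample shortcut shown here.

```python
import numpy as np

def delay_and_sum(front_mic, rear_mic, fs, spacing_m=0.01, c=343.0):
    """Generic two-microphone delay-and-sum beam steered toward the front.

    Speech from straight ahead reaches the front mic first; delaying that
    signal by the inter-mic travel time aligns the two copies so the target
    adds coherently while off-axis babble adds incoherently.
    """
    delay = int(round(spacing_m / c * fs))
    aligned = np.concatenate([np.zeros(delay), front_mic])[:len(front_mic)]
    return 0.5 * (aligned + rear_mic)

def snr_db(speech, mixture):
    """Crude SNR estimate when the clean speech is known (e.g., in simulation)."""
    noise = mixture - speech
    return 10 * np.log10(np.sum(speech ** 2) / np.sum(noise ** 2))
```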

Traditional hearing aids improve speech-in-noise performance in similar ways to AirPods Pro, using a combination of directional mics and noise filtering. Referring to Figure 1, hearing aids are more sophisticated in that they perform amplification, compression, and noise reduction on a per-channel basis to squeeze out higher SNRs than is possible when operating on the entire spectrum as a whole. In addition, sound scene analysis dynamically controls the filter bank as the auditory situation changes. Because the noise profile in a restaurant is mainly the babble of other voices, there are limits to the efficacy of acoustic filtering, even under dynamic control.
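A minimal sketch of that per-channel processing follows, assuming a simple Butterworth filter bank with one crude gain-plus-compression rule per channel. The channel edges, thresholds, and ratios are invented for illustration and do not represent any particular hearing aid.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Hypothetical channel edges (Hz); real hearing aids use many more channels.
CHANNELS_HZ = [(125, 500), (500, 1500), (1500, 4000), (4000, 8000)]

def compress_channel(x, gain_db=10.0, threshold_db=-40.0, ratio=3.0):
    """Crude per-channel gain with compression above a threshold.

    The level is estimated once over the whole block for simplicity; a real
    aid tracks it continuously with attack/release time constants.
    """
    level_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    over = max(0.0, level_db - threshold_db)
    out_gain_db = gain_db - over * (1 - 1 / ratio)   # less gain as the channel gets louder
    return x * 10 ** (out_gain_db / 20)

def multichannel_process(x, fs):
    """Split into channels, process each independently, and recombine."""
    y = np.zeros_like(x, dtype=float)
    for lo, hi in CHANNELS_HZ:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y += compress_channel(sosfilt(sos, x))
    return y
```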

A Solution from the Broadcast World
Those in the broadcast world are familiar with machine learning-based noise reduction through programs such as Krisp. These work on a completely different principle. Rather than performing acoustic filtering, machine learning systems are trained to separate speech from noise and pass on only the speech. Often a deep neural network is used, a method of analyzing input loosely modeled on the way the human brain works. Just as a person can learn to recognize the difference between a cat and a dog from life experience, a deep neural network can learn the difference between speech and noise by being trained with speech recordings from different scenarios. The result is an inline system that separates out the speech and sends it through without the noise (Figure 4). Krisp wasn’t even launched until 2018, and it still requires a PC’s processor and resources to run its noise reduction program’s deep neural network in software. That’s a far cry from implementing such a system in-ear. The situation is changing rapidly, however, and the effects will be revolutionary.
 
Figure 4: This shows a deep neural network with N hidden layers [7].
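To make the speech/noise separation idea concrete, here is a toy mask-estimation network in PyTorch. It is a sketch of the general approach described above, not Krisp’s or anyone else’s actual model; the layer sizes are arbitrary, and a production system would use a far more capable architecture.

```python
import torch.nn as nn

class DenoiseMaskNet(nn.Module):
    """Toy denoiser: noisy STFT magnitudes in, per-bin speech mask out.

    Trained on pairs of noisy and clean spectra, the network learns which
    time-frequency bins are dominated by speech. Multiplying the noisy
    spectrum by the predicted mask (values in 0..1) suppresses the rest.
    """
    def __init__(self, n_bins=257, hidden=256, n_hidden_layers=3):
        super().__init__()
        layers, size = [], n_bins
        for _ in range(n_hidden_layers):          # the N hidden layers of Figure 4
            layers += [nn.Linear(size, hidden), nn.ReLU()]
            size = hidden
        layers += [nn.Linear(size, n_bins), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy_mag):                  # shape: (frames, n_bins)
        return self.net(noisy_mag)

# Inference sketch: denoised_spec = noisy_spec * model(noisy_magnitudes)
```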

ML Noise Reduction Goes Portable
A company called Whisper was the first to apply machine learning (ML) deep neural networks to hearing devices, launching its first product in 2020. It was still not possible to run the neural network in-ear, so the Whisper system uses a separate “Brain,” about the size of a mobile phone and using a similar processor. The hearing aids themselves are of a classic design and work without the Brain. When the Brain is turned on, received audio is passed from the hearing aids to the Brain, de-noised, and returned at very low latency. Not only does the use of an external device provide enough processing power to run the neural network locally, it also makes the system updatable. As of this writing, Whisper has delivered its fifth major software update to its customers (Figure 5). It is not likely that such a system would gain mass-market acceptance, however. Consumers will balk unless the entire system is in-ear and provides noticeable benefit. This is where it gets interesting.
 
Figure 5: A Whisper hearing aid uses a separate “Brain,” about the size of a mobile phone and using a similar processor. (Image Source: Whisper)

A Neural Network in Your Ear
With today’s technology, typical processors that can be incorporated into a hearable do not have the resources to run a deep neural network, especially considering limitations on allowed power consumption. That is poised to change as companies develop processors specifically designed to address this need. GreenWaves Technologies is sampling its GAP9 processor with a nine-core RISC-V compute cluster that it claims “is perfectly adapted to handling combinations of neural network and digital signal processing tasks delivering programmable compute power at extreme energy efficiency” [8]. GreenWaves has been demonstrating noise reduction and other applications on the GAP9, and has reserved a suite at CES.

Femtosense has taken a different approach by designing a neural network in hardware. It uses what it calls a “sparse” architecture, activating only those nodes in the network that provide useful results at any given moment. Because only a fraction of the nodes are turned on at any given time, the company claims significantly reduced power consumption (Figure 6). Although the Femtosense chip could be implemented as a stand-alone co-processor, it is more likely to find a home as an IP block within a multifunction SoC for the sake of efficiency. Femtosense already demonstrated its de-noising algorithm running on a development board at CES 2022, so it will be most interesting to see its progress at the 2023 event. Solutions such as these will bring hearables one step closer to providing broadcast-quality denoising in-ear, potentially revolutionizing how consumer devices address the cocktail party problem without amplification.
 
Figure 6: This is an illustration of a Femtosense processor showing active nodes at a given moment. (Image Source: Femtosense)
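The power argument can be illustrated with a small NumPy sketch. This reflects only the principle as publicly described, not Femtosense’s actual hardware design: when most activations are zero, the multiplications tied to inactive nodes can simply be skipped, so energy scales with the number of active nodes rather than the size of the layer.

```python
import numpy as np

def dense_layer(weights, activations):
    """Conventional layer: every weight participates, active input or not."""
    return weights @ activations

def sparse_layer(weights, activations):
    """Skip the work tied to inactive (zero) inputs.

    In hardware, this skipping happens per node, so energy scales with the
    number of active nodes rather than the full layer size.
    """
    active = np.nonzero(activations)[0]
    return weights[:, active] @ activations[active]

# With, say, 90% of activations at zero, the sparse path touches only about
# 10% of the weights, which is where the claimed power savings come from.
```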

Evolving Toward Smart Hearables
Hearable devices have been following the same trajectory as mobile phones, in three stages. The first stage was the introduction of mobile phones whose sole purpose was to originate and receive calls. Next came feature phones, with more advanced functions such as email and web browsing preprogrammed at the factory. Finally, with the release of the original iPhone, came the app store ecosystem. By democratizing access to what amounted to a pocket-sized computer, the app store caused innovation to explode. Today, we are at the “feature hearables” stage, but the same progression is about to happen, with important consequences in the hearing space.

One company working to bring a true operating system to hearables is Bragi, which pioneered true wireless devices in the previous decade and more recently pivoted solely toward developing its OS. Today, Bragi offers a range of apps that can be pre-configured at the factory or sold afterward through an app store. Though the complexity of the apps is limited by the processing power of the supported SoCs from Airoha and WuQi, a glimpse of the future can be seen in the inclusion of Mimi Hearing Technologies’ app for personalizing streaming audio playback to one’s individual hearing profile (Figure 7).
 
Figure 7: This illustration shows a Bragi in-ear app ecosystem. (Image Source: Bragi)

While Bragi is working with existing SoC suppliers, Sonical is taking a different approach by developing its own processor (Figure 8). Sonical claims the processor, optimized to run its OS, will “provide the world’s first ear computer that has the capabilities and performance for next-generation ear worn products.” In a webinar hosted by the Danish Sound Cluster, Gary Spittle, founder and CEO, promised that the “world’s first ear computer platform” will launch at CES, and that “many plugins are already available.”
 
Figure 8: Sonical’s hearable processor and app ecosystem aims to provide “the world’s first ear computer.” (Image Source: Sonical)

A New Wave of Innovation
By providing a platform for software developers to innovate without having to create corresponding hardware, smartphones have opened up whole new worlds. Think about all the smartphone apps you use and imagine a similar ecosystem running natively in your true wireless earphones.

Coming back to the cocktail party problem, there are companies working to mimic even more closely the brain’s ability to concentrate on the desired speaker and filter out the rest. The smart hearable ecosystem will provide the means to bring these innovations to practical use.

In the same webinar at which Sonical teased its CES presence, neurotech software company Segotia discussed research performed at KU Leuven (Figure 9) as the basis of its own work to detect the specific voice being attended to by measuring EEG signals [9]. EEG sensor developer AAVAA also sees a future in hearables, highlighting that its “end-to-end solution fuses our advancements in attention and audio software, sensors, hardware, and design” in a modular platform (Figure 10). It is not hard to imagine how beamforming mics and ML-based denoising algorithms could be enhanced by knowing which person the user is attempting to hear.
 
Figure 9: This Segotia slide describes the KU Leuven study cited in text. (Image Source: Segotia)
Figure 10: AAVAA’s EEG sensor implementation is shown. (Image Source: AAVAA)
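A rough sketch of how EEG-based attention decoding could steer such a system appears below. It follows the envelope-correlation approach common in the auditory attention decoding literature: a linear decoder, trained beforehand (for example, with regularized regression), reconstructs a speech envelope from the EEG, and the candidate speaker whose envelope correlates best is taken to be the attended one. Nothing here represents Segotia’s or AAVAA’s actual algorithms.

```python
import numpy as np

def reconstruct_envelope(eeg, decoder):
    """Linear stimulus reconstruction: EEG (samples x channels) times decoder weights."""
    return eeg @ decoder

def attended_speaker(eeg, decoder, speaker_envelopes):
    """Pick the speaker whose speech envelope best matches the EEG reconstruction.

    speaker_envelopes: one envelope per candidate talker, e.g., one per
    beamformer output. The winner tells the denoiser or beamformer which
    voice the listener is trying to hear.
    """
    recon = reconstruct_envelope(eeg, decoder)
    scores = [np.corrcoef(recon, env)[0, 1] for env in speaker_envelopes]
    return int(np.argmax(scores))
```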
 

Future hearable apps could detect the microphone and sensor capabilities of a particular device and configure themselves for optimum performance. Much like picking out a smartphone based on a mix of hardware features and performance (think screen resolution, camera performance, and sound quality), a consumer will be able to choose a hearable with optimum performance for their primary use cases, knowing it will still work with a wide range of other apps. It all amounts to a playground for innovation that will yield exciting results for hearing enhancement.
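To illustrate the capability-detection idea, here is a hypothetical sketch; the capability names and pipeline stages are invented, and no real hearable OS API is implied.

```python
# Hypothetical capability report from the device; no real hearable OS API is implied.
DEVICE_CAPS = {"mic_count": 2, "eeg_sensors": False, "npu_ops_per_s": 2e9}

def configure_pipeline(caps):
    """Choose a processing chain that fits the hardware at hand."""
    pipeline = ["single_mic_noise_reduction"]
    if caps.get("mic_count", 1) >= 2:
        pipeline = ["beamforming", "ml_denoise"]
    if caps.get("eeg_sensors"):
        pipeline.append("attention_steering")
    if caps.get("npu_ops_per_s", 0) < 1e9:   # not enough compute for a neural denoiser
        pipeline = [stage for stage in pipeline if stage != "ml_denoise"]
    return pipeline

print(configure_pipeline(DEVICE_CAPS))       # -> ['beamforming', 'ml_denoise']
```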

Regardless of Approach, Consumers Win
Even with techniques such as de-noising speech and beamforming mics steered by detecting which sound one is attending to, selective amplification will not be totally banished from hearing enhancement devices. But it looks increasingly likely that even amplification will be available from third-party app developers.

Already Mimi Hearing Technologies’ app for personalizing streaming audio, an unregulated hearing feature, is becoming available on both the Bragi and the Sonical platforms. It is well within the capability of app developers to offer a similar program for the microphones, essentially converting a smart wireless earphone into an OTC hearing aid after the fact.

It is difficult to imagine how the FDA could regulate earphones that do not themselves provide amplification, even if some consumers later buy an amplification app from a third party; non-amplifying hearing features are not regulated at all. The newly released OTC hearing aid rule will be left by the wayside should these promising developments become reality.

One thing is clear. While the pace of developments may in time give regulators and some other stakeholders fits, the consumer stands to benefit from all the innovation coming to hearables. With the ability to “try before you buy” a variety of hearing apps and load them into mass-market consumer earphones, more people will find an effective solution for their hearing and lifestyle, with less stigma. That can only be a good thing. aX

References
[1] A. Bellavia, “Is the FDA’s OTC Hearing Aid Rule Already Obsolete?” LinkedIn, March 3, 2022, www.linkedin.com/pulse/fdas-otc-hearing-aid-rule-already-obsolete-andrew-bellavia

[2] L. Gerlach, G. Payá-Vayá, and H. Blume, “A Survey on Application Specific Processor Architectures for Digital Hearing Aids,” J Sign Process System, March 20, 2021, https://doi.org/10.1007/s11265-021-01648-0

[3] “Medical Devices; Ear, Nose, and Throat Devices; Establishing Over-the-Counter Hearing Aids,” Federal Register, Volume 87, No. 158, p. 50703 (page 6 of rule).

[4] D. R. Moore, M. Edmondson-Jones, P. Dawes, H. Fortnum, A. McCormack, R. H. Pierzycki, and K. J. Munro, “Relation between speech-in-noise threshold, hearing loss and cognition from 40-69 years of age,” PLoS ONE, 9(9), e107720, September 17, 2014, https://doi.org/10.1371/journal.pone.0107720

[5] B. Edwards, “Emerging Technologies, Market Segments, and MarkeTrak 10 Insights in Hearing Health Technology,” Seminars in hearing, 41(1), 37–54. https://doi.org/10.1055/s-0040-1701244, February 10, 2020, www.ncbi.nlm.nih.gov/pmc/articles/PMC7010484

[6] N. Chong-White, PhD, J. Mejia, PhD, J. Valderrama-Valenzuela, PhD, and B. Edwards, PhD, “Evaluation of Apple AirPods Pro with Conversation Boost and Ambient Noise Reduction for People with Hearing Loss in Noisy Environments,” Hearing Review, March 22, 2022,
https://hearingreview.com/hearing-products/hearing-aids/psap/apple-airpods-pro-for-people-with-hearing-loss-in-noisy-environments

[7] J. Moolayil, “A Layman’s Guide to Deep Neural Networks,” Towards Data Science, July 24, 2019,
https://towardsdatascience.com/a-laymans-guide-to-deep-neural-networks-ddcea24847fb

[8] Greenwaves Technologies, “GAP9 Product Brief,”
https://greenwaves-technologies.com/wp-content/uploads/2022/06/Product-Brief-GAP9-Sensors-General-V1_14.pdf

[9] R. Zink, S. Proesmans, A. Bertrand, S. Van Huffel, and M. De Vos, “Online detection of auditory attention with mobile EEG: closing the loop with neurofeedback,” November 2017, https://doi.org/10.1101/218727, www.researchgate.net/publication/326717755_Online_detection_of_auditory_attention_with_mobile_EEG_closing_the_loop_with_neurofeedback

This article was originally published in audioXpress, January 2023
 
