Imagine listening to music while you’re working out. You see a friend and take an earbud out to talk to them. The music stops. You finish your conversation and the moment you put the earbud back in, the music starts again on its own. After you get home, you look at the exercises your earbuds logged automatically for you. Sounds pretty nice, right? This is only one part of the future that hearable technology can enable. And it can’t do it without sensors.
What Are Hearables?
The hearables market is exploding: it is expected to reach $2,038.3 million by 2026, growing at a CAGR of 37.6%. Its applications have greatly expanded from simple wireless headphones to robust devices that can measure fitness activity and connect users with voice-enabled virtual assistants. Like most wireless devices, hearables require sensors to perform critical functions, and these sensors need to provide a variety of features in order to deliver the best user experience.
These devices often enhance listening by amplifying relevant sounds and blocking out external noise. And with virtual assistants integrated into hearable technology, they're now expanding beyond basic audio applications and becoming more prevalent in everyday use as smart devices.
Healthcare applications are a major driver of hearables demand. Some hearables can track fitness activity, making them suitable options for those who want to listen to music while recording how far they’ve run. And the most advanced devices can actually measure biometric data such as heart rate, body temperature and calories burned, and even detect if the user falls.
Sensors for Hearables
There are several key features that a sensor should have in order to be deployed in hearable technology:
In-ear detection: Detecting whether an earbud is physically in the ear lets the device tailor its behavior to context. For example, when an earbud is taken out, it could automatically pause the music or mute itself on a call.
Layered listening: Hearables with this feature can filter out sounds that are not part of the intended audio experience, or switch modes to amplify ambient sounds that improve the real-world listening experience.
Gestures: With some context, gestures can make life easier. Imagine: a call comes in while you’re mid-conversation. You can simply nod or shake your head to accept or reject it. Or use simple taps to play and pause music, instead of clicking a button that could jam.
Predictive head tracking: This is what enables a hearable to calculate sound cues ahead of time, based on a user’s motion. It can be used in 3D audio applications to create a seamless and immersive 360-degree audio experience that takes real-world movement into account. For example, if you were listening to a concert with 3D audio, as you turn your head in the real world, sound cues change the direction of the delivered sound based on your orientation, as if you were in the same physical space as the performer.
Activity classifiers: With this feature, the hearable will be able to tell if a user is standing, walking, running or performing other physical activities, which is crucial to automating fitness tracking. It’s just as important in certain healthcare applications, where sensors can track things like posture or whether a user has fallen.
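To make the in-ear detection behavior above concrete, here is a minimal sketch of the auto-pause logic. The sensor callback, threshold, and player interface are all illustrative assumptions, not any particular earbud SDK:

```python
# Hypothetical sketch: auto-pause/resume driven by in-ear detection.
# read values and the `player` interface are illustrative, not a real SDK.

IN_EAR_THRESHOLD = 0.8  # normalized proximity reading; tuned per device


class AutoPauseController:
    def __init__(self, player):
        self.player = player
        self.in_ear = False  # assume the earbud starts out of the ear

    def on_proximity_sample(self, reading: float) -> None:
        """Called once per proximity-sensor sample from the earbud."""
        currently_in_ear = reading >= IN_EAR_THRESHOLD
        if currently_in_ear and not self.in_ear:
            self.player.resume()   # earbud inserted: resume playback
        elif not currently_in_ear and self.in_ear:
            self.player.pause()    # earbud removed: pause playback
        self.in_ear = currently_in_ear
```

A production version would debounce the sensor readings so a brief adjustment of the earbud doesn't toggle playback, but the edge-triggered pattern is the core idea.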
Hearable devices require low-latency sensors that can seamlessly perform all of these functions and deliver a smooth experience for users. By integrating an IMU sensor and sensor fusion software into your hearable device, you’ll enhance motion tracking and your user experience, whether it’s for auto-switching audio or tracking fitness and healthcare data.
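As a rough illustration of what that sensor fusion involves, the sketch below shows a one-axis complementary filter: it blends gyroscope integration (smooth but drift-prone) with an accelerometer tilt estimate (noisy but drift-free) to track head pitch. Real hearable stacks fuse all three axes with far more sophistication; the constants and function here are purely illustrative:

```python
import math

# Illustrative sketch of IMU sensor fusion for head tracking: a one-axis
# complementary filter. ALPHA and the single-axis simplification are
# assumptions for clarity, not a production head-tracking algorithm.

ALPHA = 0.98  # weight on the gyro path; (1 - ALPHA) on the accelerometer


def update_pitch(pitch_deg, gyro_rate_dps, accel_y, accel_z, dt):
    """One filter step: gyro rate in deg/s, accelerometer axes in g, dt in s."""
    gyro_pitch = pitch_deg + gyro_rate_dps * dt               # integrate gyro
    accel_pitch = math.degrees(math.atan2(accel_y, accel_z))  # tilt from gravity
    # Trust the gyro for fast motion, let the accelerometer correct slow drift.
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch
```

Run at the IMU's sample rate, the accelerometer term continually pulls any accumulated gyro drift back toward the gravity reference, which is what keeps a 3D audio scene anchored as the head moves.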
Contact CEVA if you’d like to learn more about how to integrate motion sensing into your hearable application.
About the author:
Charles Pao is a Sr. Marketing Specialist, Sensor Fusion Business Unit, CEVA