Tim Cook opened the WWDC20 event and was followed by a long sequence of company executives introducing iOS 14, iPadOS 14 - including enhanced handwriting capabilities with Apple Pencil - watchOS 7, and macOS Big Sur (which will become macOS 11 when it launches later in 2020), all with groundbreaking features and supported by easy-to-use, powerful development tools.
It was certainly a historic day for Apple, the Mac, and the complete Apple ecosystem, which will become even more cohesive and powerful. Apple kicked off its all-online Worldwide Developers Conference in Cupertino, California, with important updates across all Apple platforms, and announced a completely new age for the Mac with the new macOS Big Sur operating system update, which will also pave the way for the company's transition from Intel-based Mac computers to its own ARM-based silicon. Apple Silicon is the name Apple uses for its own system-on-chip (SoC) and system-in-package (SiP) processors designed using the ARM architecture. Arriving roughly 20 years after the introduction of Mac OS X, this is the biggest design upgrade the Mac has seen since.
Another radical transition for the company, which surprised only in how advanced, well planned, and ready everything was. At least for developers, who can immediately start to leverage the updated platform with simple updates to their apps, now benefiting from the fact that they will all work across iPhone, iPad, and Mac. For the audio industry, this also means that the more than one billion Apple devices in the market will support a simplified approach to bringing new solutions to market at scale, with simplified implementation - including creating secure bridges to the home and the car.
On the audio front, Apple confirmed that AirPods will gain the ability to switch seamlessly between Apple devices with automatic device switching. Users can start listening to music on the iPhone and, simply by putting down one device and holding another, AirPods will automatically switch the audio to the one currently in use. When watching a movie on Apple TV, users can take a call on iPhone, and the earbuds will pause the movie and switch to the smartphone.
This will work not only with AirPods Pro and AirPods (2nd generation), but also with Beats Powerbeats and Powerbeats Pro true wireless earbuds, plus Beats Solo Pro wireless headphones, after a software upgrade. All these products have in common the fact that they use Apple's own H1 system-in-package (SiP), allowing software updates and the introduction of new valuable features. This is a powerful signal to a market used to products that become obsolete after as little as 6 months or a year.
But the most sensational announcement on this front was the revelation that AirPods Pro will gain spatial audio support with dynamic head tracking for an immersive theatrical experience when watching movies, TV, and any content with 5.1, 7.1, and Dolby Atmos surround soundtracks. As the industry knows too well, convincing binaural rendering of multichannel audio sources is not an easy achievement, and until now it required external hardware support (normally a dongle) or a dedicated full-featured preamp with DSP, plus extra hardware with sensors for head tracking, plus a powerful computer to handle all the complex calculations involved.
And until now this was normally done with large over-ear headphones, not true wireless earbuds. Apple was able to bring the spatial audio effect to AirPods Pro because it already equips those in-ears with its H1 multicore chip and a complete array of sensors - the same SiP that processes the complex active noise cancellation features and Siri voice recognition. In fact, that low-power SiP even features its own digital amplifier platform, ideal for working in combination with real-time audio signal processing. With this powerful, upgradable platform, Apple was able to develop a solution based on standard directional audio filters and subtle frequency adjustments to each ear, allowing sounds to be placed virtually in space according to the same aural cues used by multichannel surround formats or object-based audio sources such as Dolby Atmos.
This immersive listening experience is fully implemented in advanced spatial audio algorithms powered by the H1 chip. Specifically designed for headphones, the Apple H1 chip was first used in the 2019 version of AirPods; it has Bluetooth 5.0, supports hands-free "Hey Siri" commands, and offers 30 percent lower latency than the W1 chip in the earlier AirPods. But the AirPods Pro are also able to coordinate the binaural rendering of those complex signals according to the user's position. For that, Apple uses the accelerometers and gyroscopes in the AirPods Pro to track the motion of the user's head. In this way, each sound cue remains anchored to the device, and the center channel remains in front even when the user turns his or her head. And even if the user moves the device while watching the movie - an iPad, for example - the system tracks the position of the user's head relative to the screen, understanding how the two are moving in relation to each other and keeping the aural cues anchored to their positions.
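The underlying principle can be illustrated with a toy sketch (not Apple's actual implementation): the renderer subtracts the tracked head rotation from each virtual channel's direction, so the source stays anchored to the screen, and the resulting azimuth then drives basic binaural cues such as the interaural time delay. The `relative_azimuth` and `interaural_time_delay` helpers below are hypothetical names for illustration; the delay uses the classic Woodworth spherical-head approximation.

```python
import math

def relative_azimuth(source_az_deg, head_yaw_deg):
    """Azimuth of a screen-anchored source as heard after the head rotates.

    Subtracting the head yaw keeps the virtual source fixed in the room;
    the result is wrapped to the range (-180, 180] degrees."""
    return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def interaural_time_delay(az_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth approximation of the interaural time delay in seconds,
    valid for frontal azimuths up to about +/-90 degrees."""
    az = math.radians(az_deg)
    return (head_radius_m / speed_of_sound) * (az + math.sin(az))

# A center-channel cue (0 degrees, on screen) heard while the head is
# turned 30 degrees to the left now arrives from 30 degrees to the right.
az = relative_azimuth(0.0, -30.0)
itd = interaural_time_delay(az)
```

A real renderer would of course apply full head-related transfer functions (HRTFs) per ear rather than a single delay, but the anchoring step - rotating every cue by the inverse of the tracked head motion - is the same idea.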
We can only hope that Apple also confirms support for MPEG-H Audio, and eventually for loading HRTF profiles for complete personalization.
More Audio Technologies and Features
Lots of other WWDC20 announcements complemented the multiple platform and OS updates, and many are audio related.
Accessibility features in iOS 14 and iPadOS 14 include Headphone Accommodations, which amplifies soft sounds and tunes audio to help music, movies, phone calls, and podcasts sound crisper and clearer. These Headphone Accommodations are available on Apple and Beats headphones featuring the H1 headphone chip, as well as EarPods. In the AirPods Pro, this accessibility feature is also implemented when the Transparency mode is active, helping users to better recognize important sounds and dialog. Interestingly, Apple also implemented sign language detection in Group FaceTime calls, which makes the person signing more prominent in a video call. The screen reader for the blind community, VoiceOver, now automatically recognizes what is displayed visually onscreen so more apps and web experiences are accessible to more people.
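Apple has not published how Headphone Accommodations works internally, but the general idea of amplifying soft sounds while leaving louder content essentially untouched can be illustrated with a toy upward-compression curve (a hypothetical sketch, not Apple's algorithm):

```python
def amplify_soft_sounds(sample, exponent=0.6):
    """Toy upward compression on a normalized [-1, 1] audio sample.

    Raising the magnitude to an exponent below 1 boosts quiet samples
    proportionally more than loud ones, which stay near full scale."""
    magnitude = abs(sample) ** exponent
    return magnitude if sample >= 0 else -magnitude

# A very quiet sample is boosted strongly; a full-scale sample is unchanged.
quiet = amplify_soft_sounds(0.01, exponent=0.5)   # ~0.1
loud = amplify_soft_sounds(1.0, exponent=0.5)     # 1.0
```

In practice such processing would be done per frequency band with proper attack and release smoothing, which is presumably where the user's audiogram-style tuning comes in.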
Probably one of the most important technology announcements at WWDC20, again spanning all platforms, concerns the updates to the Siri voice recognition engine, connecting its contextual inference engine to new cloud sources so users can get more information and relevant replies from "web sources". Apple has been investing in the acquisition of many companies in this field, and it seems that not only is its voice engine evolving significantly, but Apple has also found a workaround for the search engine limitations that give Google Assistant a huge advantage and have hindered Siri until now. Siri has not only expanded its "knowledge," but now helps find answers directly from across the Internet, and can also send audio messages. Improved keyboard dictation now also runs on device when dictating messages, notes, email, and more.
The second extremely significant cross-platform technology announcement is the translation feature now available on all OS platforms. With a new Apple native app called Translate, the company said at WWDC20 that it is able to deliver "the best and easiest app for translating conversations, offering quick and natural translation of voice and text among 11 different languages." And this works in on-device mode, allowing users to use the app's features offline for private voice and text translation. A very significant announcement given the potential to also combine it with AirPods and the Apple Watch in the future.
In fact, Apple watchOS 7 now delivers enhanced customization tools and powerful new health and fitness features, including sleep tracking and support for Siri's new language translation abilities. Users can ask Siri to translate many languages directly from the wrist, and dictation is handled on device with the power of the Apple Neural Engine. The Apple Watch also supports Announce Messages with Siri.
And following the introduction of the Noise app in watchOS 6 that measures ambient sound levels and duration of exposure, watchOS 7 now adds support for hearing health with headphone audio notifications. Users can now understand how loudly they are listening to media through their headphones using their iPhone, iPod touch, or Apple Watch, and when these levels may impact hearing over time.
When total headphone listening has reached 100 percent of the safe weekly listening amount, the Apple Watch notifies the wearer. This amount is based on World Health Organization recommendations that, for instance, a person can be exposed to 80 decibels for about 40 hours per week without an impact on hearing ability. Users can also see how long they have been exposed to high decibel levels each week in the Health app on iPhone, and can control the maximum level for headphone volume. No audio from the headphone audio notification feature is recorded or saved by the Health app or the Apple Watch.
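The WHO guideline implies an equal-energy dose model: every 3 dB increase in level halves the allowed weekly listening time (80 dB for 40 hours, 83 dB for 20 hours, and so on). A minimal sketch of such a weekly dose calculation (a hypothetical illustration, not Apple's actual implementation) might look like this:

```python
def weekly_dose_percent(exposures, ref_level_db=80.0, ref_hours=40.0,
                        exchange_rate_db=3.0):
    """Weekly sound dose as a percentage of the safe allowance.

    exposures: list of (level_db, hours) pairs listened to during the week.
    Uses the equal-energy principle: each +3 dB halves the allowed time,
    anchored at the WHO reference of 80 dB for 40 hours per week."""
    dose = 0.0
    for level_db, hours in exposures:
        allowed_hours = ref_hours / 2.0 ** ((level_db - ref_level_db)
                                            / exchange_rate_db)
        dose += hours / allowed_hours
    return 100.0 * dose

# 20 hours at 80 dB plus 5 hours at 86 dB uses exactly the weekly allowance.
dose = weekly_dose_percent([(80.0, 20.0), (86.0, 5.0)])  # 100.0
```

At 100 percent the watch would trigger its notification; the same accumulator also explains why a few hours at high volume can consume an entire week's allowance.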