Friday, September 23, 2022
If you haven’t yet heard, the world is on the cusp of what might be called “Headphones 3.0”, and with it comes a multitude of electronics engineering needs and opportunities. The headphones coming down the pike are better described as “ear-worn computing” devices than as audio players.
What is coming: new types of wireless technology will be adopted, with the goal of making headphones fully independent devices. More processing power will be needed to run advanced audio and voice processing on-device, along with embedded AI and a growing number of sensors to help drive new use cases.
These products will be user-defined, and there will be a wide range of use cases. For this all to work, the headphones must have the correct hardware: powerful processing, multiple connectivity technologies, a suite of sensors, improved battery density, and more memory.
Just as importantly, they will need the right software and platform to follow the same model as any other independent computing device: a high-performance processor, a dedicated and widely used OS, and an apps marketplace through which users can customize their devices.
Ahead of this giant curve ball came two generations, or major technology landmarks, for headphones. Wired devices defined Headphones 1.0: “dumb” devices used simply to play audio from a small number of sources, such as a Walkman, an MP3 player, or a smartphone.
Headphones 2.0 saw devices go wireless, driven most recently by the rapid growth of True Wireless Stereo (TWS) earbuds. These devices need enough intelligence to manage wireless audio and a growing number of use cases, yet they still rely on their source device, such as a smartphone, for some processing and all content. Functionality and use cases are vendor-defined; customization by the user is extremely limited.
Imagine, in the coming era, that I want to create the best audio device. I can download my favorite streaming app with lossless audio, ensure that I have the best audio CODEC for spatial audio (for which I may need to buy a license), and tweak the sound using an audio-enhancement app. If I am hard of hearing, with a weakness at a particular frequency, there will be an app for that, too!
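To make that hearing example concrete, the heart of such an app could be little more than a personalized equalizer running on the earbud’s DSP. The Python sketch below is illustrative only, not production DSP code: it applies a standard peaking-EQ biquad (RBJ audio-EQ-cookbook form) to boost one weak frequency, and the sample rate, centre frequency, gain, and Q values are assumptions chosen purely for the example.

```python
# Minimal sketch of a hearing-compensation peaking EQ, assuming the user's
# "weak" frequency is already known. All parameter values are illustrative.
import numpy as np
from scipy.signal import lfilter

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Return normalized (b, a) biquad coefficients for a peaking boost at f0 Hz."""
    a_gain = 10 ** (gain_db / 40.0)          # RBJ cookbook amplitude term
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return b / a[0], a / a[0]

# Hypothetical profile: boost 4 kHz by 12 dB for a user with a dip there.
fs = 48_000                                   # sample rate in Hz
b, a = peaking_eq_coeffs(fs, f0=4_000, gain_db=12.0, q=1.5)

t = np.arange(fs) / fs                        # one second of a stand-in test tone
audio = 0.1 * np.sin(2 * np.pi * 4_000 * t)
compensated = lfilter(b, a, audio)            # the "hearing app" DSP core
```

On a real earbud the same filter would more likely run as fixed-point biquads on the SoC’s DSP; the point is that a per-user hearing profile could be delivered as a downloadable app rather than baked into firmware.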
This will open a huge marketplace for app developers, enabling them to build upon the base hardware and software. In turn, it will create use cases that are not yet imaginable. With this comes an even greater need for processing power and memory.
So, how do we get there? Is business as usual enough? I would say it is not.
The technology landscape has advanced considerably in the last few years, with all the usual improvements you would expect: faster processors, increased battery density, more efficient systems-on-chip (SoCs), improved connectivity, and much more.
SoC vendors like Qualcomm will continue to push the envelope by developing more powerful chips, shifting the market toward increasingly advanced devices. The evolution in the headphone SoC market is demonstrated by exploring how Qualcomm’s chips have improved over time (see Figure 2).
Early TWS devices used readily available wireless headphone SoCs, such as the CSR8670, which were shoehorned into the small form factor. As designs progressed, more specialized SoCs were developed with dual CPU cores. Dual CPU/dual DSP cores are now common.
Other SoC vendors, such as Airoha and Bestechnic, are also rapidly advancing their SoCs to increase processing power and speed. The drive toward cost savings, particularly for the small-margin smartphone accessories market, has pushed SoC vendors to increase integration. Standalone CODECs and audio processors are now rarely used outside of high-end devices. This reduces cost for device vendors, though it limits the progression of the market.
Apple will always do its own thing; using unique hardware and software within its walled garden works for it. Everyone else must consider how they compete both with Apple and with the hundreds of other brands in the market. Many will focus on low-cost, low-margin devices with basic functionality. The pioneering brands will look to establish the Headphones 3.0 market and push it forward. This will open the market to a range of chip vendors supplying cutting-edge hardware and software.
SAR anticipates a drive toward disintegration at the higher end of the market, at least in the short term, where Headphones 3.0 devices use multi-chip designs, selecting the best components to achieve the highest performance. Higher-end device vendors will always look for differentiation in features, so they will not always be satisfied by the all-in-one TWS SoCs available. These chips have their specifications fixed at least a year in advance, so there will always be a lag before new features can be adopted or performance enhanced.
Features that were once new, such as active noise cancellation, are being adopted ever more widely and will soon be common across the market, so higher-end TWS brands must differentiate by providing better performance. In the near term, spatial audio will add another level of design complexity, as it typically needs more processing power to enable formats such as Dolby Atmos.
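As a rough indication of why spatial audio is compute-hungry, binaural rendering typically convolves each virtual channel with a left-ear and a right-ear head-related impulse response (HRIR), continuously and at the full sample rate. The sketch below uses random placeholder HRIRs and made-up channel counts purely to estimate the multiply-accumulate load; it is not how Dolby Atmos or any commercial renderer is implemented.

```python
# Simplified illustration of binaural rendering cost: each of N virtual
# channels is convolved with a left-ear and a right-ear HRIR. The HRIRs and
# channel count here are placeholders chosen only to size the workload.
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000          # sample rate (Hz), illustrative
n_channels = 8       # virtual speaker feeds, illustrative
hrir_len = 256       # taps per HRIR, illustrative

rng = np.random.default_rng(0)
channels = rng.standard_normal((n_channels, fs))        # 1 s of audio per feed
hrirs = rng.standard_normal((n_channels, 2, hrir_len))  # placeholder L/R HRIRs

left = sum(fftconvolve(channels[i], hrirs[i, 0]) for i in range(n_channels))
right = sum(fftconvolve(channels[i], hrirs[i, 1]) for i in range(n_channels))

# Rough direct-convolution cost: N channels x 2 ears x HRIR taps MACs per sample.
macs_per_sample = n_channels * 2 * hrir_len
print(f"~{macs_per_sample * fs / 1e6:.0f} MMAC/s on top of decode, ANC, and EQ")
```

Even this toy case lands around 200 million multiply-accumulates per second before audio decoding, noise cancellation, and EQ are added, which is one reason higher-end designs lean on dedicated DSP cores or more capable SoCs.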
Vendors are beginning to enter the market with different options for enabling next-generation headphone features. Some focus on single functions, such as edge AI processing—for example, Syntiant, AONdevices, and Aspinity. Others, such as Bragi, have created reference designs and an OS for the ear that can speed up design and open the market to third-party apps. Sonical is looking to provide complete solutions that incorporate advanced processors, matched with an OS to bring next-generation features to high-end devices.
Once an OS is established as suitable for use across various high-end vendors, it is likely to permeate through lower tiers. This could open up billions of ear-worn computing devices to app developers, creating a market likely worth tens of billions of U.S. dollars, and it enables a wide range of software/IP/app companies to push into the market, licensing apps directly to the consumer. Examples include Dolby and DTS; algorithm vendors such as Sensory and iFlytek; and a new range of audio software/app vendors such as AncSonic, Augmented Hearing, Canary Speech, Segotia, and Thymia.
By: DocMemory Copyright © 2023 CST, Inc. All Rights Reserved