
What Is Audio Beamforming?

Second-Generation Apple HomePod sitting on a table
Tyler Hayes / Review Geek
Audio beamforming systems control how sound travels through space. Beamforming is often utilized in omnidirectional speakers and noise-reducing microphones, but more impressive use cases, including "invisible headphones," are just over the horizon.

Thanks to audio beamforming, speakers and microphones can overcome a variety of problems, such as unwanted noise or poor room acoustics. But beamforming is more than just an audio-enhancement trick, and it could fundamentally change how we think about sound.

What Is Audio Beamforming?

Audio beamforming, also called acoustic beamforming, is a technique that allows you to measure and control a sound wave’s path within an environment. This technique may be utilized for several purposes, though it’s primarily a tool for audio enhancement.

Modern microphones, especially those integrated into smartphones, headphones, or smart speakers, use acoustic beamforming to remove background noise from your voice. The process is fairly simple: one microphone listens to your voice, while an additional mic (or array of mics) focuses on the background noise. That noise data is subtracted from your voice signal in real time, automatically boosting voice clarity.
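For the curious, the subtraction step can be sketched in a few lines of Python. This is a toy illustration with made-up signals, not how any real product implements it (real devices use adaptive filtering, since the reference mic never hears the noise perfectly):

```python
import numpy as np

# Toy two-mic noise subtraction. All signals here are synthetic and
# illustrative; real devices use adaptive filters, not a raw subtraction.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000                   # one second of samples

voice = np.sin(2 * np.pi * 220 * t)          # stand-in for a voice
noise = 0.5 * rng.standard_normal(t.size)    # stand-in for background hum

primary_mic = voice + noise    # the mic aimed at you hears both
reference_mic = noise          # the extra mic hears (mostly) the noise

cleaned = primary_mic - reference_mic        # subtract the noise estimate

# Measure how far each signal is from the true voice.
error_before = np.mean((primary_mic - voice) ** 2)
error_after = np.mean((cleaned - voice) ** 2)
```

In this idealized case the subtraction recovers the voice exactly; in the real world, the reference mic picks up a distorted copy of the noise, which is why the processing is so much harder.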

This audio-enhancement trick can be extended to speakers. Many soundbars, AVRs, and smart speakers use microphones to hear how audio interacts with your room (usually through a one-time setup process). From there, a computer uses the microphone data to adjust output settings, compensating for reflections, resonances, and other acoustic nastiness in your room.

Some soundbars use beamforming for virtual surround sound. Justin Duino / Review Geek

In some cases, this kind of beamforming audio enhancement is simply EQ; cutting problematic high frequencies by a few decibels can reduce audio reflections, giving you a much clearer audio signal. But more complex systems can mimic a surround sound setup, or pump out music that sounds consistent (in terms of volume and clarity) regardless of your location in the room.
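To put "a few decibels" in concrete terms, here's a tiny Python sketch of the decibel-to-amplitude math behind a simple EQ cut. The cut values are made up for illustration:

```python
# Decibel math behind a simple EQ cut. The cut values below are made up
# for illustration. A change of X dB scales amplitude by 10 ** (X / 20).
cut_db = -4.0                  # "a few decibels" of attenuation
gain = 10 ** (cut_db / 20)     # amplitude scale applied to the cut band

# A -6 dB cut roughly halves the amplitude of the affected frequencies.
half_db = -6.0
half_gain = 10 ** (half_db / 20)
```

A -4 dB cut scales the band to roughly 63% of its original amplitude, which is often enough to tame a harsh reflection without making the change obvious.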

But the most advanced beamforming systems are like magic. “Invisible headphones” is the classic example—you can use audio beamforming to send a “bubble” of sound to a specific part of the room. Anyone outside of this bubble won’t hear the sound. When combined with face-tracking cameras, this system can create a “moving bubble” that follows the listener.

To be clear, the concept of acoustic beamforming has existed for decades. But it requires advanced digital signal processing (DSP) and an array of microphones—two things that couldn't be integrated into a consumer-level device until fairly recently.

We should also note that beamforming is a longstanding part of radio, cellular, and wireless internet transmission, as it allows multiple antennas to direct and combine their output into one coherent signal.

How Does Audio Beamforming Work?

Noveto "Invisible Headphones" at CES 2022.
The Noveto N1, a failed “invisible headphone” system. Josh Hendrickson / Review Geek

Sound is a pressure wave: a vibration carried through air, water, and solid matter. When you clap your hands, the acoustic pressure causes air molecules to vibrate, along with any neighboring matter. This creates a cascade, or a "wave," in which molecules knock into each other like billiard balls, allowing sound pressure to travel away from its source.

Air molecules are naturally spaced out a bit. So, when acoustic pressure forces an air molecule to slam into its neighbors, there’s a slight increase in air pressure—the molecules are more “compressed” than normal. But the pressure quickly travels forward, dragging molecules behind it. In other words, the area that was once “compressed” is now “rarefied” and has a lower-than-normal air pressure.

These fluctuations of “compressed” and “rarefied” pressure make up a sound wave. If you look at an illustration of a sine wave, you’ll notice that it has peaks and troughs. These highs and lows correspond to the compressed and rarefied areas of the wave. (Amplitude denotes loudness or volume, while wavelength corresponds to a sound’s pitch.)
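These relationships are easy to see in code. Here's a short Python sketch that builds a pure tone as a sine wave; the sample rate, amplitude, and frequency are arbitrary illustrative values:

```python
import numpy as np

# A pure tone as a sine wave. Amplitude sets loudness; frequency
# (the inverse of wavelength) sets pitch. All values are illustrative.
sample_rate = 8000                         # samples per second
t = np.arange(sample_rate) / sample_rate   # one second of timestamps

amplitude = 0.8     # louder sounds have taller peaks and deeper troughs
frequency = 440.0   # higher frequency = shorter wavelength = higher pitch

wave = amplitude * np.sin(2 * np.pi * frequency * t)

peak = wave.max()      # a "compressed" region: pressure above ambient
trough = wave.min()    # a "rarefied" region: pressure below ambient
```

Doubling `amplitude` makes the tone louder without changing its pitch; doubling `frequency` raises the pitch an octave without changing its volume.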

A basic diagram of a sound wave.

Two sounds can happily coexist in the same space. But as you may learn when setting up a home theater or music studio, sound waves can interact with each other. And the most notable interaction, at least for our purposes, is phase cancelation.

The peaks and troughs of a sound wave correspond to changes in air pressure. So, if we want to cancel a sound, all we need to do is manipulate the air pressure to prevent any “compression” or “rarefaction.” This seems difficult, but phase cancelation offers a simple solution—create an identical sound wave, reverse its phase (swap the peaks and the troughs), and allow it to intercept the original sound wave. This smooths out the changes in air pressure and “cancels” both the original and inverse-phase sound waves.
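In code, the trick is a one-line negation. This Python sketch shows the idealized case (a real noise canceler has to match the incoming wave in real time, which is far harder):

```python
import numpy as np

# Idealized phase cancelation: a wave plus its inverted copy is silence.
t = np.arange(1000) / 1000
tone = np.sin(2 * np.pi * 5 * t)  # the original sound wave
inverted = -tone                  # same wave, peaks and troughs swapped

combined = tone + inverted           # every peak meets a trough...
residual = np.max(np.abs(combined))  # ...so nothing is left
```

Every "compressed" peak lines up with a "rarefied" trough, so the air-pressure changes sum to zero and both waves disappear.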

Phase cancelation is usually "destructive," meaning that it's an unwanted and unintentional result of poor room acoustics or incorrect speaker installation. But it's also the key concept behind noise-canceling headphones. And, as you can probably guess, beamforming audio utilizes plenty of phase cancelation. Complicated algorithms allow beamforming speakers to "cancel" audio within a room. This can create a personal "bubble" of sound around a listener, provided that cameras (or other optical systems) track the listener's head and microphones detect and correct problems with the audio signal.

That said, beamforming is usually used to broaden a speaker’s soundstage (so every seat is the “best seat in the house”) or to enhance an audio signal (by removing problematic frequencies that bounce around the room and create echoes or resonances).
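To see why geometry matters so much, here's a toy Python sketch of two speakers playing the same tone. Depending on where you stand, the two waves arrive in phase (loud) or out of phase (near silence); this interference is the raw material that beamforming algorithms exploit. It is not a real beamformer, and all numbers are illustrative:

```python
import numpy as np

# Toy interference demo, not a real beamformer: two speakers play the
# same tone, and the combined loudness depends on where you stand.
frequency = 1000.0        # Hz
speed_of_sound = 343.0    # m/s
wavelength = speed_of_sound / frequency  # about 34 cm at this pitch

def combined_wave(dist_a, dist_b):
    """Sum the tones from two speakers after their travel delays."""
    t = np.arange(1000) / 100000  # 10 ms of samples
    wave_a = np.sin(2 * np.pi * frequency * (t - dist_a / speed_of_sound))
    wave_b = np.sin(2 * np.pi * frequency * (t - dist_b / speed_of_sound))
    return wave_a + wave_b

def loudness(wave):
    return np.sqrt(np.mean(wave ** 2))  # RMS amplitude

# Equal path lengths: the waves arrive in phase and reinforce each other.
loud = loudness(combined_wave(2.0, 2.0))
# Paths differ by half a wavelength: the waves arrive out of phase and cancel.
quiet = loudness(combined_wave(2.0, 2.0 + wavelength / 2))
```

A beamforming speaker array runs this logic in reverse: by carefully delaying each driver's output, it chooses where the loud and quiet spots in the room end up.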

Do Any Products Use Beamforming Audio?

A HomePod sitting on a cabinet next to a plant.
Tyler Hayes / Review Geek

While it may sound like a futuristic technology, audio beamforming is fairly common. You’ll find it in most noise-reducing microphones, though the most notable use of beamforming audio (at least in a consumer-grade device) is Apple’s HomePod.

The HomePod contains four microphones and five speakers (eight speakers if you own the original model). These microphones and speakers face every direction, which can introduce some problems—two of the microphones will always have trouble hearing your voice (depending on where you’re standing), and objects in your room will obstruct each speaker’s audio signal, resulting in uneven volume, resonances, and reflections around the room.

Apple uses beamforming to resolve both of these problems. The HomePod listens for the location of your voice and adjusts its microphone settings accordingly—it “focuses” on the microphone that’s facing toward you, and it uses the two remaining mics to collect data for background noise cancelation, which increases the intelligibility of your voice commands.

These microphones also evaluate how the HomePod’s speakers interact with an environment. And as a result, the HomePod can automatically optimize each of its speakers to provide a consistent sound regardless of your location in the room. If a HomePod is placed next to a wall, for example, the speakers facing the wall may be turned down or equalized to limit certain reflections or resonances. (Some soundbars utilize acoustic beamforming for similar audio-enhancement features.)

For a more advanced example, there’s always the Razer Leviathan V2 Pro. It uses beamforming technology to create “invisible headphones” around the listener. Essentially, an optical system tracks the location of your head. An algorithm uses this data to digitally process an audio signal, resulting in the beamformed signal that can only be heard by you, the user.

We should reiterate that acoustic beamforming is a signal-processing technique. It requires software and is not a fully mechanical process. That said, the idea of controlled audio dispersal is nothing new. Large speakers often have a trapezoid-shaped indent around their tweeter, which directs the audio forward and reduces leakage to the left and right. And back in the day, Polk sold speakers with its patented Stereo Dimensional Array (SDA) technology, which utilized dozens of tricks (including phase cancelation) to create a "wraparound" stereo soundstage.

The Future of Beamforming Audio Is Incredible

A photo of the Razer Leviathan V2 Pro at CES 2023.
Josh Hendrickson / Review Geek

Acoustic beamforming is a complicated technology that has a lot of room to grow. Products like the HomePod are impressive and convenient, but they cannot match the sound quality of a typical pair of speakers. This is partially due to speaker design (a speaker pointed at your ear will sound better than one that’s facing a random direction), though it’s also a sign that our digital signal processing tech isn’t up to snuff.

After going through some growing pains, audio beamforming will be more useful and effective. Cylinder-shaped speakers like the HomePod will deliver improved audio quality without sacrificing their omnidirectional design (as a result, many people will prefer such speakers). And soundbars will be a lot better at mimicking a proper surround sound setup, especially in large rooms.

Beamforming will also become a more important part of typical speaker setups. Most AVRs made in the last 15 years offer some kind of auto-optimization setting, which uses a microphone to measure sound performance in your room. Large microphone arrays and optical systems could take this technology to the next level or even provide an adaptive experience that automatically adjusts to environmental changes (such as your seating position or the number of people actively looking at your TV).

But audio beamforming will be best known for its use in “invisible headphones.” The ability to broadcast sound to a certain person in the room is truly incredible, and it opens the door to a variety of scenarios. Obviously, you could use this technology to avoid wearing real headphones. But what if “invisible headphones” were integrated into your TV or your car? You could watch TV, listen to music, or take calls without bothering other people or compromising your personal privacy.

That said, large venues may benefit the most from beamforming. Auditoriums and stadiums are designed to deliver high-quality audio, but these venues aren't immune to problems: there's always a best seat in the house, and audio always spills into hallways or vendor areas. Modern signal processing and beamforming audio could mitigate these problems.

Theme parks could enjoy similar benefits. And, of course, beamforming could be used for creative purposes. Imagine if a ghost whispered in your ear during Disney’s Haunted Mansion ride, for example.

If you’re deeply interested in acoustic beamforming, you should visit a trade show. Speakers with cutting-edge beamforming technology regularly appear at CES and other events. You could even run into a few concept designs, which may never be released due to their unreliability or extreme cost.

Andrew Heinzman
Andrew is the News Editor for Review Geek, where he covers breaking stories and manages the news team. He joined Life Savvy Media as a freelance writer in 2018 and has experience in a number of topics, including mobile hardware, audio, and IoT.