Opinion: PS5 and Xbox Series X audio upgrades will be as impressive as their visuals

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social).


With the upcoming consoles, audio technology will finally get a generational leap alongside visual fidelity.

One of the most intriguing parts of the PS5 technical announcement was the reveal of the Tempest Engine. System architect Mark Cerny quite rightly said that audio hasn’t received the same attention as visual upgrades over the years, despite being an essential part of the experience.

Sony started its work on a 3D audio system out of necessity for its PlayStation VR headset. However, the PS5’s new Tempest Engine supports hundreds of higher-quality sound sources – compared to around 50 from the “modern-day standard bearer” PSVR.

Take rain as one example. Rather than a generic rain sound, individual droplets could be heard depending on where they land. Given what developers were able to achieve using binaural sound in games like Hellblade, it’s exciting to consider the future possibilities with such technology.
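To make the idea concrete, here is a minimal sketch in Python of how an object-based mix might place hundreds of individually positioned droplet sounds, each attenuated and panned according to where it lands relative to the listener. It is illustrative only and is in no way based on Sony’s actual Tempest Engine API, which has not been published; the sample rate, attenuation and pan laws are assumptions.

```python
# Hypothetical sketch, not the Tempest Engine: many individually positioned
# "droplet" sources mixed into a stereo buffer with inverse-distance
# attenuation and equal-power panning. Real object-based engines add HRTF
# filtering, reflections and more, but the per-source idea is the same.
import math
import random

SAMPLE_RATE = 48000
LISTENER = (0.0, 0.0, 0.0)  # listener at the origin, facing +z

def droplet_impulse(length=240):
    """A short decaying noise burst standing in for a single raindrop sound."""
    return [random.uniform(-1, 1) * math.exp(-i / 40.0) for i in range(length)]

def mix_droplet(buffer_l, buffer_r, position, start_sample):
    """Mix one droplet into the stereo buffers according to its 3D position."""
    dx, dy, dz = (position[i] - LISTENER[i] for i in range(3))
    distance = max(math.sqrt(dx * dx + dy * dy + dz * dz), 0.1)
    gain = 1.0 / distance                      # inverse-distance attenuation
    azimuth = math.atan2(dx, dz)               # left/right angle (elevation ignored)
    pan = (azimuth / math.pi + 1.0) / 2.0      # 0 = hard left, 1 = hard right
    gain_l = gain * math.cos(pan * math.pi / 2)  # equal-power pan law
    gain_r = gain * math.sin(pan * math.pi / 2)
    for i, sample in enumerate(droplet_impulse()):
        idx = start_sample + i
        if idx < len(buffer_l):
            buffer_l[idx] += sample * gain_l
            buffer_r[idx] += sample * gain_r

# One second of "rain": hundreds of droplets, each landing somewhere different.
left = [0.0] * SAMPLE_RATE
right = [0.0] * SAMPLE_RATE
for _ in range(500):
    landing_spot = (random.uniform(-10, 10), 0.0, random.uniform(-10, 10))
    mix_droplet(left, right, landing_spot, random.randrange(SAMPLE_RATE - 240))
```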

Of course, everyone’s hearing is unique, and getting 3D audio to sound right for each individual is no simple feat.

The Tempest Engine operates using the Head-Related Transfer Function (HRTF). Each person’s HRTF is unique to them and determines how they perceive sound. Sony modelled HRTFs for around one hundred people and has come up with five presets for the PS5’s launch, allowing players to choose the one that sounds best to them.
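Conceptually, applying an HRTF comes down to convolving a mono source with a pair of per-ear impulse responses (HRIRs) measured for a given direction, and picking a preset effectively means picking the HRIR set that best matches your ears. The sketch below illustrates the idea with dummy placeholder HRIRs; it is not Sony’s implementation, and real HRIRs come from measured databases rather than the synthetic values used here.

```python
# Minimal sketch of HRTF-based binaural rendering, independent of the PS5's
# actual implementation. The HRIRs below are dummy placeholders; in practice
# they are measured per direction (and ideally per person).
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear impulse responses to get stereo out."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Dummy source and dummy HRIRs (placeholders, not measured data).
mono = np.random.randn(48000)                 # one second of noise at 48 kHz
hrir_left = np.exp(-np.arange(256) / 64.0)    # fake left-ear impulse response
hrir_right = np.roll(hrir_left, 8) * 0.8      # crude interaural delay and level cue

stereo = render_binaural(mono, hrir_left, hrir_right)
print(stereo.shape)  # (48255, 2): input length + HRIR length - 1 samples per ear
```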

Microsoft is also building next-generation audio technology for the Series X, which it calls Project Acoustics.

This is how Microsoft describes Project Acoustics:

“Project Acoustics is a wave acoustics engine for 3D interactive experiences. It models wave effects like occlusion, obstruction, portaling and reverberation effects in complex scenes without requiring manual zone markup or CPU intensive ray tracing. It also includes game engine and audio middleware integration.

Project Acoustics’ philosophy is similar to static lighting: bake detailed physics offline to provide a physical baseline, and use a lightweight runtime with expressive design controls to meet your artistic goals for the acoustics of your virtual world. Ray-based acoustics methods can check for occlusion using a single source-to-listener ray cast, or drive reverb by estimating local scene volume with a few rays. But these techniques can be unreliable because a pebble occludes as much as a boulder.

Rays don’t account for the way sound bends around objects, a phenomenon known as diffraction. Project Acoustics’ simulation captures these effects using a wave-based simulation. The acoustics are more predictable, accurate and seamless. Project Acoustics’ key innovation is to couple real sound wave-based acoustic simulation with traditional sound design concepts. It translates simulation results into traditional audio DSP parameters for occlusion, portaling, and reverb. The designer uses controls over this translation process.”
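The “pebble occludes as much as a boulder” point is easy to demonstrate. The sketch below implements the naive single-ray occlusion test that Microsoft is arguing against, not Project Acoustics itself, using a hypothetical scene of sphere obstacles: because the test is binary, a 2 cm pebble sat on the line of sight occludes exactly as much as a 2 m boulder, and diffraction around either is lost entirely.

```python
# Sketch of a single source-to-listener ray cast occlusion test (the approach
# Microsoft contrasts against, NOT Project Acoustics). Scene and sizes are
# hypothetical; obstacles are modelled as spheres for simplicity.
import numpy as np

def ray_hits_sphere(origin, target, centre, radius):
    """Does the segment origin->target pass through a sphere obstacle?"""
    d = target - origin
    f = origin - centre
    a = d @ d
    b = 2 * (f @ d)
    c = f @ f - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False                              # the line misses the sphere
    t1 = (-b - np.sqrt(disc)) / (2 * a)
    t2 = (-b + np.sqrt(disc)) / (2 * a)
    return (0 <= t1 <= 1) or (0 <= t2 <= 1)       # hit lies within the segment

source = np.array([0.0, 1.0, 0.0])
listener = np.array([10.0, 1.0, 0.0])

pebble = (np.array([5.0, 1.0, 0.0]), 0.02)        # 2 cm radius, on the ray
boulder = (np.array([5.0, 1.0, 0.0]), 2.0)        # 2 m radius, same spot

for name, (centre, radius) in [("pebble", pebble), ("boulder", boulder)]:
    occluded = ray_hits_sphere(source, listener, centre, radius)
    print(name, "occludes" if occluded else "does not occlude")
# Both print "occludes": a single ray cannot tell them apart, which is why a
# wave-based simulation gives more plausible occlusion and diffraction.
```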

So while we’re getting excited about next-generation visuals, let’s not forget we’re finally getting a leap in audio technology to match.

(Photo by kyle smith on Unsplash)

