“Making it easier to create models makes it practical to build interactive environments with realistic sound effects,” remarked Doug James, a professor of computer science at Stanford who is currently focusing on this VR music challenge.
The traditional approach to modeling sounds is based on theories developed by the famed scientist Hermann von Helmholtz. The researchers instead built their algorithm around the way composer Heinrich Klein blended multiple musical notes into a single sound. It uses a computer’s graphics processor to group different vibration modes into chords, then runs a sound-wave simulation of those chords.
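To give a rough sense of that idea, here is a minimal sketch, not the Stanford team’s code, that groups a handful of toy vibration modes into “chords” and synthesizes each chord in one vectorized pass, the kind of batched work a graphics processor accelerates. Every frequency, damping value, and the two-mode grouping rule below are invented purely for illustration.

```python
# Toy sketch of the mode-to-chord idea (NOT the Stanford algorithm):
# instead of rendering each vibration mode on its own, modes are packed
# into "chords" and synthesized together in one batched operation.
import numpy as np

SAMPLE_RATE = 44_100                      # audio samples per second
DURATION = 1.0                            # seconds of sound to render
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Hypothetical vibration modes: (frequency in Hz, damping in 1/s, amplitude)
modes = [
    (220.0, 3.0, 1.00),
    (441.5, 4.0, 0.60),
    (667.2, 5.5, 0.35),
    (893.8, 7.0, 0.20),
]

def synthesize_chord(chord):
    """Render all modes in a chord as damped sinusoids in one vectorized step."""
    freqs = np.array([m[0] for m in chord])[:, None]   # shape (modes, 1)
    damps = np.array([m[1] for m in chord])[:, None]
    amps  = np.array([m[2] for m in chord])[:, None]
    waves = amps * np.exp(-damps * t) * np.sin(2 * np.pi * freqs * t)
    return waves.sum(axis=0)                           # mix the chord

# Group modes into chords of two (a stand-in for the real mode packing),
# then mix the chords into the final sound and normalize for playback.
chords = [modes[i:i + 2] for i in range(0, len(modes), 2)]
sound = sum(synthesize_chord(c) for c in chords)
sound /= np.abs(sound).max()
```

On a GPU, each chord (or all chords at once) would be rendered as one large batched array operation rather than a loop over individual modes, which is where the speedup described below comes from.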
In practical terms, the algorithm dramatically reduces the time required to model sounds. It can be many thousands of times faster than traditional modeling, which could bring an unprecedented degree of realism to VR experiences.
See the full story here: https://www.digitalmusicnews.com/2019/08/05/stanford-virtual-reality-sound/