3D modeling most commonly refers to building objects and environments visually, but it can also be used to create an acoustic space. One technique used to synthesize realistic audio is waveguide synthesis, which is the process of using a digital waveguide to model how sound moves through a defined space.
With the right dimensions and surface properties, a 3D simulation can reproduce how sound waves travel through a room. In waveguide synthesis, the delay lines are tuned so their lengths correspond to the dimensions of the space being modeled, producing a room acoustics model, or reverberator. A real-world example of a simple "room" would be a speaker box. The room's dimensions set the delay lengths, and filters in the feedback paths model how the walls absorb and reflect the sound.
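To make that concrete, here is a minimal sketch of the idea in ChucK (not Perry Cook's code from the session): two feedback delay lines stand in for the two dimensions of a small rectangular room, each delay time set to the wall-to-wall travel time, with a lowpass filter and a gain below one modeling the losses at each reflection. The room size, filter cutoff, and gain values are illustrative assumptions.

```chuck
// Minimal waveguide-style "room" sketch, assuming a rectangular room
// roughly 3 m x 4 m. Each feedback delay line represents one pair of
// opposite walls; the filter and gain inside the loop stand in for
// wall absorption. All numbers here are illustrative, not from the course.

343.0 => float speedOfSound;          // m/s

Impulse clap => Gain input => dac;    // dry signal straight to the output

// one feedback delay line per room dimension
input => DelayL d1 => LPF lp1 => Gain g1 => dac;
input => DelayL d2 => LPF lp2 => Gain g2 => dac;
g1 => d1;                             // close the loop: repeated reflections
g2 => d2;

// capacity first, then delay time = dimension / speed of sound
1::second => d1.max;
1::second => d2.max;
(3.0 / speedOfSound)::second => d1.delay;
(4.0 / speedOfSound)::second => d2.delay;

// per-round-trip losses: lowpass for wall damping, gain < 1 for stability
4000.0 => lp1.freq;
4000.0 => lp2.freq;
0.7 => g1.gain;
0.7 => g2.gain;

// "clap" once and listen to the decay
1.0 => clap.next;
3::second => now;
```

Longer delay lines suggest a larger room, while lowering the loop gains or the filter cutoffs makes the walls sound more absorbent.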
Below, Stanford guest instructor Perry Cook demonstrates waveguide synthesis, simulating a room model in ChucK to make a simple reverb.
This content comes from Session 5 of Physics-Based Sound Synthesis for Games and Interactive Systems. Join the course for free below, or check out Session 1.