Thanks for expressing your interest in using the Resonance Audio SDK for the Web (formerly Songbird). We don't currently offer 5.1 rendering as an available option. However, we do offer .ambisonicOutput, which outputs ambisonic content directly; this can be rendered to a 5.1 target using 3rd-party tools (http://www.radio.uqam.ca/ambisonic/b_g.html). If we see a large need for 5.1 rendering, it may be worth posting an issue to Omnitone (https://github.com/GoogleChrome/omnitone/issues), which is our ambisonic renderer on the web. :)
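To make the "render ambisonics to a 5.1 target" idea concrete, here's a rough sketch of a first-order projection decode. This is illustrative only, not what the linked UQAM tools do; I'm assuming FuMa-style channel weights (W, X, Y), horizontal-only decoding, and speaker azimuths for a common 5.1 layout (LFE left out, since it's usually fed separately).

```javascript
// Illustrative first-order B-format -> 5.1 speaker-feed projection decode.
// Assumes FuMa weighting (W carries a 1/sqrt(2) factor); real decoders
// use more sophisticated methods (dual-band, energy-preserving, etc.).

// Speaker azimuths in degrees for a common 5.1 layout.
const SPEAKERS_5_1 = { L: 30, R: -30, C: 0, Ls: 110, Rs: -110 };

// Decode one sample frame of horizontal B-format to per-speaker gains.
function projectionDecode(W, X, Y, speakers) {
  const out = {};
  for (const [name, azDeg] of Object.entries(speakers)) {
    const az = (azDeg * Math.PI) / 180;
    // Basic projection: feed = W/sqrt(2) + X*cos(az) + Y*sin(az)
    out[name] = W * Math.SQRT1_2 + X * Math.cos(az) + Y * Math.sin(az);
  }
  return out;
}

// A source encoded straight ahead (az = 0) should favour the centre speaker.
const feeds = projectionDecode(Math.SQRT1_2, 1, 0, SPEAKERS_5_1);
```

In a real pipeline you'd pull the four channels off `.ambisonicOutput` with a ChannelSplitterNode and apply this per sample block, but the per-speaker math is the part that matters here.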
Got it, thanks. Do other platforms support 5.1+? Also, can Resonance be used in wasm projects? Wondering how practical it would be to use binaural output for the web, then recompile and port to native platforms if 5.1+ is needed. Wasm at least would eliminate the need for a rewrite from JS.
Hi! Just to let you know, I've decided to add support for stereo, 5.1 and 7.1 speaker layouts in the Web version of Resonance Audio. https://github.com/resonance-audio/resonance-audio-web-sdk/i... Follow the repo to find out when the work is completed (hopefully soonish :) )
The Resonance Audio SDK is available for a variety of platforms, including Unreal, Unity, FMOD, WWISE, Android, iOS, the Web, and VSTs. The web version can be made compatible with any other code that uses the Web Audio API. Unfortunately, none of our APIs currently supports multi-channel surround output as you're describing.
Haha. Actually, you might have had your headphones on correctly all along. S was "Source" and L was "Listener". I've added some clarification to the examples. Thanks for testing it out!
We're using a room acoustics model that captures early and late reflections based on the acoustic properties (dimensions and materials) of the room. :)
Hi Science404, yes indeed we're launching with the standard shoebox for now, but obviously we're thinking about the future too. :) Currently we calculate listener-based 1st-order reflections, optimized for performance. Once the ecosystem out there gets faster, we can explore fancier methods. ;) You can see this in EarlyReflections.js
Songbird renders stereo-out using Omnitone internally, so Android/mobile is certainly supported. Feel free to file any issues you have at https://github.com/Google/songbird/issues.
Hello everyone, thanks for checking out the new repository. I've resolved the CDN link issues, but feel free to file any more issues at https://github.com/Google/songbird/issues. Looking forward to seeing all the great stuff you all make with it. :)
Thanks! I'm confused about what this is, exactly, from the description:
> Songbird is a JavaScript API that supports real-time spatial audio encoding for the Web using Higher-Order Ambisonics (HOA). This is accomplished by attaching audio input to a Source which has associated spatial object parameters. Source objects are attached to a Songbird instance, which models the listener as well as the room environment the listener and sources are in. Binaurally-rendered ambisonic output is generated using Omnitone, and raw ambisonic output is exposed as well.
My confusion is that the Web Audio API [1] also supports real-time spatial audio for the Web [2]. It looks like Ambisonics is a format that encodes spatial audio into a fixed set of audio channels, rather than just playing audio into PannerNodes directly.
Some questions: (1) Is Songbird indeed an alternative to the PannerNode API like I'm suspecting? (2) If so, why would you want to downmix your audio into a set of intermediate channels, rather than play each source directly into a PannerNode? (3) Is there any advantage to using Omnitone, which I suspect does the HRTFs, rather than using a PannerNode and its HRTFs directly?
Thanks for your interest! Let me try to clarify and answer your questions:
1. Songbird is indeed an enhanced alternative to PannerNode.
2. It internally works with ambisonics, but outputs stereo (we use Omnitone internally to render the multichannel audio down into a stereo track).
The general reason people use ambisonics instead of direct HRTF rendering is that ambisonics allows the entire sound field to be rotated prior to rendering, so the listener can easily turn their head without you having to adjust the HRTFs for every incoming source.
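A tiny sketch of why that rotation is cheap: a first-order horizontal encode and a whole-field yaw rotation are just a few multiplies, independent of the number of sources. The function names here are mine for illustration, not Songbird's API, and I'm assuming FuMa-style weights.

```javascript
// First-order horizontal encode of a mono sample at azimuth `az`
// (FuMa-style weighting; hypothetical helper, not Songbird's API).
function encode(sample, az) {
  return {
    W: sample * Math.SQRT1_2, // omnidirectional channel
    X: sample * Math.cos(az),
    Y: sample * Math.sin(az),
  };
}

// Rotating the whole sound field by the listener's yaw is a single 2x2
// rotation of (X, Y) -- no per-source work, no HRTF swapping.
function rotateField(field, yaw) {
  return {
    W: field.W, // W is rotation-invariant
    X: field.X * Math.cos(yaw) - field.Y * Math.sin(yaw),
    Y: field.X * Math.sin(yaw) + field.Y * Math.cos(yaw),
  };
}

// Rotating the field by `yaw` matches re-encoding the source at az + yaw.
const rotated = rotateField(encode(1, 0.3), 0.5);
const reencoded = encode(1, 0.8);
```

With direct HRTF panning you'd instead have to update every source's panner on each head movement; here the HRTF stage only ever sees the rotated ambisonic bed.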
The reason we feel Songbird is an upgrade to PannerNode is three-fold:
One, you can control the quality of the localization/spatialization effect by adjusting ambisonicOrder (1st to 3rd, atm).
Two, PannerNode is costly: 2 convolutions per source, while Songbird uses a fixed number of convolutions regardless of the number of sources, so you end up getting more for less.
Three, PannerNode doesn't support any sort of room modelling and Songbird produces spatialized (ambisonic) room reflections and reverberation.
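A back-of-envelope version of point two. The exact counts depend on the renderer's internals, so treat these numbers as assumptions: ~2 HRTF convolutions per ear-pair per source for PannerNode, versus one convolution per ear for each of the (order+1)² ambisonic channels.

```javascript
// Illustrative convolution counts (assumed costs, not measured ones).
// PannerNode with HRTF panning: ~2 convolutions (left/right ear) per source.
const pannerConvolutions = (numSources) => 2 * numSources;

// Ambisonic binaural rendering: each of the (order+1)^2 channels is
// convolved once per ear -- a fixed cost regardless of source count.
const ambisonicConvolutions = (order) => 2 * (order + 1) ** 2;

console.log(pannerConvolutions(10));   // 20
console.log(ambisonicConvolutions(1)); // 8 (fixed, even for 100 sources)
```

Under these assumptions, first-order ambisonics breaks even at 4 sources and wins beyond that; third order (32 convolutions) pays off once you have more than 16 sources.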
Should Songbird users be concerned about Creative's spatial audio patents? Will Google provide legal counsel if I get sued for using Songbird to provide spatial audio in a game?
"On March 5, 1998 Creative Labs sued Aureal for patent infringement. Aureal countersued because they believed Creative was guilty of patent infringement. After numerous lawsuits Aureal won a favorable ruling in December 1999,[1] which vindicated Aureal from these patent infringement claims, but the legal costs were too high and Aureal filed for bankruptcy. On September 21, 2000, Creative acquired Aureal's assets from its bankruptcy trustee for US$32 million. The purchase included patents, trademarks, other property, as well as a release to Creative from any infringement by Creative of Aureal's intellectual property including A3D. The purchase effectively eliminated Creative's only competition in the gaming audio market. It also eliminated any requirements for Creative to pay past or future royalties as well as damages for products which incorporated Aureal's technology."
Aureal's tech was based on HRTFs, though, which are covered by a different (now expired, IIRC) patent than Ambisonics.