meta-meta | 1 year ago
I've been developing a VR spatial sound and music app in the Unity game engine for a few years, bypassing the engine's built-in audio and instead remote-controlling Ambisonic VSTs in REAPER. I can achieve low latency with that approach, but it's somewhat limited because all the tracks and routing need to be set up beforehand. There's probably a way to script it in REAPER, but that sounds like an uphill battle. It would be much more natural to interface with an audio backend organized in terms of audio objects in space.
What I'd like is more flexibility to create and destroy objects on the fly. The VSTs I'm working with don't offer any sort of occlusion either; that would be really nice to play with. Meta has released a baked audio ray-tracing solution for Quest, which is fun in some situations, but the latency is a bit too high for a satisfying virtual instrument.
Here's my project for context: https://musicality.computer/vr
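For what it's worth, REAPER can be remote-controlled over OSC without any in-DAW scripting, which is one way to poke at routing from an external app. A minimal stdlib-only sketch, assuming REAPER's OSC control surface is enabled (Preferences > Control/OSC/web) and listening on UDP port 8000, and that your OSC pattern config maps `/track/<n>/volume` to track volume (the address patterns depend on the pattern file you load, so treat these as placeholders):

```python
# Hand-encode an OSC message (stdlib only) and send it to REAPER's
# OSC control surface over UDP. Addresses and port are assumptions;
# adjust to match your REAPER OSC pattern config.
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate/pad to a 4-byte boundary, as the OSC spec requires."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying a single float32 argument."""
    return (osc_pad(address.encode("ascii"))   # address pattern
            + osc_pad(b",f")                   # type tag: one float
            + struct.pack(">f", value))        # big-endian float32

def send_track_volume(track: int, volume: float,
                      host: str = "127.0.0.1", port: int = 8000) -> bytes:
    """Fire-and-forget a volume change for the given track number."""
    msg = osc_message(f"/track/{track}/volume", volume)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, (host, port))
    return msg

# Example: encode a message setting track 1's fader to 0.5.
packet = osc_message("/track/1/volume", 0.5)
```

The same UDP send loop works from a Unity C# script, so the encoding above is the only REAPER-specific part. It doesn't solve dynamic track creation, though, which is the real limitation you describe.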
noahfk | 1 year ago
What you’re working on sounds really cool, I’ll have a look at it!
It sounds like Audiocube offers the kind of features you need, although it doesn't have real-time audio input yet (I'm working on it and have it partially working).
ganonm | 1 year ago
https://valvesoftware.github.io/steam-audio/