A Standalone 3D Audio Workstation
Audiocube is a 3D DAW — unleash a new dimension of sonic creativity in your music and sound design.
No credit card required
Unlock a new sonic dimension…
✅ An Intuitive Workflow
Instantly import audio files, process and spatialize your sounds, then record and export your project as an HD .wav file.
✅ More Power Than VST Plugins
As a standalone app, Audiocube offers tools, workflows, and processes that go beyond the capabilities of VST plugins.
✅ Support You Can Count On
Our responsive support team ensures you’ll never feel stuck, with help available whenever you need it.
1000+ sound designers & musicians are using Audiocube for audio spatialization and experimentation
Audio Spatialization & Experimentation
Are 2D plugins really the best way to work with 3D audio?
Try it free!
Audiocube provides total spatial freedom
A dedicated 3D audio engine gives you a level of control, depth, and freedom that 2D software can’t.
✅ Total freedom of source and listener placement
✅ Natural & customizable acoustic simulation
✅ Quick and easy 3D audio creation
2D interfaces are bad for 3D audio production
2D DAWs and plugins don’t have the level of depth and freedom you need to fulfill your creative vision
❌ VSTs only offer a fixed listening perspective
❌ 2D software creates a barrier between you and your creativity
❌ Fiddly, difficult, and time-consuming
How To Easily Spatialize Audio In 2 Minutes
Audiocube is a 3D audio engine that expands your immersive audio workflow with a deep set of spatial and experimental tools.
Stop wasting time with fiddly VST plugins and 2D DAWs
Try it free!
Instantly fill your audio library with your own file collection, or download our included 2GB+ (and growing) custom sample packs
📁 Flexible Browser
Easily import, manage, and browse your audio file collection
📯 2GB+ Samples Included
Download our collections of custom-made samples, including synths, drums, textures, field recordings, and more…
Create sound sources and devices, and place them exactly where you want them
🔊 7 Unique Device Types
Samplers, Emitters, Tickers, Logic Boxes, Ambience Nodes, FX Zones, and Soundwalls each provide powerful functionality.
✥ Total Placement Freedom
Place your sources and listening position wherever you want in the scene, and automate their movement over time (see the sketch below).
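To make the idea concrete, here is a minimal sketch of free placement and movement automation using the browser's Web Audio API. It illustrates the general concept only; it is not Audiocube's engine or a scripting API for it, and the positions and timings are arbitrary.

```typescript
// Generic Web Audio API sketch (not Audiocube): place a source at an
// arbitrary 3D position relative to the listener, then automate its movement.
const ctx = new AudioContext();

function placeAndMove(source: AudioNode): PannerNode {
  // Position the source 4 m to the listener's left and 2 m in front.
  const panner = new PannerNode(ctx, {
    positionX: -4,
    positionY: 0,
    positionZ: -2,
    distanceModel: "inverse", // level falls off naturally with distance
  });

  // Automate movement: sweep from left to right over ten seconds.
  const now = ctx.currentTime;
  panner.positionX.setValueAtTime(-4, now);
  panner.positionX.linearRampToValueAtTime(4, now + 10);

  source.connect(panner).connect(ctx.destination);
  return panner;
}

// Usage with a plain oscillator as a stand-in sound source
// (browsers require a user gesture before audio will actually start):
const osc = new OscillatorNode(ctx, { frequency: 220 });
placeAndMove(osc);
osc.start();
```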
Adjust the Acoustic Engine
Tweak and customize the acoustic simulation algorithm to capture the type of spatialization and sonic rendering required for your project.
🔊 Advanced Acoustics
Get full control over how the acoustic engine processes reflections, occlusion, air absorption, depth, and more (a simplified sketch follows below).
🎧 HRTF Binauralization
Use a detailed binaural process to capture a perfectly immersive spatial audio scene for headphone listening.
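Audiocube's exact acoustic algorithms aren't described here, but as a rough idea of what distance-dependent air absorption can look like in code, here is a minimal Web Audio API sketch. It is an illustrative approximation, not Audiocube's engine, and the constants are invented for the example.

```typescript
// Crude air-absorption approximation (illustrative only): high frequencies
// are attenuated more as a source moves away, so each source gets a low-pass
// filter whose cutoff falls with distance.
const ctx = new AudioContext();

function makeAirAbsorptionFilter(distanceMeters: number): BiquadFilterNode {
  const maxCutoffHz = 20000; // full bandwidth when the source is next to the listener
  const cutoffHz = maxCutoffHz / (1 + 0.15 * distanceMeters); // invented roll-off constant
  return new BiquadFilterNode(ctx, { type: "lowpass", frequency: cutoffHz });
}

// Example: a source 30 m away ends up with a cutoff of roughly 3.6 kHz.
const farSourceFilter = makeAirAbsorptionFilter(30);
// someSourceNode.connect(farSourceFilter).connect(ctx.destination);
```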
Use a flexible mixer and mastering effects for some final coloration, then export your finished audio.
“Your binauralization algorithm sounds incredible! It sounds so much more natural than Dear VR Pro’s HRTF VST.”
Cross-Platform and Fully Standalone
Easy to Learn, with Powerful Depth
Audiocube is easy to set up and requires no other software. It includes a 2GB+ sound library, an intuitive interface, and personal support.
Your purchase includes both Windows and Mac versions.
You’re 2 Minutes Away From Creating More Immersive Music
Stand out from the crowd with deeper, more immersive, and more natural-sounding audio. Whether you’re making music or working on sound design, don’t miss out on the realism you need.
Expand your sound with Audiocube!
✅ No other software required – Mac or Windows
✅ Extensive tutorials and rapid support
✅ Try Audiocube For Free
Unlock a deeper level of creative freedom
It’s Not Just Another VST Plugin…
Audiocube is a standalone 3D DAW, built with a custom audio, physics, and graphics engine – enabling more depth and control than any plugin.
Move your workflow away from a cluttered and compromised VST-based setup, and embrace a modern solution for immersive audio creation and exploration.
18 Comments
tines
Is there an audio format that stores the 3D origins of each sound, so that you could theoretically play it through Airpods (or some other spatially-aware headphones) and hear the sound change as you tilt your head?
JofArnold
This is great (in theory). During lockdown I got an ambisonic mic (Rode NF-SFW1) and used it to create Dolby Atmos experiences. The workflow – including sending it to Dolby's tool every time – was such a pain. Adding additional 3D elements was especially annoying and limiting.
Unfortunately that's no longer my hobby so I can't test this for you, but it definitely scratches an itch for past me. Nice
hyperific
YouTuber Benn Jordan would probably get a kick out of this. He's a major audiophile and did a series of ambisonic ambience recordings.
roddylindsay
Nice work! Can you export for multichannel playback or is it binaural / stereo?
seltzered_
Not spatial audio, but reminded me of audioGL (2012, but a newer video posted in 2024): https://m.youtube.com/@AudioGL/videos
https://news.ycombinator.com/item?id=3579543
beeburrt
For those of us unfamiliar with the term DAW, I assume it’s Digital Audio Workstation.
At first glance I thought of: DEW
https://en.m.wikipedia.org/wiki/Directed-energy_weapon
sitkack
I assume this can do sound rendering, like simulating a conversation on a subway platform while a subway passes by?
Or singing while walking through a tunnel?
Since it has capabilities that would be hard to replicate, rather than show the tool on the landing page, I would show the output. Remove the clutter and force people to listen to what the tool can produce.
The tool, as it is now, is being marketed towards yourself: people who wanted to build that tool and know what it is. But everyone will know what it can do after listening to sample output.
poeticfolly
Very interesting! Did you write your own acoustic simulation engine for this?
ggerules
Will there be a Linux version anytime soon!?
dekhn
IIUC, I'd be able to take individual closely miked recordings of multiple different instruments and mix them into a soundspace, such that when I listen on stereo headphones, I'd be able to "locate" the sounds on a virtual stage?
(asking because I listen to a lot of live jam music in stereo and noticed that they use a stereo mix with a virtual image)
crazygringo
This seems intriguing but I'm genuinely confused.
It seems like it "bakes in" spatial audio to binaural stereo?
But who is the market for that?
I love spatial audio on my AirPods but a big feature is that it moves with my head and can even be customized for my ears.
And I certainly don't want it applied when downsizing to a mono Bluetooth speaker.
It seems like you'd need to export your final product to surround/Atmos for the way people want to, and currently do, consume spatial audio? I assume the target here is Apple Music, short films, etc.?
I mean I think the concept of the 3D DAW is great. I just want to make sure there's a product-market fit here, so you can succeed. Or is there a market I'm overlooking?
TheRealPomax
> As a standalone app, Audiocube offers tools, workflows, and processes that go beyond the capabilities of VST plugins.
But does it support VST/AU in order to load instruments rather than "samples"?
S0y
It's funny to see this now because I've been looking into audio spatialization for a couple of weeks. After a lot of research and even trying to write my own spatializer plugin, I found that game engines probably have the most complete toolset for this task. (Specifically I'm using Godot with https://valvesoftware.github.io/steam-audio/).
Steam Audio is pretty awesome in that regard because it supports HRTF and all the physically based goodies like occlusion/reflection and sound propagation. So you can get really, really immersive spatial audio.
The only downside with this solution is that you can't do offline rendering. So my question is:
Can Audiocube do offline rendering? It seems like it would be one killer feature for my use case.
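For readers unfamiliar with the term, offline rendering here means processing the audio graph faster than real time into a buffer or file rather than playing it live. A minimal Web Audio API illustration of the concept follows; it says nothing about whether Audiocube supports it.

```typescript
// Minimal illustration of offline (faster-than-real-time) rendering with the
// Web Audio API: the graph is rendered into an AudioBuffer rather than played live.
async function renderOffline(): Promise<AudioBuffer> {
  const sampleRate = 48000;
  const offline = new OfflineAudioContext({
    numberOfChannels: 2,
    length: sampleRate * 10, // ten seconds of output
    sampleRate,
  });

  // A plain oscillator stands in for a real spatialized scene.
  const osc = new OscillatorNode(offline, { frequency: 440 });
  osc.connect(offline.destination);
  osc.start();

  // Resolves once the whole graph has been processed, usually much faster
  // than ten seconds of wall-clock time.
  return offline.startRendering();
}
```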
meta-meta
This looks great! How small of an audio buffer have you been able to get down to? Any plans for an API?
I've been developing a VR spatial sound and music app for a few years with the Unity game engine, bypassing the game engine's audio and instead remote-controlling Ambisonic VSTs in REAPER. I can achieve low latency with that approach but it's a bit limited because all the tracks and routing need to be set up beforehand. There's probably a way to script it in REAPER but that sounds like an uphill battle. It would be a lot more natural to interface with an audio backend that is organized in terms of audio objects in space.
What I'd like is more flexibility to create and destroy objects on the fly. The VSTs I'm working with don't have any sort of occlusion either. That would be really nice to play with. Meta has released a baked audio raytracing solution for Quest, and that's fun for some situations but the latency is a bit too much for a satisfying virtual instrument.
Here's my project for context: https://musicality.computer/vr
thot_experiment
Really cool! I've been working on a side project that uses spatial audio, and I've been pleasantly surprised by the quality I'm getting just from the Web Audio API's HRTF spatialization. I'm sure this is leagues ahead, but it was really nice to find that I didn't really need to do anything to get decent spatial audio other than set the panner node to HRTF mode.
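For reference, the "HRTF mode" switch mentioned above is a single option on the Web Audio API's PannerNode. A minimal sketch, assuming a browser AudioContext and some existing source node:

```typescript
// PannerNode defaults to "equalpower" panning; switching panningModel to
// "HRTF" enables binaural spatialization for headphone listening.
const ctx = new AudioContext();
const panner = new PannerNode(ctx, { panningModel: "HRTF", positionX: 2, positionZ: -1 });
// sourceNode is any AudioNode you want to spatialize:
// sourceNode.connect(panner).connect(ctx.destination);
```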