Last year I tried this driver (and Soundflower, and a bunch of other similar stuff) because I needed to transmit my system audio + microphone via Skype (online music classes thanks to COVID). I couldn't get it to work properly for my setup, where I wanted to hear just the system audio through my headphones but route system + microphone to Skype. I ended up buying Loopback [1]; it's not cheap, but it worked pretty well, it's easy to use, and it enabled basically any kind of connection I wanted. I leave it here because it took me quite some time to find alternatives, so if anyone finds this driver doesn't cover their use case, maybe this one will help.
Literally tried using BlackHole with the same aim as you, but couldn't believe I was unable to route an input (BlackHole) to an output (DisplayPort, for example, in my case).
After hunting aggressively on Google I was stoked when I found Apple's native, in-house solution: AU Lab.
I've also seen Source-Nexus, which has plugins for Pro Tools as well as VST/AU, and lets you make multiple named drivers. It seems it would work pretty well with Loopback.
>BlackHole is a modern MacOS virtual audio driver that allows applications to pass audio to other applications with zero additional latency.
I don't think this is quite true. Each time the signal path crosses into the BlackHole Audio Server Plugin, it incurs an additional `AudioServerPlugInIOCycleInfo::mNominalIOBufferFrameSize` frames of latency.
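To put a rough number on it (figures below are illustrative, not measured from BlackHole; the actual buffer size depends on the host's IO settings):

```c
// Back-of-the-envelope for the per-crossing latency described above.
#include <stdio.h>

int main(void) {
    double sampleRate   = 48000.0; // Hz
    double bufferFrames = 512.0;   // a plausible mNominalIOBufferFrameSize
    printf("~%.1f ms added per pass through the virtual device (%g frames @ %g Hz)\n",
           bufferFrames / sampleRate * 1000.0, bufferFrames, sampleRate);
    // e.g. app -> BlackHole -> DAW -> BlackHole -> recorder adds two such hops
    return 0;
}
```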
That's not true of JACK, and it would be both a shame and a little bit silly if BlackHole hasn't learnt from JACK's design. JACK doesn't use a pattern of server->client->server->client->server; each client wakes up the next one, so the pattern is server->client->client->client->server, with zero additional latency.
Indeed, but what you describe isn’t possible with Audio Server Plugins on macOS. Something similar is possible with CoreAudio Audio Units, but that’s a separate thing.
Fun fact: Apple tries to prevent you from using virtual audio devices like BlackHole with macOS's dictation and speech-to-text features. I've been unable to get macOS to output the high-quality Siri voice to a virtual audio device for recording; it forces a lower-quality voice.
Sadly this derailed my plans to use it as a free audio transcriber... (but there are better alternatives and workarounds)
How do you trigger the dictation/TTS with Siri? I wonder if a hardware audio interface with loopback can do it - I will give it a go if you give me some steps :)
I hope this works for my usecase, as everything I have attempted so far either results in silence or completely mangled audio.
All I want on my M1 is to be able to screen record my Zoom calls (full desktop recording) outside of the Zoom application itself.
This is because I want to, as a general review tool for myself, be able to refer back to meetings. My memory has gone to hell in recent years. I want to be able to record my meeting, tag my notes with the recording name, and if I need to refer to a specific recording, throw a speech to text processor at the audio and search.
I think a lot of people take this approach of recording everything (or at least try to once the idea of recording meetings is available), but you really need to think about whether it’s going to solve the problem you think you have. Having a full recording of everything implies that at some point you have to actually process the content (take notes, etc.), so you’re really just setting yourself up to spend double the time it would have taken if proper notes were taken in the first place. You seem to be aspiring to a techie solution to this, but would it really work in practice, and be worth the effort?
Consider that 99.9% of people are able to function pretty well without having to refer back to a full recording of every meeting they’ve ever had. What’s their secret? They just pay attention and take notes the first time through.
Is it possible you have some very specific requirements that no one else does? Maybe, but I’ve landed on the idea that this is really a form of procrastination, where the “paying attention” part of the process is pushed off until sometime later, which just makes more work in the long run.
> Is it possible you have some very specific requirements that no one else does?
No. Recording meetings may be a form of procrastination for you, but recordings can be valuable for a bunch of reasons. Tools like Otter and Gong make it trivial to transcribe and scan meetings, and to jump to associated audio if need be.
> They just pay attention and take notes the first time through.
Some of us are running those meetings, are focused on the conversation instead of note-taking, and don't have extra staff for note-taking.
I'm specifically compensating for my own mental decline.
Additionally, I do take notes, and I keep a summary work journal to track what I did and key insights. My daily log is typically about 6-7 bullet points for my own use, but may reference JIRA tickets or other links with more information that is intended for others to consume as well.
As for my meetings, I take notes for actions and key ideas, but each meeting is typically less than 5-6 bullet points, unless it results in a lot of subtasks under a larger one.
I don't want to review every meeting; in fact, I don't want to review any meetings. The goal of the recording is to have a record that I can refer back to if needed. Which, incidentally, is about 5% of my meetings, so 95% will never be looked at again. And for that 5%, I'm not going to replay the entire meeting; I'm going to use something like Otter.ai to process the audio into text with timestamps, so I can skip to the relevant bits.
Cool, I love more audio drivers… no joke. However, how is this different from, say, Loopback or Soundflower? Other than that it may actually work, I’d love to see a hyper-nerdy deep dive into the intricacies, challenges, and choices that led to this clump of code coming into existence.
It's a vanilla loopback Audio Server Plugin implementation, which is a userspace audio driver mechanism on macOS. Soundflower is an actual kernel-space driver using IOKit, which is the older (and deprecated) way of achieving the same thing.
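For the curious, the user-space mechanism is just a CFPlugIn bundle that coreaudiod loads from /Library/Audio/Plug-Ins/HAL. A bare skeleton of the entry point (a sketch of the mechanism, not BlackHole's actual source; all the interesting callbacks are omitted):

```c
#include <CoreAudio/AudioServerPlugIn.h>

// The driver fills this vtable with its callbacks (Initialize, CreateDevice,
// StartIO, DoIOOperation, ...) and coreaudiod calls them from user space --
// no kext involved, unlike Soundflower.
static AudioServerPlugInDriverInterface  gDriverInterface;   // callbacks omitted here
static AudioServerPlugInDriverInterface *gDriverInterfacePtr = &gDriverInterface;
static AudioServerPlugInDriverRef        gDriverRef = &gDriverInterfacePtr;

// CFPlugIn factory function, named in the bundle's Info.plist.
void *ExampleDriver_Create(CFAllocatorRef allocator, CFUUIDRef requestedTypeUUID) {
    (void)allocator;
    if (!CFEqual(requestedTypeUUID, kAudioServerPlugInTypeUUID)) return NULL;
    return gDriverRef;
}
```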
This is cool. Will have to compare it with the Rogue Amoeba software. If you are looking for a commercial alternative, they have Audio Hijack and Loopback, which are excellent and latency-free.
I'm trying to find an explanation of how this compares to Loopback and Soundflower: what doesn't it do that the others can, and what does it do that the others can't?
Would be good to know if this can do it all and replace multiple solutions, each needing their own low-level drivers.
Anyone tried both this and Loopback? Any thoughts on relative functionality, ease of use and reliability?
Super useful utility for those odd times you need to record the system audio. Seems more stable than Soundflower, and I don’t need to do this often enough that I’d invest in Audio Hijack!
Has anyone found an easy way to route the audio to the speakers as well as through BlackHole? I use Ableton to do this, which works but feels like overkill and isn’t much good if I’m trying to tell someone else how to use it who doesn’t have Ableton. There used to be a little Apple app in some kind of extra-tools zip which would route any input to any output, but it doesn’t work on newer macOS.
Just change around some variables: `kDevice_Name`, `Manufacturer_Name`, `kPlugIn_BundleID`, `kBox_UID`, `kDevice_UID`, `kDevice_ModelUID`, and then rename the BlackHole.driver folder it spits out after the build.
You can also go up to 16 (or more?) channels instead of the stock 2, which is great for multiple participants in a jacktrip session, among other things.
That one is slightly out of date; the variable for the number of channels is now `kNumber_Of_Channels`.
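Putting those together, a rough sketch of the idea (macro names are the ones mentioned in this thread and drift between BlackHole versions, so check the header you're actually building; all values below are placeholders):

```c
// Override BlackHole's identifying constants before building a second copy of the
// driver, so it shows up as a separate device.
#define kNumber_Of_Channels  16                             // stock build is 2
#define kDevice_Name         "BlackHole Custom"
#define Manufacturer_Name    "Example Manufacturer"
#define kPlugIn_BundleID     "com.example.BlackHoleCustom"
#define kBox_UID             "BlackHoleCustom_UID"
#define kDevice_UID          "BlackHoleCustom_Device_UID"
#define kDevice_ModelUID     "BlackHoleCustom_ModelUID"
// After the build, rename the resulting BlackHole.driver bundle, copy it into
// /Library/Audio/Plug-Ins/HAL/, and restart coreaudiod (sudo killall coreaudiod).
```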
You can have a dedicated BlackHole audio device for each thing you want to mix and then stitch them together into an aggregate audio device (on Macs). Now you can use this aggregate device inside something like Ableton Live and route and mix audio from different sources.
Then you can send your Ableton mix to something like OBS Studio for example. Or even use different sets of return channels for different mixes to different places, like one for your monitors, one for OBS/live stream, one for your headphones for monitoring what you're playing perhaps.
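If you'd rather script the aggregate-device part than click through Audio MIDI Setup, CoreAudio can create one programmatically. A minimal sketch (the device UIDs are placeholders; look up the real ones in Audio MIDI Setup):

```c
// Build: clang aggregate.c -framework CoreAudio -framework CoreFoundation
#include <CoreAudio/CoreAudio.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

int main(void) {
    // Sub-devices to stitch together; UIDs are placeholders.
    const void *uids[2] = { CFSTR("BlackHole2ch_UID"), CFSTR("BlackHole16ch_UID") };
    CFMutableArrayRef subDevices = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);
    for (int i = 0; i < 2; i++) {
        CFMutableDictionaryRef sub = CFDictionaryCreateMutable(NULL, 0,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(sub, CFSTR(kAudioSubDeviceUIDKey), uids[i]);
        CFArrayAppendValue(subDevices, sub);
        CFRelease(sub);
    }

    CFMutableDictionaryRef desc = CFDictionaryCreateMutable(NULL, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("BlackHole Aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("com.example.blackhole-aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceSubDeviceListKey), subDevices);
    // Setting kAudioAggregateDeviceIsStackedKey to 1 instead creates a multi-output
    // device (the same signal goes to every sub-device, e.g. speakers + BlackHole).

    AudioObjectID aggregate = kAudioObjectUnknown;
    OSStatus err = AudioHardwareCreateAggregateDevice(desc, &aggregate);
    if (err == kAudioHardwareNoError) {
        printf("created aggregate device %u\n", (unsigned)aggregate);
    } else {
        printf("AudioHardwareCreateAggregateDevice failed: %d\n", (int)err);
    }

    CFRelease(desc);
    CFRelease(subDevices);
    return err == kAudioHardwareNoError ? 0 : 1;
}
```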
I am currently setting up a new M1 Mac for streaming using OBS Studio and was surprised that capturing audio from multiple sources other than the mic required this.
It is one of the few things I have found that “just works” on a PC compared to a Mac.
Did you try setting up an aggregate device in Audio MIDI Setup?
I do a lot of audio work with Macs, and even use blackhole on occasion, but typically something as simple as capturing audio from multiple sources can be done natively with a simple aggregate device.
I would be very interested in knowing how to set it up natively as the OBS Studio [instructions][1] that I found recommended using BlackHole along with setting up the aggregate device in Audio MIDI Setup.
I have tried to use this to no avail; the documentation (as usual) is woeful. In the end I bought a second sound card.
I want to use a DAW to handle my audio for online TTRPGs, processing voice and playing back cues when I'm the DM.
This functionality should be part of the core of Windows audio; given the rise in remote working, improving the frankly crap Windows audio should be a priority.
I did run into an issue where any WebRTC application like Google Meet would use a lower sample rate, and trying to stream in higher-sample-rate audio would come out 2-3 times slower than the original. Really weird but amusing.
Also, based on a quick scan: the mutex appears to be used for other state in the plugin.
For better performance and stability they should consider changing the mutex to an os_unfair_lock and maybe find a way to avoid dispatch_async to a global queue. And I’m not convinced the locking is correct around/within the dispatch_async blocks. But it might be fine.
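For reference, this is roughly the kind of change being suggested (a sketch under those assumptions, not BlackHole's actual code):

```c
#include <os/lock.h>

// Guard non-realtime "metadata" state with an os_unfair_lock instead of a pthread mutex.
static os_unfair_lock gStateLock  = OS_UNFAIR_LOCK_INIT;
static double         gSampleRate = 44100.0;   // example of metadata state

static void SetNominalSampleRate(double rate) {
    // Keep the critical section tiny; os_unfair_lock is non-reentrant and should
    // never be taken on the realtime IO thread.
    os_unfair_lock_lock(&gStateLock);
    gSampleRate = rate;
    os_unfair_lock_unlock(&gStateLock);
}

static double GetNominalSampleRate(void) {
    os_unfair_lock_lock(&gStateLock);
    double rate = gSampleRate;
    os_unfair_lock_unlock(&gStateLock);
    return rate;
}
```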
There is no locking on the audio path. Mutexes are fine for protecting metadata access. Don't use atomics unless it would measurably improve hot-path performance.
I was really hoping this was some kind of implementation of Eventide Blackhole or Space, but this is pretty neat on its own. Could I pass audio from VCV Rack without latency directly into Reaper?
https://rogueamoeba.com/loopback