BlackHole Audio Driver (github.com/existentialaudio)
152 points by colinprince on Dec 20, 2021 | 65 comments



Last year I tried this driver (and Soundflower, and a bunch of other similar stuff) because I needed to transmit my system audio + microphone via Skype (online music classes thanks to COVID). I couldn't get it to work properly for my setup, where I wanted to listen to just the system audio through my headphones but route system + microphone to Skype. I ended up buying Loopback [1]; it's not cheap, but it worked pretty well, it's easy to use, and it enabled basically any kind of connection I wanted. I leave it here because it took me quite some time to find alternatives, so if anyone finds this driver doesn't cover their use case, maybe this alternative is helpful.

https://rogueamoeba.com/loopback


I literally tried using BlackHole with the same aim as you, but couldn't believe I was unable to route an input (BlackHole) to an output (DisplayPort, for example, in my case).

After hunting aggressively on Google I was stoked when I found Apple's native, in-house solution: AU Lab.

It's actually really clutch.

https://www.apple.com/apple-music/apple-digital-masters/

https://www.apple.com/apple-music/apple-digital-masters/docs...


I'm using BlackHole with Hosting AU. Great for parametric equalization and re-routing audio. It's built on Audio Units but easier to use.

http://ju-x.com/hostingau.html


I've also seen Source-Nexus, which has plugins for Pro Tools as well as VST/AU, and lets you make multiple named drivers. It seems like it would work pretty well with Loopback.

https://www.source-elements.com/products/source-nexus/


Do you mind sharing the configuration?


>BlackHole is a modern MacOS virtual audio driver that allows applications to pass audio to other applications with zero additional latency.

I don't think this is quite true. Each time the signal path crosses into the BlackHole Audio Server Plugin, it suffers an additional `AudioServerPlugInIOCycleInfo::mNominalIOBufferFrameSize` amount of latency.
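
To put rough numbers on that (illustrative only; the sample rate and buffer size below are assumptions, not values taken from BlackHole):

  /* One extra IO buffer of latency per crossing into the plugin. */
  #include <stdio.h>

  int main(void) {
      double sample_rate   = 48000.0; /* Hz, assumed */
      double buffer_frames = 512.0;   /* mNominalIOBufferFrameSize, assumed */
      printf("added latency per crossing: %.1f ms\n",
             1000.0 * buffer_frames / sample_rate); /* ~10.7 ms */
      return 0;
  }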


That's not true of JACK, and it would be both a shame and a little bit silly if Blackhole hasn't learnt from JACK's design. JACK doesn't use a pattern of server->client->server->client->server; each client wakes up the next one, so the pattern is server->client->client->client->server, with zero additional latency.


Indeed, but what you describe isn’t possible with Audio Server Plugins on macOS. Something similar is possible with CoreAudio Audio Units, but that’s a separate thing.


How does JACK handle audio devices linked in a cycle?



Ironically, that code is the biggest bug in JACK1 (and is replicated, I think, in JACK2).


I'd love to hear what that bug is! It's doing a topological sort on the audio graph, right?


It's not; that's the bug. For most scenarios it doesn't make any difference.


OP care to comment on this?


Fun fact: Apple tries to prevent you from using virtual audio devices like BlackHole with macOS's dictation and speech-to-text features. I've been unable to get macOS to output the high quality Siri voice to a virtual audio device for recording. It forces a lower quality voice.

Sadly this derailed my plans to use it as a free audio transcriber... (but there are better alternatives and workarounds)


How do you trigger the dictation/TTS with Siri? I wonder if a hardware audio interface with loopback can do it - I will give it a go if you give me some steps :)


Does this use lower quality voice?

  $ say -v Samantha -o out.aiff "Hello World"
  $ open out.aiff


Yea "Samantha" isn't a Siri voice. They aren't listed in `say -v '?'`, but you can address a Siri voice directly like this (if it's installed):

  say -v "com.apple.speech.synthesis.voice.custom.siri.helena" hi

It'll error out mysteriously with the message `Open speech channel failed: -86`.


It prints the same error even for voices that I have downloaded. Is there anything I can do to make it work?


Maybe try

  sudo !!

https://xkcd.com/149/


This is a great project. I've used it extensively with Pioneer rekordbox and OBS for live streaming DJ and VJ stuff.

There are alternatives like Loopback and Soundflower, but this is free, open source, and works on all modern macOS versions and hardware.


I hope this works for my use case, as everything I have attempted so far either results in silence or completely mangled audio.

All I want on my M1 is to be able to screen record my Zoom calls (full desktop recording) outside of the Zoom application itself.

This is because I want to, as a general review tool for myself, be able to refer back to meetings. My memory has gone to hell in recent years. I want to be able to record my meeting, tag my notes with the recording name, and if I need to refer to a specific recording, throw a speech to text processor at the audio and search.


It'll work with a multi-output device. Also, in case you're not already aware, it may be illegal where you are to record without consent.


You can probably figure out a way to do this with BlackHole, but Audio Hijack will do it for sure if you just want to pay for an easy-to-set-up app.

https://rogueamoeba.com/audiohijack/


Should do. I used this and OBS to do the same.


Just start QuickTime Player and choose New Screen Recording, surely?


That won’t capture the system audio.


Will that capture audio though?


I think a lot of people take this approach of recording everything (or at least try to once the idea of recording meetings is available), but you really need to think about whether it's going to solve the problem you think you have. Having a full recording of everything implies that at some point you have to actually process the content (take notes, etc.), and you're really just setting yourself up to spend double the time it would take if proper notes were taken in the first place. You seem to be aspiring to use a techie solution to this, but would it really work in practice, and be worth the effort?

Consider that 99.9% of people are able to function pretty well without having to refer back to a full recording of every meeting they’ve ever had. What’s their secret? They just pay attention and take notes the first time through.

Is it possible you have some very specific requirements that no one else does? Maybe, but I've landed on the idea that this is really a form of procrastination, where the "paying attention" part of the process is being pushed off until sometime later, and really just makes more work in the long run.


> Is it possible you have some very specific requirements that no one else does?

No. Recording meetings may be a form of procrastination for you, but recordings can be valuable for a bunch of reasons. Tools like Otter and Gong make it trivial to transcribe and scan meetings, and to jump to associated audio if need be.

> They just pay attention and take notes the first time through.

Some of us are running those meetings, are focused on the conversation instead of note-taking, and don't have extra staff for note-taking.


You can also send the recording to people who missed the meeting - very useful for training sessions.


I'm specifically compensating for my own mental decline.

Additionally, I do take notes, and I keep a summary work journal to track what I did and key insights. My daily log is typically about 6-7 bullet points for my own use, but may reference JIRA tickets or other links with more information that is intended for others to consume as well.

As for my meetings, I take notes for actions and key ideas, but each meeting is typically less than 5-6 bullet points, unless it results in a lot of sub tasks to a larger one.

I don't want to review every meeting; in fact, I don't want to review any meetings. The goal of the recording is to have a record that I can refer back to if needed. Which, incidentally, is about 5% of my meetings. So 95% will never be looked at again. And for that 5%, I'm not going to replay the entire meeting; I'm going to use something like Otter.ai to process the audio into text with timestamps, so I can skip to the relevant bits.

So creating more work? Not at all.


Cool, I love more audio drivers… no joke. However, how is this different from, say, Loopback or Soundflower? Other than that it may actually work, I'd love to see a hyper nerdy deep dive into the intricacies, challenges and choices that led to this clump of code coming into existence.


It's a vanilla loopback Audio Server Plugin implementation, which is a userspace audio driver mechanism on macOS. Soundflower is an actual kernel-space driver using IOKit, which is the older (and deprecated) way of achieving the same thing.

The actual work is done in a few dozen lines of code (see BlackHole_DoIOOperation at https://github.com/ExistentialAudio/BlackHole/blob/master/Bl...). The rest is just setting up the object model for the OS to provide metadata about the device.
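
For a rough idea of what that hot path amounts to, here's a minimal loopback sketch. This is not the BlackHole source, just an illustration of the write-output-into-a-ring-buffer, read-it-back-as-input idea; the function names and sizes are made up:

  #include <string.h>

  #define RING_FRAMES 8192
  #define CHANNELS    2

  static float ring[RING_FRAMES * CHANNELS];

  /* App A plays to the virtual device: copy its buffer into the ring. */
  void write_output(const float *src, unsigned frames, unsigned long long start_frame) {
      for (unsigned i = 0; i < frames; i++) {
          unsigned long long pos = (start_frame + i) % RING_FRAMES;
          memcpy(&ring[pos * CHANNELS], &src[i * CHANNELS], CHANNELS * sizeof(float));
      }
  }

  /* App B records from the virtual device: copy the same frames back out. */
  void read_input(float *dst, unsigned frames, unsigned long long start_frame) {
      for (unsigned i = 0; i < frames; i++) {
          unsigned long long pos = (start_frame + i) % RING_FRAMES;
          memcpy(&dst[i * CHANNELS], &ring[pos * CHANNELS], CHANNELS * sizeof(float));
      }
  }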


This is cool. Will have to compare it with the Rogue Amoeba software. If you are looking for a commercial alternative, they have Audio Hijack and Loopback, which are excellent and latency-free.


They also issue frequent and timely updates. Their tools rely on a low-level audio component called ACE, which also gets updated regularly.


Description to save a click:

BlackHole is a modern macOS virtual audio driver that allows applications to pass audio to other applications with zero additional latency.


For people who have worked with networking before: this is not the same as a blackhole router ;-)


This confused me as well. I assume that this was meant to sink sound that you didn't want. Maybe the author was thinking "wormhole" not "blackhole"?


Sounds like a new Soundflower.


I'm trying to find an explanation of how this compares to Loopback and Soundflower. What doesn't it do that the others can, and what does it do that the others can't?

Would be good to know if this can do it all and replace multiple solutions each needing their own low level drivers.

Anyone tried both this and Loopback? Any thoughts on relative functionality, ease of use and reliability?


Super useful utility for those odd times you need to record the system audio. Seems more stable than Soundflower, and I don’t need to do this often enough that I’d invest in Audio Hijack!

Has anyone found an easy way to route the audio to the speakers as well as through Blackhole? I use Ableton to do this, which works but feels like overkill and isn't much good if I'm trying to tell someone else how to use it who doesn't have Ableton. There used to be a little Apple app in some kind of extra tools zip which would route any input to any output, but it doesn't work on newer macOS.


A macOS aggregate audio device might help? https://support.apple.com/en-ie/HT202000


Didn’t think of this, I use them all the time for other stuff too! Thanks


Your question seems to be covered here with some limitations. https://github.com/ExistentialAudio/BlackHole/wiki/Multi-Out...


Ah ha thanks!


The unsung hero of Blackhole is the ability to easily build a new one, meaning you can have infinite sets of audio devices with custom names.

https://github.com/ExistentialAudio/BlackHole/wiki/Build-wit...

Just change some variables: `kDevice_Name`, `Manufacturer_Name`, `kPlugIn_BundleID`, `kBox_UID`, `kDevice_UID`, `kDevice_ModelUID`, and then rename the Blackhole.driver folder it spits out after the build.

You can also go up to 16 (or more?) channels instead of the stock 2, which is great for multiple participants in a JackTrip session, among other things.

https://github.com/ExistentialAudio/BlackHole/wiki/Change-th...

That one is slightly out of date. The variable for the number of channels is now called `kNumber_Of_Channels`.
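
Putting those two customizations together, a sketch of what the edited constants might look like (the names are the ones mentioned above, but the values are just examples; check the current BlackHole source for the exact spelling and format):

  #define kDevice_Name         "BlackHole DAW Bus"
  #define Manufacturer_Name    "My Studio"
  #define kPlugIn_BundleID     "audio.example.BlackHoleDAWBus"
  #define kBox_UID             "BlackHoleDAWBus_Box_UID"
  #define kDevice_UID          "BlackHoleDAWBus_Device_UID"
  #define kDevice_ModelUID     "BlackHoleDAWBus_Model_UID"
  #define kNumber_Of_Channels  16   /* instead of the stock 2 */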

You can have a dedicated Blackhole audio device for each thing you want to mix and then stitch them together into an aggregate audio device (on Macs). Now you can use this aggregate device inside something like Ableton Live, and route and mix audio from different sources.

Then you can send your Ableton mix to something like OBS Studio for example. Or even use different sets of return channels for different mixes to different places, like one for your monitors, one for OBS/live stream, one for your headphones for monitoring what you're playing perhaps.

Multiple Blackholes + aggregate audio device = A+ happy audio routing


Had to use this the other day for a video I'm releasing soon. Incredibly useful and surprised this isn't built into streaming software.


I am currently setting up a new M1 Mac for streaming with OBS Studio and was surprised that capturing audio from sources other than the mic required this.

It is one of the few things I have found that “just works” on a PC compared to a Mac.

I am glad this exists, thank you!


Did you try setting up an aggregate device in Audio MIDI Setup?

I do a lot of audio work with Macs, and even use blackhole on occasion, but typically something as simple as capturing audio from multiple sources can be done natively with a simple aggregate device.


We use Streamlabs OBS, and their support page walks you through setting up BlackHole on a Mac.

Where would I go to find info on setting this up natively? I've looked before but clearly missed something.


I would be very interested in knowing how to set it up natively as the OBS Studio [instructions][1] that I found recommended using BlackHole along with setting up the aggregate device in Audio MIDI Setup.

[1]: https://obsproject.com/forum/resources/mac-desktop-audio-usi...


If you're looking for a Windows equivalent, check out VoiceMeeter. It's a very capable audio routing engine, including ASIO support for negligible latency.


I have tried to use this to no avail; the documentation (as usual) is woeful. In the end I bought a second sound card.

I want to use a DAW to handle my audio for online TTRPGs, processing voice and playing back cues when I'm DMing.

This functionality should be part of core Windows audio; given the rise in remote working, improving the frankly crap Windows audio should be a priority.


I did run into an issue where any WebRTC application like Google Meet would run at a lower sample rate, and trying to stream in higher-sample-rate audio would come out 2-3 times slower than the original. Really weird but amusing.


Would this be similar to VoiceMeeter Banana for Windows?

Reference: https://vb-audio.com/Voicemeeter/banana.htm


It did solve my audio issue when broadcasting with OBS on my M1 MacBook.


Based on a quick scan through the code, there are a lot of mutexes there, e.g. for reference counts, that could be replaced with atomics.
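
A sketch of that suggestion (illustrative only, not BlackHole's code): a reference count maintained with C11 atomics instead of taking a mutex around a plain integer:

  #include <stdatomic.h>

  static atomic_uint ref_count = 1;

  void object_retain(void) {
      atomic_fetch_add_explicit(&ref_count, 1, memory_order_relaxed);
  }

  void object_release(void) {
      /* Previous value 1 means this call dropped the last reference. */
      if (atomic_fetch_sub_explicit(&ref_count, 1, memory_order_acq_rel) == 1) {
          /* free the object's resources here */
      }
  }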


Also based on a quick scan: the mutex appears to be used for other state in the plugin.

For better performance and stability they should consider changing the mutex to an os_unfair_lock and maybe find a way to avoid dispatch_async to a global queue. And I’m not convinced the locking is correct around/within the dispatch_async blocks. But it might be fine.
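
For reference, a minimal sketch of that swap (the state field here is hypothetical, not one of the plugin's actual fields): guarding a piece of metadata with os_unfair_lock instead of a pthread mutex:

  #include <os/lock.h>

  static os_unfair_lock state_lock = OS_UNFAIR_LOCK_INIT;
  static unsigned device_sample_rate = 48000; /* hypothetical plugin state */

  void set_sample_rate(unsigned rate) {
      os_unfair_lock_lock(&state_lock);
      device_sample_rate = rate;
      os_unfair_lock_unlock(&state_lock);
  }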


There is no locking on the audio path. Mutexes are fine for protecting metadata access. Don't use atomics unless it would measurably improve hot-path performance.


I've been using this for the last 12 months without issue. It works flawlessly. Thank you to the devs!


I spoke with Devin, who I believe wrote this, about a potential side project. Super nice guy.


I was really hoping this was some kind of implementation of Eventide Blackhole or Space, but this is pretty neat on its own. Could I pass audio from VCV Rack without latency directly into Reaper?


You could also have been doing this with JACK since Rack was first released.


Yes



