That describes every music-related purchase I’ve made in the past 20 years. I’m glad I’ve done a small part to support some creative people, but aside from that I’ve got a bunch of pedals I’ve used once.
I played piano for 9 years as a child, and then stopped when I went to college (because no piano). But I taught myself to play guitar in college (boredom + a found guitar + music know-how). I'm in my mid-40s now and have done very little musically in 20 years. I really really really want to play more music, but I'll probably end up saying that on my deathbed (in the past tense). Maybe downloading this thing and plugging a guitar or piano up to a computer will change that.
I'm a software developer right now, but I worked with DAWs as a producer for more than 5 years. You can't even imagine how frustrating working with a Digital Audio Workstation can be. One messy plug-in and you can lose hours and hours of work. Preset management is a nightmare, there are so many things they could do to move forward, but the sequencer market is stalled and hasn't moved in years.
Imagine if they applied something like a git versioning system to music projects... I don't even know if the VST interface can be used or whether it's licensed somehow from Steinberg.
Also consider that there are no good audio drivers for Linux (like Asio for example) so you're almost forced to stay in windows or Mac...
No plug-in or DAW has a CLI... I could go on for hours...
I'm doing some digital audio processing for a startup idea and the only thing I've come up with is using sox through a Python API.
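Concretely, the wrapper mostly just assembles a sox effects chain. A minimal sketch, assuming the sox binary is installed (the filenames and the particular effect chain are placeholders):

    import subprocess

    # Minimal sketch: trim to the first 30 seconds, resample to 48 kHz,
    # and normalize the peak level. Filenames are placeholders.
    subprocess.run(
        ["sox", "take.wav", "take_out.wav",
         "trim", "0", "30",   # keep the first 30 seconds
         "rate", "48k",       # resample
         "norm", "-1"],       # normalize peak to -1 dB
        check=True,
    )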
> Also consider that there are no good audio drivers for Linux (like Asio for example) so you're almost forced to stay in windows or Mac...
This is false.
> Imagine if they applied something similar to a git versioning system to music projects.
People have done this. Using git itself is a little problematic because it is very line-oriented and most project file formats for DAWs are not.
Regarding plugins, I know that I'm not the only lead developer of a DAW who, if they possibly could, would refuse to support plugins entirely. The problem is that most users want more functionality than a DAW itself could feasibly provide (they also sometimes like to use the same functionality (plugin) in different DAWs or different workflows).
There are things close to DAW functionality that have a CLI (such as ecasound). You can also run plugins from the command line by using standalone plugin hosts. You can use oscsend(1) to control plugins inside several different plugin hosts.
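For example (jalv is a standalone LV2 host; the OSC port and path below are purely illustrative, since not every host speaks OSC and each exposes its own paths):

    # host a plugin outside any DAW (the URI is the LV2 example amplifier)
    jalv http://lv2plug.in/plugins/eg-amp

    # from another shell, poke a control via OSC; port and path are
    # made up here and depend entirely on the host
    oscsend localhost 7770 /eg-amp/gain f -6.0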
It sounds to me as if you've worked with a relatively small number of DAWs on only Windows and macOS and are not really aware of the breadth or depth of the "field".
> > Also consider that there are no good audio drivers for Linux (like Asio for example) so you're almost forced to stay in windows or Mac...
> This is false.
This was my immediate thought as well. Not sure what level we're talking here, so sorry if I'm addressing the wrong part of the stack, but JACK on Linux has been a great experience for me in terms of latency and ease of use. I run into way more day-to-day problems on Windows.
What feature specifically are you missing on Linux?
Re: plugins, DAWs with VST sandboxing are great. I use Bitwig, and I've never lost work due to a plugin crash.
>Re: plugins, DAWs with VST sandboxing are great. I use Bitwig, and I've never lost work due to a plugin crash.
Exactly, the original thread reads like it was written by someone who hasn't touched a modern DAW in the last 8 years or so. Even Renoise has multicore support with sandboxed plugins, so one of my ancient free shitty VSTs doesn't bring down the whole system.
> Using git itself is a little problematic because it is very line-oriented and most project file formats for DAWs are not.
Ardour and Reaper use plaintext project formats that work well with Git, at least for basic versioning.
> Regarding plugins, I know that I'm not the only lead developer of a DAW who, if they possibly could, would refuse to support plugins entirely. The problem is that most users want more functionality than a DAW itself could feasibly provide (they also sometimes like to use the same functionality (plugin) in different DAWs or different workflows).
I think the answer to this would be something like Reaper's "JS" plugins, which are written in a small compiled language and distributed as source code. Compared to "JS", it would need to: 1) be open source; 2) be a better language; and 3) support pretty skeuomorphic graphics ('cause people seem to really want that in their plugins). Ardour seems to be working on something like this using Lua (don't know about the graphics, or if the plugins could be supported in other DAWs).
Ardour uses XML for its session format, which is not a line-oriented format. Git can handle it most of the time, but it is not ideal. Something that groks the concept of XML nodes would do a better job (i.e. fewer conflicts during merge resolution).
Ardour comes with a small set of basic "curated" plugins written in C or C++, that are "blessed" by us. Writing DSP plugins in Lua is also possible, but generally discouraged and, as you guessed, you can't provide a dedicated GUI for them, nor can they be used elsewhere (same limitation as Reaper's Jesusonic plugins).
However, even if those details were improved, the idea that a DAW manufacturer is going to be able to supply the precise EQ that demanding users want, let alone noise reduction, polyphonic pitch correction, and so, so much more, strikes me as unrealistic.
Note that git's diffs and merges are customisable. If somebody wants to put in the effort for a particular format, they can write tools that perform these two specific tasks, tell git "here are the tools for this repo", and get the benefits of improved tooling. That could definitely be worth doing for a DAW project versioned with git.
Git even understands that you can neatly summarise a change in text (for display, e.g. as part of the change log) even when that summary is not actionable (the data stored to implement the change is different). I believe git's man pages give an example of extracting EXIF from a JPEG, so that your git tools report the change as "Photo of Pam and mummy" to "Cropped image of Pam" when it's actually a huge binary change that is unintelligible to the reader.
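Concretely, both mechanisms are wired up via .gitattributes plus a couple of config entries. In this sketch, session-summary and session-merge are hypothetical helpers you would have to write; only the git plumbing around them is real:

    # .gitattributes: route session files through custom diff/merge tools
    *.ardour diff=session merge=session

    # .git/config: git just needs the (hypothetical) helpers on $PATH
    [diff "session"]
        textconv = session-summary        # session file -> readable summary
    [merge "session"]
        name = XML-aware session merge
        driver = session-merge %O %A %B   # ancestor, ours, theirs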
The problem is quite simple: if it isn't supported by the base installation, it might as well not exist. GitHub/GitLab is how most companies and people now experience git, and afaik you can't add those custom drivers to those platforms easily, if at all.
The tools don't preserve the literal contents of the file; only the JSON / XML semantics. This makes them unpopular. (Source: I decided not to use them because of this; I needed hashes to remain stable for some reason I can't quite recall.)
You could expose enough GTK bits to provide an event loop to the LGI Lua library. It's GObject Introspection for Lua. Since you already use these libs, it would not make Ardour any bigger.
I am not saying it's a great idea to mix GUI and realtime DSP in the same thread, but it could be supported if you see some demand there.
There's really no technological reason for not allowing Lua to create GUIs within Ardour. It's more a question of whether or not we actually want to. Either way, you would not be mixing GUI and realtime code - the architecture of Ardour doesn't allow that.
Nice article Paul. It is motivating me to take another look at Ardour. I have also worked on some very large audio/video authoring tools. When we made the new lighting tools at Dreamworks, you could only create the UI using the scripting system. I am not sure if that discipline is still observed, but it was a good way to make sure that there wasn't a 2nd class citizen status given to extending the UI.
Thanks. Thing is, in an open source system, there are no 2nd class citizens caused by the "eat your own dogfood" rules (or lack thereof). You want to change the GUI in Ardour? The code is all right there.
The question I was raising (which I think you understand) is whether most users care that this is possible if it can't be done without a rebuild (compiling).
Right. In the case of the studio tools, artists could extend the application but not touch the core. Much like Ardour, having to build the application from source was more complex and who wants to make their users do that?
I've been reading all of your comments in this thread and the links provided carefully, thank you for all the great work on Ardour and the degree of forethought that goes into it, it really shows in the final product.
Just use DVC for any non-line-oriented files. If you don't want cloud backing stores, it's pretty easy to set up using the "local remote" pointing to a custom path (e.g. External drive) if you are the only person working on it. Otherwise I'd recommend symlinks.
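A rough sketch of that setup (paths and filenames are just examples):

    dvc init
    dvc remote add -d externaldrive /mnt/external/dvc-store   # the "local remote"

    dvc add MySong.als           # writes MySong.als.dvc, which IS line-oriented
    git add MySong.als.dvc .gitignore
    git commit -m "track session binary via DVC"
    dvc push                     # copies the blob to the external drive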
> Regarding plugins, I know that I'm not the only lead developer of a DAW who, if they possibly could, would refuse to support plugins entirely. The problem is that most users want more functionality than a DAW itself could feasibly provide (they also sometimes like to use the same functionality (plugin) in different DAWs or different workflows).
Honestly, the sound quality of most DAWs' built-in effects and synths is garbage. Even the effects section of most VST synths is bad! Best to allow plugins rather than trying to reinvent the wheel; you'll have to pry my beloved Serum/iZotope/u-he/Softube plugins out of my cold dead hands.
Also, the "sometimes" is an understatement. Anyone who's been doing this a while likely to be pretty invested in the plugins they have. I would say the majority of working musicians that use DAWs work this way
I think you are misinterpreting that comment. I believe the idea there would be to have built-in functionality equal to or surpassing those plugins, possibly by merging the actual plugins into the DAW. Of course, doing that in a way that will please everyone is a lot harder than it sounds and is likely not even feasible, but the supposed benefit would be that the DAW could better integrate that functionality and would not have to deal with a lot of the reliability issues that exist with plugins. (Disclaimer: I agree with the GP comment in the sense that dropping plugins from DAWs would be a great thing to do if it were at all possible, but it currently really isn't.)
The thing is, doing something 'equal to or surpassing' is pretty much impossible at an acceptable cost. You can probably write something that's 'good enough' that some people might be happy with -- that is what most all-in-one DAWs do -- but the beauty of the plugin system is that you can delegate development duties for those components to a company that specializes in effects, for example. Or even a specific effect! There are developers who just write synths, or just write reverbs, or whatever. And we want to use their products!
That's the thing though, not having plugins would not mean that you can't delegate development duties for a synth or reverb out to another company. I am not sure where you are drawing that conclusion from. It just means that the development would be happening within the DAW.
A large part of the reason those developers can exist independently is because they can target every DAW. So either you will have to hire them all or find new specialists, or develop that specialization yourself.
Even in the case of developers only targeting one DAW (Pro Tools), at least one company (AIR music) saw that it was worth the extra effort to release its products in other plugin formats like VST.
Honestly, I would not like to see new developers trying to oust the plugin standard. It's one of those quirks about music software that exists for very good reason.
I'm sorry, I don't get where this is coming from, the specialists don't need to be hired to write the same code again. The old code can just be re-used. If I could guess, I think this is stemming from a misunderstanding that "no plugins" means "no code re-use" which is an understandable misunderstanding (say that 5 times fast) but it isn't the case.
In my opinion, plugins only exist because it's convenient to package a synthesizer/reverb/whatever for users that way, but there is no reason that can't be supplanted with something that is more convenient. Of course if it's less convenient then that wouldn't be worth doing, if that is your concern then I agree with you there.
Not saying no code reuse. But if you're going to reuse code across multiple DAWs, then a plugin system is the established standard.
Do you think those existing developers are just going to drop their code into your environment and hit compile? How do you plan on getting all these musical components into your DAW? It doesn't work like that. Someone is going to have to write something, or you're going to be acquiring the rights to existing code somehow, which is still going to have to be ported to your environment.
Or you could skip all that and implement VST. There are even libraries that present a standard interface and output plugins in all the major formats, so you could target that instead. (I forget the name of the framework I'm thinking of but it was written by the Cockos guy and his DAW's ReaPlugs effects suite targets that library)
How would you motivate a synth/effects developer to spend time on your project? Unless you hire them; and if you're going to ask them to "write reusable code," they're going to point to their existing portfolio of plugins.
It doesn't really matter who does the work; if you were writing a new DAW, you'd probably do it yourself. But if you were working on an established & popular DAW, plugin developers could be convinced to do it if there was a benefit to them (more optimizations, more features, more convenience, fewer bugs, etc).
I don't think it is currently popular for plugin developers to implement against VST themselves; the libraries you mention seem to be gaining a lot of traction, at least from my experience trying to catalog open-source plugins on GitHub.
I don't understand why you are saying that either, please explain. Nobody needs to work for just one, as any code can be linked into any number of DAWs.
How is that much different from a plugin system? Because now you have developers writing bespoke code with a shim for each host DAW. At that point you've pretty much developed your own modular component protocol... literally a step away from a plugin system
It's not different, and in fact most DAWs are already doing something along these lines internally to actually implement an abstraction layer that can load various plugin formats and have them all interact. That's actually my key point: it wouldn't be significantly different for users.
But there are (at least) 12-20 DAWs, each with their own (internal) abstraction designs. It's bad enough having to write (or being able to write) plugins for:
AAX, VST, AU, LV2 (at least)
Now you'd be talking about 12-20+ different "plugin APIs". That's not going to fly.
As per your other comment, it seems things are already going in that direction, if developers are treating their plugins as being written in a format that could be described as "AU that is compatible with Logic but nothing else" or something like that (substitute for any format/DAW combination that the developer wants). So I would say that in a way it has already flown regardless of how we feel about it.
Merging plugins "with the DAW" is entirely feasible in the open source world where I live and work, and we do that sort of thing when appropriate.
But the reliability issues do not come from the fact that plugins are dynamically loaded shared objects (mostly); they come from the vastly different levels of skill AND very different interpretations of subtle aspects of plugins APIs (notably threading, GUI<->DSP communication and more). These are hard to get rid of if you've really got a diverse, distributed and largely independent "team" of developers working on something.
I still don't get how that is related. In my opinion, that is unlikely to change until there are tools that make it easy to resolve all those various threading/communication/latency issues on the developer side before a plugin is even distributed. As long as you can just distribute any old shared object to get around that, I don't expect it to improve.
Maybe someone could come up with another way to do it that works across DAWs, and then plugins could still happen, but I consider this unlikely because there is never going to be a reliable way for the host to verify compatibility that works better than what we have now (user reports that X plugin works with Y host, etc).
It's not hard to stress test plugins. There are a few good tools for some formats out there. They can verify things that most DAWs will not (e.g. by deliberately cross-threading calls and so forth).
The problem is getting people to care. For years, on macOS, the "standard" for plugin developers has been "does it work in Logic?" There were even plugins that would fail auval(1) (the command line AudioUnit validator), but somehow pass in Logic. As far as their developers were concerned, the plugin worked. Working on Ardour, I've seen at least a dozen plugins that used ambiguity in the AU spec to justify why "well, it works in Logic even if it doesn't follow your interpretation of the spec" was the end of their interest.
That's what I mean. The solution there would be to get Logic to run "auval" before loading the plugin which ensures that developers don't do that. But for users of Logic doing that serves no benefit as it doesn't actually guarantee the plugins run more reliably, so it won't happen. And even if it did happen, there would be no guarantee that the stress testing tool wasn't also written against other ambiguities in the spec.
Edit: I also think testing is hard in general for plugin developers to use correctly. I personally would prefer to see plugin APIs designed in a better way that make it so it's hard to accidentally cause race conditions.
Logic does actually use auval these days, so things are improving. But this still doesn't catch the cases where the issue is something that auval can't or doesn't test (e.g. stuff related to GUI interaction).
Plugin API design is an art, indeed. There was recently a brief bubble of activity on KVR among a number of independent plugin devs who found many things to dislike about VST3. A long discussion ensued, there was some talk of picking up LV2 instead, then it all evaporated and nothing was left. The last time the industry tried this was in around 2003, and nothing came (directly) of that effort either.
That's also another aspect where the plugin API vendors don't have much incentive to fix things, because developers can just use another layer on top (JUCE, DPF, etc) that makes better progress towards solving these issues from the developer's perspective, while DAWs are still left with the tractability problem for plugins that don't use it.
To me, if there is any interest in solving that, I would just expect someone to put a new plugin backend in JUCE/DPF that is specific to the DAW and then compile that together with the plugins into a big giant build. That's more what I mean by "dropping plugins", it's how you avoid the MxN problem too. But I think that many DAWs (including Ardour) gain little benefit from doing this at this time, so if that was what your original sentiment was, I agree.
>Honestly the sound quality of most DAWs' built-in effects and synths are garbage.
Have you ever tried Reason? Then again I'm actually using Reason as a plugin (via vst3 to vst2 wrapper) so I can sequence it easily with renoise. Because why not just load a DAW in your DAW.
Hah, I have a copy of Reason Lite that I got for free that loads as a VST. Mainly use it for ReBirth, but now even that's redundant since FL has its own 303 emulation that's pretty good
Back in the day producers used to complain that you could recognize a Reason track from a mile away. FL used to have the same problem. 'Garbage' is probably an overstatement nowadays (at least judging by the FLStudio demo tracks, their quality has been steadily improving over time) but the meme persists.
Suffice it to say that it's non-obvious to me where to start to go about getting a stable and mobile (ie laptop) experience. I'd like nothing better than to receive a response that makes me feel sheepish for thinking that Linux is the problem, and if anyone can give out good pointers I'd imagine you can.
Well, first of all, let's start by noting that hardware can prevent you from ever getting a stable setup. Some explanatory background on that here:
On Linux you do not (as a rule) install device drivers for your devices. They come with the system or they (generally) don't exist. I know of only one audio interface manufacturer who ever maintained their own drivers outside of the kernel tree (i.e. not part of mainstream Linux) and even they have had their drivers integrated now.
Next, since you're on a laptop, you're relieved of the unenviable task of figuring out whether to use a PCI(e) bus device or a USB interface. USB is your only option. The good news here is that any USB audio interface that works with an iPad also works on Linux. Why? Because the iPad doesn't allow driver installs, so manufacturers have been forced to make sure their devices work with a generic USB audio class driver, just as they need to do on Linux. With very few exceptions, you can more or less buy any contemporary USB audio interface these days, plug it into your Linux laptop (or desktop or whatever), and it will work.
What can be an issue is a lack of ability to configure the internals of the device. Some manufacturers e.g. MOTU have taken the delightful step of doing this by putting an http server on the device, and thus allowing you to configure it from any browser on anything at all. Others have used just generic USB audio class features, allowing it to be controlled from the basic Linux utilities for this sort of thing. And still more continue to only provide Windows/macOS-native configuration utilities. For some devices, dedicated Linux equivalents exist. Best place to check on that would be to start at linuxmusicians.com and use their forums.
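To give a flavor of those basic utilities: on a class-compliant device, the internal mixer and routing show up as ordinary ALSA controls, so something like this works (the control name is device-dependent):

    alsamixer -c 1                    # interactive view of card 1's controls
    amixer -c 1 scontrols             # list them non-interactively
    amixer -c 1 sset 'Mic' 80% cap    # example; control names vary per device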
Beyond the hardware, it's hard to give more advice because it depends on the experience/workflow you want to use. If you're looking for something Ableton Live-like, Bitwig is likely your best option. If you want a more traditional linear timeline-y DAW ala ProTools, Logic etc., then Reaper, Ardour or Mixbus would probably be good choices. If you want to do software modular, VCV Rack is head and shoulders above anything else (and runs on other platforms too).
There's a very large suite of LV2 plugins on Linux. Stay away from CALF even though they look pretty. The others range from functional to excellent. Your rating will depend on your workflow and aesthetics. You will not find libre plugins that do what deeply-DSP-oriented proprietary plugins do (e.g. Izotope, Melodyne), though you may be satisfied with things in the same ballpark (e.g. Noise Repellent and AutoTalent).
There's a growing body of VST3 plugins for Linux. If you're looking for amazing (non-libre) synths, U-he has all (?) their products available in a perpetual beta for Linux. Great stuff. There are plenty of libre synths too. There's an LV2 version of Vital called Vitalium which is more stable than the VST3 version; this synth has had rave reviews from many different reviewers.
Sample libraries are a problem because most of them are created for Kontakt. You have a choice of running Kontakt inside a Windows VST adapter (e.g. yabridge) or using other formats such as SFZ or DecentSampler, both of which have both free and libre players. pianobook.co.uk has hundreds of somewhat interesting sample libraries, many (but definitely not even most) of them available in DS format.
There are egregious DSP errors, including zipper noise and flat-out incorrect EQ-ing, plus they are one of the most reliable sources of crashes for Ardour users (at least). There are better alternatives even just within libre-space for everything that CALF does (they do, however, look very nice).
> Also consider that there are no good audio drivers for Linux (like Asio for example) so you're almost forced to stay in windows or Mac...
ASIO, really? Sorry but you couldn’t pay me to go back to that broken piece of crap after switching to Linux and JACK2. I’m actually traumatized by that piece of software, thinking of moments where ASIO would just break and cause my Live session to collapse into a glitchy cacophony of latency-induced noise. I’ve seen this happen on several computers with different Windows installations and external audio hardware and the problem always ends up being ASIO. Some of the producers I knew swore off anything that wasn’t a Mac because of this exact problem.
The problem with audio production on Linux in 2021 isn’t the audio protocol. It’s that most free and open source audio production software for Linux is dreadful to use. UX is actually very important for DAWs. I want to like Ardour but it’s a miserable piece of software to try to make music in. Feels like a chore to perform any action, kills my vibe, would not recommend. After trying really hard to become comfortable using it, I finally gave up and bought Bitwig. It’s a proprietary DAW and kinda expensive but I’ve been producing music with it for a couple of years and it’s a dream to use - sort of a spiritual successor to Ableton IMO.
> No plug-in or DAW has a CLI…
Most people who make music don’t care about this. I’m a software developer and musician who only uses Linux and I don’t care about this. In my opinion, Linux developers of free and open source creative software should spend less time building these features for other developers and focus more on making their software feel good to create with. If I feel bad trying to use your clunky-ass UI to make my art / music / whatever then I’m not going to hold myself back because it has a free software license. I’m going to find a piece of software that gets out of the way and lets me make what I want to make.
> I want to like Ardour but it’s a miserable piece of software to try to make music in. Feels like a chore to perform any action, kills my vibe, would not recommend. After trying really hard to become comfortable using it, I finally gave up and bought Bitwig. It’s a proprietary DAW and kinda expensive but I’ve been producing music with it for a couple of years and it’s a dream to use - sort of a spiritual successor to Ableton IMO.
As I've mentioned above, I get all kinds of email about Ardour, some declaring their love for it, and some much more condemnatory than anything you've said here.
The point is that "trying to make music" isn't much of a description: people's workflows for "making music" vary dramatically. Not many years ago, more or less the only way to do this was to record yourself playing one or more instruments and/or singing. These days, there are many fundamentally different workflows, and countless minor variations of each one. If Bitwig works for you, it's no surprise that Ardour doesn't. There's a bunch of people for whom the opposite is true. You have to be prepared to try different tools and figure out which ones work for you.
Finally, ASIO and JACK aren't really at the same level. JACK on Windows actually uses ASIO. The comparison to ASIO on Linux is ALSA, and sure, I'd agree that it's better than ASIO in most ways (though maybe not 100%).
> The point is that "trying to make music" isn't much of a description: people's workflows for "making music" vary dramatically. Not many years ago, more or less the only way to do this was to record yourself playing one or more instruments and/or singing. These days, there are many fundamentally different workflows, and countless minor variations of each one.
Excellent point and apologies if that comment came across as inflammatory. I really respect the work you and the Ardour team have done even if it's not for me (and infinite thanks for your work on JACK, it truly is a special piece of software). My frustration has more to do with there not being a FOSS DAW that gives me that true Ableton-like experience. I understand why though, this stuff is hard to build and one workflow does not fit all as you point out.
Ardour is really great for recording and mixing. For a more "contemporary" workflow you might want to try zrythm¹, it's getting better and better. (I still use Bitwig though…) If you exclusively make electronic music you could also look into LMMS², it's more of an electronic-music-toy than an actual DAW but that's not necessarily a bad thing.
> For a more "contemporary" workflow you might want to try zrythm¹, it's getting better and better.
Oh wow, Zrythm looks awesome! Thank you for the suggestion, I'll be taking this DAW for a spin sometime soon. :)
> Ardour is really great for recording and mixing.
Yeah, I'm actually warming up to Ardour as a general mix & mastering environment. It reminds me of Logic Pro in that sense, being more suited for final touches than composition (in my personal workflow).
> If you exclusively make electronic music you could also look into LMMS², it's more of an electronic-music-toy than an actual DAW but thats not necessarily a bad thing.
How is LMMS these days? I tried it sometime last year and had a lot of fun but it crashed too much for my personal comfort (tbf that could have just been whatever buggy LV2 / VST plugins I was testing). It comes a bit closer to the "look and feel" I look for in a DAW - kinda reminds me of older versions of FL Studio which is kewl because that's the software I learned how to produce music on.
> I’m a software developer and musician who only uses Linux and I don’t care about this.
I am a software developer and musician who uses Linux and I do care about this. I run headless, and control my audio software through custom logic and hardware while playing live. I ended up writing a custom synthesizer because I couldn't find anything that works well for my use case.
(I'm still open to something else; my synth doesn't sound very good. Designing custom sounds is not something I'm great at or something where I really want to focus.)
You can run Ardour headless and control it 100% using OSC (from the command line with oscsend, or from a touch device (phone/tablet) using e.g. TouchOSC). You could also get significant, but not as extensive, control using MIDI.
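For example, assuming Ardour's default OSC port (3819) and OSC enabled in the control surfaces preferences:

    oscsend localhost 3819 /transport_play
    oscsend localhost 3819 /transport_stop
    oscsend localhost 3819 /goto_start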
> I am a software developer and musician who uses Linux and I do care about this. I run headless, and control my audio software through custom logic and hardware while playing live. I ended up writing a custom synthesizer because I couldn't find anything that works well for my use case.
That's pretty cool. Most modern DAWs allow you to define per-controller triggers for custom logic in the form of MIDI events. I guess you could write a CLI that maps custom commands to MIDI events and allows you to send those events to your DAW when they are called. It's not exactly what you're describing (and maybe it doesn't fit your use case) but is that something you've considered?
Having had zero problems with this (already many years ago, in the days when getting low latency on Linux was extremely hard) on a variety of machines, but all with pretty decent cards: is it possible that the problem has nothing to do with ASIO but rather with crappy drivers/manufacturers? Or perhaps you just had bad luck?
JACK2? Sorry for my ignorance, it's been a while since I abandoned Linux audio for Ableton on Windows with a Focusrite Scarlett. Did they solve that JACK/ALSA problem? Without running a2j in the background?
Which problem are you referring to? a2jmidi should work fine as long as you have access to the device. But in any case that should not be necessary anymore if you use pipewire, which should be able to manage all the devices at once.
The problem was that the extra a2jmidi hoop added a lot of friction to my creative process. Also, I wanted to keep it on indefinitely to use it as an instrument, and a2jmidi would crash after a few days.
Also there was the night spent recompiling the right version of bison in the middle of Qsampler's dependency hell, so that I could have a piano sound. That was all in 2017.
I'm not sure what you mean by extra hoop? You just start a2jmidi and then connect the device. Crashes would indeed be an issue; generally, if you want to mitigate those, you would want to:
- Auto-restart any important services (with systemd or similar; see the unit sketch after this list)
- Use JACK/Pipewire session management
- Report the crash to the developers (of a2jmidi in this case, but it could be anything)
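The systemd route is just a small user unit; a sketch (the binary path and flags may differ on your distro):

    # ~/.config/systemd/user/a2jmidid.service
    [Unit]
    Description=ALSA to JACK MIDI bridge

    [Service]
    ExecStart=/usr/bin/a2jmidid -e
    Restart=on-failure
    RestartSec=2

    [Install]
    WantedBy=default.target

Then `systemctl --user enable --now a2jmidid.service` and crashes get restarted for you.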
I honestly have never used LinuxSampler so I can't comment on that, I believe they have some strange licensing thing going on.
I have switched to Pipewire but last time I tried JACK2 there were tools that would auto-start a2jmidi for all available midi ports. This is trivial to do. If hotplug was a concern then someone could just write it to run based on udev triggers.
I don't run a2j or even have it installed so in my case it doesn't seem to be a problem. My audio production setup isn't highly complex FWIW but with the following configuration I have had no issues with audio or MIDI input and output. All of my devices are just plug-and-play for both Bitwig and qjackctl:
- Distro: Arch Linux
- Audio backend: JACK2 and PipeWire
- USB Audio Interface: Behringer U-Phoria UMC404HD
- USB controllers: Akai APC 40 mkii, Casio CTK-6200
- Microphone: Zoom H6
I run this exact setup on my Ryzen desktop and a Thinkpad T480 with no problems. I've also tried routing the audio output of various software directly into my DAW using qjackctl, works perfectly fine.
I kind of dropped out of Linux audio some time ago. Can I ask why both PipeWire and JACK? I thought PipeWire was supposed to implement most (all?) of the stuff that jackd does.
Sorry, looking back at my wording I can see how that's confusing. You are correct. PipeWire replaces the standard JACK libraries with its own ABI-compatible implementations. This allows any JACK-compatible application to support audio through PipeWire using the same APIs it would normally use for JACK. This is also how PipeWire handles support for PulseAudio, ALSA and other multimedia libraries; it kinda reminds me of Wine for Linux audio protocols, if my understanding is correct anyway.
To use PipeWire in place of JACK, you have to install a specific package (`pipewire-jack` on Arch Linux) and run all of your audio applications using a wrapper command called `pw-jack`. You can update the `.desktop` entries for audio software on your system to automatically run this command; I've done that and everything I use launches correctly, tbh I forget that PipeWire is there. I just use Bitwig, qjackctl, Catia, etc. and they all think they're using JACK but really everything is being handled by PipeWire. Pretty kewl and it's been working perfectly for me for quite some time. :)
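So in practice the whole trick is (binary names may differ per package):

    pw-jack qjackctl          # any JACK client runs unmodified
    pw-jack bitwig-studio     # same for the DAW itself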
Pipewire is a layer above most of the really important stuff.
Pedalboard is also not a realtime audio environment (as was clarified by one of its developers here on the HN thread last week). In that sense it is extremely different from Bespoke (and nearly everything else).
Since you're a software dev you may have explored supercollider and other environments where you can employ great tools.
I've been looking for a hybrid UI/UX + Audio programming environment that would combine the freedom of code with the visual cues of a DAW but haven't found the ideal fit.
So far I've been rolling my own with cl-collider, a common lisp client for Supercollider which uses some lisp tricks (macros) to good effect.
Been using DAWs for 20 years now. Of course they aren’t changing... what even should change? They work, and do what they’re supposed to.
Most plugins these days are stable, and don’t crash. I haven’t had issues in like 10 years honestly. At least if you’re paying for them from groups who have a reputation.
Git for a music project would be detrimental, going to be really honest.
Linux can run things just fine. JACK has been a relatively stable audio platform. RT pre-emption can help. The only issue with Linux I see is software support. But with Apple making some less than stellar choices, I expect Linux to become more important to the DAW/digital recording ecosystem.
One fantastic one that does work well on linux (reaper) also has a scripting interface. Not a CLI but actually more useful.
> One messy plug-in and you can lose hours and hours of work
If you’re meaning from crashes, Bitwig has configurable levels of process isolation for plugins - I guess the trade off is performance, I haven’t tested it out so can’t comment on how well it works.
Although we are still debating with some smart people about this (since actual measurements call it into question), this was our (Ardour dev team's) take on this question:
The answer is "too much overhead" but the overhead isn't coming from where I assumed it would. I thought it could be too expensive to pass the required amount of data through the kernel (at least 48000 samples per track per second), but that's not the problem, it turns out it's the context switches. Huh.
edit: I now also remembered that virtual memory is a thing and you can share a chunk of physical memory between processes to avoid the need to copy anything at all.
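A toy sketch of that idea with POSIX shared memory (Python's multiprocessing.shared_memory; the block name and sizes are illustrative). The samples never get copied through the kernel; what remains, and where the context-switch cost lives, is waking the other process up:

    import numpy as np
    from multiprocessing import shared_memory

    FRAMES = 1024  # one period of float32 audio

    # producer: create a named block and write samples into it in place
    shm = shared_memory.SharedMemory(create=True, size=FRAMES * 4, name="audio-bus")
    out = np.ndarray((FRAMES,), dtype=np.float32, buffer=shm.buf)
    out[:] = 0.0  # DSP output goes here; no copy through the kernel

    # consumer (normally a separate process): map the same block by name
    peer = shared_memory.SharedMemory(name="audio-bus")
    inp = np.ndarray((FRAMES,), dtype=np.float32, buffer=peer.buf)
    print(inp[:4])  # reads the producer's samples directly

    # a real system still needs a wakeup mechanism (semaphore, FIFO, ...),
    # and that hand-off is exactly where the context switch happens
    peer.close()
    shm.close()
    shm.unlink()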
Context switches on Linux are a pretty heavy affair, this is the result of some choices in the distant past when the number of context switches per second was much higher than on most other platforms and so it was deemed to be 'good enough'. Unfortunately this rules out a lot of real time or near real time work, especially when the workload is divided over multiple user space processes.
I know of no evidence that Linux context switching on x86/x86_64 is slower than any other OS, and some suggestions that it is faster (Linux does not save/restore FP state, which Windows (at least at one point) does).
Linux is as capable or more capable of realtime work than any other general purpose OS, and the latency numbers from actual measurement are excellent (when using RT_PREEMPT etc).
Back in the day when the Linux kernel was first written there was a huge argument between Linus and Andrew Tanenbaum about whether or not the micro or macro kernel road was the superior one.
Tanenbaum argued that a microkernel was lighter, and could switch context faster than a macrokernel (the likes of which UNIX was typically reincarnated with). Linus argued that throughput, not latency is what matters to end users. At that time your typical OS switched tasks 18.5 times per second and Linux did substantially better than that. Case closed, the throughput argument won.
But now, many years later, the consequences of that mean we are switching contexts orders of magnitude slower than we could, because the context contains a lot more information than it strictly speaking has to. My own QNX clone switched 10K / second on a 486/33, and yes, the IPC mechanism meant that throughput suffered, but for real time applications with a lot of the hard stuff in userspace, context switches are far more important than throughput (and, incidentally, also for the perceived responsiveness of the OS and apps).
The latency numbers are excellent from the perspective of very forgiving applications, a typical DAW runs with 1K or even larger sample buffers which is acceptable, but for many real time applications that is an eternity and so those are not typically built using Linux as the core but some dedicated RTOS.
edit: I had 100K / second before, this was in error. It's been 30 years ;)
You will find that on Linux a context switch takes about 30 usec. More recent measurements that take account of the effect of the TLB flush put the range at 10-300 usec.
That means that in 2010, on Linux, you could reasonably expect to do at least 30k/sec. In 2021, with realistic audio processing workloads, the range is probably 3-50k/sec.
The 486 has a much lower register count than contemporary processors, which accounts for the faster context switching.
Modern audio processing software on Linux can run with 64 sample buffers, not 1k.
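For scale: 64 samples at 48 kHz is 64/48000 ≈ 1.3 ms per buffer, versus roughly 21 ms for a 1k buffer, so the scheduling deadlines differ by more than an order of magnitude.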
This recent paper on RT linux on RPi/Beagleboard single board systems concludes that on some of these relatively "low power" systems, 95% of latencies are in the 40-60usec range, which is completely adequate for the majority of RTOS tasks (but not all).
>"The majority of Linux kernels’ measurements with PREEMPT_RT-patched kernel show the minimum response latency to be below 50 μs, both in user and kernel space. The maximum worst-case response latency (wcrl) reached 147 μs for RPi3 and 160 μs for BBB in user space, and 67 μs and 76 μs, respectively, in kernel space (average values). Most of the latencies are quite below this maximum (90% and 95%, respectively, for user space and kernel space). In general, it seems that maximal latencies do not often cross these values."
[ ... ]
"As an outcome, Linux kernels patched with PREEMPT_RT on such devices have the ability to run in a deterministic way as long as a latency value of about 160 μs, as an upper bound, is an acceptable safety margin. Such results reconfirm the reliability of such COTS devices running Linux with real-time support and extend their life cycle for the running applications."
This slide presentation offers up very similar numbers with graphs, also on ARM systems (I think):
This article shows cyclictest, a very minimal scheduling latency tester, getting the following results on an x86_64 system:
"The average average latency (Avg) is 4.875 us and the average maximum latency (Max) is 20.750 us, with the Max latency on 23 us. So, the average latency raises by 1.875 us, while the average maximum raises by 1.875 us, with the maximum latency raised by 2 us."
> "Maximum observed latency values generally range from a few microseconds on single-CPU systems to 250 microseconds on non-uniform memory access systems, which are acceptable values for a vast range of applications with sub-millisecond timing precision requirements. This way, PREEMPT_RT Linux closely fulfills theoretical fully-preemptive system assumptions that consider atomic scheduling operations with negligible overheads."
I'm not sure where you're getting your current info from, but I'm extremely confident that it's wrong. If I had to guess, you have not kept up with the impact of the PREEMPT_RT patchset on the kernel, nor scheduling improvements in general, but I don't know (obviously).
The last time I was actively involved with developing real-time control of time-critical hardware on Linux was about 2007 (a very high speed stepper-motor-driven plasma cutter: slow down in a curve and you've ruined the workpiece), so for sure I'm out of the loop. But I do have a fairly large Linux audio setup with all of the real-time patches installed, and clearly, if it is possible to run with 64-sample buffers, I have not been able to do so on my hardware; 1K really is the minimum before I get (inevitably, unfortunately) dropouts under relatively light load.
It might be worth documenting my setup (reproduced across three different machines, a laptop, an 'all-in-one' and a very beefy desktop), to see what could be improved because that difference is substantial.
> But I do have a fairly large Linux audio setup with all of the real-time patches installed, and clearly, if it is possible to run with 64-sample buffers, I have not been able to do so on my hardware; 1K really is the minimum before I get (inevitably, unfortunately) dropouts under relatively light load.
that sounds very weird, I don't even run a RT kernel and I have no trouble running at 64 with a fair amount of plug-ins and even 32 samples when I just want some live guitar effects (i7 6900k, RME multiface 2). My only configuration is installing this AUR package: https://archlinux.org/packages/community/any/realtime-privil...
I've linked to it elsewhere in these comments, but this tries to describe in broad terms why any given x86* computer may not be able to meet your latency goals:
There's a wide variety of reasons, all of which can interact. It's one of the few good arguments for buying Apple hardware, where this is not an issue.
Over the years I've been working on pro-audio/music creation on Linux (22+ years), I've had a couple of systems that could perform reliably at 64 samples/buffer. My current, based on a Ryzen Threadripper 2950X, can get down to 256 but not 128 or 64.
Ok, so at a guess, that 64 comes from an optimally configured set of hardware bought specifically with the goal of reaching that minimum, and for more realistic 'run of the mill' hardware it would be 256 and up?
If someone were to put together a guaranteed low-latency config and keep it patched via a custom distro (assuming, say, 'Ubuntu Studio' would not be up to the task), would there be a market for that? Are there such suppliers? What specifically is different about Apple hardware that makes it work there?
I read that page earlier; it's helpful, but more helpful would be a shopping list that says 'get this: it will work, assuming you install this particular distro'. And after independent verification you could then add alternatives for each slot. For me, for instance, a big question would be whether NVidia video cards break the latency guarantees by keeping interrupts masked for too long in their (pretty opaque) drivers. If that were a deal breaker then I'd have to set up a system only for studio use.
The problem with "shopping lists" is that, at least in the past, it's turned out that companies like e.g. mobo manufacturers change the chipsets in the corners of these devices without even changing the product ID. If I told you a mobo to buy, there's no guarantee that you'll actually get what I was recommending.
Lots of efforts have been made over the years to create "audio PC" companies. Even with the Windows market in their sights, I don't know of a single one that has lasted more than a year or two. How much of that is a market problem and how much of it is a problem of actually sourcing reliable components, I don't know. I do know that when large scale mixing console companies find mobos that work for them, they buy dozens, just to ensure they don't get switched out by the manufacturer.
Apple stuff works because Apple sort of has to care about this workflow functioning correctly. There's no magic beyond careful selection of components and then rigorously sourcing them for the duration of a given Apple product's lifetime.
I have no actual evidence on the video adapter front, but my limited experience would keep me away from NVidia if I were trying to build a low latency audio workstation. Back in the olden days (say, 2002), there were companies like Matrox who deliberately made video adapters that were "2D only, targeting audio professionals". These cards were properly engineered to get the hell off the bus ASAP, and didn't have the 3D capabilities that audio professionals (while wearing that hat) really don't tend to need.
My holy grail project sounds like something you and I could collaborate on someday. Essentially a way to represent a DAW project in plain text so it can be versioned in Git and easy to collaborate on. Happy to discuss anytime obiefernandez@gmail.com
damn, the audio wave visualization on the wires in the thing that's like the Bitwig grid editor is just BRILLIANT.
probably would be a bit much in a complex finished instrument but that's amazingly intuitive for the building phase, or for reading someone else's instrument.
i wish there was a way to translate old Reaktor library stuff into more modern synth GUIs. there's some amazing gold in there, but between Reaktor's uh... challenging UI and the total lack of documentation for the signal paths, it's nigh impossible to explain them to a relative novice. you can very easily see _what's_ built, but god help you trying to understand why on your own without adding a ton of scopes everywhere manually
The oscilloscope-as-wire idea is one of those rare ideas that seems obvious in hindsight, yet no one (to my knowledge) has done it before, at least in a box-and-pin UI.
I wouldn’t be surprised to see it start popping up elsewhere, similar to how Ableton made everyone realize they had somehow been living without retrospective MIDI capture.
hey! I'm the guy who developed bespoke. I too thought that the waveform-in-wire thing was original, but a few years after I put it in, I realized that I had subconsciously stolen it from ReacTable. there's nothing new under the sun, everything is a remix!
Not quite the same, but I love the way the ports in VCV Rack are LEDs whose intensity changes with the signal voltage; it really makes debugging at a glance easier.
Bespoke Plus is $5 and Pro is $15; there should be no check in the $5 line for Pro, OR if both lines are checked, the second line should be $10 and not $15.
It depends a bit on interpretation: if you pay $15, you of course also have $5 fewer in your pocket, so the $5 can be seen as included in the $15; then it's correct.
Replies and downvotes mark a strong disagreement. So okay, my interpretation is not mainstream, but I still think it's a possible interpretation.
The difference in interpretation comes from the nature of "features": are they actions or verifications? A verification is (usually) idempotent / has no consequence on the state of the world, but an action isn't.
Renoise has been my "daily driver" for a good number of years now, but I'm looking at getting into more modular or programming (like live coding, but without the live bit) stuff as I want to be doing more generative stuff.
Still, can 100% recommend renoise for what it is, and more, and I doubt I'll ever fully stop using it
Sunvox was my first thought as well. Watching the intro video it seems in bespoke the modules' internals are more visible. Though it's missing the sequencer portion compared with sunvox.
Really well thought out interface, looks super easy to quickly make a bunch of edits -- The SHIFT+Touch to connect modules is nice and I love that you can always just export the last 30 minutes. Looks like a ton of work went into the documentation as well -- can't wait to dive in!
Haha this is the most amazing feature matrix I've ever seen.
On a more serious note, modular music is an extremely interesting and growing area and just about every module is surprisingly expensive; I'm curious to how well this translates to virtual racks.
I've never heard of modular music, but I know most VSTs are extremely expensive. And they're expensive to even seriously try.
I want to get into music production but a barrier is that Omnisphere and FL Studio are $500 and have a super-limited trial version. As a grad student I'm not going to spend $500 for a piece of software I might be interested in using.
I would much rather have it be like software development where almost everything is free. And instead of paying upfront, synths / effects can make money by taking a cut of your revenue (I don't think that's like software development but it means synth producers still make revenue).
Valhalla plugins are restricted to Windows or macOS, unless you are willing to use a Windows VST bridge such as yabridge.
No reason to get 5-pin DIN MIDI at this point; almost all devices offer USB MIDI and it's as good as DIN MIDI in almost all scenarios.
[ EDIT: VCV Rack ] 2.0 will be out "soonish" which will offer an "official" VST (and if we're lucky, LV2 also) plugin, though at a price.
People's mileage will vary when it comes to the DAW. As the author of another (libre & open source) DAW, I get emails that vary from "Oh my god, I've used X and Y and Z and yours is so much easier to use and incredibly fast and reliable" to "how can you look at yourself in a mirror when you make such shit software". Reaper works for a bunch of people, but not for another bunch, as is the case for most DAWs.
> almost all devices offer USB MIDI and it's as good as DIN MIDI in almost all scenarios
DAWless setups are definitely a thing, and you need DIN MIDI to connect the keyboard directly to a sound module (USB can only be connected to a computer).
But the setup being described doesn't involve any sound modules, and for general ease of use and extensibility, I would say that at this point USB MIDI probably wins. If you want to go to a sound module of some sort, there are some cheap and reliable USB->DIN MIDI cables available (along with some cheap and totally unreliable ones).
USB → DIN MIDI adapters are expensive, unreliable (they may or may not work, depending on how the MIDI over USB device exactly converts MIDI in to USB), and another moving part to have in a recording studio.
It is a $50-$100 extra investment to get a quality keyboard with DIN MIDI, but those quality keyboards come with better software and have a better build quality to them.
It’s a lot better to spend the extra money up front to get a keyboard with a DIN MIDI connection (e.g. an Arturia Keylab or Novation Launchkey) than to have something which will need a hacked together USB-to-DIN box (and I notice I haven’t seen any names of make and models of MIDI USB to DIN boxes which supposedly will always work) if they ever want to go DAWless.
I have a bunch of older stuff that only has 5 pin MIDI. The adapter I use is made by Yamaha, it hasn't let me down even once over a year and a half of pretty heavy use.
USB to MIDI cables usually still need a computer or a synth with USB host functionality.
You cannot connect a USB MIDI controller to a 5-pin DIN MIDI synth.
Some synths offer USB MIDI host functionality, like the 1010music Blackbox, the Deluge, and most of the Raspberry Pi based synths like the Monome Norns.
If you're not going to make money off it, my opinion is you can use cracked VSTs without any concerns of "is it right".
In fact, as with a lot of pirated software/media, the experience is superior. Licensing and DRM of music software is a headache: dongles, software centers and other bloat. Scene groups like R2R even optimize performance and patch out bugs in addition to cracking protections, making their releases superior to those of the original developers.
Otherwise have a look at Splice rent-to-own plugin licensing.
There are sooooo many VST plugins out there. If you don't see the value in the cost of Omnisphere don't buy it. Look for other options. It's an extremely competitive market.
Ableton Live Suite has a tremendous amount of tools out of the box and contains everything you need to make music with. Look for second hand copies on forums. You'll likely be able to pick a copy up for $400 or so. You could buy that and never buy any software again.
> And instead of paying upfront, synths / effects can make money by taking a cut of your revenue
Hahahaha, have you asked how much the average electronic music producer makes vs the average software dev? ;)
There are also numerous plug-in discount stores (Audio Plugin Deals, Plugin Boutique, Emmett VST Buzz, and others) that regularly sell mainstream VSTs at hugely discounted sale prices.
Free DAWs are harder to find, but you can get intro-level DAWs with enough features to get started for less than $200.
On a Mac Logic Pro is $199, which is a full-featured DAW with a solid collection of virtual instruments.
Second on watching the plugin stores, particularly Plugin Boutique. Some of their sales are nuts, and they even have a very well organized free section.
Ableton Live Lite (1 step up from intro) comes free with a lot of music gear. I had a bunch of licenses lying around because it came with my USB audio box, my MIDI keyboard, etc etc.
Also consider FL Studio. Unlike nearly everyone else it comes with free lifetime updates. Like Ableton, every edition except the cheapest has a ton of plugins to cover nearly every need. My license is almost 20 years old, and I'm still running the latest version. Easily the best money my broke-student ass ever spent.
Of that list, Dexed is the only one appearing in my tracks regularly. This is because I’m more likely to use Serum, Massive, or Pigments for the other duties. But Dexed, especially after downloading many preset libraries, is freaking amazing in its DX7 early FM synthesis emulation.
It doesn't have to be. Logic Pro X was $199 when I got it. GB is free, and pretty useful. I haven't used cakewalk, but it's free.
I imagine there are oodles of cheap DAW(s) that will at least let you sequence and mix everything together. Technically, I don't think you need a DAW, if you're just playing (though obviously that's a very limited use case). I feel like you could even record tracks into Audacity straight from the instrument
There are TONS of great-sounding free VSTs and some very good cheap ones. Some with outrageous MSRPs can be gotten at a deep discount (though you're usually still shelling out $50-100+). I don't want to shill, but you can also "rent-to-own" for zero fee now (I don't know how many DAWs are available that way, but plugins for sure).
It doesn't HAVE to be expensive, though there's probably some stuff you'll inevitably be tempted to buy, as with any hobby. Is dropping e.g. $100-500 on a hobby once or twice a year "expensive"? It's also pretty easy to get into and buy things as you go. You really don't have to drop thousands of dollars on tons of software.
You don't have to pay a dime. Don't quote me on this, but I think there's a major free modular simulator that's pretty good. You could get some solid vintage synths, a drum machine/groovebox, and effects for like $50-80.
If you think about everything that goes into it, that's a ton of functionality for your buck, especially compared with traditional hardware synths. I have $3-5k in my rig (plus I buy VSTs), and it's still relatively basic. I don't own a single high-end synth; my most expensive would be considered midrange at ~$1200... ONE BOX... you could buy so much fucking software for that, it's crazy.
BTW almost none of this is necessary for you to experiment and create. You need very little... like 1-5 VSTs, a few utilities, and minimal plugins. I encourage OP and anyone reading this to create if they have the urge. I'm 100% positive you can get into it at any price point. Paying more money won't help you as much as you think here. Much better to keep it small and master one box at a time.
>I would much rather have it be like software development where almost everything is free. And instead of paying upfront, synths / effects can make money by taking a cut of your revenue (I don't think that's like software development but it means synth producers still make revenue).
I have to disagree with you there. First of all, I don't agree with this generalization, as it seems very focused on your own background. There are lots of people selling software (including design tools), sometimes for outrageous prices, if it's sought after.
Almost all software is in fact sold, and I don't see how the revenue-cut model is supposed to work. Selling upfront is how small to medium-sized software companies survive: they have to sell to as many people as possible, TBH, because most of their userbase is going to make $0 from music.
I think your frame of reference is out of whack regarding what stuff costs, because we're getting screaming bargains on soft goods (e.g. media, newspapers, etc.). This "everything must be free" attitude is really toxic and is having some troubling effects. I also think it would be hard to prove what people are using and to enforce a revenue cut, even for recordings, and so much of the money in music is made live anyway.
Wow, what a tirade. Sorry folks, but I wanted to get that out there.
To echo a little, as a dev who gets paid to write software, I’m happy to pay other devs for quality software.
Quality DSP from Xfer Records (Serum), FabFilter (many amazing data-viz plugins and good-sounding compressor, limiter, gate, multiband EQ, saturator, etc.), iZotope (Trash 2 distortion), etc. have been worth every cent for their clarity and performance.
Carefully shop around for some of these and you will realize that money does actually buy real quality sometimes. But only sometimes. Most quality products have free limited demos that can be very informative.
Sometimes free/libre stuff is also amazing, like Dexed. And perhaps also Bespoke - I’ll definitely be trying it out, and sending $$$ if I think I’ll use it.
While I have used modern DAWs in recent years, the music-making tool I've used the most was Jeskola Buzz[1]: it's a weird mix of music tracker[1] and modular setup[2] (but not modular in the same way a modular synth works; you connect "machines" such as sequencer->generator->effect->[more effects]->mixer).
This was when I was a "poor" student and I couldn't spend money on music.
Later I could afford real modular/semi-modular synths, and I enjoy them a lot, but I still appreciate being able to connect cables on a screen where you can save the state and rapidly recall it, rather than having to free up a table, set up all the modular synths, connect them together, etc. So I think I'm going to enjoy Bespoke Synth a lot.
Another alternative that I have enjoyed is VCV Rack[3] and its little brother for iPad/iPhone miRack[4].
PS I loved the "Feature Matrix" on the Bespoke home page ahah :-D
There's also NerdSEQ: https://xor-electronics.com/nerdseq/ + actual Eurorack modules! :D I like the Ableton clip-style main view in NerdSEQ, which lets you live mix and match patterns. I haven't come across as good a flow for that in regular software sequencers, especially ones that let you chain patterns of different lengths and polyrhythm them, while having some of them act as automation parameters for one part of the DSP and others as notes for other parts of the same chain (Buzz, Buze, SunVox kind of let you do that).
Yes Buzz! Absolutely loved that software. So powerful and esoteric yet fairly easy to use (if you were familiar with trackers at least).
It was a shame the source code got lost, which killed development for years, and that it never made the leap to cross-platform; otherwise I'm sure I'd still use it today!
Have you tried Drambo on iOS? I guess in some ways it’s the closest thing I’ve found to Buzz, in that it operates at the same level of “granularity” in terms of the modules, and in that it has sequencing (step based in this case, though piano roll is coming) built in. It’s easily my most used iOS music making app these days, really brilliant bit of software.
miRack is also really fun, although I find dealing with the myriad UIs for each module a bit challenging sometimes (though it is a fun touch!). Drambo has a much more standardised layout for each module (like how Buzz was just sliders, IIRC).
> It was a shame the source code got lost
Yes, I can remember that very well! Eventually Oskari rebuilt the app basically from scratch, using .NET (at least IIRC) and that's what you get today if you go to the site and download it. It's not the original app but it maintains the same API for modules. Oskari nowadays is a researcher and he published some interesting articles: https://arxiv.org/abs/1407.5042
> Have you tried Drambo on iOS?
Nice, thank you! I had never heard of it, but after watching a couple of YouTube videos I decided I'm going to download it.
Ah interesting! I think I had moved away from Buzz by the time it was rebuilt but have very fond memories of the software and the community around it.
I hope you enjoy Drambo! It can take a little while to get into the swing of it but if you used to use Buzz I really think you’ll enjoy it. There are a tonne of tutorials on YouTube about it to get going. Being able to host AUv3 plugins is awesome too!
Buzz was my first "DAW" too. I immediately thought this kinda looks like what the Buzz patching area might look like had development continued to now. I'm on Mac now, but the thing that reminds me of Buzz the most currently is SunVox, which is free. I haven't played with it in ages, mainly b/c I use Live now. Did you spend any time in the tracker communities when Buzz was around? I was on em411 in various guises. Nothing like that community online now - too much noise everywhere.
I have tried SunVox on the iPad, it's a very nice application.
On tracker websites I used to be mastazi, just like my HN username, but I don't think I ever released my own music. However, I made a CD and gave it to friends; I might still have a copy somewhere at my parents' home in Italy. That's the only memory I have, because I lost all of my Buzz project files in an HD failure many years ago.
The modern web is different from what we had back then, but if you look hard enough you can still find corners of the web that look familiar :-)
Wow, talk about a blast from the past! I was super-into Jeskola Buzz. I also worked on development of very small games (<400KiB), and we wanted better sound effects in them, so I basically developed a softsynth that was very similar to Jeskola Buzz! I mean, it was sort of horrible, but it did produce some very interesting sounds from tens or hundreds of bytes of data, which was a huge improvement.
Very interesting. A possible improvement could be some "knob-linkable objects" (I don't know the correct name) that could be tied to analog inputs (GPIOs, an ADC connected over I2C or USB, etc.). The purpose would be to let you modify certain parameters live, on the fly, should anyone want to create a physical synth out of this software and a Pi-like small SBC.
Also, I really like the way it links inputs and outputs just by dragging the mouse. Does anyone know if there is any general-purpose library to do that?
I mean, ideally I'd create some list nodes, then use that library to link them in a certain order with the mouse.
> Very interesting. A possible improvement could be some "knob-linkable objects" (I don't know the correct name) that could be tied to analog inputs (GPIOs, an ADC connected over I2C or USB, etc.). The purpose would be to let you modify certain parameters live, on the fly, should anyone want to create a physical synth out of this software and a Pi-like small SBC.
This is usually called "MIDI mapping" or similar and is available in basically every DAW these days.
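If it helps to see how little is going on under the hood, here's a rough sketch in Python using the mido package. The port name and the CC number are made up; DAWs do the same thing behind a nicer "MIDI learn" UI:

    # A toy sketch of MIDI mapping with the mido package (pip install mido).
    # "MyController MIDI 1" and CC 74 are assumptions -- substitute your
    # controller's port name and whatever CC number your knob sends.
    import mido

    cutoff = 0.5  # the software parameter being mapped

    with mido.open_input("MyController MIDI 1") as port:
        for msg in port:
            if msg.type == "control_change" and msg.control == 74:
                cutoff = msg.value / 127.0  # 0-127 knob -> normalized param
                print(f"cutoff = {cutoff:.2f}")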
> Also, I really like the way it links inputs and outputs just by dragging the mouse. Does anyone know if there is any general-purpose library to do that? I mean, ideally I'd create some list nodes, then use that library to link them in a certain order with the mouse.
Something like qjackctl (with Jack, obviously) could do this for you, as you can route things however you want in a drag-and-drop UI.
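And if you'd rather script that routing than drag it around in qjackctl, the JACK API is reachable from most languages. A minimal sketch in Python with the third-party JACK-Client package; the synth port names below are made up, so list yours first:

    # pip install JACK-Client
    import jack

    client = jack.Client("patchbay-script")
    client.activate()

    # Print every audio port JACK currently knows about.
    for port in client.get_ports(is_audio=True):
        print(port.name)

    # Route a (hypothetical) synth's outputs to the system playback ports.
    client.connect("mysynth:out_l", "system:playback_1")
    client.connect("mysynth:out_r", "system:playback_2")

    client.deactivate()
    client.close()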
> This is usually called "MIDI mapping" or similar and is available in basically every DAW these days.
I'm aware of MIDI mapping; my question was about the possibility of making a standalone synthesizer (with keyboard and knobs) in which the necessary controls would be read from analog inputs while still being displayed on the screen as if they were changed with the mouse, since using a mouse is not easy when playing live.
> Something like qjackctl (with Jack, obviously) could do this for you, as you can route things however you want in a drag-and-drop UI.
Sorry, my second sentence wasn't clear at all. I'd love to see a general-purpose library (not tied to sound or any other particular use) for graphically representing structures linked together (list nodes), the way this software and others do with generators, effects, etc. Ideally, it would operate on a header within each structure that contains the relevant fields for linking to others; when I alter the links on the screen, it does the same on the represented nodes.
That would be the way I'd build, for example, a drum machine in which each structure also contains a pattern, and I can move them at will back and forth, replicate them, and set their own fields (number of repeats, etc.). This would again be a sound-related application, but what I'm looking for is something really general purpose.
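To make that concrete, here's a rough Python sketch of the shape I'm after (not an existing library, just the node-header idea):

    from dataclasses import dataclass, field

    @dataclass
    class NodeHeader:
        # The generic "header" a UI library could draw and rewire;
        # the payload is whatever the application actually cares about.
        name: str
        inputs: list = field(default_factory=list)
        outputs: list = field(default_factory=list)
        payload: object = None  # e.g. a drum pattern, repeat count, ...

    def link(src: NodeHeader, dst: NodeHeader) -> None:
        # What the UI would call when you drag a wire from src to dst.
        src.outputs.append(dst)
        dst.inputs.append(src)

    def unlink(src: NodeHeader, dst: NodeHeader) -> None:
        # What the UI would call when you delete a wire.
        src.outputs.remove(dst)
        dst.inputs.remove(src)

    # Example: pattern -> drum voice -> mixer
    pattern = NodeHeader("pattern", payload={"steps": 16, "repeats": 2})
    drum = NodeHeader("drum")
    mixer = NodeHeader("mixer")
    link(pattern, drum)
    link(drum, mixer)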
> I'm aware of MIDI mapping; my question was about the possibility of making a standalone synthesizer (with keyboard and knobs) in which the necessary controls would be read from analog inputs while still being displayed on the screen as if they were changed with the mouse, since using a mouse is not easy when playing live.
I see, I understood it the other way around! But yeah, what you are describing also exists! Many (high-end) mixers have motorized faders, also driven over MIDI if I remember correctly, so you can change the fader in your DAW and it's represented in meatspace.
To be fair, most control surfaces and mixers these days tend to have rotary encoders, so the only thing involved in "representing in meatspace" is changing the LED indicator around/near the encoder. Motors are only required for actual faders that have real linear movement.
This looks very interesting, checking it out now (download links were broken, but I found releases on GitHub). For people new to this type of software, definitely also check out VCV Rack for a more skeuomorphic take on open source software modular.
Is there a modular audio environment like VCV/Reaktor/Max where I could plug stuff together by typing text instead of mousing pips around? I honestly tried to get into Pd, but it felt like typing a book using the character map.
Saw a good talk at PyCon a couple of years ago about FoxDot, which is a wrapper around SuperCollider. Bit foggy on the details now, but it seemed like a good place to start.
Yes, most audio programming languages allow you to create DSP graphs by connecting nodes together. This is often done with some kind of pipe operator (for example, ChucK has the "chuck operator", Faust has a bunch of operators for connecting batches of nodes in different ways).
My favorite approach is in Sporth: because it's concatenative, you don't need any operator, you just type the things you want to connect. Shameless plug, I made a playground for it: https://audiomasher.org/browse
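To give a flavor of the operator style outside any real audio language: Python operator overloading gets you the same reads-like-the-signal-flow feel. This is a toy graph builder, not a synthesis library:

    class Node:
        def __init__(self, name):
            self.name = name
            self.downstream = []

        def __rshift__(self, other):
            # `a >> b` wires a's output into b, in the spirit of ChucK's
            # => operator, and returns b so chains read left to right.
            self.downstream.append(other)
            return other

    osc = Node("sine")
    filt = Node("lowpass")
    dac = Node("dac")

    osc >> filt >> dac  # the code is the signal flow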
I'm the author of Scheme for Max and Scheme for Pd, open source (sibling) projects for scripting, live coding, and doing algorithmic music and sequencing in Max/MSP, Ableton Live, and Pure Data in S7 Scheme. I work in them using an OSC bridge from Vim so that I have a REPL in my editor controlling the environment entirely with lisp.
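The sending side of such a bridge can be tiny. For example, in Python with the python-osc package; the /eval address and port here are placeholders for whatever your receiver listens on, not my actual setup:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7400)

    # Ship a chunk of Scheme for the listening environment to evaluate.
    client.send_message("/eval", "(define (hello) (post \"hi from vim\"))")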
Can anyone in the know compare and contrast bespoke's feature set with Max for Live? (https://www.ableton.com/en/live/max-for-live/) It also has a circuit diagram-ish UI and supports scripting with Node. Put another way, does Ableton already offer Ableton smashed to bits with a baseball bat?
(fwiw, you have to pay for the $1k suite version of Ableton to get Max, so Bespoke could still be a great alternative even if they do a lot of the same things)
I only found out about Bespoke a few minutes ago, but I will say that using Max as a programmer can be incredibly frustrating. I've made a handful of nontrivial M4L devices and have run into tons of weird decisions, limitations, bugs, and plenty of Ableton crashes. (Caveat: the information in this post is mostly from a couple of years ago and might be out of date; I haven't gotten the latest Ableton.)
The JS support is really weird. It's only JS 1.6 (from 2005), it has weird glitches (like loading two instances of the same device causing the first device to stop working), and I couldn't get the timing tighter than about 30ms. Ideally you could write code that runs at audio rate.
There's also "gen", which is a Max-specific scripting language that is presumably real-time suitable through a JIT. Unfortunately you need a separate Max license to use it, even the full Ableton Live Suite doesn't give you gen support. You can sorta hack around and use it by manually editing the .maxpat files (which are almost JSON), copying from a device that uses gen, but there are lots of weird glitches going this route.
A list of a few annoying things about M4L:
* Documentation is pretty sparse and/or low quality, and weirdly split into two (help and references).
* All variables are global across devices by default, local (device-specific) variables need the prefix "---", which is barely documented
* Tons of annoying UX issues. For example, entering an invalid value in the inspector just reverts to the old value. You can't enter an empty string for parameter values; that reverts too (you need to enter a literal ""). Certain functionality is only available depending on whether the device is "locked", so you have to lock/unlock the view all the time if you're working with e.g. subpatchers.
* Abstraction is quite annoying to do. There are three different types of patches, and it's not really clear what the difference is between them. Creating a subpatch and then duplicating it creates two different subpatches; changes in one are not shared with the other.
* ...and a ton of other things. I have a big text document of these gripes I was intending to turn into a blog post, but haven't gotten around to it.
Maybe I'm wrong and there's better ways to do some of these things, but overall my experience learning M4L was pretty bad. If it wasn't the only way to do certain advanced things in Ableton, I'd never touch it again.
I'm also a programmer, and it sounds to me like you're expecting Max to be something it isn't. It's not meant to be a regular programming environment, it was created to be accessible to non-programmers. But you can extend it with C, C++, Java, Csound, Chuck, SuperCollider, etc.
I will agree however that the JS implementation is neutered, probably because they have to worry about support volumes. This is one of the reasons I created Scheme For Max, which unlike the JS object, allows running in the high priority thread, does hot code reloading, and is open source and can be recompiled to your liking. Now that I have Scheme for Max, I love Max (and Max4Live) to bits, they are a fantastic environment, and I do all the coding in S7 Scheme or C. :-)
I really don't think being accessible to non-programmers is the cause of my issues. If anything, I'd say that targeting non-programmers might exacerbate these issues, because those users would be less likely to realize how unintuitive some aspects of Max are, thinking it's because programming in general is difficult.
I didn't touch too much on the specifics in my post, but there are lots of little design oddities I ran into when doing relatively simple tasks. Like creating a multiplexer for messages: the [switch] object has an "empty" channel for input 0 (and regular inputs are 1-indexed), so you'll often need an extra +1 object for the control input. And the inputs of [switch] have no memory, so every time the control changes, you need to send a bang to resend the message in the now-active channel. Or say you want to multiplex signals. That uses the [selector~] object (why not [switch~]?), and has the same +1 issue as [switch]. But what if you want a short crossfade when switching inputs to avoid harsh transients? "Good luck" is all I'll say here.
I'm not generally trying to do anything super-complicated with Max. I have indeed used the C extension API when it makes sense (building a wavetable synth), and it was OK. I still contend that the main visual programming environment is not very good, for programmers and non-programmers alike.
...anyways, all that said, Scheme For Max looks really cool :)
I spent some time figuring out nicer ways to work with it while building an Octatrack-style parameter crossfader for M4L. It provides some abstractions and setup to make using TypeScript with Max a bit more pleasant. There are still plenty of limitations, but I was able to get my device working pretty well in the end. Apologies for the lack of docs!
Yeesh, thanks for sharing all this. I would eagerly read that blog post!
I've gotten really into Ableton this past year and I've been curious whether I should get into Max for Live. Being a programmer and looking at their marketing materials, it seemed like it should tap into the right parts of my brain. But seeing your comment now ... maybe not the right move. Especially because I'm not looking to accomplish anything special, I just want a sandbox to play with digital audio concepts.
I would recommend the Cipriani and Giri books and plain old Max (or Pd). They have written the best introductions to playing with digital audio, full stop. The best computer music books ever made, IMHO.
No problem! Maybe I'm just being too negative, after all there are lots of people having fun creating stuff with it (and I like having the devices I've made). But there was definitely a big impedance mismatch of what I had in my head vs. how to implement it, so I wouldn't personally use it for any exploratory sandbox-y type stuff.
Paid my $15. Great effort. A bit cryptic, but a nice deviation from a standard DAW experience. It's a bit weird how a lot of things you would expect to be separate modules are combined (like LFOs being created by right-clicking on parameters, and filters buried somewhere under the "effect chain" module). Naming is a tad weird as well: LFO is LFO, but the VCO is a "sig gen". I get that it is not really "voltage controlled", but it has been the industry-standard name for decades. I will definitely play more with it and hope it matures.
Inside of similar-ish computer environments (Csound, SuperCollider, ChucK, Pure Data), these things have been known as "gens" (short for generator) for at least as long as [ EDIT: digital synthesis ] has existed (it dates back to Max Mathews and the origin of the MusicN language family).
[ I edited it because I realized that VCO's actually go back to about 1910 ]
Another instance of "Linux == Ubuntu". At least regarding the dependency install script, which is just a bunch of "apt-get"s.
Sad that it has come to this.
Tbh I don't blame them all that much. Even as someone who is enthusiastic about Linux and has been a Linux user for many years, I find it difficult to support distros other than the one I actually use. And lately I've been booting my main Linux box more and more rarely too, as my MBP M1 with macOS is suitable for almost everything I do.
So for example, when I recently went to describe in a README how to install some software that I'm working on, I relied mostly on my memory and secondary sources to try to give users on various distros a pointer on how to install the dependencies in question. From what I could find for openSUSE Leap 15.3, both of the pieces of software are/were not in the official package repos at the time, so I simply stated that, linking to the relevant pages under software.opensuse.org that told me this. But not having run openSUSE myself for years, I am not sure of the reason for it, or indeed whether it's even completely correct.
I guess there'd be room for some CI service where, instead of a specific Docker image like many use, you'd list the dependencies in a kind of meta format, and the service would install the corresponding packages and run the tests across many distros. The service could then generate scripts or README instructions for each distro.
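A toy sketch of what that meta format might reduce to, in Python; all the package names here are best guesses and would need checking per distro:

    # One logical dependency list, per-distro package names,
    # generated install commands.
    DEPS = {
        "libsndfile": {"ubuntu": "libsndfile1-dev",
                       "fedora": "libsndfile-devel",
                       "arch": "libsndfile"},
        "jack": {"ubuntu": "libjack-jackd2-dev",
                 "fedora": "jack-audio-connection-kit-devel",
                 "arch": "jack2"},
    }
    INSTALLERS = {"ubuntu": "apt-get install -y",
                  "fedora": "dnf install -y",
                  "arch": "pacman -S --noconfirm"}

    def install_command(distro: str) -> str:
        pkgs = " ".join(d[distro] for d in DEPS.values())
        return f"{INSTALLERS[distro]} {pkgs}"

    for distro in INSTALLERS:
        print(install_command(distro))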
At the moment I think realistically in most cases it will need to be that people who run various distros take it upon themselves to sort it out and to submit pull requests to projects about how to install and use on any given distro.
My own preferred Linux distro for desktop is Debian-based too. KDE Neon.
But so, I think it may be worth it that you try and submit a PR to the OP for adding instructions or an install script adapted to your own distro of choice.
Although ultimately, if the software grows big enough, eventually someone will add it to the package repositories of each distro, and then there will be no need to install the deps manually or via script, because they'll be specified in the package repos. And if you use Arch, I guess someone is bound to add it to the AUR if it isn't there already.
I am running Ubuntu 18.04, and after running the script, installing Python 3.8, and running ldconfig, it still complains that 'libpython3.8.so.1.0' cannot be found.
This is awesome! It looks a bit like the love child of Visio and GNU Radio Companion, re-spun for audio frequencies :-). I found the explainer video linked from the GitHub repo[1] to be a good way to figure out what it's doing.
One thing I'd love to see in a DAW someday is Houdini-like functionality. It'd be cool if there were a node-based environment that went a step further than just generating sound and could generate MIDI clips, automation clips, etc., with it all integrated into the DAW, so you could see what was generated in the arrangement view.
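You can fake the generation half today by scripting clips outside the DAW and dragging the files in; what's missing is exactly the integration. A quick sketch with Python's mido package (the note choices are arbitrary):

    import random
    import mido

    scale = [60, 63, 65, 67, 70]  # C minor pentatonic, MIDI note numbers
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)

    # Sixteen random eighth notes (240 ticks at the default 480 ticks/beat).
    for _ in range(16):
        note = random.choice(scale)
        track.append(mido.Message("note_on", note=note, velocity=90, time=0))
        track.append(mido.Message("note_off", note=note, velocity=0, time=240))

    mid.save("generated_clip.mid")  # then drag into the arrangement view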
The sequencer I'm working on, https://ossia.io, has a plug-in API which allows that. But no interesting "meta-creation" plug-ins so far for it ^^' at some point I'd like to provide primitive composition-like tools like some of the OpenMusic objects for instance, and there has been work towards e.g. segmenting a sound file with audio improvisation algorithms by an intern.
This is incredibly cool. Looking forward to loading it up.
Reminds me a bit of Reaktor's builder environment, but based on that demo video, it seems like a more useable, better thought out version (and you can't beat that price / feature matrix).
So this is of course very cool, but I would say that one of the main appeals of a hardware modular is the tactile nature of it. It feels very different to experiment with vs plugging virtual cables in software.
Modern controllers and software are designed such that you can easily assign fine-grained inputs on the controller to whatever software parameter you like. Traditional modular required you to build a lot of the things you'd expect from a physical instrument incrementally, and there's some interesting music that comes from that alone, but it's a real joy to have something you can use to construct, say, lifelike vibrato that responds to multidimensional inputs, without a shitton of work.
Much more cost-effective too. You can use a single $300 controller and software to make the same shit that'd require several discrete multi-grand setups if fully implemented in hardware.
Yes, this is where I came from. I spent a great deal of time building stuff in Reaktor/Max and software is still useful for some things. However dedicated hardware (which granted is an expensive hobby) wins on intangibles like happy accidents, touch and feel, muscle memory. It feels less like working with a computer and more like experimenting in an audio laboratory.
I think with instruments it’s more about what keeps you in the creative flow than anything else. At least for me hardware has been 10x more productive than software in terms of actually making music.
Is there a headless mode for synthesizing sounds and performing FX from an API? Or is this like 99.9% of music software, where the authors assume only a human will be the user?
I've been doing research on music AI, inverse synthesis, and the like, and shockingly few open-source software packages were usable for creating training sets.
There's a scripting interface[1] which might help you do some of what you want, but I'm not sure if patches can be run headless.
If you haven't seen them already, SuperCollider, Chuck, PD, Faust and/or the Web Audio API might be better suited to generating training sets in an offline fashion.
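For the offline case the usual recipe is: wrap whatever DSP you care about in a function, sweep its parameters, and write labeled files. A minimal Python sketch with numpy and the stdlib wave module; the detuned-saw "synth" here is just a stand-in for whatever you actually want to sample:

    import wave
    import numpy as np

    SR = 44100

    def render_patch(freq, detune, seconds=1.0):
        # Two naive sawtooth oscillators, slightly detuned.
        t = np.arange(int(SR * seconds)) / SR
        saw = lambda f: 2.0 * ((t * f) % 1.0) - 1.0
        return 0.5 * (saw(freq) + saw(freq * detune))

    # Sweep a parameter to build a labeled training set.
    for i, detune in enumerate(np.linspace(1.0, 1.02, 50)):
        audio = render_patch(220.0, detune)
        pcm = (np.clip(audio, -1, 1) * 32767).astype(np.int16)
        with wave.open(f"sample_{i:03d}.wav", "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(SR)
            f.writeframes(pcm.tobytes())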
It's quite a common sight if you often download installers from smaller publishers. On macOS you get similar warnings (even though Mac "notarization"[1] works differently compared to Windows certification[2]). Obviously it's up to you whether you want to consider the application trusted or not. My rule of thumb for open source projects is that I trust them if they have lots of "stars", and Bespoke has 1.2k: https://github.com/awwbees/BespokeSynth/
(but I'm not suggesting you follow the same rule, I'm not responsible for anything that happens etc. etc.)
Surely there are some; every artist has a different process. But I doubt there are many.
I'm less of an audio artist than a visual artist (which I do professionally to some extent) but I imagine the reasons are similar to why few people make visual art using scripts and imagemagick or some similar workflow.
Most creative output is birthed in a more freeform state of play, to some extent, rather than being reasoned about and assembled like code. Coding itself can be playful, but when your goal is expressing emotions and ideas in art, having to pipe that through a rigid, logical process is much less expressive than grabbing something, even a virtual something, and making some sort of gesture with it. So unless you're trying to express something algorithmic, code is a layer of abstraction that operates in a very different way with persnickety bounds that just aren't useful in most creative expression.
I've done generative design using action script in Photoshop and made some really cool algorithmic photo collages, but even as a long-time developer, the frequency with which I think "this could look cool if I set up a function to do it like this" is pretty rare.
I think all the arts eventually gain a calculated precision to them, just at different stages of the process. With music, structure arrives at the very beginning: rhythm and harmonic structure have some concrete rules that make them cohere. Likewise, visual arts usually require taking measurements from a reference to build up proportionate construction; if it's a cartoon, this tends to be redefined into a composite of simple mathematical ratios of primitives, while a more realistic look develops from something more measured and scientific in nature (using a proportionate divider really is lab work).
Once you get past the preliminaries, play can begin...except, you can usually return to structure by adding another layer of it. Harmonization principles in music have layers of structure to them, and "expressive" pop harmony can often be seen as a very calculated thing of crafting maximal tension and release in a short period. Likewise, you can definitely go in the direction of technical visuals, either with detailed illustration or computer renderings. Often there's a desire to digitally emulate analog workflows without literally using those workflows, which results in some additional technical considerations around achieving that.
I think this is how all content creation software eventually grows into a spaceship panel UI - even if you have a specific thing in mind for each process layer and can reduce it to a preset or template, you have to configure the software to get there.
Even then I'd say the difference is less extreme. When I'm designing physical objects, my free-wheeling creative energies will largely be expended during the sketching phase, and when I'm in the CAD phase, that's more like coding— it's more about getting things to line up, be the proper dimension, etc. In music or visual art, what you're creating is the expressive end product, not a precisely crafted representation of the end product.
Then again, I'm not an architect or mechanical engineer, so maybe that's different for different folks.
Modular + live coding in a single environment sounds fantastic! To be honest, even just another modular-based DAW excites me at the moment. VCV is fantastic, but as with traditional DAWs, there are many different ways to break an egg.
Damn, I've been looking for something similar to Max/PPOOLL but more accessible (especially on Windows/Linux) ever since reading about Tim Hecker using it, and this seems like it could get to that point.
The other link has a little; it's kinda just something I've seen around in different places when I was super curious about his music and how you'd produce stuff like that. The specific ones I can find right now are in random threads[1], and in a Red Bull Music interview he talks about it a tiny bit[2].
That's definitely got a lot of stuff too, though from what I've seen the PPOOLL stuff seems a bit more high-level, closer to Bespoke Synth. I guess the ideal thing is something that spans both, though Bespoke is one of the easiest to use I've seen yet, which is nice.