Sadly it's not really that simple (speaking as someone who has shipped JUCE apps on iOS and Android) – sure, the code might compile on every platform, but in reality there are still a bunch of other factors that can make stuff not work, perform badly, or be complicated to test. I suspect it's really that companies are not willing to do the testing or support required to say they support Linux.
There's major Linux progress and I'd expect more to come soon. JUCE 7 introduced LV2 support, and I think more developers will release on Linux sooner rather than later.
It's the first I'd heard about it, apparently it happened in April 2020.
Kinda weird that PACE works in intellectual property and licensing, and not in audio software development itself.
> JUCE Announces Acquisition by PACE
> The JUCE team is delighted to announce the acquisition of JUCE by PACE Anti-Piracy Inc.
> PACE specialises in software IP protection and secure licensing, and has been making developer tools for software creators for over 35 years. As PACE’s tools are used by a large number of world-class audio software publishers the company understands the importance of JUCE as a foundational piece of industry software infrastructure.
> Kinda weird that PACE works in intellectual property and licensing, and not in audio software development itself.
A lot of their customers use JUCE, so there is a connection even if not very strong.
What is also good (in my opinion) is that they are not a big player in the audio software market. If JUCE had been acquired by, for example, Steinberg, I would be very worried.
I have been interested in coding stuff for making sound, or modifying the sound of a given file, for a while. I've never done any coding for sound tools. Any pointers on where I could start? Hopefully not too math heavy?
Think you're gonna have to be a bit more specific about what kind of sounds you want to make, or how you'd like to modify them. The space is very wide in general when it comes to "generating sound". Also, a lot of sound generation/modification will involve quite a bit of math, but with the right environment you can make it more about trying different things and seeing what sounds good, rather than writing algorithms with text.
A very easy way to get started would be to use something like NoiseCraft, here is one example of what you could do: https://noisecraft.app/608
Then if you'd like to create more advanced stuff (or rather, faster at creating more advanced stuff, NoiseCraft can do a lot in the right hands), Max for Live is a pretty solid environment: https://www.ableton.com/en/live/max-for-live/
I would like to be able to output sound files like *.wav directly from a programming language, writing code which works with frequencies, amplitude, and channels, creating effects like ramp-offs or other things one can see in tools like Audacity.
This is probably more mathematical than using existing tools. I definitely want to touch the code, and I would like to have an understanding deep enough that I could do it in almost any programming language.
Perhaps there is something beginner friendly that also explains the math behind it for people without a math or physics degree? Maybe I should look for something like "sound processing/generation from scratch".
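This kind of thing is very doable with just a standard library, no DSP framework needed. Here's a minimal sketch in Python (names and parameters are my own, purely illustrative): generate a sine tone at a given frequency and amplitude, apply a linear fade-out ramp like the one you'd draw in Audacity, and write the result as a mono 16-bit PCM WAV file.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (CD quality)

def sine_with_fadeout(freq_hz, duration_s, fade_s, amplitude=0.8):
    """Generate a sine tone with a linear fade-out ramp at the end.

    Returns a list of float samples in the range [-1.0, 1.0].
    """
    n_samples = int(SAMPLE_RATE * duration_s)
    fade_samples = int(SAMPLE_RATE * fade_s)
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        value = amplitude * math.sin(2 * math.pi * freq_hz * t)
        # Linear ramp from 1.0 down toward 0.0 over the last fade_samples
        remaining = n_samples - i
        if remaining < fade_samples:
            value *= remaining / fade_samples
        samples.append(value)
    return samples

def write_wav(path, samples):
    """Write float samples as a mono, 16-bit PCM WAV file."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 2 bytes = 16-bit samples
        f.setframerate(SAMPLE_RATE)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        f.writeframes(frames)

# 2 seconds of A440 with a half-second fade-out
write_wav("tone.wav", sine_with_fadeout(440.0, 2.0, 0.5))
```

The math here is just `sin(2π · f · t)` evaluated once per sample; effects like the fade are plain multiplications on the sample values, which is why this translates to almost any language with a WAV-writing library (or by writing the 44-byte WAV header yourself).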
Overtone is pretty great as a programming environment where you can hear the results live as you evaluate code (https://github.com/overtone/overtone).
It's using SuperCollider under the hood (https://supercollider.github.io/) which describes itself as "A platform for audio synthesis and algorithmic composition"
You can pretty much achieve anything in terms of programmable music composition with Overtone/SuperCollider.
Having played around with Csound a fair bit, it has some pretty good functions for creating instruments and sounds but the API for arranging in it is genuinely horrible to use. Sticking the Csound guts in a plugin and arranging in a DAW makes a lot of sense to me.
Csound has a lot of ready-made oscillators, filters, and reverbs, but most of them sound pretty mediocre, because they're made by academics, not professional plugin developers.
There is a world of difference between a simple four pole filter and a really good juicy Moog filter emulation. Csound has the former, but it only has poor attempts at the latter.
The original point of Csound was to have unlimited anything connected in unlimited ways. But you need a code front end to generate the Csound code to handle any kind of non-trivial complexity, and that's a huge project in its own right.
So putting Csound into VSTs makes no sense at all. You may as well just write raw C++ and package it using JUCE. The code will be faster and more efficient and just as cross-platform. And if you want to use ready-made C++ libraries for basic building blocks, they're not hard to find.
Pure Data vs Max/MSP is a similar situation. The former is totally free, but it simply sounds worse than the latter, somehow even with oscillators alone. I think a lot of non-musicians don't realise the huge chasm between a "digital", cheap-sounding thing and something that sounds warm and full out of the box. It's also just not clear what to do to achieve such a thing in general.
You see, the assumption is that if you are using Cabbage, you are either learning Csound or you already know it. So Cabbage allows you to sidestep the abysmal arrangement API in Csound for something more intuitive by embedding it in a DAW. But now I realise this makes no sense at all, and I will advise any hobbyists currently doing this that they need to start learning C++ and thinking about cross-platform support.
It makes sense when you don't want to write C++. Also, Csound already has many built-in functions, so you can use them and not bother with implementing primitives.
I've always been interested in learning how to write an audio plugin. As someone who loves playing with digital music production, I think there is still a lot of room for innovation here by outsiders who might not be new to programming, but could be new to audio. The untrained ear can be a great tool for discovering new and interesting sounds that could be the backdrop of the next generation of music.
I just started learning myself, and this weekend got VS Code to launch Reaper as a debug target so I can read print statements from my VST built with JUCE. This is where the fun begins.
Open to helping anyone else get to this point as well.
Unfortunately Steinberg has a death grip on this space and the SDKs are horrendous. Not to mention the extremely commercialized VST UI framework scene.
It's a sad state, and it's been a sad state for years and years.
There's also the CLAP plugin format, which might have a chance: it's supported by Bitwig, with Avid and Reaper adding it too. It seems lots of people want to escape Steinberg licensing if they can.
[0]: https://juce.com/ [1]: https://www.bitwig.com/stories/clap-the-new-audio-plug-in-st... [2]: https://github.com/free-audio/clap-juce-extensions