The Design of the Roland Juno Synthesizer's Oscillators (thea.codes)
241 points by cushychicken on Dec 31, 2020 | 81 comments



Highly recommend checking out the WebAudio API[1] for anyone interested in this kind of thing.

It’s got some obvious shortcomings and wild browser inconsistencies, but lets you go through much of the same design process in making your own synth.

[1] https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_A...
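
For a flavor of it, here's a minimal voice in the spirit of the article's signal chain (oscillator into low-pass filter). A sketch, assuming a browser AudioContext is available:

  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.type = 'sawtooth';
  osc.frequency.value = 220;     // A3
  const filter = ctx.createBiquadFilter();
  filter.type = 'lowpass';       // a stand-in for the Juno's VCF
  filter.frequency.value = 1200;
  osc.connect(filter).connect(ctx.destination);
  osc.start();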


[flagged]


Hacker News does not like JavaScript.


Downvoting makes me think of a bunch of demented hounds chasing after anything that deviates from the party line.


Very nice article! The technique of using an op amp with a capacitor as an integrator is also a key component of analog computers. By hooking up a few integrators, an analog computer can quickly solve differential equations (in the 1960s, much faster than a digital computer could).
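
In code terms (a digital sketch of the idea, illustrative only): two chained integrators solve y'' = -y, the same topology an analog computer would patch with two op-amp integrators and an inverter:

  // semi-implicit Euler steps standing in for continuous integration
  let y = 1;   // initial condition: y(0) = 1
  let dy = 0;  // initial condition: y'(0) = 0
  const dt = 0.001;
  for (let t = 0; t < 2 * Math.PI; t += dt) {
    dy += -y * dt; // first integrator: accumulates y'' = -y into y'
    y += dy * dt;  // second integrator: accumulates y' into y
  }
  console.log(y.toFixed(2)); // ~1.00: y(t) = cos(t), back after one full period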


This is also where op-amps get their name - operational amplifiers - because they can perform many mathematical operations.

My college electronics prof was old-school enough to tell us that back in his day it was "standard practice" to use op-amps to solve differential equations from their calc class.


Prior to moving to PDP-11s and paper tape, the special-effects company I worked for many years ago used analogue computers stuffed with op-amps to control the motion of camera rigs - i.e. analogue motion control. I never saw one in operation, but apparently a cardinal sin was opening the door while the system was running (the temperature change would cause enough error to ruin the shot).


This is the best article on DCOs that I've ever seen, so many thanks to the author. Digital oscillators are often unfairly misrepresented, and it's difficult to find a fair comparison of the fundamentals.


You'd think that making a fully digital square or sawtooth wave would be trivial compared to this, but in fact it's much, much more difficult because the digital wave must avoid aliasing, an issue the analog wave generator needn't concern itself with.


Depends on how you approach 'trivial'. I've been hacking on Music Thing Modular Chord Organs for some time: they're little Eurorack oscillators that can play chords, based on a Teensy. The Teensy audio library determines what waves you can get, and there's a 12-bit DAC built in.

You can specify the sampling rate of the Teensy with a bit of fiddling, and the limits depend on what you're asking the audio library to do. The stock Chord Organ's set up to run at 44.1k, using a couple of submixes in the library to do things like set waveform levels independently.

You can also set an oscillator to drive the output pin directly, with just one waveform.

I've got a couple of 'em running at 2,822,400 Hz instead of 44,100 Hz. That's 2,822k for one oscillator (I can get four running at 768k), by telling the Teensy its sample rate is something hilariously high, just to run some square and 12-bit triangle and sawtooth waves.

I avoid aliasing pretty well :) and it's definitely a fully digital DAC output. It's just good at being free from aliasing because I'm not concerned about what the sampling rate 'means' in terms of 'usable notes'. The only thing I care about is getting my waves nicer. The Teensy also has a rectangular wave, which is PWM adjustable. That, at 2,822k or even 768k, is quite nice even when it's a really thin pulse…

And of course you could use the square to drive an integrator :)


None of this was possible in the early 80s when the Junos were designed.

The Cortex M4 in the Teensy runs at 200MHz. In the early 80s consumer-grade DSP didn't exist at all. There were some exotic studio processors with slow microprocessors supported by custom hardware multipliers, but 8MHz 16-bit microprocessors were considered fast, expensive, and exotic.

Roland had a stellar run with their hardware designs. The Junos may be among the best synthesizers ever made, hitting the bullseye for price, character, simple but flexible programmability, and a legendary sweet but powerful sound.

There's a finesse to this kind of audio design that IMO makes a lot of Eurorack seem clumsy and uninspired in comparison.


Agreed. It's also pretty amazing what Yamaha was able to accomplish in the 1980s with the DX7 and its successors using entirely digital synthesis, without capable DSPs or microcontrollers. Some interesting background here: https://www.gearslutz.com/board/showpost.php?p=10602091&post...


Yeah, from a technology point of view 80s synths are incredibly interesting. BTW, a recent discussion on the DX7: https://news.ycombinator.com/item?id=25592265

The thing with Eurorack that can feel uninspiring is that it's never one musical instrument, especially if you rewire it often; it becomes this abstract blob of music tech that is never finished, which makes it both addictive and expensive.


The Alpha Juno already had a high resolution digital oscillator.


This is how the oscillators on the Novation Peak work too, using FPGAs to generate the signal at 24 MHz, apparently (https://novationmusic.com/en/peak-explained).

Also, Teensy is amazing! I got one recently to play with some audio stuff and it’s so much fun. Amazingly capable board really, love that it can do USB MIDI etc.


What’s wrong with a digital AA filter?

Also, aliasing as a concept doesn’t show up in purely digital systems.


The problem is that if you sample a mathematical waveform such as a sawtooth wave, it contains infinitely high frequencies, so the act of sampling creates aliasing.

There are a lot of solutions to this in the literature, such as wavetables that are band-limited, but for simple virtual analog synthesizers, there's a technique that is close to magic in its simplicity and quality: PolyBLEP. Check it out.
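
For reference, the core of it is just a polynomial correction applied near each discontinuity. A sketch, roughly following the usual PolyBLEP formulation (phase t in [0, 1), dt = freq / sampleRate):

  function polyBlep(t: number, dt: number): number {
    if (t < dt) {              // just after the discontinuity
      t /= dt;
      return t + t - t * t - 1;
    } else if (t > 1 - dt) {   // just before the discontinuity
      t = (t - 1) / dt;
      return t * t + t + t + 1;
    }
    return 0;                  // elsewhere: no correction needed
  }

  function sawSample(phase: number, dt: number): number {
    const naive = 2 * phase - 1;        // naive sawtooth in [-1, 1]
    return naive - polyBlep(phase, dt); // subtract the aliasing residue
  }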


Complexity.

Digital AA is very hard to design; the mathematics involved are much more difficult to grasp than in the analog case.

Aliasing shows up ESPECIALLY in purely digital systems. Take a sample every half period of a periodic signal like a sine wave: your output will be constant, or zero.

Take a digital sample (render a frame) of a wheel in a 3D game (draw the wheel at a specific position) every 30th or 60th of a second. The wheel will appear to go backwards at some speeds (the stroboscopic effect, a form of aliasing).


There's nothing wrong with digital AA filters today, but they weren't as good or as computationally cheap in 1982 when the Juno was released; most digital audio gear used analog AA filters back then. Also, the Juno was polyphonic, so handling six sawtooths at once plus antialiasing would have been too hard on the poor chip!

And aliasing does happen in digital oscillators when you're using a fixed clock and sample rate. If you have a flexible clock (per voice), you can just increment the output value in equal steps (and then reset it to zero at the end of the "sawtooth") and you won't get aliasing. But if you have a fixed clock, you have to use a (very simple) equation to calculate the position of the waveform at a given sample number. So you get aliasing as you approach the Nyquist frequency, and you need an anti-aliasing filter to fix it.

And since it was 1982, the sampling rate of the chip was probably not that high, so you would get aliasing much earlier than at ~22 kHz.
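
To make the fixed-clock case concrete, here's a sketch of that "very simple equation" (illustrative, not the Juno's actual hardware):

  // naive fixed-clock sawtooth: position computed from the sample number
  function naiveSaw(n: number, freq: number, sampleRate: number): number {
    const phase = (n * freq / sampleRate) % 1; // position within the cycle
    return 2 * phase - 1;                      // ramp up, then snap back
  }
  // The snap-back lands between samples for almost every freq, and the
  // implied discontinuity carries energy above Nyquist that folds back
  // into the audible band as aliasing.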


That's an amazing claim.

Try drawing what would appear to be a perfect square wave as a digital waveform. Up, flat, down, flat. Guess what: you've just introduced enormous amounts of aliasing.


That's not what aliasing means.

"Lollypop" sampling vs. real DAC output: https://mk0soundguyshosprmrt.kinstacdn.com/wp-content/upload...

As you've observed, the output of a real digital circuit includes the flat parts, unlike the infinitely thin "lollypops" that you see in DSP textbooks.

But the jaggies themselves are not aliasing, they're actually the result of anti-aliasing. The sample-and-hold reconstruction filter that practically every DAC uses is an anti-aliasing filter, albeit a crude one, but if you've chosen your sampling frequency correctly, the harmonics from the jaggies exist entirely outside of your passband. A simple RC filter removes them almost perfectly.

This is aliasing: https://upload.wikimedia.org/wikipedia/commons/thumb/2/28/Al...

Digitally sampling an analog input signal is usually where you need to actually put some design effort into the AA filter, because you don't necessarily know whether the analog signal at your input has a lot of energy just outside your passband. Such an input can cause you to see an alias that's comparable to or louder than your signal of interest, and there's no way to get around that with direct sampling except for arbitrarily sharp AA filters.

But with a DAC, you know exactly what you're putting into your AA filter (your sampled signal convolved with a box), and you can easily arrange things so that the AA filter both works perfectly (sampling > Nyquist) and is easy to design (sampling >> Nyquist).


A naive digital square wave will absolutely generate aliasing; there is an entire area of audio DSP research aimed at generating alias-free waveforms (another poster mentioned BLEP, which is a popular technique). You can trivially see this by looking at a spectrogram of a naively synthesized waveform: all kinds of junk will appear that is also clearly audible.

This is because the "naive" approach to e.g. a digital square wave is to sample a logically continuous function defined as:

  // runnable version of the naive approach, phase in [0, 1):
  function naiveSquare(phase: number): number {
    return phase < 0.5 ? -1 : 1;
  }
Firstly, this is almost always going to be out of tune, because the transitions from -1 to 1 and from 1 to -1 necessarily fall across a digital sample boundary: for any frequency not evenly dividing the sample rate, the transition will be slightly early or late.

Secondly, the sharp rising edge of the square wave, from a "sum of sines" perspective, requires sinusoidal components that are not part of a conventional square wave in order to be represented digitally. If you were to digitally sample an analog square wave through an ADC, it would look quite different from the result of this function.

The naive digital sawtooth wave suffers from similar issues due to its discontinuity.


You and I are in agreement.

An ideal square wave is not a signal that is band-limited to the audio range, so attempting to sample it directly (the naive approach) doesn't work. Instead, you want to sample a band-limited approximation of the square wave (which is what BLEP, etc. do).

If you're synthesizing, the naive approach doesn't work because the interaction between the square wave transitions and the sample rate introduces energy into the passband as a beat frequency. Yes, this interaction is aliasing, which is what we expect to get when sampling a signal that is not band-limited to our passband. Thus BLEP oscillators or other polyphase techniques are needed in order to synthesize a correctly sampled, band-limited approximation of a square wave.

Similarly, if you're sampling an analog square wave with a high slew rate, you absolutely must have an analog anti-aliasing filter in front of your ADC, or you will end up with exactly the same beat frequencies in your sampled signal.


An ideal square or saw wave contains infinitely high frequencies because of the discontinuities.

When you represent an ideal square or saw wave as a series of digital values, you are effectively sampling it at a much lower rate than the infinite rate required for a perfectly sharp edge. This introduces aliasing, which is why a naive implementation of a square or saw wave will sound terrible.

There are a whole bunch of techniques you can use to solve this problem without excessive CPU usage but they are non-trivial.


A list of things that don't respond to infinitely high frequencies:

Your ears

Your speakers

The op-amps in your fancypants analog synthesizers

--

More critically, signals having high frequency components is. not. aliasing. Neither is distortion.

Out-of-band signals being erroneously remapped into the passband via the sampling process is aliasing.

That's why it's called an alias. The remapped out-of-band signal is indistinguishable from a different in-band signal.


I think you need to revise your understanding of what aliasing is, because you are - to be blunt - wrong.

Any process that generates or distorts a waveform at any sample rate will generate aliasing in the digital domain - unless it has no >Nyquist components.

And most waveforms do have >Nyquist components. Sine waves <Nyquist don't, but ramps, steps, and virtually all forms of distortion do.

If you don't understand this go back and reread the basics, because hundreds of DSP engineers have spent countless hours devising ways to handle this problem.


You miss a lot of nuance when you're busy being blunt.

An ideal signal that you want to approximate may contain infinite frequency components, but this property alone is not what gives you aliasing artifacts. Sampling incorrectly does. You can absolutely approximate this signal without aliasing artifacts, but you need to sample a band-limited approximation of the signal, not the signal itself. That's what BLEP does.

Attempting to sample the ideal signal directly (the naive approach) will indeed give you aliasing artifacts. But we already know that the ideal signal isn't band-limited, so we should also know that sampling it directly (at any frequency) is always folly.

Do you see what I was trying to highlight? "Oh no, the signal has high frequency components!" is not the problem. Incorrect sampling is the problem. Aliasing is an effect, not a cause. You don't "get aliasing" just because the ideal signal has high frequency components, you get aliasing because you sampled the wrong thing.

You can call it aliasing if you want... you wouldn't be wrong. It's just not particularly instructive, and referring to everything as aliasing without being more specific about where it's coming from is what leads people down rabbit holes like running a DAC at 2+ MHz because nobody showed them how to use a polyphase filter. Sure, it works, it's just wasteful, and you can accomplish the same thing by sampling better rather than faster.

Then again, your condescending response leads me to believe that you're not actually interested in being instructive.


Aliasing occurs when you try to sample frequencies above half your sampling rate. This is the famous Nyquist theorem. This might help you understand why it can happen in naively implemented digital oscillators:

http://www.martin-finke.de/blog/articles/audio-plugins-018-p...
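
A quick worked example of the folding (a sketch; an alias lands at the distance to the nearest multiple of the sample rate):

  function aliasedFreq(f: number, fs: number): number {
    return Math.abs(f - fs * Math.round(f / fs));
  }
  // a 30 kHz component sampled at 44.1 kHz shows up at 14.1 kHz:
  console.log(aliasedFreq(30000, 44100)); // 14100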



> That's not what aliasing means.

?? Aliasing doesn't just happen when you're converting an analog signal to a digital one. It can also be introduced when you're manually building or converting a digital signal with the intention of it sounding like a given analog signal, or like the ultimate output of another digital signal. It was this usage I was referring to. For example, if you downsample a digital signal from another digital signal without filtering out frequencies above Nyquist. Or in the example I gave, if you draw a square wave with the expectation that it sound like an analog square wave, not realizing that the sharp corners you've drawn didn't add in high-frequency odd harmonics, but rather partials with Nyquist-reflected aliased frequencies. It's this second case which causes problems for naive approaches to digital wave-design: you can't filter out the incorrectly-introduced reflected frequencies after the fact, but rather must build the wave in the first place to not have them.


Sorry, I misunderstood your original comment. When you said:

> Up, flat, down, flat. Guess what: you've just introduced enormous amounts of aliasing.

I thought you were suggesting that the jagged, stair-step output of a DAC is what causes aliasing, which is a common misconception.

I was trying to point out that stair-step output is just the reconstruction filter of the DAC operating as intended, and that a reconstruction filter is an anti-aliasing filter. In the case of a DAC's sample-and-hold, it's an easy way to select the baseband alias. [1]

I think one reason this thread went off the rails is that there are two anti-aliasing filters in a digital synthesizer. I was initially referring to the hardware anti-aliasing filter at the output of the DAC, which is also called a reconstruction filter or an anti-imaging filter. Everybody else was referring to the digital anti-aliasing filter that you need in order to create a band-limited approximation of an infinite bandwidth ideal signal so that you can actually sample it, because attempting to sample the ideal signal directly is always incorrect.

Given that the article is about the precursor to digital synthesis, I suppose I should have realized that people were going to be more interested in discussing the latter. However, now we're getting to the second reason this thread went off the rails (and the reason I usually regret participating in audiophile threads): Differences in terminology, points of focus, and "Well, Ackchyually" audiophile-grade condescension cause people to read past each other and continue to argue despite largely being in agreement.

If you read my downstream responses (https://news.ycombinator.com/item?id=25601970 and https://news.ycombinator.com/item?id=25602267), I hope it's clear that I both understand and agree with the actual argument you wanted to make.

[1] Aside: If you were to insert sinusoids at twice the sampling frequency instead of flats in between each sample, you'd be selecting the alias centered around twice the sampling frequency instead of the one at baseband. This is (roughly) how a mixer works.


> I thought you were suggesting that the jagged, stair-step output of a DAC is what causes aliasing, which is a common misconception.

Oh, heavens no. Just describing the naive way to draw a square wave in PCM.


It's not possible to get aliasing without a reference signal or reference frequency. A single purely digital signal, perhaps clocked or perhaps not, has nothing to alias against. So the claim has something to it!

Once you introduce a master clock, especially one clocking an ADC, then aliasing can and does indeed show up.


In the digital domain Nyquist is a normalised ratio. It is not tied to a specific hardware clock rate.

So it's perfectly possible - in fact dangerously easy - to generate waveforms with components that are >Nyquist.

It doesn't matter if the hardware runs at gigahertz frequencies or subsonic frequencies. In fact it doesn't matter if there's never any hardware at all.

Because there is always an implied sample rate of 1 x fs, and any signal which generates components of more than 0.5 x fs will alias.


I don't consider a "digital signal" to have any sort of implied clock or sample rate whatsoever. This appears to be a terminology mismatch between audio people and electrical engineers, if the rest of this thread is any guide.

Here is my view of things: along the time axis, signals can be either continuous-time or discretized/discrete-time. Along the intensity axis, for example voltage, signals can be either continuously-variable or discretized/discrete-valued.

I consider an "analog signal" to be one that's continuous in time and in value. I consider a "digital signal" to be one that's continuous in time but discrete in value; for instance, the output of an (asynchronous) logic gate.

Your definition of "digital domain" seems to be discrete-time, discrete-valued signals; for example, the readings out of an ADC or commands into a DAC.

The difference between the two would nicely explain a lot of the confusion here.


Aliasing only appears when you try to make an analog signal. A digital signal is not beholden to channels. It is a pure mathematical entity.


All right, mister fancypants. I decide to create a sawtooth signal by repeatedly drawing a diagonal line, then a vertical line in my digital PCM sample array. It looks like a sawtooth when I plot it, but the sound that comes out doesn't sound like a sawtooth: it has lots of reflected harmonics caused by those sharp corners. What is your term for the artifact that I had unintentionally introduced?


Aliasing. That isn’t a digital signal.


Of course it's a digital signal. It's a PCM array.

If what you mean is "the artifact isn't 'aliasing' until it finally goes through the DAC", my answer to you is: that's a useless and counterproductive distinction. I inadvertently introduced an artifact into my digital signal that I cannot easily remove. That this artifact magically gains a label only once it passes through the DAC is pedantic nonsense that helps no one.


This is not some useless academic distinction. You are measuring an analog signal. You do not worry about aliasing in all digital systems, even if they are representing something you might typically make into an analog signal. Why would you want a digital square wave? Who knows. But aliasing is not in the digital signal.


Thanks to the author for this. My amateur-hour electronics dream is to level up from simple Arduino/control voltage hacking to an actual, workable polyphonic synthesizer.


For a little extra inspiration, have a look over Look Mum No Computer's projects. He's a little out there, but he has deep knowledge and boundless enthusiasm.

https://www.lookmumnocomputer.com/projects


He's a great inspiration for people starting out, because he's proof that you don't need to be the most meticulous or "clean" engineer in order to make incredible stuff. He's very prolific though.


http://musicfromouterspace.com is a great place to start.

Going from simple DC Arduino inputs to high-precision analog is a bear, but super rewarding. Get ready to get your soldering skills right and start caring about resistor tolerances!


I learned electronics from Ray's diagrams, he is truly missed.

Another of my synth DIY mentors (with whom I worked on the Arturia MiniBrute) is Yves Usson: http://yusynth.net/index_en.php


> WILL attract alien visitations - CAUTION ADVISED
> WILL DEFINITELY cut into TV viewing time - CAUTION ADVISED
> Stimulates plant growth and calms goldfish.

Well thanks. There goes all my free time... But seriously, that website looks like an amazing resource.


Ha! Been there. It's surprisingly tricky (I know lots of software, not so much hardware). It was great fun to work on though.


Great article!

For those interested in an open source design embodying these techniques, I have an open source hardware design here: https://github.com/russellmcc/dco with a blog write-up here: https://www.russellmcc.com/posts/2013-12-01-DCO.html .

Also of relevance is an open source hardware design for the later Roland Alpha Juno digital oscillator: https://github.com/russellmcc/alphaosc https://www.russellmcc.com/posts/2019-06-14-Alpha-DCO.html


Just want to say that I was caught off-guard by the quality of the article, the text, the media, all of it. Bravo to the author.


If you like the sound of the Juno and don’t want to pay vintage synth prices for one, I highly recommend the Behringer DeepMind.

It is clearly modeled after the Juno, and adds a lot of nice features (2 DCOs per voice, 2 more-capable LFOs, an arpeggiator, full FX suite...) and you can get the 12-voice keyboard model for about half of what you’d pay for a Juno 106.

It’s got a big screen in the middle, but the front panel is designed so that you almost never have to use it, even when building patches. They have a great shortcut system for assigning modulation that’s easy to pick up and very fast to use. It’s just a great synth all around.


Great article. Thanks for the share, and if Thea reads this, thanks for the article. I look forward to reading it more thoroughly.


I can only second this. It was surprisingly easy to follow along even though I know very little about electronics (I do program synths, though). The interactive illustrations are also great.


> The biggest problem with the VCO design is that the control voltage that determines the frequency is generated by a complex circuit that is very sensitive to temperature drift and manufacturing tolerances. This means that the generated control voltage might not match up exactly to what it should be for the desired note and it'll end up sounding out of tune. What's worse is that even if you adjust for this you'll need to re-adjust as the instrument gets warmer!

Instability is ironically prized and cherished by synth enthusiasts, who will claim it makes old synthesizers sound "better" than modern ones...


Scientifically perfect output has rarely been a primary desideratum of musical instrumentation, and in fact it is somewhat limiting in terms of the number of timbres you can create. Hence why historic analog synths included multiple oscillators per voice with independent tuning. The early Junos famously included a chorus to thicken up their single DCO per voice. Later digital Roland synths introduced the "super saw", which is many superimposed detuned oscillators. Going back further, a full orchestra or choir ensemble is certainly not exactly in tune across every performer.

And as someone who makes digital synths for a living, even 100% accurate, fully digital oscillators are difficult to fully realize in practice. Aliasing concerns abound at every stage of a softsynth: there are well-known formulas for calculating alias-free waveforms at a fixed frequency, but these start to get expensive when you're also considering PWM and frequency modulation, which are essential tools for achieving a variety of different sounds. And aliasing sounds a lot worse than analog imperfections.
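
For instance, the textbook fix at a fixed pitch is additive synthesis: build the waveform from sine harmonics and simply stop at Nyquist. A sketch (illustrative, and exactly the kind of thing that gets expensive once pitch modulates):

  // band-limited sawtooth via its Fourier series, truncated at Nyquist
  function bandlimitedSaw(phase: number, freq: number, sampleRate: number): number {
    const maxHarmonic = Math.floor(sampleRate / 2 / freq);
    let out = 0;
    for (let k = 1; k <= maxHarmonic; k++) {
      out += Math.sin(2 * Math.PI * k * phase) / k;
    }
    return (2 / Math.PI) * out; // scale to roughly [-1, 1]
  }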


A lot of the best commercial softsynths now support all kinds of audio-rate modulations everywhere in the signal chain. I guess they must just be throwing CPU power at it and oversampling the crap out of everything.


Basically - yes. There's now enough CPU to do this.

But there's more to that analog sound. The opamps and the rest of the circuitry, including supposedly passive elements like resistors and capacitors, all add tiny amounts of distortion and other parasitic interaction with the rest of the synth which varies with frequency content, level, and dynamics.

Human ears are incredibly sensitive to these variations, and the circuits in 70s and 80s synths had a lot of components, all adding their own colour. Which is why if you A/B the hardware with the software, the software rarely wins.

In DSP terms there are differences between functional/abstract models, component-level models, and component-level models that include these complications and imperfections. There hasn't been a lot of work done on the latter, and if they were modelled accurately they would likely need an order of magnitude more DSP cycles than current VSTs.


This is true although I find that the current emulations are good enough for a lot of sounds, especially when they’re buried in a mix and played back on a low bit rate stream on bad speakers.


> aliasing sounds a lot worse than analog imperfections

The Commodore Amiga has the crudest possible anti-aliasing filter: a single fixed-frequency low-pass filter shared by all 4 channels. It does not work very well, so aliasing is a defining characteristic of the Amiga sound. Aliasing adds high frequency content that compensates for the low sample rates required by memory limitations, and extra harmonic complexity that compensates for the low channel count. I think Amiga modules sound worse if played by something with good anti-aliasing filters.


Brian Eno’s quote comes to mind:

> Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit - all of these will be cherished and emulated as soon as they can be avoided.


In my (pedantic) understanding, the Amiga doesn't suffer from aliasing, because it uses a period register to divide a very high master clock (rather than a frequency register), and always spends an integer number of clocks (output samples) on each input sample. However, it suffers from imaging (a reconstruction artifact) due to using a stair-step (zero-order hold) as a reconstruction filter. And IMO reconstruction artifacts sound worse (not harmonically spaced) when the sampling rate isn't an integer multiple of the note's frequency (i.e. when the note pitch's period isn't an integer number of samples).

Aliasing (a sampling artifact) is an issue in synthesizers which run at low/fixed sampling rates and sample waveforms to produce output (often using frequency registers). This includes modern software synthesis, and also hardware like DS (ZOH, samples, almost like Amiga), N163 (ZOH, short wavetables), Yamaha (FM on sine waves, IDK the details), and to a lesser extent SNES (Gaussian reconstruction, samples).

Reconstruction artifacts (extra frequencies at x * fundamental, may exceed sampling rate) are different from sampling/aliasing artifacts (frequencies are taken modulo the sampling rate). But they can combine (like on DS hardware).
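
To make the distinction concrete, a rough sketch (illustrative only, not actual Paula behavior):

  const input = [0, 0.5, 1, 0.5, 0, -0.5, -1, -0.5]; // one cycle of a wavetable
  const output: number[] = [];

  // Period register: hold each input sample for an integer number of
  // output ticks. Nothing is skipped, so no aliasing, but the zero-order
  // hold creates imaging (stair-step harmonics).
  const period = 4;
  for (const s of input)
    for (let t = 0; t < period; t++) output.push(s);

  // Frequency register: step fractionally through the input at a fixed
  // output rate. Samples get skipped or repeated, and that is what folds
  // energy back into band as aliasing.
  const step = 1.37;
  for (let phase = 0; phase < input.length; phase += step)
    output.push(input[Math.floor(phase)]);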


I was making analog synthesizers as a hobby back in the 80s while studying at university.

For this kind of circuitry I would reduce temperature drift by enclosing it in a thermostatically heated enclosure kept well above room temperature. That way, after the initial warm-up period, there is very little drift.


I used to dabble with making tunes with a friend - we had access to a Juno 106, and a couple of Juno VST plugins. We'd do a bunch of takes with both, and then later we'd choose "blind" which versions to use. The real thing won every time... we tried to analyze why, but never really figured it out - though it's probably just because the music we were making was a bit wonky anyway ;)


I think a lot of "blind" preferences like this are likely due to appreciating the accidental noise introduced in the signal path of the hardware instrument. The Juno chorus in particular is/was notoriously noisy. Add to that some crappy audio cables to get the sound out of the machine and into your recording device, ground loop hum, nearby AM radio... No software emulation is going to include all that "unwanted" sound unless it's deliberately modeling it as well.

Personally I'm from the camp that likes super-clean audio, so I was overjoyed when software synths fixed all the problems that we used to have with hardware, but there's definitely a camp that really likes the "warmth" or unpredictability of the original machine's output. Sometimes I think people spend more time adding plugins to dirty up their sound than they did making the original patch in the first place.


We ran the VST synths through the same signal path as the Juno (as a test - with good DACs)... but yeah, I've long-since given up on hardware synths: they're fun, but just take up too much space. I miss combining the sounds though - most of our tracks would be 90% VSTs, 10% analogue. I think you're right though: my mission now is to try and "recreate" that 10% with plugins.


My biggest problem with software has been that it perhaps limits creativity by providing such an unlimited palette.

My first synthesizer was a Juno 60, and I owned a bunch of other vintage and modern ones over the years. With hardware, it was a whole ritual to turn everything on, each instrument had its own smell, it put me into a mood of "let's play with this machine and see what happens".

When I moved to software, I finally had all the sounds I always wanted but could never afford, I finally no longer had to futz around trying to reprogram a patch I liked, I finally could send any instrument to any effects box... but there were fewer happy accidents. It felt a bit more like my day job, which also just involves sitting in front of a computer.

I think it can be like that for a lot of people, where the tactility and limitations of the gear can inspire a different approach to music-making. I imagine that aspect also plays into the emotion that some people have that hardware instruments "sound" better.

All that said, I would never go back to hardware. As you say, it's hard to beat the convenience of a laptop.


It's about the UI. Most hardware synths have one-knob-per-function on a UI the size of a large panel. You can mould the sound in a tactile way.

With a VST, everything goes through a mouse. And you'll probably spend some of your time getting distracted by moving windows around to show/hide other VSTs.

It's a much more cerebral experience.

My biggest problem with software is that it's so limited. VSTs and DAWs ape hardware studios far too literally.

I'd like to be able to connect anything to anything - including VST internals to the internals in other VSTs. And add some generative/programmed elements. But most DAWs either don't allow that at all, or they only allow it with severe limitations.


Yes, the big physical panels are what make them playable. When you learn a physical synth, you get the benefit of muscle memory, so tweaking the filter cutoff “just so” during a performance can be done almost without thought.

I’m excited to see what happens in VR. The tactile feedback isn’t there, but I bet the muscle memory is. I don’t have a VR setup, but Synthspace might convince me to get one this year—it looks better every time I check on its progress. A VR modular synth kit without the insane cost? Yes please!

https://youtu.be/l-za23kxY7E


What I find interesting is the line between an instrument and music: where does it lie? If you play piano you focus on the music played, but if you play a synthesizer the focus is much more on the sound. The same applies to acoustic guitar vs. electric guitar with all those effects boxes attached.


> We ran the VST synths through the same signal path as the Juno

Yes, but there's also noise inside the Juno at the DAC, chorus, etc.


Synth nerds are among the most ardent gear fetishizers there are. Personally I'm very happy to have my gear finally reduced to a laptop and a set of headphones.


Cannot upvote this enough. The portability is one thing. Another is reliability. I love Roland and Korg, but I purchased the last two Korg flagships and they both stopped functioning right after the warranty period. They are essentially custom computer hardware with none of the service options or reliability available in the PC/Mac world. Also Korg somehow gets their distinct sounds near flawlessly in software, even though their iPhone apps are a tiny fraction of the price of the hardware in real life.


> reliability

Pre-COVID I had a semi-regular synth jam session with a dude 100% invested in software synths. First 30 minutes (no exaggeration) of every session was him fighting with Ableton or whatever, and rebooting his Mac, trying to get something trivial to work, like MIDI clock or input monitoring.

Meanwhile I just plugged my Minilogue into the power strip and mixer and did my own thing.

There's something to be said for a musical instrument not to be joined at the hip to a general-purpose networked computing device.


Can’t argue. I love playing a Minilogue or an Electribe or a Roland JD-Xi. They are all superb instruments for jamming. And you are so right, host environments can be frustratingly buggy. On the other hand, the Korg Gadgets on iOS are pretty damn reliable.

My only argument is that with software versions you’re much more likely to get upgrades when bugs occur. And I personally will simply no longer buy a Korg flagship because of my troubles.


I think it could be worse. On the whole, synth wankers tend to be cool with modding and reissues. In contrast, it seems that guitar people ascribe magic qualities to very specific models from certain years and are paranoid about keeping these instruments as authentic as possible. With synths you have people looking at MS-20 serials to make sure it has their favored filter inside, but that's usually where the madness ends.


With guitars there's also a whole subculture of DIY modification and ecosystem of aftermarket parts to support it. It helps that Fender guitars were designed to be modular to begin with (AFAIK for easier manufacturing and repairs, the hackability was unintended), and for most components there are only a few common form factors so you don't need any woodworking skills to get started. Attempts to reproduce the exact '59 'burst sound are certainly part of the fun, but there's much more to it.


Developers model and reproduce this drift in software synthesizers because it's pleasing to the ears of many...


Love the animations / illustrations in this. Way nicer to look at than LTSpice.


> The output voltage across the range can vary as much as 1 Volt, but that's fine- your ear won't really be able to tell the difference.

If I understand correctly, the output from the sawtooth has a voltage offset? Can that lead to DC passing through speakers? And would the pulse-width comparator input need an equivalent voltage offset?


Yes, all of the waveforms out of the waveshaper have a DC offset. The synthesizer removes that when mixing them and sending them through the filter.


Did you mean that the amplitude of the sawtooth can change by up to 1 volt (due to imperfections in the differentiator and amplitude compensator)? And if the PWM is based on comparing the sawtooth with a fixed voltage, and the sawtooth amplitude changes with pitch, does the PWM width change with pitch?


You're correct. The pulse width is very dependent on the amplitude of the saw wave, so when there's a 1 volt deficit in the saw's amplitude you lose about 8% of the pulse width. It's really not that noticeable in practice, though, and if anything it adds to the sonic character of the Juno.


I love the article. I did electronic music myself in the past, and this is a very, very good and detailed explanation.

It motivates me to try it again.

I have to find a little time to do something interesting, now that I have the resources that I didn't have in the past (if only I had the time I had then).



