With microcontrollers being as fast as they are, and most having some hardware PWM capability, even the very tiny ones like the ATtiny85 can be startlingly capable synths, if you're satisfied with ~15 kHz 8-bit audio fidelity.
At 8/16 MHz there are plenty of cycles for multi-channel audio synthesis. Not much room for samples, but a lot can be done with envelopes, saw, square, and noise waveforms.
> I could almost swear that you can even create a class-D amplifier with the ATTiny85, with a clock rate of ~15kHz.
Pretty sure you can do that. A Class D amplifier is to a traditional Class A/B one what a switching power supply is to a linear one. So I'd expect to be able to make a Class D amplifier using, for example, a PWM-driven H-bridge of the sort usually employed to drive electric motors, or a chip meant for switching power supplies. If they can work at high frequencies (say 50 kHz or more) then they should work; all that's needed is modulating their input or feedback pins the right way.
Probably not hi-fi for the sensitive ear, but still good enough.
That would be really cool. I never got around to trying it, but I'm really curious about the subjective quality of the 12-bit A/D conversion looping back to PWM D/A.
The 12-bit A/D provides a (theoretical) 72 dB of S/N, so if you can do that at a decent sample rate to get sufficient bandwidth, the digitizing should be good.
As long as the PWM D/A has a good clock to keep the PWM ratio accurate, I think it should be eminently listenable, given a proper reconstruction filter and a decent power supply.
Check this out -- a really good discussion of bits and SNR: https://www.xiph.org/video/vid2.shtml Jump to the 11:00 mark to find out just how many 'equivalent bits' those old cassette 'mix tapes' of yours had...
Interesting, thanks for sharing. I'd be interested in experimenting with a hybrid approach in which the microcontroller takes MIDI in and generates basic waves (saw, square, sine, etc.) to keep resource consumption as low as possible while maximizing polyphony (layering with detune, etc.), then leaves noise generation, filtering, envelopes, etc. to analog circuitry. It would make things more complicated hardware-wise, but I'm sure the sound would improve significantly, at least for us old beards who still love analog synths from the 70s.
I have an Ensoniq SQ-80 synthesizer that did something like that. A microcontroller generated 8 channels of sound from a PCM wavetable, and those were fed into 8 analog filters before mixing down to stereo. But all of the waveforms and envelopes were digital.
Does anyone know where to find suitable equivalent electrical models of speakers and piezo buzzers?
So, for example, a speaker will become an inductor with a series resistor to represent losses, and another resistor to represent power that is converted to audio.
The reason for asking is that I want to be able to plug it into a simulator (SPICE) and also simulate the mechanical part (using an electrical equivalent circuit).
I'm using this for a guitar pickup (2.5 H, 5 kOhm, 30 pF, 50 mV @ 440 Hz); you can modify it to be more like a dynamic speaker (e.g. 8 ohm Rs, and a smaller L0, which you measure from an actual speaker):
.subckt sub_pickup_1 a b
* L0: coil inductance, Rs: winding resistance (losses)
L0 a 1 2.5 NT=1
Rs 1 b 5000
* Cp: winding self-capacitance, Rp: external loading
Cp a b 3.0E-11
Rp a b 10000000
* V1 + R1: induced signal, 50 mV sine at 440 Hz
V1 a 2 DC 0 AC 0.01 0 SIN(0 0.05 440 0 0 0)
R1 2 b 1
.ends sub_pickup_1
xL1 7 0 sub_pickup_1
This simple code produces a square wave, which can sound OK when fed through a buzzer that has a strong resonant frequency. The harmonics are attenuated enough that it sounds OK playing notes close to the resonant frequency. For anything else you'll want a separate DAC (digital-to-analog converter), which can take I2C, I2S, or SPI digital signals and output an analogue signal.
You can also use a scheme a little bit like PWM called PCM, where the GPIO pin is switched on and off at a frequency much higher than audio (typically 2 MHz or higher) and the duty cycle is varied at the sample rate. This can be implemented on an Arduino using code like https://github.com/damellis/PCM/blob/master/PCM.c
Note also that one of the main benefits of a dedicated DAC is that many implement digital oversampling: they use a high-quality digital antialiasing filter to upsample the audio to the 100 kHz-1 MHz range, so that a simpler (= cheaper) analog antialiasing filter can be used on the output.
I did something similar a long time ago, trying to make a little box that played the Kool-Aid guy saying "Oh Yeah" when you hit a button. It was relatively easy to make it work by resampling and quantizing the .wav, then loading that into an Arduino timer/PWM generator. But as others have said, quality wasn't great.
A big issue I ran into was filling up the data memory, which is what forced such aggressive quantization. I'm curious if anyone has tried doing this sort of audio with a noise-shaping quantizer (i.e. delta-sigma modulation), as that was my next plan before I forgot about the project.
With DSM, you generate a higher-frequency 1-bit sequence that, when filtered, recovers the original audio signal. The benefit is that for the same amount of storage space, you can push the quantization noise into higher frequencies that aren't as prevalent in speech/music, then filter it off.
I'd be more interested to see methods of playing notes like this asynchronously, without blocking the main loop while the notes play.
If you've got multiple cores it's no problem: Just run the note-playing code on a different core. However, if you've only got one core you'll need some method to play those notes without disrupting the main loop.
It's pretty easy to do with one core, too - I would guess that the author didn't use interrupts or peripherals because they were writing about MCUs in general rather than a specific chipset. The K210 they used as an example is actually targeted towards ML applications.
Most chips will have some sort of PWM or advanced timer peripheral that can generate waveforms on a pin without direct CPU intervention. You write to registers to change the frequency, duty cycle, etc.
Many chips also include asynchronous DACs, but they're usually too low-resolution for high-fidelity audio.
Most microcontrollers also include some form of DMA, which can shuttle data between peripherals and memory without CPU intervention. You can also use timer peripherals to trigger DMA transfers, which lets you send buffered data to a DAC on a kHz schedule while the CPU does other things.
And of course, you can usually use timer peripherals to periodically interrupt the main thread when you don't want to wait in a busy-loop.
It adds a little more complexity, but cheesy 'multitasking' can be achieved with interrupts on hardware timers. The main loop sets the synth 'state' and the interrupts act on it.
In DuinoTune [1], song playback had a per-song 'tick' timer interrupt at 12 per beat, which handles note on/off, envelopes, and pitch/audio glides, and a per-sample timer interrupt, which does a mixdown of the voices and updates the hardware PWM duty cycle to generate the waveform.
You set up a timer to fire a "calculate next sample" interrupt at ~4 kHz, then your main thread just walks through your MIDI file or whatever and tweaks the synthesizer settings.
I mean... a microcontroller has only one 'loop'. If you don't want to 'block' it, you either 1) use external devices, i.e. a synth chip or 2) compose the multiple notes you want into one signal. If you're doing PWM only then option 2 will rapidly start to present interesting challenges.
Related... the genius yet simple idea of how to achieve 1:8 compression while preserving acceptable quality, essentially for free: the Binary Time Constant sound compression algorithm: https://www.romanblack.com/btc_alg.htm
[1] https://github.com/blakelivingston/DuinoTune
https://www.youtube.com/watch?v=G3baH5iTcFM