I'm going to push back on the comment in the article about the build system being overkill. While "tiny" is a relative term, not all projects built for the RP2040 are tiny – my PicoGUS project relies on several external libraries, and being able to bring them in via CMake is pretty great. Also, while the Pico itself is fixed hardware, the SDK targets many other boards built on the RP2040. The Raspberry Pi team has definitely architected things for future versions of the microcontroller, as well as for on-host testing of the SDK.
We (the Pigweed team) are big fans of the RP2040/Pico ecosystem. We built out a pretty deep GN-based build system for it; check out the source code [1] for our Gameboy-style electronic badge Kudzu [2] for an example. We're also prototyping Bazel integrations (since Bazel is our primary build system [3] going forward). Pigweed might be overkill [4] for small hobby projects, but we think it could be pretty nice for big/complex projects built on top of the RP2040. I know you said you're happy with CMake (and we think CMake is great too; Pigweed also has CMake support), I'm just excited that there are other build system options and figured others might be interested to hear about them.
I tried to evaluate Pigweed a little bit by importing just a single module (pw_result) into my existing CMake-based MCU project. I guess you guys technically do have CMake support, but it seemed unnecessarily difficult to pull in. Just pull the source code in and link against pw_result, right? Nope. I ended up having to spend a few hours working out what the heck facades are, setting up various handlers for I/O, and implementing my own crash handlers, all the while having to poke through/model existing handlers that had "DO NOT USE IN PRODUCTION" written all over them.
I did end up getting it working, but it was clear that GN/Bazel is the blessed build system and CMake is just an afterthought.
In contrast, I pulled in expected-lite [1] with a single line of CMake's FetchContent, #include'd the header, and it just worked.
Yes, I should amend [1] to say "Pigweed has some CMake support". We're aware it's way too hard to do simple CMake stuff. Sorry if I / we oversold it. It's on our radar to do a lot better with CMake docs, examples, etc., and your comment might be just what was needed to light a fire under our butts.
We wrote an explainer on facades; did you see that, and did it help? I will comb through the docs and make sure any module that uses facades links to the explainer: https://pigweed.dev/docs/facades.html
Thank you for trying it out and for the feedback, and sorry again for the pain
For my notes, did you just try it out right now or was this in the past?
[1] Just checked, I no longer have edit access to my previous comment
I looked through what I did and in the end, to bring in pw_result, I had to define a failure handler, pull in three separate cmake files for various definitions, define an ASSERT action (because a result type requires an ASSERT fail, apparently?), and set a backend and a handler for pw_assert. At some point I remember it complaining left and right about not having the right I/O methods for printing, etc. I just wanted to try the result type, and for some reason I needed to define I/O mechanisms for my platform.
Is this the correct procedure? I have no idea, because the cmake documentation had (has?) _zero_ examples. If they existed, I couldn't find them. I eventually got it to compile with the following snippet, but I can hardly believe that this is the "intended" way to do it.
* I will make sure we follow through on the plan to include CMake examples in our WIP "examples" repo [1] and will make sure pigweed.dev links to those examples consistently.
* Our new module docs guidelines require every module to have a quickstart section that includes how to set up the module in every build system we support, including CMake [2]. That should help a bit but based on your experience it sounds like there's a lot more we need to cover in these quickstart sections.
(I don't want to overcommit on behalf of other teammates but as docs lead these things are safely within my realm to change.)
i don't think the build system is overkill. but it's the same phenomenon as docker: if you grew up with it then it's easy to use and solves all your problems easily and you've probably forgotten that it was ever difficult. but if not, then it's this arcane monolith that you've no hope of actually understanding. this applies to every general-purpose build system.
It would be really difficult to find an experienced C++ developer these days who isn't comfortable with the basics of CMake, and that's all the Pico SDK demands from you.
If you're simply a hobbyist grudgingly writing C/C++ then sure, something like Arduino would be a lot more appealing.
i've been writing C++ half my life, C a couple years less, stopped using them daily 4 years ago, but i've shipped enough projects with them to be "experienced".
exactly one of those projects uses CMake (an open source Qt app that's had a couple million downloads), and the infra for that was pretty finalized by the time i got there. if "comfortable with the basics" means "add a new .c file to the list of stuff to compile", sure, easy. if it includes "detecting the presence of some optional dependency and then conditionally compiling+linking a .so in response" then not at all. more importantly though, for a large number of cmake directives i can't guess their effects without consulting the manual.
there's a lot of build systems out there. and a lot of technologies tend to come in pairs: a Qt C++ program is likely to use CMake, a GTK C/C++ program is likely to use meson, an SDL C/C++ program is likely to use make. not that there's no variance, but it's weirdly easy to specialize more narrowly than you think.
I think the discontent stems from "all this complexity to generate programs for FIXED hardware with a fixed set of features", which I have some sympathy for. I do wonder, however, whether the Pi Foundation wasn't looking ahead here, expecting more MCUs, with different features, to come after the RP2040.
CMake is pretty great, in comparison to the older build tools for C and C++. But in comparison to any modern system (in other languages), it absolutely sucks.
I did something similar on a Raspberry Pi using the GPIO pins one afternoon when bored. I made a 4-bit R2R DAC [1] using typical 5% tolerance resistors and fed it directly into a small speaker. I shrunk a 16-bit audio file down to 4 bits and then toggled the GPIO pins in a loop in Python from Linux userspace at around 40 kHz. The result was easily intelligible speech, and easily recognizable, if not exactly pleasant, music. I was surprised how good the results were considering it was made of literally nothing but wire and resistors. It was quite noisy, which I assume was from clock jitter due to the inexactly timed update loop, and from the complete lack of filtering.
If you're doing that from user space, the noise will be from scheduling delays as Linux context-switches your process to run other stuff.
You can minimise that type of effect through use of CPU affinity, but you can't get rid of it completely.
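For what it's worth, a rough C sketch of the usual userspace mitigations (the core number and priority here are just placeholders); even with these, kernel housekeeping and interrupts still get in the way:

    // Pin the process to one core and ask for a real-time priority; this reduces
    // (but doesn't eliminate) scheduling-induced jitter when bit-banging from userspace.
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);                     // placeholder: keep this process on core 3
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");

        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)  // needs root or CAP_SYS_NICE
            perror("sched_setscheduler");

        /* ... GPIO toggling loop would go here ... */
        return 0;
    }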
A company I'm working with was getting awful SFDR values on a 12-bit DAC and not meeting requirements, which I didn't loosen up on. It turned out that they were setting the values from Linux userspace for rapid bringup, and moving to a bare-metal application fixed it. Linux interrupts firing 250 times a second were causing delayed samples.
The RP2040's DMA engine can update the PWM registers without burning a CPU.
Use two DMA channels and you can play a sequence of samples, all with the DMA hardware after the CPU sets it up.
(I don't know if you can do that from micropython)
I did this in a little speaking ohmmeter hack a couple of years ago:
https://gitlab.com/penguin42/ohmy
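For anyone curious, here's a minimal Pico SDK sketch of the idea (my own illustration, not code from the repo above; AUDIO_PIN and the buffer are placeholders, and the second chained channel is only hinted at in a comment):

    #include "pico/stdlib.h"
    #include "hardware/dma.h"
    #include "hardware/pwm.h"

    #define AUDIO_PIN 0          // placeholder: any PWM-capable GPIO
    #define N_SAMPLES 1024

    // Each entry holds the duty for PWM channel A in its low 16 bits.
    static uint32_t samples[N_SAMPLES];

    int main() {
        gpio_set_function(AUDIO_PIN, GPIO_FUNC_PWM);
        uint slice = pwm_gpio_to_slice_num(AUDIO_PIN);

        pwm_config cfg = pwm_get_default_config();
        pwm_config_set_wrap(&cfg, 255);  // 8-bit duty; wrap rate = sysclk/256 (~488 kHz at 125 MHz)
        // pwm_config_set_clkdiv(&cfg, ...) can slow the wrap rate down to the sample rate.
        pwm_init(slice, &cfg, true);

        // One DMA transfer per PWM wrap: the wrap DREQ paces the channel, which
        // copies the next sample straight into the slice's counter-compare register.
        int chan = dma_claim_unused_channel(true);
        dma_channel_config dc = dma_channel_get_default_config(chan);
        channel_config_set_transfer_data_size(&dc, DMA_SIZE_32);
        channel_config_set_read_increment(&dc, true);
        channel_config_set_write_increment(&dc, false);
        channel_config_set_dreq(&dc, DREQ_PWM_WRAP0 + slice);
        dma_channel_configure(chan, &dc,
                              &pwm_hw->slice[slice].cc,  // write: PWM compare register
                              samples,                   // read: audio buffer
                              N_SAMPLES, true);

        // A second channel chained to this one (channel_config_set_chain_to) can
        // re-trigger it with a fresh buffer so playback keeps going without the CPU.
        while (true) tight_loop_contents();
    }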
DMA chaining is simply magic. First stumbled across it in a Microchip PIC. Does anyone know when it was invented? All I could find was this 2003 IBM patent https://patents.google.com/patent/US20030229733A1/en but I have a feeling Microchip had it earlier?
Meinolf Schneider in his Esprit and Oxyd games on the Atari ST used some creative sound chip programming in the 1980s to play arbitrary samples, also without a DAC. I found the source code for that published somewhere years later.
I don't know if this qualifies, but one of our biggest WTF moments as kids was when we got our hands (in 1986?) on a floppy disk for the Commodore 64 that'd play the song "Everybody was kung-fu fighting":
Yup, an unmodified C64 playing 48 kHz 8-bit audio! It uses a 1 MB EPROM cartridge, which is kinda period correct, because while expensive, it would have been doable 40 years ago. No REU-style DMA tricks or any cheating.
Using PWM to play audio is flawed (though usable for many applications); other approaches such as sigma-delta will lead to much better results, but with a lot more effort. https://github.com/tierneytim/Pico-USB-audio seems like a reasonable implementation at first glance.
Bresenham's algorithm (which was invented in the early 1960s to draw straight lines at arbitrary angles on pixellated displays) was groundbreaking because it didn't require any multiply or divide operations, only integer additions and comparisons.
Sigma-Delta is basically the same idea as Bresenham's algorithm in a different kind of space.
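To make the parallel concrete, a first-order sigma-delta can be written exactly like Bresenham's error accumulator (a small C sketch, 8-bit input assumed):

    #include <stdint.h>

    // Running error term, exactly like Bresenham's error variable.
    static uint16_t acc = 0;

    // sample: desired level in 0..255; returns the next 1-bit output.
    int sigma_delta_bit(uint8_t sample) {
        acc += sample;        // accumulate the "ideal" value
        if (acc >= 256) {     // error overflowed: emit a 1 and subtract what we output
            acc -= 256;
            return 1;
        }
        return 0;             // otherwise emit a 0 and keep accumulating
    }

Over time the density of 1 bits converges to sample/256, the same way Bresenham's y-steps converge to the slope of the line.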
What do you mean? Class D amps are essentially PWM-driven amplifiers, and pretty much any speaker in existence offers enough low-pass filtering to be wired directly to the amplifier, as well.
Generally they use pulse density modulation, rather than pulse width modulation, which increases the frequency of the noise and makes it easier to filter out. Sigma delta modulation, which the comment you replied to mentioned, is a way of doing pulse density modulation.
Imagine if you did dithering to produce grayscale images on a black-and-white screen by just using different sizes of rectangles; you'd need a very high resolution screen before you stopped being able to see the individual rectangles. That's what pulse width modulation is like. Pulse density modulation, using a sigma-delta modulator, is more like a proper dithering algorithm, where you spread the pixels out evenly across space rather than just using different-sized rectangles to represent different shades of gray.
Also, there's a world of difference between PWM out of a CPU running a non-realtime multitasking OS, with continuous rescheduling and varying timings, and the same PWM produced by a dedicated chip that does nothing else.
A couple of years ago I wrote a MIDI player with a bluepill board (STM32F1 microcontroller) and a waveform generator (AD9833) in Rust.
It was actually quite simple and a lot of fun to write (and hear!).
The little 8-bit Atmel ATtiny85 has a great fast-PWM feature. It can get almost 20 kHz output, with 8-bit duty cycle control. This is plenty to make a multichannel chiptune soft synth. Sounds good even just hooking the PWM pin directly to an earbud.
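A minimal avr-gcc sketch of that setup (my assumptions: Timer0 fast PWM on OC0A/PB0 with an 8 MHz clock, giving a ~31 kHz carrier; the ATtiny85's Timer1 plus its 64 MHz PLL can push the carrier much higher):

    #include <avr/io.h>

    static void pwm_audio_init(void) {
        DDRB  |= _BV(PB0);                               // OC0A pin as output
        TCCR0A = _BV(COM0A1) | _BV(WGM01) | _BV(WGM00);  // fast PWM, non-inverting on OC0A
        TCCR0B = _BV(CS00);                              // no prescaling: full-speed carrier
        OCR0A  = 128;                                    // start at mid-scale (silence)
    }

    // A timer interrupt (not shown) would then write each freshly mixed sample
    // into OCR0A at the synth's sample rate.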
Wouldn't you typically want to run that output through an RC (low pass resistor-capacitor) filter to average out the voltage and eliminate high frequency PWM noise? Is that built into the pico?
You'd normally do that, yes, but it's not strictly required in all applications. The speaker itself might already act as a low-pass filter simply because it can't physically react fast enough for the high-frequency part.
It's a reconstruction filter, the simplest DAC you could build.
Most of the digital parts of DAC chips are not there for conversion, but getting data ready for conversion through a reconstruction filter. You don't need that if you're driving a PWM signal through a cap to integrate it.
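For a ballpark, the usual single-pole RC reconstruction filter has its corner at (component values here are just an illustration):

    f_c = 1 / (2 * pi * R * C)
    e.g. R = 1 kΩ, C = 10 nF  ->  f_c ≈ 16 kHz

which passes the audio band while attenuating a PWM carrier running at a few hundred kHz.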
Or a better way to navigate the gazillion posts on his site. There are no categories, and not even a way to list all posts by title only that I could find.
Though browsers should really protect you against that already. ('Should', not 'would'.) Because SSL doesn't mean that there won't be malware on the site injected via some other means, e.g. by someone hijacking the server or by an otherwise malicious server.
Well, the full-page warning with the "Advanced" / "continue with HTTP" click-through is even worse. I (we) understand there's no real reason to need it, but if you're serving the general public (as isn't the case here) you absolutely need TLS in 2024, whatever you're doing, IMO. Personally, if I were blogging even for this audience, I'd want to avoid that friction, but it's not like antirez doesn't know about it; I assumed it's a sort of protest.
CircuitPython uses PWM to do the same, see for example https://learn.adafruit.com/circuitpython-essentials/circuitp.... I tried connecting a speaker with just a single MOSFET as a digital amplifier, and there was nothing to hear of the original PWM signal; it's probably completely outside the speaker's output range (didn't check though).
Last year when I did a lot of sound-related projects, I implemented the YM3012 DAC on a Pico: https://blog.qiqitori.com/2023/03/raspberry-pi-pico-implemen... I used PIO to read the digital input and used PWM for the sound output. I verified that the conversion was correct but unfortunately never really put in a lot of effort to try to make it sound good. There's a bit of a hiss.
I did the same thing for a device I made based on the Pi Pico [1], where audio is processed every sample and it stores a bunch of WAV files in the flash that can be mangled together. The flash is fast enough that you can jump between samples really quickly to get cool "tunneling" effects that are surprisingly good-sounding for such a tiny + cheap little chip.
I assume there is probably some way to hook up the DMA hardware and a PIO state machine to get perfect timing of delta-sigma modulated audio with no involvement of the CPU.
Hrm, there probably is, but the PIO instructions don't have addition, so the integration part of the delta-sigma modulator could be trouble. You could preprocess, but it would create enormous files.
I did do a delta-sigma using the PIOs but fed via the CPU; essentially I used just a lookup table of amplitudes that fed a bitstream into the PIOs to get a pseudo 133 MS/s DAC.
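Something along these lines, presumably (a sketch against the Pico SDK; the PIO program that shifts the bits out to a pin and the precomputed table contents are omitted, and all names are placeholders):

    #include "hardware/pio.h"

    extern const uint32_t dsm_lut[256];  // per-amplitude 32-bit 1-bit patterns, precomputed offline
    extern const uint8_t  audio[];       // 8-bit audio samples
    extern const int      audio_len;

    void play(PIO pio, uint sm) {
        for (int i = 0; i < audio_len; i++) {
            // Blocking push: the TX FIFO paces the CPU to the PIO's output rate.
            pio_sm_put_blocking(pio, sm, dsm_lut[audio[i]]);
        }
    }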
I think it's possible to get a bit-perfect output sequence by having a bunch of lookup tables for waveforms, and then another table allowing you to chain together a sequence of waveforms (and phases, via offsets into the waveforms). That should let you generate any sequence that a perfect delta-sigma modulator would have output, with far lower RAM requirements than the whole bitstream.
I'm dating myself here but I remember similar techniques back in the original IBM PC days to play sampled audio through the built-in PC speaker: https://en.wikipedia.org/wiki/RealSound
Accessing address 0xC030 (PEEK(-16336) or STA $C030) would cause the machine's internal speaker to click. I assume it was the result of the voltage to the speaker toggling from digital on to digital off.
POKE -16336,0 was quieter. I figured it was because it was implemented as a quick read-then-write instruction sequence, too fast for the speaker diaphragm to move end-to-end (effectively a quieter sound). Assembly behavior was different.
In spite of its simplicity, there were games with amazing PWM sound effects (crude by today's standards, but magical in the early 1980s). The software company Muse even produced a sort of speech synthesizer that played sampled words, like an audio version of a ransom note. It sounded wonderfully awful.
The Apple II also had a 1 bit ADC. The cassette audio input is a level/threshold detector and with the right software is able to measure the frequency of pure tones.
I remember finding some little schematic for a DAC and getting the parts at Radio Shack and wiring it up so I could connect the PC to a stereo and play tracker music or something. At least I think it was a DAC. I'm guessing this was before we got the SoundBlaster and Silpheed for Christmas. That was a fun Christmas. Maybe early 90s?
It's difficult to do that though, isn't it, because it's then inherently also a function of the recording and your playback equipment.
Much like you can't expect a (useful) recording in a speaker review.
It'd be a bit more reasonable here I suppose because it's almost certainly going to be the weakest link, so everything else is to some extent preserving characteristics of it.
My expectation is that the PWM output would sound characteristic / interesting enough that it would be worth hearing a recording -- but maybe I'm setting myself up to be surprised if it has a reasonable amount of fidelity.
A nice stereo 16-bit 48 kHz I2S DAC is like a 1 USD chip. Plus, I2S is very efficient to implement with the RP2040 PIO hardware. You can even use the blocking write call to time the signal, without any timers or callbacks necessary.
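Something like this (a sketch against the Pico SDK, assuming a PIO program that clocks out standard I2S frames, e.g. the audio_i2s program from pico-extras; that program and its pin setup aren't shown):

    #include "hardware/pio.h"

    // Pack one stereo frame (16-bit left/right) and let the TX FIFO do the timing:
    // pio_sm_put_blocking() only returns when there's room, so a loop calling this
    // runs at exactly the rate the state machine is clocked for; no timers or IRQs.
    static inline void i2s_write_frame(PIO pio, uint sm, int16_t left, int16_t right) {
        uint32_t frame = ((uint32_t)(uint16_t)left << 16) | (uint16_t)right;
        pio_sm_put_blocking(pio, sm, frame);
    }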
> How can be sure that it matches the MicroPython speed? well, indeed it is not a perfect match, so I added “x=1” statements to delay it a bit to kinda match the pitch that looked correct.