nested with_synth and with_fx calls seem less intuitive than an object-oriented approach, which mirrors traditional circuits (synth.connect(fx1); fx1.connect(output)). what was the reasoning behind this? i only ask because i could see it making a difference in terms of education. and, what makes this more suitable for schools than ChucK, SuperCollider, etc.?
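for reference, here's the contrast i'm describing. the nested form below is Sonic Pi's actual documented syntax; the connect-style version is hypothetical pseudocode for the object-oriented alternative, not a real Sonic Pi API:

    # Sonic Pi's real nested, block-scoped style:
    with_fx :reverb do
      with_synth :prophet do
        play 60
      end
    end

    # the object-oriented alternative (hypothetical, not Sonic Pi's API):
    # synth = Synth.new(:prophet)
    # fx1 = Fx.new(:reverb)
    # synth.connect(fx1)
    # fx1.connect(output)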
that aside, i really wish someone would make a language agnostic livecoding VST so we can start incorporating stuff like Sonic Pi into our real workflows. i sense an integration like that would also gain some attention from people who previously ignored livecoding. i'm sure a lot of students screw around with Fruity Loops or Ableton and would find livecoding much more accessible if it tapped into the tooling they already understood.
There's a basic problem with live coding, which is defining the problem domain.
Live coding systems are only really good for building teeny tiny little toy automata, which usually do simple stuff with note lists, and maybe apply a bit of randomness or some simple repeating functions to a parameter or five.
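To make that concrete, the archetypal live-coded piece is something like this Sonic Pi sketch (using its documented live_loop, choose and rrand calls; the note list and ranges are arbitrary):

    # a typical "toy automaton": random notes from a short list,
    # one randomised parameter, repeating forever
    live_loop :toy do
      play choose([:c4, :e4, :g4, :b4]), amp: rrand(0.3, 0.8)
      sleep 0.25
    end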
Weapons-grade commercial music is much more complicated. The sounds are richer, the arrangements and mixes are immensely complex (even for simple songs), and generally there's a ridiculous amount of care and detail.
But... if you start building in machine learning, database searches, super-complex DSP patches, and all the other stuff you really need to make non-trivial music, you're not really live coding any more - you're plugging pre-existing modules and data together by typing.
Also, watching people typing is kind of dull.
I think there are solutions to both problems, but REPL-music systems are only ever going to be a step on the way to them.
I'd be surprised if live coding ever makes it into the mainstream in the way that (say) eSports almost have.
The "problem domain" is not as narrow as you would have it be. People are entertained by music of all sorts, not only "Weapons-grade commercial music" (whatever that is).
A step on the way is far more productive than saying it is impossible and ignoring it.
There will be Bachs, Beethovens and Lennons of the REPL. First, they need their instruments.
I disagree for a few reasons - language is very good at describing complicated things, minimal music is great, and there's nothing at all stopping live coders from using rich sounds.
Then why don't live coders use rich sounds, or make complicated things?
As for 'minimal music' - most of it isn't really all that minimal, and it certainly can't be reduced to trivial algos.
And if your only option is to be minimal, maybe that's not so great.
Point is that only a tiny subset of the population has ever been interested in listening to music defined and created exclusively by writing code. Live coding caters very nicely to them.
No one else seems to be listening.
Over the same period, CGI has moved from basic wireframes to epic photorealism-with-AI. You can see this stuff in games and movies, and it's awesome.
Why aren't code musicians interested in pushing the boundaries of code music by aiming for something with similar complexity?
Here's a thing: after nearly sixty years of code music, exactly zero code music composers have troubled the public imagination.
Over the same period we've had literally hundreds of new electronic genres and styles, and even people who don't much like electronica can name at least a couple of people who make it.
We've also had game music coders writing complex AIs for arranging and generating music on the fly in game contexts.
But live coders don't seem interested in any of this, because typing.
>> "Then why don't live coders use rich sounds, or make complicated things?"
because there's been no reason to: livecoding hasn't matured at the same rate as other production software. at the risk of painting with a broad brush, the people who care about livecoding don't really care about rich sounds.
do you recall how EDM sounded in the 90s and early 2000s compared to now? and they were using early versions of the exact same tools they use today. but i'm pretty sure a lot of techno producers were literally limited to like 6 tracks or something. and the sounds they used were god awful. not because they preferred it, but because they were technically limited by those parameters.
but people still made music, people still danced, and the genre progressed along with its tooling over the next 20 years.
the music is still evolving side by side with the software we build to produce it. livecoding is just another instrument that hasn't been exploited to its fullest (as a "live" performance tool or as a production tool).
i've played in touring bands, and i produce electronic music for fun now. i've been a software engineer for just about my whole life. i would love more programming language options in my production. the same way tweaking a VST can result in more interesting melodies than you originally intended, programming languages can be a great tool for exploring audio in unique ways.
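for example, here's a quick Sonic Pi sketch of the kind of exploration i mean (standard Sonic Pi calls; the synth choice and parameter ranges are just arbitrary starting points):

    # nudge the scale or the cutoff range and the whole feel changes:
    # the language becomes a way to explore the sound space
    live_loop :explore do
      use_synth :tb303
      play scale(:e2, :minor_pentatonic).choose,
        cutoff: rrand(60, 110),  # sweep this range to find new timbres
        release: 0.2
      sleep 0.125
    end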
Live coders do use rich sounds (why wouldn't they?) and make complicated things (a value judgement, I guess).
Well e.g. Autechre, Aphex Twin, BT and Holly Herndon work with code, and I'd say are pretty influential.
Improv is generally non-commercial due to the lack of end product, but we're seeing algoraves with hundreds of people turning up and having fun. It's still developing but is promising and we've seen some nice successes. Maybe there are live coding performances/situations you haven't experienced yet..
Also I particularly enjoy collaboration with live musicians, that's where the various trade offs play off each other and become something else.
The closest I've seen to this in reality is Max for Live. The speed of development is astounding, and the results are professional in quality and quite robust. It would be brilliant if the resulting product could be "VST-ized".
It's just a pity it's not cheap (around $700 to $800 for Live Suite with Max). I can see this hampering its potential as an education tool outside specialist audio training organizations.
Before Max 5, making VST plugins with Max was very much supported; that support was explicitly dropped in favor of Max for Live. Here's a blog post exploring some alternatives from when the drop in support was announced: http://createdigitalmusic.com/2009/05/cycling-74-ditches-plu...
I think Bret Victor underplays the importance of liveness ("almost worthless", eh?), and doesn't touch on the possibility of making programming a shared experience that is itself a culturally meaningful, inclusive activity, e.g. a musical one. This is what Sam Aaron and other live coders have been working on for the past decade. It's a totally different approach really, through making music together, nonstop for years.. Sonic Pi is impressive and successful because it's grounded in actual arts practice, not extrapolations from rigged demos.
For a long, long time I have wanted to do a language for programming music. I like how easy this is to get started with; it almost feels like the old tracker days.
How is your conception of what is involved in 'do[ing] a language for programming music' different to SuperCollider or ChucK or Pure Data? Not being aggressive, just interested in how you see the problem.
When playing music, the ability to get up and running quickly is more important than the flexibility of the system. I.e. you wanna quickly get to the point where you're having fun, and then you optimize later.
But don't get me wrong, SuperCollider is a fantastic tool and was a real innovation.
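To illustrate how low the barrier is, this is a complete Sonic Pi program - press Run and you hear it, with no server booting or setup ceremony:

    # a complete Sonic Pi piece: two notes, nothing else required
    play 60
    sleep 0.5
    play 67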
I'm glad someone mentioned EarSketch, as it's more focused on teaching programming and computing principles by making music, as opposed to a general-purpose computer music language/environment like ChucK, Pure Data, and SuperCollider. I was involved with the project during grad school, when it was completely attached to a DAW (Reaper), but it can now run in a browser. I'm usually not that excited about audio applications in a browser, but I think this is a great use case.
This is great. As someone who's tried to get into coded music, the biggest hurdle for me has always been getting the pipeline set up properly (including learning VIM commands). This handles all of that.
https://www.youtube.com/watch?v=3_zW63dcZB0
The whole talk is stimulating, but the parts relevant to Sonic Pi are its Overtone (http://overtone.github.io/) lineage at about 9:05, a demo at about 17:36, and the experience using it in schools (http://www.raspberrypi.org/sonic-pi-live-summer-school/) at about 26:48.
What I got out of it was that it's a language which is meant to be both fully featured and easy for kids to pick up, to get interested in music, programming, or both. Maybe not so much programming as just "computational thinking". It has less incidental complexity than EDSLs (like Overtone) or traditional tools which expose more of the synthesis pipeline. Compared to most video game/graphics frameworks, which are also used in education, you can get to more polished content faster.