
Something I noticed about all these projects (and there are quite a few) is that they are quite old (some going back ~2 decades or so).

I wonder what the dynamic behind that longevity is. Music hasn't changed, of course, but on the tech side I would think there are significant new possibilities.

Is it something related to the difficulty of implementing low-level algorithms (famously, a lot of linear algebra stuff goes back decades and rests on C / Fortran libraries)? Is it that there isn't enough interest to justify new efforts, or are those systems already near "perfection"?




> some going back ~2 decades or so

Some indeed started long ago; e.g. Common Music had its roots in the eighties and became a pretty popular composition environment during the nineties; in 2008 there was even a fundamental redesign, and it's still being developed. But there was also a certain abundance of new tools; many were released that essentially had the same goal and offered the same features, just implemented a little differently.

> Music hasn't changed ofcourse

Music and the way music is composed, produced and consumed have changed tremendously over the years, and so have the tools. In the past the focus was on algorithmic composition, and tools extended a composer's reach into sound design as well, but it took years until computers were fast enough to render a composition in real time. Then came the era of the DAWs, when eventually anyone could produce music with little investment. In the last ten years live coding has become popular, which is yet another way of composing and producing music, with new requirements for interactivity and ergonomic efficiency of the tools, and yet another view of the composer and the composition process.


However, and this is something I wrote about recently in my thesis, we are now experiencing somewhat of a renaissance of older approaches, as it is only fairly recently that it has become practical to run the older (Lisp-based) algorithmic tools in real time. I can run Scheme during live playback and have the GC finish its business fast enough to use it with very acceptable audio latencies within Ableton Live, for example. I'm just now embarking on a PhD in this area, actually, and it's pretty exciting how many previously dusty things can be used in new ways.
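For a rough sense of what "fast enough" means here: the audio buffer period is the budget a collection pause has to fit inside, so the question reduces to a timing check. A minimal sketch, assuming a host-provided (gc) that forces a collection and a hypothetical current-time-ms timer (neither is standard Scheme; these are stand-ins for whatever the embedding offers):

    ;; measure a forced collection pause -- (gc) and current-time-ms
    ;; are assumed host/implementation hooks, not standard Scheme
    (define (gc-pause-ms)
      (let ((t0 (current-time-ms)))
        (gc)                         ; force a full collection
        (- (current-time-ms) t0)))  ; pause in milliseconds

    ;; if this stays comfortably under the buffer period, event-level
    ;; Scheme can run during playback without audible dropouts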

I discuss this a bit here: https://iainctduncan.github.io/papers/introduction.html


Thanks for the thesis; looks interesting, will have a close look at it. Actually, I'm not really sure whether "Lisp has always been an elegant and productive way to represent and build music"; from my point of view the optimal way to represent music, incorporating all relevant dimensions and relations, has yet to be found; in fact Dannenberg himself switched to SAL, away from Lisp (see his algorithmic composition book).

What's the topic of your PhD? Who is your supervisor?


Well, Dannenberg's SAL is really a sugar layer over XLisp though, right? I remember citing his writing somewhere from the book with Simoni on why Lisp is great for music. My impression was that SAL's raison d'être was to be more accessible to people for whom Lisp is too weird, more than anything else, but perhaps I'm not completely correct.

My PhD is an interdisciplinary one between music and CS, working with George Tzanetakis and Andy Schloss at the University of Victoria in BC, Canada, continuing the same work I did for the Masters. So that is Scheme for Max and other Scheme-related music work, targeting algorithmic composition, mainstream production, and live coding contexts. Some of the current initiatives include a browser-based Scheme algorithmic music system (not yet published, but using a WASM C++ worker for a sample-based scheduler), further work on s4m, s4pd, and Ableton Live tools, integrations with Csound (I wrote the csound6~ port for Max), likely a standalone host (similar to Grace), an object system and score tool, and some actual music!

(EDIT: I was wrong and thinking Nyquist, not SAL!)


> Dannenberg's SAL is really a sugar layer over XLisp though right?

SAL was designed and published in 2008 by Rick Taube; the present implementations are transpilers to Scheme or XLisp, but SAL is quite different from both Scheme and Lisp (in syntax and semantics).

> My PhD is...

Sounds interesting; but what is the actual research focus (besides the programming work and tool implementation)?


Ah right, typed too soon: I was thinking of Nyquist, which is built over XLisp. YMMV, but I personally find Lisp much nicer than SAL for representing music.

My PhD is interdisciplinary, so it is not pure CS research - it's a combination of CS work, project work, and music composition & performance. The research side will be into how Scheme, and recent developments on the Scheme side of PLT, can be used on modern machines and environments for exploring algorithmic composition and live-coded composition and improvisation.


> I find Lisp much nicer to represent music than SAL

Lisp and Scheme are indeed flexible enough to represent just about anything, but at a very fundamental and general-purpose level; of course you can build some kind of "DSL" with macros, but it's still "Lispy". SAL has fewer degrees of freedom and helps users not get lost or bogged down, but it's still a general-purpose programming language, not a specification system covering the essence of musical structures and processes.
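To make the "still Lispy" point concrete, here is a minimal sketch in standard Scheme; the phrase macro and the (pitch . duration) representation are made up for illustration:

    ;; a toy "DSL" via syntax-rules: sugar for (pitch . duration) pairs
    (define-syntax phrase
      (syntax-rules ()
        ((_ (pitch dur) ...)
         (list (cons pitch dur) ...))))

    ;; usage -- still parenthesized prefix notation, which is exactly
    ;; the point: macros add vocabulary, not a new surface syntax
    (define melody (phrase (60 1/4) (62 1/4) (64 1/2)))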

> My PhD is interdisciplinary, so it is not a pure research CS..

Here in Switzerland "interdisciplinary" means that the research topics concern more than one research area, not that it's not pure research. Usually the PhD students are more challenged than with a "traditional" PhD, because there are more professors involved, each with their own research focus that matters most to them (so it can happen that the student effectively does the work of more than one PhD).

Concerning Scheme: depending on how big the "composition" is, and whether it does sound generation and has to be controlled in real time, a traditional byte-code interpreter will likely reach its limits, even on present hardware.


Well, sure, you can bring any real-time sound system to its limits if you want to - some of my additive synthesis experiments do that very quickly even in pure C++.
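(The reason additive synthesis hits limits so fast, in any language, is that the cost is per sample and linear in the partial count: 100 partials at 48 kHz is already ~4.8 million sine evaluations per second. A sketch in plain Scheme, with an illustrative list-of-pairs representation for the partials:)

    ;; one output sample summed over N partials; 'partials' is a list
    ;; of (harmonic . amplitude) pairs -- representation is illustrative
    (define (additive-sample partials phase)
      (let loop ((ps partials) (acc 0.0))
        (if (null? ps)
            acc
            (loop (cdr ps)
                  (+ acc (* (cdar ps)
                            (sin (* phase (caar ps)))))))))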

But as far as practical use goes, I run 16 algorithmic sequencers implemented completely in s7 in real time, inside Ableton Live, at an output latency of 8 ms, and do so for long enough for full compositions. This is while Live does a ton of software synthesis and FX DSP too, and all of the sequencers can be altered on the fly without audio dropouts! And this is on the cheapest M1 you can get. So it's absolutely practical for real-time work. It's also without much attention yet to real-time GC in s7 - I haven't dug into that, but Bill has told me that while he did a lot of work to make it fast, it doesn't use an implementation specifically targeted at the lowest possible pause times.
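For flavour, the event-level work such a sequencer does per tick is tiny, which is why this is practical. A minimal sketch; (tick) being called by a host clock on each 16th note and (play-note pitch velocity dur-ms) as the MIDI-out hook are both hypothetical stand-ins, not any real s4m API:

    ;; global state so the pattern can be redefined on the fly
    (define steps (vector 60 63 67 70))  ; MIDI pitches
    (define pos 0)

    ;; assumed host hooks: the scheduler calls (tick) each 16th note,
    ;; and (play-note pitch velocity dur-ms) sends MIDI out
    (define (tick)
      (play-note (vector-ref steps pos) 100 120)
      (set! pos (modulo (+ pos 1) (vector-length steps))))

Redefining steps or tick mid-playback is the "altered on the fly" part; as long as GC pauses stay short, nothing blocks the audio thread.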

Whether the tradeoffs of Scheme vs. other options are worth it for a particular composer/producer/performer varies, of course, but it really is time to put to rest the notion that we can't run a Scheme interpreter for real-time music generation.




