Reading about Eno's ideas on organization and variety makes me want to share some perspectives from my own experience in music performance practice, specifically live coding.
For a long time, the common practice in collaborative live coding, which you can see on platforms like Flok.cc (https://flok.cc) that support various interesting languages, has been this: everyone gets their own 'space' or editor, and from there sends messages to a central audio server to control their own sound synthesis.
This is heavily influenced by architectures like SuperCollider's client-server model, where the server is seen as a neutral entity.
Crucially, this relies a lot on social rules, not system guarantees. You could technically control someone else's track, or even mute everything. People generally restrain themselves.
A downside is that one person's error can sometimes crash the entire server for everyone.
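To make that shape concrete, here is a minimal sketch of the message flow in TypeScript. It is not how Flok or SuperCollider are actually implemented; the JSON-over-UDP payload, the PerformerClient class, and the SuperCollider-style addresses like /n_set are illustrative assumptions. The point is simply that every client can send any message to the one shared server.

```typescript
// Hypothetical sketch: each performer's client talks to one shared audio server.
// Messages are JSON over UDP purely for illustration; a real setup would speak
// OSC to something like scsynth.
import * as dgram from "node:dgram";

const SERVER_HOST = "127.0.0.1"; // shared audio server (assumed address)
const SERVER_PORT = 57110;       // scsynth's default port, used illustratively

class PerformerClient {
  private socket = dgram.createSocket("udp4");
  constructor(private name: string) {}

  // Send a control message to the shared server. Note there is no access
  // control: any client can address any node ID, including someone else's.
  send(address: string, args: unknown[]): void {
    const payload = Buffer.from(JSON.stringify({ from: this.name, address, args }));
    this.socket.send(payload, SERVER_PORT, SERVER_HOST);
  }

  close(): void {
    this.socket.close();
  }
}

// Alice controls "her" synth node 1001...
const alice = new PerformerClient("alice");
alice.send("/n_set", [1001, "freq", 440]);

// ...but nothing in the system stops Bob from muting it, or freeing every node.
const bob = new PerformerClient("bob");
bob.send("/n_set", [1001, "amp", 0]); // only social convention says "don't"
bob.send("/g_freeAll", [0]);          // the "mute everything" case

alice.close();
bob.close();
```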
Later, while developing my own live coding language, Glicol (https://glicol.org), I started exploring a different approach, beginning with a very naive version:
I implemented a shared editor, much like Google Docs. Everyone types in the same space, and what you see is (ideally) what you hear, a direct code-to-sound mapping.
The problems with this naive system were significant. You couldn't even reliably re-run the code, because you could never be sure whether a teammate was halfway through typing 0.1 (maybe they had only typed 0.) or had only typed part of a keyword.
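A toy example of that failure mode, with a made-up parseGraph function standing in for a real parser (this is not Glicol's actual parser, and the code snippet in the buffer is only Glicol-flavored):

```typescript
// Hypothetical sketch of why naive "what you see is what you hear" breaks:
// evaluating the shared buffer at an arbitrary moment may catch a half-typed token.
type ParseResult = { ok: true } | { ok: false; error: string };

function parseGraph(source: string): ParseResult {
  // Toy check: a trailing "0." makes the code invalid.
  if (/\d+\.\s*$/.test(source)) {
    return { ok: false, error: "number literal is incomplete" };
  }
  return { ok: true };
}

// I press Cmd+Enter while my teammate is halfway through typing "0.1":
const sharedBuffer = "out: sin 440 >> mul 0.";
console.log(parseGraph(sharedBuffer)); // { ok: false, error: "number literal is incomplete" }
// One keystroke earlier, "mul 0" would have parsed fine but meant silence
// instead of the soft gain my teammate intended.
```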
So I improved the Glicol system: we still share one interface for coding, but there's a kind of consensus mechanism. When you press Cmd+Enter (or the equivalent), the code doesn't execute instantly. Instead, it's like raising your hand: it signals "I'm ready". The code only updates after everyone involved has signaled they are ready, which gives the last person to signal a bit more responsibility to make sure the musical change in the code makes sense.
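A minimal sketch of that idea as a simple "ready set" barrier; the ReadyBarrier class and its method names are hypothetical, not Glicol's actual code:

```typescript
// Hypothetical sketch of the "raise your hand" consensus step: Cmd+Enter only
// marks a performer as ready, and the shared code is committed to the audio
// engine once every performer has signaled.
class ReadyBarrier {
  private ready = new Set<string>();

  constructor(
    private performers: string[],
    private commit: (code: string) => void, // e.g. send the code to the engine
  ) {}

  // Called when a performer presses Cmd+Enter.
  signalReady(performer: string, sharedCode: string): void {
    this.ready.add(performer);
    if (this.performers.every((p) => this.ready.has(p))) {
      // The last person to signal is effectively responsible for this snapshot.
      this.commit(sharedCode);
      this.ready.clear(); // start collecting hands for the next change
    }
  }
}

// Usage: two performers editing one shared buffer.
const barrier = new ReadyBarrier(["alice", "bob"], (code) =>
  console.log("update audio graph with:\n" + code),
);

barrier.signalReady("alice", "out: sin 440 >> mul 0.1");
// Nothing happens yet; Bob hasn't raised his hand.
barrier.signalReady("bob", "out: sin 440 >> mul 0.1"); // now the graph updates
```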
I'm just sharing these experiences from the music-making side, without judgment on which approach is better.