This is a great project, well done. Algorithmic music is fascinating to me; I feel like there's a lot of untapped potential in the concept, particularly with more 'complex' genres where higher levels of composition and structural continuity are required. This tends to work much better with loop-based genres, but I'd personally love to do a side project on something that could write melodies from some sort of generative grammar, for example. Plenty of these exist, but the results are very often mixed.
For me the interesting part is not just in getting a computer to do all the work but in using it as a compositional aid. You can think of it as writing music at a higher level of abstraction. Rather than writing a score, you write the rules of the score and use a stochastic process to spit out permutations. E.g. imagine writing a program that could spit out Erik Satie-esque melodies.
As someone who writes mainly loop based genres I agree and have thought about much the same stack of abstractions. Acid techno is a great starting point as it's not known for complexity, being traditionally produced with a 909+303+303, devices that support only a 1 bar loop (a genre with origins that predate wide availability of digital audio workstations). But like yourself I'm interested in more complex genres.
One example of more general EDM levels of abstraction for me might go
1a. this tune needs to play with space (stereo/reverb) more
1b. we need some interesting background sounds
therefore
2. we need an interesting, wide background sound to start at bar 32 and end at 64
Dropping in something from a sample library to achieve that wouldn't satisfy me as every step of constructing that sound is following other rules of abstraction e.g. how should it fit with the existing tonal balance? with the existing rhythm? etc. But at the same time the process is 1% inspiration 99% perspiration - it's following unwritten almost-rules which would be fascinating to capture in an algorithm if I could. They wouldn't have to be completely general rules, just my own personal ones.
> 909+303+303, devices that support only a 1 bar loop
Ha, no, you can input a great deal more than a single bar into them. They were designed to let users store a variety of patterns, which can then be assembled into a traditional intro-verse-chorus song structure, if you so wish.
Doh I stand corrected :) Though still nowhere near as much potential for refining the end result as you get with a DAW. That's not to say live performance doesn't have value of course!
Yep I've done it. Suuuper tedious. Which is why I suspect even though you can do 2 or more bar phrases, single bar loops (vamps?) are pretty typical, instead varying the filter envelope and distortion over time.
Off topic but I found my preferred method of 'live' performance (for dnb) was to write a tune in the DAW with the intention that a few of the lead parts be played live, then switch off those parts and render what's effectively a backing track. Meant I could sort the mixing/mastering beforehand and keep in the flow of rocking out on synths when it came to the gig. I did use ableton but only to auto load synth patches/effect chains when each backing track was triggered.
I tried many other approaches involving 5 people driving a live looping rig, but the above worked best for me, which I guess made my act a 2010 equivalent of the guy wearing the burgundy velvet suit in the hotel bar ;-)
I get why people want machine-aided composition. But I'd love to see a project like this go in the opposite direction.
Maybe put a row of buttons at the top: "I am [Loving it] [Grooving] [Fine] [A Bit Bored] [About to Quit]". Then use that (plus browser data and window-close data) to train ML models to maximize engagement. Or perhaps to hook it up to facial expression recognition.
When I watch friends DJ, there's this great feedback loop between how the crowd's behaving and what they're putting on. They're clearly using the music to achieve certain mental states among listeners. I'd love to see how well that can be done algorithmically.
So many of the selections that great DJs make are curveballs and surprising choices. This is less true in more samey genres, but true pioneering selectors will tend to surprise you with their taste and juxtapositions in a way that's hard or maybe even impossible to really capture in an algorithm.
Put another way, even in something as simple as song selection, the weirdness and contingencies in human creative decision making are a feature, not a bug.
The efficiency and cleverness of algorithms can create great, powerful recommendation engines, but there's nothing like the idiosyncrasies of a person as a curator.
I'm not entirely convinced by your argument. Some years ago there was an episode of the Gadget Show where they were testing an app for suggesting food ideas. I can't remember exactly how it was supposed to work (it was too long ago), but it had profiles of different food types programmed into it and used some sort of algorithm to compare taste profiles and come up with combinations that should work.
The combination it came up with that they then tried selling on a food stall was chocolate on pizza. It wasn't to everybody's taste but some liked it, with one describing it as "weirdly delicious".
It doesn't seem completely ridiculous that an AI, trained to recognise patterns of music that work well together, could analyse a large corpus of recorded music and come up with surprising mixes that you wouldn't think would work, but do.
I don't doubt it's possible to come out of leftfield for an AI DJ but I still wonder if it could be a tastemaker like a great DJ. I don't just want great individual mixes, I want consistent, surprising yet tasteful selections that define an idiosyncratic style.
Many restaurant chains sell chocolate on their pizza base as a dessert; chocolate is also known to complement many savoury dishes as a 'secret ingredient'.
There's a restaurant in London that incorporates cacao in some form or other in every dish. It's a bit of a gimmick in that in a lot of them it's not really noticeable, or just used very sparingly. It's probably easier to do with the constraint of "some form of cacao" than chocolate, though.
For sure! I'll always appreciate the deeper stuff that only humans can do. I just think it would be interesting to see how much of it we could automate. E.g., could it get to the level of decent background music?
Put a camera on the crowd, and motion-track the bouncing of heads. That'll give you two banger metrics: one, how far they're all bouncing/jumping/nodding, and two, how consistently they're doing it.
That would be enough to construct a robot DJ that could track the variations it was making, and rate them for 'better/worse'. Then you just make a cool-looking puppet to store the camera in, that can move around and possibly wave an actuator like a proper DJ, and the rest is machine learning.
The loudness war has ended, prepare yourself for the bobbing war.
There was an interesting YouTube video I watched, "All my homies hate Skrillex", which argued that the evolution of aggressive Skrillex-style "dubstep" over the older, more introspective dubstep was a result of smoking being banned indoors in UK clubs.
This made the smokers have to leave every now and then, which forced the DJs to play more aggressive, instantly impactful music to draw their attention when they returned.
I guess the video maker knows what he's talking about, he's the one who was at the clubs and clearly has great knowledge about the genre.
He said that he didn't care either way about the smoke, but personally I'd be much more likely to stay in a club with clear air than one filled with horrible smoke.
I love clubs, but hate it when I smell like an ashtray afterwards, despite not smoking.
Another theory about the change to more aggressive beats: maybe the DJs themselves were affected and got angry having to wait for their next cigarette?
I wonder if you could do the feedback mechanism in a more subtle way, for example by pairing the musical composition component to an interactive activity like playing an action game, where movements and so on in the game trigger a response in the music.
There are lots of games that do this kind of thing at a rhythm level, like Necrodancer and more recently BPM [1], but I think there'd be the potential to do a lot more with it than just rewarding beat-alignment with a pre-cooked solo that blazes over top of the base track.
Agreed, but I'd like to take it a step further. Rather than fitting new compositional ideas into the patterns and structures of our existing music theories, I'd like to use computation to create new theories.
As an example, I've become really interested in the fact that tempo and meter are taken as constant, grid-like structures in pretty much all existing music practices (even within, say, Gamelan, though there you do get a lot of pushing and pulling). Typically, you have polyrhythms and polymeters to fit more interesting patterns into the fixed grid, and accelerando and ritardando to adjust the rate of grid traversal, but that's about it. Hardly anyone applies algorithmic/geometric thinking to the grid itself — likely because more complex rhythmic foundations would make human performance nigh impossible.
To explore this space, I created my own little generative music system that takes a handful of simple motifs (a la Riley or Reich) and stacks them into a recursive temporal structure, which is then pushed logarithmically toward a tempo of 0 or infinity. There are some rules so that the "performers" only play at comprehensible scales, and that pitch modulation keeps the piece interesting to a listener. You can listen to it here, if you'd like: https://ivanish.ca/diminished-fifth/
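One small ingredient of the "pushed logarithmically toward a tempo of 0 or infinity" idea can be sketched numerically. This is my own guess at a mechanism, not code from the linked project: compute beat onset times when the tempo grows exponentially (doubling every fixed number of seconds), using a simple step-by-step approximation.

```python
import math

# Sketch (invented, not from the project above): onset times for a
# tempo that doubles every `doubling_time` seconds, i.e. a tempo
# being pushed toward infinity. Each step uses the instantaneous
# beat period at the current time, so this is an approximation.
def accelerating_onsets(start_bpm, doubling_time, n_beats):
    k = math.log(2) / doubling_time  # exponential growth rate
    onsets, t = [], 0.0
    for _ in range(n_beats):
        onsets.append(t)
        t += (60.0 / start_bpm) * math.exp(-k * t)  # shrinking beat period
    return onsets
```

Negating `k` gives the mirror case, a tempo decaying toward 0, and nesting several of these at different rates is one way to get the recursive temporal stacking described above.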
I'd love to see more folks working on tools to make these sorts of theory-stretching ideas easier to access and explore. For instance, I've really been struggling with how to hand-compose music that can fit within a nonlinear/recursive structure of time. Existing tools like Tidal or Max/Pd were built to support the existing theory. I think we need new tools that allow you to design the theory itself.
People create new theories every time they make a piece of music. Computer music tools allow them to work with those theories. Still lots of work to do. The idea of there being "a music theory" is perpetuated by music schools obsessed with the music of a handful of dead white guys.
> tempo and meter are taken as constant, grid-like structures in pretty much all existing music practices
Wholeheartedly agree that using computers as a compositional aid is a fantastic thing; it's really quite satisfying to define some rules by which the program can modify the inputs (melodies, chords, drum patterns) and have it output interesting results.
I'm working on a procedure to modify the drum breaks in a conventional way, meaning I have to think less about keeping them interesting while live coding.
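This isn't the commenter's actual procedure, but one way such a break-mutator might look: treat a break as 16 on/off steps and apply one small, conventional-ish variation at a time, so the pattern stays recognisable while drifting.

```python
import random

# Invented sketch of a drum-break mutator. A pattern is 16 steps of 0/1.
def mutate_break(pattern, rng=random):
    """Apply one small variation to a 16-step drum pattern."""
    p = list(pattern)
    move = rng.choice(["rotate", "swap", "drop"])
    if move == "rotate":            # shift the whole break by a 16th
        p = p[-1:] + p[:-1]
    elif move == "swap":            # exchange two steps
        i, j = rng.randrange(16), rng.randrange(16)
        p[i], p[j] = p[j], p[i]
    else:                           # drop one hit, leaving a gap
        hits = [i for i, v in enumerate(p) if v]
        if hits:
            p[rng.choice(hits)] = 0
    return p
```

Calling it once per bar while live coding keeps the break evolving without any manual editing.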
Autechre have been doing that for quite a while now using Max et al., including computer-based composition. Irlite [0] and Bladelores both feature some form of compositional changes throughout the track.
This kind of algorithmic music has been a thing for decades. I have a funny memory of going to a university open day as a child in the late 80s/early 90s and a presentation by some boffin with a synthesizer was what made me want to grow up and become a computer programmer.
Back in the early days, I think CSound[0] was the big software, not because it can make especially interesting sounds compared to more "musical" software synths, but because it's a proper programming language which gives people the freedom to do these higher level abstractions.
In the hardware world, a lot of this developed out of arpeggiators and analog sequencers. I remember years ago I had a "P3" sequencer[1] which implemented a lot of these sorts of algorithmic pattern generation tools - you could do things like quantize to a scale, then set a sequence of percentages that themselves impacted the likelihood of a particular note being played. I see there is a new version of this sequencer[2] too.
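The P3-style combination described above can be sketched in a few lines. The details here (scale choice, function names, snapping downward rather than to the nearest tone) are invented for illustration: one lane of raw notes quantized to a scale, and a parallel lane of per-step probabilities deciding whether each note fires.

```python
import random

# Invented sketch of a probability step sequencer with scale quantization.
C_MINOR = [0, 2, 3, 5, 7, 8, 10]  # pitch classes of C natural minor

def quantize(note, scale=C_MINOR):
    """Snap a MIDI note down to the nearest tone of the scale."""
    octave, pc = divmod(note, 12)
    return octave * 12 + max(s for s in scale if s <= pc)

def run_step_sequence(notes, probabilities, seed=None):
    """Return one pass through the sequence; None marks a silent step."""
    rng = random.Random(seed)
    return [quantize(n) if rng.random() < p else None
            for n, p in zip(notes, probabilities)]
```

Re-running the same note lane with the same probability lane gives a different realisation each pass, which is exactly what makes these sequencers feel alive.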
Lots of analog sequencers provide similar features, and if you have a modular setup you can ramp the tempo way down and have them control chord progressions or swells instead of 16th note patterns. Pretty sure this is how a bunch of live ambient music was done back in the day.
That was a fairly niche corner of electronic composition, but even in the mainstream of 20 years ago stuff like the Emu Proteus sound modules featured pretty advanced programmable arpeggiators where you could essentially write the skeleton of a musical sequence and then modify what notes of it actually ended up getting played by deciding which original notes to start from. I always came at this more from that minimal/techno/sequence-based side, but then I went to a wedding and saw a wedding singer playing an "accompaniment" keyboard which showed the other side - entire chord sequences and backing instrumentation getting generated in real-time based just on what chord the keyboard player chose to hit with their left hand. It's surely only a small step forward from there to being able to input a higher level algorithm that could develop a whole song.
Excellent project; it also serves as a quick audio latency test for devices. Quality of playback seems to vary between browsers and devices (DDG/FF on Android); I presume it's consistent and better on iOS devices, as they're known for better audio latency (don't have one nearby to test).
If the author is reading this: would love to hear about the quirks related to audio playback in browsers/devices you've found so far in developing this.
This reminds me of the "generative design" trend in architecture, which has been attempted numerous times with typically underwhelming results. See here for an interesting essay (with nice illustrations) arguing that it's a doomed effort:
The question is whether the use of AI could be elevated above a crutch like autotune to actually being used as an instrument like a synthesizer. In other words: is it being used to amplify the ability of the musician or compensate for their lack of skill?
This generates much better patterns than any other music web project I've tried. Awesome work.
If you had no music experience, snuck into a club, impersonated a DJ, and just pressed play on this, I think even without touching it, no one would notice anything's amiss. If you messed around with it occasionally, you'd probably get hired back. (And that's a compliment to the app; definitely not a diss of acid techno.)
I doubt it. Perhaps you were lucky with the piece you listened to, but I checked it for 5 minutes and the off-beat things it generates are pretty much unheard of in this genre. Not that there's no off-beat produced by humans, but it doesn't sound anything like this, simply because what the machine is doing here breaks the rhythm too much, making it harder to dance to, and after all that's what this music is for.
I think you were unlucky, actually. I've been playing around with it on and off for a few hours now and sometimes it gets locked into the behaviour you describe, but for the most part it's very standard 4/4.
Yeah I checked again and there's definitely more 'standard' acid in there. Now it's just waiting until someone records it and posts it as being theirs :)
The whole Miku Hatsune[0] phenomenon has the front end of this idea. If someone combines something like OP (which is the 2nd best classical algorithmic music project I've listened to) with some musical AI like Jukebox[1], most of the back end would be done.
The missing parts would be the choreography and visual design. I also wouldn't personally say it's done unless it is marketing itself. At which point we are in singularity territory.
This is a rant, from someone who's been playing music for 3 decades, rock, electronic, and classical. Also I've been meditating a lot. I started realizing recently the huge separation between thoughts about music and music itself, just as there is between thoughts and action. The world we live in, of temporary digital media, is a mirror of our thought processes. Our whole modern day and environs are spent in thoughts and media, with almost no action... particularly since we started working at home and ordering food in!
Upon seeing a web page's "impressionistic interpretation/reduction" of a decade or so of music driven primarily by economic and social constraints (warehouse techno), it's so easy for people to pile up thoughts. It's so inexpensive and risk-free for people to say, "oh yeah, that there is good techno/acid," without trying it out on a dancefloor, without having listened to hours and hours of it.
There has strangely been a long-term dream of computer scientists to replace composition. To "spit out" songs as someone put it. It's usually too scary to ask ourselves "why?" because it usually is a game of validating one's own mind against others' impressions. For some reason, auto-composition seems like some kind of holy grail, but of what?! Saving money buying music? A fantasy of abundance? A kind of "gotcha!" that a pure thought-person has outwitted a silly irl composer? What do you actually get for creating an intelligence that wins a Turing test? You certainly don't get sweaty friends deliriously dancing on drugs at 3 am. You typically just get another social promotion in the direction of aiding greater powers at their control over the world. Is that what it's about? Closing ourselves off from human musical expression in exchange for increased financial standing? Get a job bc you proved you can fool some of them some of the time? To validate a work ethic that regards music as frivolous by demonstrating that it can be simulated accurately enough?
It's not obvious that every decent musical piece is a more complex and interesting story than its notes. That every new synthesis engine can only ever interpolate its inputs as opposed to incorporate new ideas and more importantly experiences. Experiences that relate to a person and group. We all have a sublime attraction to the story of Beethoven - to having been given a profound gift and slowly lost part of it. We look for ourselves in his work, where did he break down? How did he handle his unfortunate circumstances?
We perceive music in terms of passion - what it cost an individual in hours of life, blisters, health, money, dedication, etc. We revere Kurt Cobain for pouring out everything he had into his music. If a computer program wrote "You Know You're Right," and we knew there wasn't actually someone real who "never failed to feel....PAAAAAAAAIN," it wouldn't matter to us. Because we're all diffractions of some crazy spiritual force no one understands, but it seems like music is a form of "interdigitation" between us. So why plug our own listening energy into a random number generator and call it good? With respect to ppl saying, "now we can have acid anywhere, anytime," I say, "dial up great mixes on youtube, etc." There are real DJs who put together songs in streams that have even greater meanings than the individual songs - bc, again, music is more than just individuals, it's a collective act.
And no matter what the tech of the day is, it will always be applied as if it were to be the "final," perfect means of autocomposing - remember fractal music in the 90s? PCA synthesis in the early 2000s? So as a fun programming challenge, I say to people: sure, write these programs. But why must we persist in proclaiming their relevance to our active lives when they only resonate in our thoughts?
I think you have a point that there is a certain type of music fan that really does want to make a human connection with the artist through the music, and algorithmic composition may never really be enough for those fans.
But it's also the case that one of the things that makes electronic music appealing to a lot of people is that it generally doesn't have the cult of personality that exists in other genres. For sure, some parts of the scene ended up idolizing DJs instead, but I'd say a significant subset of techno fans are specifically not interested in the artistic motivation behind a piece and are instead just looking to hear some cool sounds.
That's why electronic music is a great place to experiment with algorithmic composition, and that's why all these people joking about ravers dancing to washing machines because they're so drugged up kinda missed the point of what we liked about the music in the first place. It's not about telling a story, or communicating an emotion, it's just about creating a cool sound. It doesn't need to be more complicated than that. If a piece of machinery can create a cool sound incidentally, why wouldn't we dance to it?
Of course, as I commented elsewhere, I don't think this particular example is especially notable. The music it generates is not much more interesting than what you'd get if you just hit a "random pattern" button on a 303 clone that lets you constrain the result to a scale. But that doesn't matter, because it's still a nifty project and a bit of fun.
If you really are searching for the true intent of the artist, then you have that with algorithmic composition too. Click round the rest of this developer/artist's site and you'll find lots of little projects and experiments with music and software creations that - even if you don't see them as art, are at the very least the product of a creative hobbyist. There's your passion!
> What do you actually get for creating an intelligence that wins a Turing test?
> it's just about creating a cool sound.
I specifically want to create novel hard/dark/minimal techno bangers with similar effort to curating a playlist.
I want a toy that makes music that will keep me and my friends up into the wee hours, without the toil, without the bleeding egoic labor.
And I want people with good taste to have the same ability without putting thousands of hours into learning music theory and arcane details about DAWs and synthesizers and other... Traditional... Tools.
I want to twist the nips of a sexy thump machine and get a good feedback loop[0] going between jerky knees and the stupid-smart NPC musician in my laptop.
I want to spend the night in a tent with eyes that digests my tribe with its acid colors and plays us like so many marionettes until, exhausted, we cut the power and go home. And there wasn't a Guru making it happen, no Chad waiting his turn on the deck, just sound, lighting, and loving this f'n party, man.
The other points about writing songs that express humanity, and the inventor's hubris and all that hit the mark. But deleting the boundary between bedroom production and the rave itself... I'd be hard pressed to imagine anything more fun and exciting.
I think I'm interpreting algorithmic composition in a different way than you are. I am someone who has always been profoundly affected by music. This is to the point where some songs consistently give me frisson or make my eyes well up. Music, for me, is a profoundly emotional experience. I was just never very good at creating music; that is, until a few years ago.
I got into making music using DAWs and learned a lot of theory. I focus on making Synthwave/Outrun style music. Not the most technically complex genre sure, but there is a lot of room for creativity as the genre isn't very well defined. I also enjoy how Synthwave isn't really about musical complexity or technicality; instead, it's about the atmosphere. It's nostalgia for a time that never existed in a sense.
Now, all of this to say that I'm still not a great musician. I've been learning banjo for the past year, and getting pretty good at it.
I'll get to the point though: to me, algorithmic music isn't an end, it's a means to an end. The end in this case is composing music. Algorithmic music can be a source of ideas and inspiration in a way that nothing else can. This is especially true if we are able to specify the rule set for music generation. How many times have I sat down at the keyboard and tried to write a melody over some really cool rhythmic bass line I came up with? Countless times. If I could, for example, plug in a key, rhythmic signature and feeling I'm going for and generate a melody, I would be able to finish more songs. I could use the generated melody as a starting point; it might spark some new ideas. This would be even cooler if I could provide the algorithm with a wav file or some midi and have it try to generate a bass line, melody, chord progression etc.
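The "give it a key and a rhythm, get a candidate melody back" workflow wished for here can be sketched very roughly. All the names and choices below are mine (no "feeling" parameter, just key plus rhythm): a seeded random walk over a major scale, returning (MIDI note, duration) pairs to use as a starting point.

```python
import random

# Invented sketch: candidate melody for a given key and rhythm.
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the major scale

def melody_for_rhythm(root, rhythm, seed=None):
    """root: MIDI note of the tonic; rhythm: list of durations in beats."""
    rng = random.Random(seed)
    degree = 0
    melody = []
    for dur in rhythm:
        # step-wise random walk, clamped to stay within an octave either way
        degree = max(-7, min(7, degree + rng.choice([-2, -1, 0, 1, 2])))
        octave, idx = divmod(degree, len(MAJOR))
        melody.append((root + 12 * octave + MAJOR[idx], dur))
    return melody
```

Because the seed pins down the output, you can audition many candidates over the same bass line and keep only the one that sparks something.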
So, I guess I see algorithmic music generation less as something to replace human made music and more as a tool to aid in sparking creativity in humans composing music.
3am and really dancing.. not just doing repetitive motions on the straight pulse but living through an embodied composition based in reception. the feel when u understand the music with your whole body so well that you can anticipate what's next, so you can even rhythmically juxtapose it with what you dance and all that in a life-form with others sharing this experience as value in and of itself. damn i miss dancing. stupid covid
>There has strangely been a long-term dream of computer scientists to replace composition. To "spit out" songs as someone put it. It's usually too scary to ask ourselves "why?" because it usually is a game of validating one's own mind against others' impressions. For some reason, auto-composition seems like some kind of holy grail, but of what?! Saving money buying music? A fantasy of abundance? A kind of "gotcha!" that a pure thought-person has outwitted a silly irl composer? What do you actually get for creating an intelligence that wins a Turing test? You certainly don't get sweaty friends deliriously dancing on drugs at 3 am. You typically just get another social promotion in the direction of aiding greater powers at their control over the world. Is that what it's about? Closing ourselves off from human musical expression in exchange for increased financial standing? Get a job bc you proved you can fool some of them some of the time? To validate a work ethic that regards music as frivolous by demonstrating that it can be simulated accurately enough?
To me, music is music, regardless of the creator. If it sounds good, it is good, whether it was composed by a human, a program, or a combination of the two.
If your issue is that this generative music doesn't sound sufficiently good compared to a good human producer/composer, then that's fine. The rest just feels like some kind of weird projection onto my post that I don't understand.
Computer-generated music is not at all considered an interesting goal for financial reasons. I don't even know what you're trying to mean by that. I think it's interesting because:
- It's one of the areas where the best humans still greatly outperform the best programs
- I believe computers do have the potential to one day create excellent, artistic music
I don't align with your view of experiences, passion, cost to an individual, etc. I think Beethoven's music would sound just as good regardless of whether he was deaf or not, or a mass murderer, or anything else. I think art stands on its own, with the backstory just interesting trivia for those who want to know more about its creator.
I get what you're saying, but in my opinion the general line of thought sounds a tad too pessimistic or maybe even fatalistic. Perhaps driven by recent pandemic events? In short: I don't think that one or more people making music, live or otherwise, and others joining to enjoy/dance/go-wild/pick-your-poison on that music is ever going to disappear.
> If you had no music experience, snuck into a club, impersonated a DJ, and just pressed play on this, I think even without touching it, no one would notice anything's amiss. If you messed around with it occasionally, you'd probably get hired back
Some sufficiently intoxicated people will dance to anything, though a good dj feels an obligation to make sure the thing being played is of good quality, regardless.
As to who gets hired, that's more a matter of networking than musical talent.
> Sufficiently intoxicated people will dance to anything
I've heard this sentiment before and I always wonder whether it has been given enough thought, or whether it's really true for you and the people you know. I mean, my friends and I have left more than one party because we'd rather stroll around the neighbourhood than endure the music the DJ was playing. Admittedly that happened more when not using drugs, but it's not like drugs suddenly just make you accept anything. Not even MDMA, which is the one with plenty of potential in that direction.
> Sufficiently intoxicated people will dance to anything
There's an urban legend saying Richard D James (Aphex Twin) threw a microphone in a kitchen blender at a concert and people were dancing to that sound...
I was at one of those gigs several years ago. Possibly the volume was too loud, or the music generally too discordant, but about 85% of the crowd went upstairs to the chill-out room for the majority of the set. I remember a distinct "nope" feeling. The artist himself was off stage twiddling his knobs too.
Those who remained were selling merch, behind the bar, or stood at the back with arms folded.
Those people need not even be intoxicated, because noise music is its own genre, and some of the people enjoying RDJ will probably be responsive to it; RDJ has had a few moments when he approached the Japanese scene. Indeed, when artists like Merzbow perform at festivals, it is clear that the noise stimulates some people's dance reflexes.
That said, I've had a similar thing with tracks that have police sirens in them when listening in the car; it takes a moment to realise it isn't an actual emergency vehicle.
Was it because the music was bad though, or because you were opinionated against it?
One may not like certain music, but like with other art it gets into tricky territory to objectively call it bad :) Anyway: yes, it was because we found the music not good. Or not up to par with our expectations. Or not matching our current mood.
I don't have strong opinions against music styles anymore since I went through puberty, the only period in a lifetime where that can be forgiven. Haha, I was one of those idiots who thought metal was the only true music and anything with an electronic beat under it had to die and rot in hell and fans thereof needed to be actively made fun of.
Of course life is so much better if you just listen to literally anything and decide based on what you hear. Which didn't take long to realize. That being said, there are a lot of genres which usually don't do anything at all for me, i.e. which don't make me feel a thing. And others which I almost cannot stand listening to. Most of the time individual songs though. Completely normal, but not the same as being opinionated against it.
I mean I don't even know if people actually listen to acid or if it's a genre of music that online projects like this seem to latch on to. I don't recall (but I'm pretty insular) it being a thing anywhere.
Trance/dance music, sure. I guess acid is a subgenre?
Yes it's a subgenre, and yes it's a thing and has been for a couple of decades. And it even has its own subgenres/'mixins' like acidcore or goa with acid influences (not sure that one has a name).
That tiled room gets me through the workday. In that ilk: Fear n Loathing. Art of Minimal. Trippy Cat Music. Beamer. A bit further from that sound but still great for coding: Cercle, Radio Intense, Cafe de Anatolia.
Maybe besides https://openai.com/blog/musenet/? The present solution is rather primitive, just a bit of randomness with time and scale quantisation, but some people seem to be quickly satisfied.
I have listened (so no touching or watching it) for a couple of minutes, and while the patterns all sounded fine in isolation (not an expert in that genre, but I have been to a number of raves), many of the changes didn't feel intentional and were at times outright jarring, without any transition leading up to them.
I reckon in a club situation you would play the patterns for longer, which would make it less noticeable. I do think it is quite impressive work, but overall it still sounds too much like switching between patterns without any bigger "musical arc" (not sure what the correct terminology I'm looking for is) in mind. But to be fair, that is what a whole lot of people calling themselves producers sound like as well.
Some of the patterns sound really good, but can you change the bpm? I think a DJ set at the same bpm all night would be kind of boring. I love acid techno, but I mix it into other (sub)genres, and have never heard a set where it was only acid techno. That said, it sounds like some of the drum patterns are more electro than techno, which is nice and provides some variety.
I like this genre, but I don't find this generated music satisfying. My brain is anticipating some 'rule of three', which I don't feel it's following and I therefore feel disappointment.
I'm expecting it to switch an established pattern up (more than it does) on the third repeat, just as I get used to it, and to have fractal-like layers of patterns, which I didn't observe.
I agree. This is a very cool project, mainly because the actual synth sounds are good, so it sounds like a legitimate live acid musician hitting the good old "randomize" button on their 303 clone, but it still sounds a bit like a carefully managed "randomize".
I think what makes it work for people is that it evokes that kind of IDM-ish hipster acid music where hearing something a bit unmusical without a 4-to-the-floor "release" is acceptable. But I'm not sure this system would have as much luck putting together an actual "acid banger" like Lochi's London Acid City, Purple Plejade's Blanche, Hardfloor's Acperience etc.
The problem is that a computer can't know when it really came up with a "banger" of a sequence. When you have a real musician, they can sense when a particular sequence really "clicks" for people, somehow it just accents in places that sound cool. Then they can move ahead with that sequence, tease it and build it and bring it to resolution. This is the holy grail of generated music, I think, to somehow get a computer to a place where it can recognize whatever that quality is.
Yeah, very nice. This gets something right that so many other generative music experiments get wrong, which is variation in note length (i.e. rhythm). You often see generative music use a Markov chain for note choice but not note length, and it quickly becomes very "same-y", whereas this has, for the vast majority of melodies it generates, a natural feel (to me).
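To make the point concrete, here is a minimal sketch of a first-order Markov chain whose states are (pitch, duration) pairs, so rhythm varies along with note choice. The note set and transition probabilities are invented for illustration, not taken from this project.

```javascript
// Hypothetical sketch: a Markov chain over (pitch, duration) states,
// so the generated line varies in rhythm as well as in note choice.
const states = [
  { note: "C3",  len: 1   },
  { note: "C4",  len: 0.5 },
  { note: "Eb3", len: 0.5 },
  { note: "G3",  len: 2   },
];

// transitions[i][j] = probability of moving from state i to state j
// (each row sums to 1; values are made up for the example)
const transitions = [
  [0.1, 0.4, 0.4, 0.1],
  [0.5, 0.1, 0.2, 0.2],
  [0.3, 0.3, 0.1, 0.3],
  [0.6, 0.2, 0.2, 0.0],
];

function nextState(current, rng = Math.random) {
  const row = transitions[current];
  const r = rng();
  let acc = 0;
  for (let j = 0; j < row.length; j++) {
    acc += row[j];
    if (r < acc) return j;
  }
  return row.length - 1; // guard against floating-point round-off
}

function generatePhrase(steps, start = 0, rng = Math.random) {
  const phrase = [];
  let s = start;
  for (let i = 0; i < steps; i++) {
    phrase.push(states[s]);
    s = nextState(s, rng);
  }
  return phrase;
}
```

Because durations are part of the state, a phrase walks through mixed note lengths instead of a fixed grid, which is exactly what keeps it from sounding "same-y".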
I wish I understood the WebAudio API better to get a better handle on how the instruments are created.
This made me want to grind lap times in Wipeout 2097. The soundtrack and atmosphere in that game were really similar to this, and like nothing I'd heard or seen before at the time. It probably shaped my taste in so many ways. Fond memories!
EDIT: For those sharing the nostalgia, it turns out that the composer of the soundtrack, Cold Storage, remastered it and put it up on streaming services. It's a two part EP called SlipStream.
I was going to write a post saying I was glad a "later" Wipeout game had the same effect on me as the original had, then I looked into it and saw that Wipeout 2097 was called Wipeout XL in North America, and was the exact game I played :D So, I'll just echo what you said: the soundtrack was foundational in my adult music tastes :)
Glad to hear it! Just out of curiosity, where did you end up taste wise? I found DnB pretty early on and it has always remained the backbone of my rotation. Mostly dark stuff like Current Value, Noisia, S.P.Y etc.
Yeah, I could have written that as an 11-year-old when I heard that track on WipeOut3 for the first time. I was still discovering electronic music via a bunch of unwanted Euphoria compilation albums.
Wipeout3 was often played in both my PlayStation and CD player.
His bandcamp page has a lot of varied stuff, including his recent stab at improv ambient he knocked out in about 4 hours of work on this album: https://coldstorage.bandcamp.com/album/drift
This confused me as I only knew the Playstation version which features a selection of artists, I'll have to give the full Cold Storage soundtrack a listen!
Making interesting music of other genres is a lot more difficult to do algorithmically. One of the big perks of acid techno (a pair of TB-303s and a TR-808 or TR-909 drum machine, and some simple effects processors) is how easy it is to get really interesting sounding stuff out of random patterns and simple filter sweeps.
I will also vouch for and throw my 2¢ in for DI.fm. I've been subscribed since 2014 and while I have my qualms (namely a few stations that haven't seen a playlist change since I signed up, or their somewhat recent attempt to switch to a playlist-based setup instead of internet radio), it's a top-notch service that continues to please.
WOW! I had absolutely no idea about these, my DI.FM credentials (Google sign-in) worked flawlessly and registers my premium subscription.
> I rarely visit their website as they allow you to save your favourite stations as a play list for your preferred media player.
Agreed 100%, plus the ability to choose the stream quality for your playlist-stations is an added bonus, not even mentioning the hardware player support. It feels rare these days to have a quality, reasonably-priced, bloat-free service with user-friendly features, but they seriously deliver.
I love acid music, such a simple formula with infinite variation. There's something so delicious about the wobble of a 303 bassline.
I just want to mention Tin Man for those who are interested in acid. He writes, mostly, quite palatable (often mournful) tunes at house music tempo, but he has a good range and goes full banger at times. A real wizard with a heart for the genre.
Also, Rashad Becker masters most of Donato Dozzy's and Tin Man's work. Very beautiful mastering: so mellow, so clear and spacious. Take a listen to Donato Dozzy's K album on a good system; you'll hear so many details and beautiful spatial placement.
Right! I'm familiar with Donato Dozzy and their collaborations, it's a match made in heaven. I need to chase down a copy of his K album on vinyl, it's lovely.
1. The whole thing is done in 1000 lines of code after unobfuscation (including 808 and 303 emulation, sequencer, random generation, UI) - technically it's quite impressive.
2. Part of why it makes such good patterns seems to be reliance on octaves. The patterns usually contain at least 50% or more of the same note repeated, across one or more octaves, and coinciding with rhythmic beats. Even when the notegen set is just F2, F2, F3, F4, G#4 (mostly all the same note) it's quite listenable.
Years ago I used to hand-sequence synth lines like that and skipping between octaves was the "instant hack" to get compelling patterns by adding dynamism to the sound without making the melody itself too complicated. You could just jump around multiple octaves for literally two notes and have some pretty awesome sounding stuff. Heck, that's what an arpeggiator does, but honestly, doing it by hand can pretty often get better results. Combine with filter envelope key following for even more interesting sound on an otherwise "simple" pattern :)
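The "octave hack" described above is easy to sketch: a 16-step line that mostly repeats one root note, jumping between octaves, with only the occasional different scale degree. The root, the extra notes, and the 75% repeat probability are all arbitrary choices for the example, not this project's actual values.

```javascript
// Hypothetical sketch of the octave-jump trick: mostly one note,
// spread across octaves, with a few "colour" notes mixed in.
const ROOT = 41;          // MIDI F2 (arbitrary root for the example)
const COLOUR = [44, 48];  // G#2 and C3, also arbitrary

function acidLine(rng = Math.random) {
  const line = [];
  for (let step = 0; step < 16; step++) {
    if (rng() < 0.75) {
      // the same note, at the root octave or one/two octaves up
      line.push(ROOT + 12 * Math.floor(rng() * 3));
    } else {
      // occasionally pick a different note from the small colour set
      line.push(COLOUR[Math.floor(rng() * COLOUR.length)]);
    }
  }
  return line;
}
```

Even with just five distinct pitches, the octave jumps add enough dynamism that the line reads as a melody rather than a drone, which matches the observation about the F2/F2/F3/F4/G#4 notegen set.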
I remember installing ReBirth on my PC many years ago and being so amazed that it could emulate two whole friggin' TB-303 analog machines, with a 909 drum kit too, just using the CPU. Which was pegged at 99% during playback.
And here we are today, the same thing is running in my browser and taking 2% CPU.
ReBirth was discontinued and released for free download in the ReBirth Museum; if you can find a genuine copy, I think it still works on Windows 7 and Mac OS 9.
Propellerhead, right? I remember opening Reason for the first time (I had no experience with music hardware) and was completely lost. I played around with knobs for a long time trying to make sounds with limited luck. Then I clicked something and the entire rack flipped around and there were patch cables hanging there that I could plug anywhere. I think I plugged things into different holes, the one sound I had going stopped, then I quit the program :)
I eventually ended up reading the manual and getting the hang of it. My friend turned me on to it, and he had it mastered. He produced house music and frequently toured the world with Mark Farina back in the day.
Haha, this was my experience with Cubase. Then one day, I came into a music shop complaining about Cubase. The guy showed me how to hook up a simple VSTi, and I was hooked. I discovered Reason soon after.
Then there was this Dutch House DJ, A-lusion (I think that was his name). He said he used Reason in combination with Cubase.
That got me really really hooked.
Then I found some videos of some hip hop artist, showing how he made hip hop beats in Reason.
That got me so hooked that I still want to go back to that time. But alas, I'm programming. I like programming, but producing music was my passion back in the day.
I was in the same boat until I found a pirated version of a tutorial that took me through creating a house track from scratch. Brilliant fun. Total waste of my time though!
The system usage on this is well-designed. I'm using an ASUS F201E Notebook with Debian 10 and the CPU throttled down to 790 MHz to save battery while I'm out on coding field trips, with CPU usage registering around 20%. Very nice.
After listening to this for an hour, it does well with the minimalist resources it's given. Just like what people do on the Pico-8, limitations start becoming an attractive feature. And you can do a lot with 16 bars. A good example is "Army of Me" by Björk - it only uses 8 bars for its backing track, but the constant tweaking of filters makes it sound like a lot more.
This is starting to get into www.Pouet.net material. Seriously, you should submit it as a demo. It's not just the coding they're after in demos but also creativity. The votes on HN say it all.
I have seen plenty of AIs that supposedly generate music, but this is the first one where the results are actually plausible. This is not too different from what I remember hearing in the early 2000s raves. Only the sound-system back then was maybe a bit beefier than my meager laptop speakers. ;) Well done!!
I have actually been working all morning now with this as my background music. It is great.
Also worth following on Soundcloud for old school jungle, with lots of mixes from back in the day: https://soundcloud.com/ethereal94. The ones from "KOOL - 94.5FM" (straight out of Clapton) are well worth listening to.
One of the things that's cool about 2020 and 2021 is getting to experience things that were mostly forgotten relics of the past as very real experiences today. I never thought I'd go to a real speakeasy (not a gimmicky cocktail bar with a password or hidden door). Illegal raves in the woods have always remained a thing, but have gotten a bit of extra cachet this last year.
The button presser could make a claim on the copyright. I could see a legal challenge being filed on the grounds that this does not fulfil the requirement of being a creative work and hence is not copyrightable.
This is definitely enjoyable and impressive for being done in a browser. It covers some of the core bits of a prototypical acid set up, but there are some basic things missing.
The "909" section only has 4 sounds with some (4?) different velocity (think loudness) levels. The real thing had 26 knobs to adjust the 10 different individual outputs (plus L/R) of 11ish sounds (ride and crash cymbals are on separate outs, but open/closed high hats are on one out). The "303" has 4 knobs but the real thing has 6 and the delay would be from some sort of external effects, so really it's missing 3 controls (tuning, env mod, and accent).
The note color seems to indicate accents and slides (the yellow and purple), but I'm still having trouble hearing which is which and it's not really doing a great job of getting to the level of resonance and distortion that happens with the real thing. It's nice that delay has been added as an effect on both "303"s and the overall mix, but there are so many classic acid tracks which relied on some sort of distortion either through external effects or various mods.
The whole issue of 303 emulators is a huge rabbit hole in and of itself. [1] The history of the 303 and its various uses can also go on for hours [2]
This reminds me a lot of the algorithm that drives the Korg KARMA workstation. I often found myself letting it run like this to see where it decided to go. The settings were changed by knobs but there was often slight drift and changes...as well as a lot of 1/256 measures.
The KARMA architecture moved on to the Korg M3 and OASYS (IIRC) and is very interesting. There are some good videos on the topic and Stephen Kay I have found to be very approachable in the forums (albeit a decade ago).
There was even software to make your computer "double" what the KARMA was doing so you could expand to even more channels/instruments...days like this, I miss my synth.
edit: KARMA = Kay Algorithmic Realtime Music Architecture, and it was a variant of the Korg Triton system.
I'm a programmer. I also enjoy occasionally producing electronic music, usually within the trance genre. Nevertheless as a good programmer I'm lazy and that reflects in my music creation. I'm a minimalist, but minimalist trance doesn't sound so good in my opinion. Many layers are often needed, with complex melodies, if you want it to sound professional.
On the other hand, acid techno seems to be minimalist by design, I love it. If I can create it programmatically even better. Mixing automation with some human interaction seems to be the best of both worlds. I'm in love with this and feeling like creating my own acid techno production tool, with a mix of automation and interaction.
I don't know anything about acid techno. Can someone comment on how accurate this is to the genre? I can't seem to get it to do much but make random beeps and boops, but maybe that's just the style of music?
This is pretty much how it sounds - the particular synthesizer and tweaking its parameters (resonance, filtering, and so on) is the main defining feature, the particular classic drum machine used typically goes along with it - but there’s a lot more variation you can do besides just slowly modulating it like this program does, as well as crossovers to other genres (using that squeaky synth sound in other genres of electronic music).
This anxiety inducing song by Evol basically switches between samples of different acid tracks every other beat, serves as a nice primer for how you can vary it lol
Thanks for the examples. I can see why this website is so impressive now, it sounds pretty spot on to the samples! That's absolutely mind blowing to me.
This is a good algorithmic approximation of the quality of 50-80% of sub-professional acid techno live sets. It's a great minimum bar for "should you be performing live": you should at least be more interesting than this webpage of random patterns and sweeps.
To be honest, I'm going to leave this playing a lot. This is an extremely representative sample of a very specific minimal style, one I love a lot.
Wow, excellent execution. I listen to a lot of acid and am amazed how good this sounds. Great way to discover interesting patterns for further music making as well I can imagine; how would that work with copyright though?
> This version of Jasmine by Jai Paul is created using the BRONZE AI engine. On each listen, Bronze performs a unique and infinite playback of the piece.
> Bronze is a new technology that allows music creators to utilise AI and machine learning as creative tools for composition and arrangement. Bronze is also an audio file format which will revolutionise music playback, enabling artists to release non-static, generative and augmented music.
What a perfect song to pick for this experiment. It's extremely listenable. Jai Paul is a genius and my biggest problem with him is that he doesn't produce enough songs. His story is an interesting one, if you are interested. His laptop with his music was stolen and the music was leaked, and he disappeared from the scene for years afterward.
This has an extra layer to it in that a lot of the original experimentation was done with the 303 generating random patterns by removing the batteries for a period of time to reset the memory.
If anyone is looking for raw data they could potentially cut down to a training set of data, Marc Rebillet aka "loop daddy" regularly improvises songs by incrementally creating and then overlaying small one to three bar loops. He doesn't specifically make acid / techno music but some of it is EDM.
"It's so easy to get acid, you can get it anywhere!"
And now you truly can get fresh acid anytime, anywhere! Absolutely amazing project. This is actually like... 75% passable, and better than most entry level stuff I've heard.
I've wanted this to exist for like 15 years. Tried my hand at it with Reason back in like, 2012, got kinda there with RPG-8, Jupiter and way too much automation, but it was just way too limiting of a medium back then. This makes me want to take another stab at it.
Some minor suggestions:
- on mobile: Not sure which interfaces are clickable but nonresponsive, vs not clickable at all. E.g. BPM knob didn't do anything for me on mobile, but worked fine on desktop
- I realize this might be hard for browser, maybe with a node/electron app, but MIDI out would be dope.
- Distortion module
- A simple mix-out EQ would be nice.
- Constraints to make sure one osc is always on a bass part
- third osc? :)
- What would really sell it is some lead-in before a pattern change. E.g. When the cycler starts to flash red (especially when they are all about change), occasionally toss in some snare fills/crescendos
If you had a sample board, you could easily convince someone this was made by a human. Maybe inexperienced, but still very much human.
I'm collaborating with this company working very much on this space: giving artists simple tools to make music that is non-linear and generative, where the artist expresses structure and composition, and the machine is the performer: https://bronze.ai
As a fan of Acid music this is actually pretty good. It even has "drop" measures where the drums disappear and the lead goes through a dynamic cutoff filter. Every now and then a bad lead/bass combination does pop up.
An implicit good quality in an app is having it make you feel like you have super powers. For a brief moment I could see myself in Ibiza, taking over a gig on short notice and making it work by using this app. Bravo to the author.
If someone automates music creation in the popular styles and then automates submitting it for copyright protection, then in not so distant future creation of any music will be illegal, as every nice sounding melody will be claimed.
In a way this is what large labels do. They release a ton of similar sounding music just for that reason.
I think the patent trolling equivalent in music world has not been discussed much.
Nice, someone should hook this up to a real 303 and 909 with Web MIDI API... heheh :) Also this synth line with heavy delay sounds like something from Velvet Acid Christ, haha
https://iphonesoft.fr/2009/12/29/easymix
This wasn't AI-based (it was 2009), but with a human at the controls you ended up with the same "pre-synchronization".
It turned out the difficulty in this project was more to a) find a source of decent samples, and b) get them to the right sound quality / length for synchronization.
Anyone have any recommendations for reading material for an experienced programmer that wants to learn how to start to write procedural music? My kid is both learning to program and learning to sequence music I would love to be able to teach him how to combine the two. The last time I did any procedural audio generation was back in 1995 and I’m sure things have changed since then.
Reminds me of how the music in Rez for each stage is a single short loop. It's longer than a bar; I think it might be four bars or so. A 32-bit bitmap for each "layer level" controls which of the loop's 32 MIDI channels are active, creating the illusion of an ever-shifting soundscape.
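The Rez-style layering described above is a neat trick to sketch: a 32-bit mask decides which of a loop's 32 channels are audible at each layer level, so one short loop yields many textures. The masks below are invented for illustration; the real game's values are not public as far as I know.

```javascript
// Sketch of bitmask-gated layering: each "layer level" is a 32-bit
// mask over the loop's 32 channels. Mask values here are made up.
const layerMasks = [
  0x00000003, // level 0: just channels 0-1 (say, kick + bass)
  0x0000001F, // level 1: add a few more channels
  0x00FF00FF, // level 2: most of the mix
  0xFFFFFFFF, // level 3: everything
];

function activeChannels(level) {
  const mask = layerMasks[level] >>> 0; // force unsigned 32-bit
  const channels = [];
  for (let ch = 0; ch < 32; ch++) {
    if ((mask >>> ch) & 1) channels.push(ch);
  }
  return channels;
}
```

Stepping the level up or down as the player progresses is what creates the "ever-shifting soundscape" from a single static loop.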
Really cool project. I have been spending most of my free time lately on a similar side project: https://neptunely.com. It takes midi files as input and generates original samples that fit with your composition.
That's pretty impressive. Take it to the next level: pattern of patterns. Your level-1 patterns look like a sum of simple repetitions. A level-2 pattern would be a simple repetition of level-1 patterns. And so on.
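The "pattern of patterns" idea can be sketched in a few lines: a level-2 sequence is just a repetition of level-1 patterns, which are themselves short step sequences. The concrete motifs and the AABA arrangement below are invented for the example.

```javascript
// Hypothetical level-1 patterns: short MIDI-note step sequences.
const A = [36, 36, 48, 36]; // a simple motif
const B = [36, 39, 48, 51]; // a variation

// Level-2: arrange level-1 patterns, here in an AABA shape.
// The same recursion could continue: level-3 arranges level-2
// sections, and so on, giving structure at every time scale.
const song = [A, A, B, A].flat();
```

Higher levels reuse the same mechanism, so the generator gets the "fractal-like layers of patterns" other commenters were missing, instead of a flat stream of unrelated bars.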
Anybody want to ELI5 the general concepts of how this is composing the loops? E.g. I don't know what 303-01, 303-02 and 909-XX are, or what the notation in those rows means.
I never listened to this style of music before.
I honestly do not really like it.
BUT: Listening to this music makes me focus very hard on my work. I love it!
Sounds are great. The only major thing that felt "off" is the overall composition over time. Just needs some more variety in structure over time and it's golden.
It is cute that this is implemented in the browser, but to be honest, this does not sound any different from a whole load of generative modular synth patches out there.
Cute. All we need is for it to have maybe a couple more tracks and an option to instead randomize each track every 3-6 full bars, and I can leave it on all day...
To me, the end goal is to be able to procedurally generate live Grateful Dead (fair amount of training data) and Shpongle (not as much, but fairly repetitive).
I'm fascinated by the implication that in the not so far future we'll reach a point where we can teach a computer to generate arbitrary works of art endlessly.
Want new episodes of Seinfeld? Just feed the existing episodes to an AI and ask it to make new ones. Beatles songs? Ditto. Spotify can look at your playlists and not only recommend tracks that you might like, but actually create some artificially. Post-cultural-scarcity.
Of course by then we might already have reached the Singularity so it'll be a minor aspect of a gigantic revolution.
I'm endlessly fascinated how we on HN have the arrogance to assume we can do things like auto-generate Beatles songs or episodes of Seinfeld. Do you think comedy writers are sitting around the writing room saying "Man, in 5 years we'll be able to take over those nerd jobs, no problem."
To auto-generate Seinfeld or Beatles... you have to be able to create as well as Seinfeld or the Beatles. Try it sometime. When you make a billion dollars and change popular culture, answer this post and I'll eat crow.
Then you need to be able to do what no one else in history has been able to do, which is bottle that skill.
Dude, most of us can't even figure out what kind of nails we'd use for exterior siding on a chicken coop.
I wasn't talking about hand-written algorithms like the article is using, I'm talking about machine learning. TFA made me think about it because we've seen a lot of "this X does not exist" lately. I should've been more specific about what I had in mind, I agree that it's a reach with this particular example.
Sure, we're still a long way away, but progress has been exponential. It'll probably take decades but I suspect that we may see something like that in our lifetimes.
Interesting project, but terrible UI. It isn't immediately clear what you are doing; you really have to struggle to comprehend what the adjustments do. Some of the knobs don't appear to do anything.