I'm not sure I follow your argument, because neither synthesizers nor recordings write music.
For augmenting composers, sure, GenAI can be a tool like any other. Musicians have been incorporating rhythms and melodies shipped with their electronic instruments for ages.
Entire genres have been defined by sounds and synth presets, too.
So I do see some of the similarities you describe, but I think the comparison is largely misleading.
> I'm not sure I follow your argument, because neither synthesizers nor recordings write music.
The argument is that each technological advance was met with resistance followed by adaptation. "Recorded music" was arguably as paradigmatically disruptive to "live music" as "AI-generated music" will be.
The two are fundamentally incomparable beyond the surface-level fact that "things will be different". Recorded music changed the way we experience music. AI tools may change the way we make music.
From my perspective the implications of this are dire. AI can completely remove the human element. The skill, creativity, and collaboration required to produce music are a big part of my appreciation for it. Once that's gone, when Spotify can generate exactly what I want to hear, music as we know it loses its value.
You're not wrong. A paradigm shift is not an incremental change but a disruption of fundamentals.
> From my perspective the implications of this are dire.
These changes are scary, especially as people try to come to practical terms with the new reality.
> AI can completely remove the human element. The skill, creativity, and collaboration required to produce music is a big part of my appreciation for it.
I still hate autotune. I feel that it ruined music. But, on the other hand, it allowed people who were excellent musicians but terrible singers to make excellent music, even masterpieces. I don't think autotune was a paradigm shift really, but it was pretty disruptive.
People are deeply creative, social, collaborative, musical, artistic, hierarchical, and status-conscious. These traits will always drive people to make music, share it, and derive meaning from it. People will still pay other people for the music they make.
Photography utterly disrupted the social role that painters held as documenters. No one needed to hire a good painter to have a portrait. They could hire a photographer more cheaply for more accurate documentation. Artists working in the medium of painting really had to grapple with the question of what art is, if academically faithful representation of reality is no longer valued by society. Painting thus began to change. Impressionism led to Suprematism, Dadaism, Surrealism, Abstract Expressionism, Conceptualism, Modernism, Postmodernism.
Artists today will have to grapple with similar questions raised by AI generated art. But humans are creative, indomitable, curious and tenacious. I am absolutely excited to see the art that future human artists will make in the face of all of this.
This intent was implicit in the public statement of Ek, Spotify's CEO, when he said that they're not going to ban AI-generated music per se (and there is already plenty of it on Spotify).
Somehow this nudged me a bit toward switching to YTM, although the bundle with YT background playback on iOS was probably the bigger nudge.
There's plenty of AI-generated music on YouTube as well, but for the moment their recommendation algorithm and catalogue are just better for me.
For now, it also doesn't seem to suggest any AI-generated music to me whenever I use autoplay or suggestions.
I'm not falling for the illusion that this won't change though.