This is a weird title for the book, because there's very little musical content in it at all. It's about sound synthesis and signal processing. It's audio engineering, which is a nice skill to have for music making, but it's not music theory.
Any and all formal, mathematical, or informal theory about music can be called music theory. Music theory is about modeling music. Period. It does not matter if it's about harmony, rhythm, pitch, form, timbre, dynamics, or some other aspect of music. Whether it helps with understanding music, composing music, improvising music, etc. is a separate topic. Music theory neither has to be about practical music-making skills, nor does it have to be about music of a particular artistic tradition. It just needs to present some model that can be a helpful tool in some musical context. Maybe what you call "audio engineering" is a specialized skill for some musical traditions, but for musical traditions where the expressive content primarily comes from timbre and synthesizers are common instruments, it will be an essential music-making skill.
It's actually a good analogy. It doesn't mention "timbre" and doesn't claim that timbre is not part of the compositional process. A classical composer can indeed specify timbre to a certain degree, and modern composers created new kinds of scores which offered more ways to specify many features related to "timbre", but in formal musical education there is still a difference between composing music, playing music, and building instruments.
No, you're simply wrong. This is how a "classical Western" musician thinks of music, but this is not necessarily what music is. Timbre is the main expressive content in many cultures.
Maybe there is a difference in how formally trained musicians and computer scientists see it. But I actually don't see a contradiction between your statement and what I've written.
And don't forget that Miller Puckette also comes from the Western musical tradition and developed important works at IRCAM.
The difference is in the model. The same way you can model mechanics with Newtonian mechanics, or statistical mechanics, or quantum mechanics and each of them can be useful in different scenarios, and irrelevant in others.
If you're making Western classical music in a classicist, romanticist or modernist style, the model of music you have will carry a lot of information about harmony and the application of harmonic techniques throughout the piece. Given a core musical idea you can then apply peripheral techniques (such as orchestration) to build a full piece. E.g. when people study counterpoint, the model of music originates from the vertical harmony of notes and how they can be used with respect to each other. The assumption is that orchestration is something that'll be developed separately, "skinning" the composition. A common technique in this tradition is composing a piece for piano four hands and then orchestrating it (e.g. Holst's orchestral suite "The Planets" was composed this way).
However, this stops being a useful model once you step into other musical traditions. In some cultures harmony would be treated the way Western music treats orchestration: peripheral to composition (like how extreme speed is irrelevant to Newtonian mechanics, because it was never designed for near-lightspeed motion). So you'd first design timbres, and have an idea about how timbres interact, change, and transform into each other. You may have a theory of counterpoint of timbres. Once you have this, you can apply any standard "harmony skin" on the composition and you have a piece. This is not even restricted to non-Western music. If you look at postmodernism in Western music you'll find instances of it. Easy example: a lot of people say that Philip Glass "makes the same music" again and again. What's being missed is the point he's trying to convey: even if you pick the exact same 4 chords, you can still create variation in music via other means. That variation just won't come from the traditional harmony-centric Western musical model.
By the way, I studied CS and my full-time job is a Software Engineer. So I doubt our disagreement comes from my background in computer science.
> So you'd first design timbres, and have an idea about how timbres interact, timbres change, transform to each. You may have a theory of counterpoint of timbres. Once you have this, you can apply any standard "harmony skin" on the composition and you have a piece
I asked the same question above, because I'm not sure if you're alluding to the same thing here or something different. May I have some examples of traditions which do this, with something to go listen to?
You seem to be hell-bent on disagreeing. Let's invest our time better, for example, making music. Do you have any musical works that can be listened to?
This is exactly what I'm talking about. In Western music timbre is akin to fonts. You have a composition for piano, you play it, record it in MIDI, and reskin it with some other timbre in the studio. This is an extremely Western way of looking at music. There are countless cultures where timbre is the "main" part of the music, where harmony and/or rhythm would be the fonts/reskins, and timbre is the main juice composers and improvisers try to squeeze out. This distorted view of music is rooted in 18th/19th century beliefs that non-Western art is "primitive" art, even though every single culture known to humanity has a unique musical tradition. This is an extremely anti-humanistic way to look at music.
I think there is just a certain kind of ambiguity in the word "theory". Miller is really focused on the theory of sound synthesis and does not really deal with composition or aesthetic theory. People who are more interested in the latter might enjoy "Composing Electronic Music: A New Aesthetic" by Curtis Roads (https://global.oup.com/us/companion.websites/9780195373240/b...).
I think in order to have a book about music theory, there should be some explanation of how to make music for some particular expressive purpose, not just the technical details of how to make sounds. A guide to constructing a piano is not music theory. I don't care if it's classical music or techno or gamelan, or if the theory is formal or traditional: if there's not some discussion of how and why to express musical ideas, it's a technical manual and has very little to do with music.
The traditional concept of notes (and accessory ones like scales and traditional notation) marks the boundary between "traditional" music theory, that treats notes as the final result (when you have notes written down, it's only a matter of concretely playing them with a given instrument) and theory of electronic synthesis, which treats notes as an input, both optional and taken for granted, and audio signals as the product.
Traditional music didn't even have the concept of notes in the way modern music does. Of course, modern music developed over hundreds, perhaps thousands, of years, and along the way it took common themes in traditional music and called them notes, but real traditional musicians can't think of notes in the same way as modern music does.
Notation systems were also further developed. Many famous composers came up with their own scores for their electronic or electro-acoustic compositions (see e.g. Xenakis or Stockhausen).
Huh? There's a huge amount of theory and writing about electronic music that isn't just technical. See Mark Fell's PhD thesis, for example.
Your comment seems to suggest the other person is ignorant, but really it just shows your ignorance of theory and writing about experimental and electronic music. Not all music theory is western classical.
I mean, how do you even consider Stockhausen and Xenakis from your perspective?
You're misunderstanding what I'm saying. Music theories from other cultures are of course music theory as well. Each composer can (and almost always will) have their own idiosyncratic music theory, too. There is nothing contradictory here.
Also, I'm a composer with extensive knowledge of how to make, orchestrate, and mix acoustic or electronic music. This thread has an extreme Western bias: just because something is studied in a particular way from a Western music theory perspective, it doesn't mean it has to be that way.
That comment just seems like an overly disparaging and ignorant take on this so-called “western music”, which you claim is somehow anti-human. Why would I care what you think about electronic music when you think Western music is anti-human?
Experimenting with timbre and the nature of sound itself is absolutely musical. That's a big reason why people love to listen to many different kinds of electronic music in the first place (or things like heavily distorted guitars).
Music is not just about combining 12TET pitches in different ways. Everything about the experience of music is fair game for creative expression.
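To make that concrete, here's a minimal Python/NumPy sketch (my own illustration, not from the book or the thread; all names are made up) of two tones with the exact same 12TET pitch, A440, but completely different timbres, just by changing the harmonic spectrum:

    import numpy as np

    SR = 44100  # sample rate in Hz

    def additive_tone(freq, partial_amps, dur=1.0, sr=SR):
        # Sum of sine partials at integer multiples of freq.
        # partial_amps[k] is the amplitude of harmonic k+1.
        # Same fundamental => same pitch; different amps => different timbre.
        t = np.arange(int(dur * sr)) / sr
        tone = sum(a * np.sin(2 * np.pi * (k + 1) * freq * t)
                   for k, a in enumerate(partial_amps))
        return tone / np.max(np.abs(tone))  # normalize to +/-1

    # Identical pitch (A440), very different spectra:
    flute_ish = additive_tone(440.0, [1.0, 0.2, 0.05])           # few, weak overtones
    brass_ish = additive_tone(440.0, [1.0, 0.9, 0.8, 0.7, 0.6])  # rich in overtones

Write either array to a WAV file and a tuner will read them as the same note, but they sound nothing alike - which is exactly the expressive dimension being pointed at here.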
During my university studies, I took courses in electro-acoustic music composition. A significant amount of time was spent on synthesis and signal processing because those were critical elements in these kinds of compositions.
It's absolutely different from composition for traditional instruments in this regard, because the sounds you compose with are created by the composer just as much as the notes, rhythms, and structure of the composition are.
The first sentence of the foreword gets right to the point of what the book is about:
"The Theory and Technique of Electronic Music is a uniquely complete source of
information for the computer synthesis of rich and interesting musical timbres."
Whereas tools like Max Mathews' MUSIC programs and their successors (Mathews is, btw, the author of the foreword) clearly separate music composition from instrument building (i.e. sound synthesis), later tools like Max, Pd or SuperCollider blur this distinction. Nevertheless the distinction is still maintained by all institutions where electronic music is studied and performed (e.g. IRCAM).
> "The Theory and Technique of Electronic Music is a uniquely complete source of information for the computer synthesis of rich and interesting musical timbres."
It's really a great book, but it is far from "complete" as it omits some very important synthesis techniques - most notably granular synthesis and physical modeling! To be fair, no single book could cover the entire spectrum of electronic sound synthesis. The second edition of "The Computer Music Tutorial" by Curtis Roads (https://mitpress.mit.edu/9780262044912/the-computer-music-tu...) comes close, but it is a massive book with over 1200 pages and literally took decades to write. (The second edition was published 27 years after the first!)
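For anyone curious what those two omitted techniques actually do, here are two toy Python/NumPy sketches (my own illustrations, not from either book): a bare-bones granular synthesizer that overlap-adds short windowed grains taken from a source sound, and a Karplus-Strong plucked string, which is about the simplest physical model there is:

    import numpy as np

    def granulate(src, out_len, grain_len=2048, density=200, sr=44100, seed=0):
        # Bare-bones granular synthesis: overlap-add short windowed
        # "grains" copied from random positions in a source sound.
        # Assumes len(src) > grain_len and out_len > grain_len.
        rng = np.random.default_rng(seed)
        out = np.zeros(out_len)
        window = np.hanning(grain_len)  # fade each grain in and out
        n_grains = int(density * out_len / sr)  # ~density grains per second
        for _ in range(n_grains):
            src_pos = rng.integers(0, len(src) - grain_len)
            out_pos = rng.integers(0, out_len - grain_len)
            out[out_pos:out_pos + grain_len] += src[src_pos:src_pos + grain_len] * window
        return out / np.max(np.abs(out))

    def karplus_strong(freq, dur=1.0, sr=44100, seed=0):
        # Toy physical model of a plucked string (Karplus-Strong):
        # a noise burst circulates in a delay line whose length sets
        # the pitch; averaging adjacent samples acts as lowpass "damping".
        rng = np.random.default_rng(seed)
        period = int(sr / freq)
        buf = rng.uniform(-1, 1, period)  # the initial "pluck"
        out = np.empty(int(dur * sr))
        for i in range(len(out)):
            out[i] = buf[i % period]
            buf[i % period] = 0.5 * (buf[i % period] + buf[(i + 1) % period])
        return out

Real granular engines add per-grain pitch shifting, envelopes and parameter randomization, but the overlap-add loop above is the core idea.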
What I find really cool about Miller's book is that all examples are written in Pd so anyone can try them out and experiment further.
On the matter of institutions: IRCAM is the paradigmatic example of composer / technologist role demarcation, but I would question whether this extreme position "is still maintained by all institutions" -- it certainly was not at my alma mater and I doubt at UCSD either. As you say, Max (coincidentally a product of Miller Puckette and IRCAM) and its more recent ilk have empowered composers to independently build their own instruments, and this practice has been ongoing within the academy for at least 35 years now.
As someone who studied computer music in the mid 2010s I can second that! All the composers in my generation who use live electronics do it themselves.
The divide between composer and programmer has disappeared for the most part, and I think the main reason is that both hardware and software have become so affordable and accessible. Back in the old days, you needed expensive computers, synthesizers and tape machines, and people who could assist you with operating them. Today, anyone can buy a laptop and learn Pd / Max / SuperCollider!
That being said, institutions like IRCAM still have their place, as they allow composers to work with technology that is not easily accessible, e.g. large multi-channel systems or 360° projections. They also do a lot of research.
> Today, anyone can buy a laptop and learn Pd / Max / SuperCollider!
And anyone can buy a laptop and contribute to the development of Pd, SuperCollider, Chuck, et al.
Not sure how much overlap there is between those two groups. Arguing against my earlier point: there still seems to be a separation between music systems users and music systems developers.
> there still seems to be a separation between music systems users and music systems developers.
That's true, but just like a pianist typically doesn't need to build their own piano, computer musicians rarely need to build their own DAWs or audio programming languages. However, computer musicians do build their own systems on top of Pd, SC, etc. and these can evolve into libraries or whole applications. So the line between computer musicians and audio application developers is blurry.
That being said, I can say for sure that only a few computer musicians end up contributing code to Pd, SC, etc., simply because most of them have no experience with other programming languages and are not really interested in actual software development. Of course, there are other important ways to contribute that are often overlooked, like being active on forums and filing bug reports.
Maybe I'm a bit biased because I was there for a study visit in the eighties. Of course it depends on the use case: if the composition is fully electronic, the composer can essentially be the same person as the performer, conductor and producer, so there is no big need for a score; live coding goes even further and "the composition" emerges during the performance. Specific tools have been implemented for these use cases (e.g. Stanford has a long tradition of building such tools).
this assumes the pre-eminence of even-tempered music based on european art music traditions and its associated staff notation. this is extremely limiting when considering the breadth of music that exists in the real world.
this is a theory of music, and while most pedagogy will reinforce the special position of this system, it is not THE theory of music. there are alternative systems of notation. there are harmonic systems that incorporate tones that do not exist in even-tempered western scales. there are drumming traditions that are taught and passed down by idiomatic onomatopoeia.
this is especially apparent in electronic music where things like step sequencers obviate the need to know any western music notation to get an instrument to produce sound.
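to make that concrete (my own sketch, not from any real sequencer api; play_step is a hypothetical stand-in for triggering a sound): a step sequencer is really just a pattern array scanned in a loop. in rough python:

    import time

    # a 16-step drum pattern: rows are sounds, columns are steps.
    # 1 = trigger, 0 = rest. no staff notation involved.
    PATTERN = {
        "kick":  [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0],
        "snare": [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0],
        "hat":   [1,0,1,0, 1,0,1,0, 1,0,1,0, 1,0,1,0],
    }

    def play_step(name):
        print(name, end=" ")  # hypothetical stand-in for triggering a sample

    def run(pattern, bpm=120, bars=1):
        step_dur = 60.0 / bpm / 4  # 16 steps per bar = 16th notes
        for _ in range(bars):
            for step in range(16):
                for name, steps in pattern.items():
                    if steps[step]:
                        play_step(name)
                print()
                time.sleep(step_dur)

    run(PATTERN)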
the western classical tradition is a pedagogically imposed straitjacket. it's important to keep a more open mind about what music actually is.
The book is from basically the “experimental music” school of electronic music. The idea was/is that music will be completely transmuted by electronics and computers, leaving traditional music behind. Here “traditional music” means almost everything people actually listen to, from orchestras to GarageBand electronica to pop.
The claim may be a bit aspirational right now, but in theory “electronic music” subsumes all music. Or enlarges music so much that traditional musical ideas are special cases, not necessarily relevant.
I’m trying to pitch this properly as a very cool concept. But I no longer believe it will happen in my lifetime.