
To me this is just utterly ridiculous.

In the field that I work in (audio software) there are dozens of things that we can do in software now that were unimagined even in 1990. Polyphonic note editing? Utterly transparent time stretching? Even just the ability to do huge amounts of serious DSP on generic CPUs has completely altered the entire landscape of music production (and the software that is used to do it).

The same is true of so many other fields.

What hasn't changed much is the sort of software that is concerned with entering, editing and generating reports on data. The business/consumer focused versions of this stuff have changed from being built with tools provided by Oracle in the 1980s to using various web stacks, but the basic model - data input, fetch some data and present in a GUI - remains unchanged. And perhaps that's because the task hasn't really changed much, and what we have is actually fairly good at doing it.

But switch over to other areas where data-driven applications are important - many scientific disciplines for example - and even there you will find huge expansions in what is possible, particularly in terms of visualization (or auralization) as a way to help humans explore data.

And FFS, Google freakin' maps! Yes, something like it existed 10 years ago, but have you actually used it while driving recently? It's not bringing about world peace or solving hunger, but good grief: it is such a marvel of the composition of so many different areas of CS and software engineering into one incredibly user-friendly tool that I don't know what to say other than "use the 2nd lane from the right and then turn left at the light".




I don't know why, but your extreme excitement for Google Maps really made me smile.

It's important not to take really good pieces of software and services for granted, yet it's something we all do every day.


> It's important not to take really good pieces of software and services for granted, yet it's something we all do every day.

This is so true. In fact, the user to whom you are replying is the creator of one of those really good pieces of software: Ardour [0].

If others who read this comment are users of Ardour, please consider doing a $5 monthly donation to Paul on PayPal.

[0] https://ardour.org/


Google Maps is so much better and so much more impressive and so much more useful than Ardour that it's not even funny!

Thanks for the plug, even if it feels a bit out of place here (and most of our supporters only pay $1/month, which is fine too).


> In the field that I work in (audio software) there are dozens of things that we can do in software now that were unimagined even in 1990.

Consider that these things aren't really due to the practice of making software becoming better, but rather simply that hardware has become ludicrously powerful so as to enable this at all. It's the hardware portion of IT that deserves the gold medal here.


That's not really true of the first two things I mentioned. These required significant evolution in the DSP/math involved. In 1990, timestretching existed, but generally created artifacts. Polyphonic note editing (as implemented first by Melodyne) didn't exist and wasn't really even imagined.

It is true that they both benefit from more powerful hardware, but these examples required significant advances in software too.
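To make the artifact point concrete, here is a toy overlap-add (OLA) time-stretcher of roughly the 1990 variety, sketched in Python with numpy (the function name and parameters are mine, purely illustrative, not from any real product). Because it overlap-adds frames with no phase alignment, it exhibits exactly the smearing and warbling artifacts that later DSP advances were needed to eliminate:

```python
import numpy as np

def ola_stretch(x, rate, frame=1024, hop=256):
    """Naive overlap-add time stretch: read analysis frames at
    hop*rate, write them at hop.  rate < 1 slows the audio down.
    No phase alignment between frames, hence audible artifacts."""
    win = np.hanning(frame)
    out = np.zeros(int(len(x) / rate) + frame)
    norm = np.zeros_like(out)
    t = 0  # write position in the output
    while int(t * rate) + frame <= len(x) and t + frame <= len(out):
        a = int(t * rate)  # read position in the input
        out[t:t + frame] += x[a:a + frame] * win
        norm[t:t + frame] += win
        t += hop
    norm[norm < 1e-8] = 1.0  # avoid dividing by zero at the edges
    return out / norm
```

Half-speed playback (rate=0.5) roughly doubles the length; getting the same result without the phasing is what a phase vocoder, or Melodyne-style analysis, is for.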


And what novel possibilities have Polyphonic Note Editing and Utterly Transparent Time Stretching brought to music? All that has been accomplished is a ~4% YoY reduction in the cost and time of producing Mass Media, which requires a massive volume of production work. These are conveniences which are, and always have been, utterly conceivable consequences of sufficient engineering hours, not revolutions or fundamental changes in our relationship to the world. Even if some of them are only now coming into people's minds, all those changes were already conceived of and implemented 20+ years ago.


Where did it say in the "let's create software" contract that the only acceptable goal was "revolutions or fundamental changes in our relationship to the world"? As I said in another comment in this thread, I understood the role of computers to be helping humans with tasks they wanted done, by doing the things that computers are good at. If that happens to include revolutions or fundamental changes, fine, but it definitely includes a lot of other things too.

You are right that the particular examples of audio software capabilities do not in and of themselves bring anything in particular to music.

But the timestretching stuff has totally changed how huge numbers of people now make music, because they can work with audio that is in the wrong key and/or at the wrong tempo, without really thinking about it.

Do I think that this results in an aesthetic leap forward for music itself? I do not. In fact, probably the opposite in many senses. But that is true of so many human technologies, not just software. Some people would even argue that the advent/invention of well-tempered tuning (and the concomitant move away from just intonation) hurt music in the West, and that was just as much the result of "sufficient engineering hours" as anything in the software world.

Also, just to correct you, 20 years ago, I guarantee you that nobody, absolutely nobody, believed that you could ever do polyphonic note editing. When Melodyne first demonstrated it, most people who knew anything about this just had their jaws hit the floor. It was absolutely not an "utterly conceivable consequence", even though in retrospect of course it now all seems quite obvious.


The whole premise of the comment you were replying to was that we were spinning our wheels, which is not an outright denial of progress but a characterization of it. You responded by marveling at achievements you find remarkable and unappreciated. I responded by characterizing those as having some qualities of remarkability and novelty which ultimately fail to exceed wheel spinning. You are now accusing me of having set up an unreasonable standard of progress.

There are two characterizations of the progress in this domain which I believe are likely, though not certain, accounts; maybe both are partially true, maybe just one, but both are contained by the wheel-spinning metaphor. One is that progress is being made, but it is not forward: it is the digging of a deeper and deeper hole that makes actual progress more difficult. The other is that progress is being made, but it is the progress of an infinite series that asymptotically climbs from 1.0 toward 1.1.

There were genuine discoveries, inspiration, and novelty required on the road from note identification to chord identification and deconstruction that could be reversed and reconstructed with reasonable accuracy. That does not mean we are doing anything more than refining and improving the accuracy of processes we were already performing in rudimentary form 20+ years ago. I'm not saying there isn't progress, only that the progress is limited, and we are now reaching toward the same limits we had begun approaching at the inception of the programmable machine, with no escape in sight.

https://www.jstor.org/stable/3679550?seq=1

"The real power of a neural net is its ability to compute solutions for distributed representations. In most cases, the solutions for these complex cases are not obvious. The pitch class representation of pitch is a local rather than a distributed one. In this case a possible solution for the chord classification problem is apparent without the use of a learning algorithm. A net containing 36 hidden units, one representing each of the possible major, minor, and diminished triads, could be constructed so as to map chords to chord types. Thus our interest in using a pitch class representation was not to find this obvious solution, but to find a solution which used a minimum number of hidden units. We hypothesized that three hidden units would be adequate and that the hidden units would form concepts of the intervals found in triads: i.e., major third, minor third, perfect fifth, and diminished fifth.

Each pitch-class net used 12 input units to represent the 12 pitches of the chromatic scale and 3 Output units to represent chord type. The number of hidden units and the values of the learning parameters are summarized in Table 1 for each of the eight pitch class nets discussed. Net 1 had an adjacent layer architecture as shown in Fig. 2 and three hidden units. It identified 25 percent of the chords after more than 11,000 learning epochs. When a fully connected architecture was used in conjunction with three hidden units in Net2, 72 percent of the chords were identified after 2,800 learning epochs. "
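The "obvious solution" the quoted passage sets aside — 36 hand-built hidden units, one per major, minor, or diminished triad — is simple enough to write out. A sketch in Python with numpy (all names here are mine, not from the paper): each threshold unit fires when all three of its triad's pitch classes are present, and each output unit ORs together the 12 hidden units of its chord type.

```python
import numpy as np

# Interval patterns (in semitones above the root) for each triad type.
TRIADS = {"major": (0, 4, 7), "minor": (0, 3, 7), "diminished": (0, 3, 6)}
TYPES = list(TRIADS)

W1 = np.zeros((36, 12))  # 36 hidden units x 12 pitch-class inputs
W2 = np.zeros((3, 36))   # 3 chord-type outputs x 36 hidden units
for ti, intervals in enumerate(TRIADS.values()):
    for root in range(12):
        h = ti * 12 + root            # one hidden unit per (type, root)
        for iv in intervals:
            W1[h, (root + iv) % 12] = 1.0
        W2[ti, h] = 1.0

def classify(pitch_classes):
    """Map a set of pitch classes (ints 0-11) to a triad type, or None."""
    x = np.zeros(12)
    x[list(pitch_classes)] = 1.0
    hidden = (W1 @ x >= 2.5).astype(float)  # fires iff all 3 triad pitches present
    out = W2 @ hidden
    return TYPES[int(out.argmax())] if out.max() >= 0.5 else None
```

Here classify({0, 4, 7}) returns "major" (a C major triad), while a cluster like {0, 1, 2} matches nothing. The paper's point stands: the interesting question was never whether this table-lookup net exists, but whether three hidden units can learn a distributed encoding of it.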

https://secure.aes.org/forum/pubs/conventions/?elib=11400


Almost all progress is incremental. Look at it from one angle, and it looks like "spinning our wheels". Look at it from another angle and it looks like almost all progress in almost all fields of human endeavor.

Neither of those papers cover any of the technology or ideas behind what Melodyne introduced with polyphonic note editing, which allowed the editing (in time and/or pitch space) of a single note within the audio of a polyphonic performance.

I'm entirely fine with saying "getting computers to do things humans have done for a long time isn't really progress". I'm not sure it's true.


You raise an outstanding point here, in that DAWs do tend to simply automate away the pain-points of making music as conceived of by people before the advent of personal computers. However I would caution against applying this mentality to Paul's project, which if anything is doing the most of any DAW out there to fight against those very conditions.

One of the primary problems with DAWs as initially conceived was that they were closed, proprietary systems built on stupid-expensive hardware just to open the dang application. This helped facilitate the inescapable bubble in which the Mass Media finds itself today, playing right into their competition-killing hands. So of course the world was stagnant for 20+ years, since the only people who had access to this software were "audio professionals," who had the creativity, ingenuity, and passion of a wet noodle. And they did predictably lame things with it all.

When only Kanye West and T-Pain had access to polyphonic note editing, it was pretty lame indeed. But access is, in itself, novel. The world has since changed considerably, and we have projects like Ardour in part to thank for this.


c'mon, we live in an era where people get Grammys for albums they could record entirely in their bedroom. Just 15 years ago you had to book professional studio time with an engineer at ~€1000/day for the good ones around where I lived.


upvoted for the implicit Jacob Collier reference, whether you knew it or not :)


eh, had Billie Eilish in mind but that works too haha


> The same is true of so many other fields.

The Dunning-Kruger effect can be fiendishly subtle. So many people are limited by their experience that they don't stop to think (as you have) and imagine what fields outside their own have done with information systems.

I agree with the article that in the mainstream web/app ecosystem, there is a lot of unnecessary trash. Through my own experience, I've seen duplication of libraries and APIs that just frustrates. On those fronts, yes, it would be nice to have a little less paradox of choice.

But as you have graciously pointed out, there are domains undreamt of by the author that wouldn't give up the progress of the last 70 years for any amount of gold, and have much need of software still. Thank you for your examples.


> Polyphonic note editing? Utterly transparent time stretching? Even just the ability to do huge amounts of serious DSP on generic CPUs has completely altered the entire landscape of music production

Right, so now what is popular music today? Vocalists who can't sing in tune without help from technology, and musicians who never play more than a few bars at a time in studio because it is all stitched together digitally, and live performances that are just lip-synching to a playback. It's artificial from end to end.


why care about "popular" music at all?

Yesterday on youtube I watched two hour-plus live concerts, both made available by the lead artist (Dhafer Youssef). The first was a more jazz-inflected performance (not surprising given Mark Guiliana and Chris Jennings as the rhythm section), the second more "world music" (also not surprising with Zakir Hussain on tabla and Hüsnü Şenlendirici on clarinet). Both featured Youssef playing oud and singing in his incredible melismatic style. One was recorded in 2014, one in 2016.

The music was utterly incredible. Virtuosic performances of the highest levels, astonishing compositional and improvisational music structures. And amazing sound quality (though sadly one was marred a little by clipping).

You don't have to like this particular music. Just stop focusing on "popular music", which has ALWAYS been dominated by dreck of one sort or another. Remember that there is (and always has been) a huge universe of incredible music out there from cultures around the world, reflecting both tradition and experimentation, skill and inspiration, solo brilliance and musical collaboration, virtuosity and taste.

Lamenting what the kids are doing with Ableton Live or FL Studio when you could be watching Dhafer Youssef play live for 2+ hours with Tigran Hamasyan, Mark Guiliana, Zakir Hussain (or whatever floats your boat) is just wasting your time and your life!



