
You seem to think DAW makers don't already specialize a ton in DSP, algorithms, and concurrency - I can assure you that they do, and that innovation and optimization happen at a very healthy pace. There is significant market pressure to run tracks and plugins in a highly optimized way: several DAWs have a visible CPU-usage meter, and some let users directly configure a process-isolation model for plugins.

However, audio has a very different set of constraints from other types of workloads - the hallmarks being one worker doing LOTS of number crunching on a SINGLE stream of floating-point numbers (well, two streams, for stereo), that processing necessarily happening in SERIAL, and the results coming back INSTANTLY. Why serial? Because for most nontrivial audio processing algorithms, the result depends not just on the previous sample, or even the previous chunk of samples - it's often a rolling computation over a very long history of prior samples. Why instantly? Because plugins need to run in realtime for production and auditioning, so every processing block has a tight budget of at most tens of milliseconds to do all its work, and some plugins use a lot of that budget. These constraints also apply across an entire track: every plugin on a track has to run in serial, one at a time, and they all share memory and act on the same block of audio.
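
To make the serial constraint concrete, here's a minimal sketch (in Python, with illustrative names - this is not any DAW's API) of a one-pole lowpass, a textbook example of an effect where each output sample depends on the previous output, so sample N can't be computed before sample N-1:

    # Minimal sketch, assuming numpy. Names are illustrative.
    import numpy as np

    def one_pole_lowpass(block, a, y_prev):
        # y[n] = (1 - a) * x[n] + a * y[n-1]: a rolling dependency on history
        out = np.empty_like(block)
        y = y_prev
        for n, x in enumerate(block):
            y = (1.0 - a) * x + a * y  # each sample needs the one before it
            out[n] = y
        return out, y  # carry filter state into the next block

    block = np.random.randn(512)  # one processing block
    out, state = one_pole_lowpass(block, a=0.99, y_prev=0.0)

Extra cores don't help inside that loop - the dependency chain spans the whole block.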

One thing you might notice is that these constraints are pretty bad conditions for GPU work. You're not the first to think of trying that - it's just not a great fit for most kinds of audio processing. There are some algorithms that can run massively parallel and independent, but they're outliers. Horizontally scaling different tracks across CPU cores, however, works splendidly.
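
Here's a rough sketch of what that track-level scaling looks like (the plugin stand-ins and thread-pool scheduling are simplifications, not how a real engine is built): each track's chain stays serial internally, while independent tracks fan out across cores.

    # Minimal sketch, assuming numpy; a real audio engine uses its own
    # realtime-safe worker scheduling, not a ThreadPoolExecutor.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    plugins = [lambda b: b * 0.5, np.tanh]  # stand-ins for real effects

    def process_track(block):
        for plugin in plugins:  # serial WITHIN a track
            block = plugin(block)
        return block

    tracks = [np.random.randn(512) for _ in range(16)]
    with ThreadPoolExecutor() as pool:  # parallel ACROSS tracks
        mixed = sum(pool.map(process_track, tracks))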




To be fair, some of the things currently piquing people's interest are suitable for offline "massively parallel" processing a la GPUs. Source separation and timbral transfer would be the first two that come to mind.


Yeah, these are all well understood - but you don't get to nuclear reactors without first inventing some new math to support subatomic physics.

Saying that audio has a "different set of constraints from other types of workloads" and giving up on fundamental algorithm research is just defeatist, throwing in the towel, and frankly insulting to human advancement.

Come on, we need some new algorithms, and just saying "welp, it can't be done" is kind of ... not the hacker spirit.

It could be that we need quantum algorithms to process in parallel what was previously thought to be serial. Just from reading your well-reasoned paragraph, I can see we desperately need fundamental algorithm research in sound processing.

An imaginary scenario might be to invent an algorithm that converts/transforms sound/pressure-wave information into another domain, one that is not dependent on serial time; then do the operations there; then re-convert back to the time domain we usually associate with sound processing. Within that alternative domain, parallel processing would be possible.

We do stuff like this all the time in other disciplines. Even the FFT was an attempt to transform certain "unsolvable" problems into another form where they become solvable.

That's the kind of math research that I'm referring to.


> An imaginary scenario might be to invent an algorithm that converts/transforms sound/pressure-wave information into another domain, one that is not dependent on serial time; then do the operations there; then re-convert back to the time domain we usually associate with sound processing. Within that alternative domain, parallel processing would be possible.

We already do this. It's called the FFT, which transforms data from the time domain to the frequency domain. You can, if you want/need to, parallelize frequency-domain processing. There's oodles of interesting audio software that does this.
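
As a toy example (illustrative only - real tools window and overlap-add blocks to avoid edge artifacts), here's a crude spectral gate: once you're in the frequency domain, each bin can be operated on independently, which is exactly the shape of work that SIMD and GPU hardware like.

    # Minimal sketch, assuming numpy.
    import numpy as np

    def spectral_gate(block, threshold):
        spectrum = np.fft.rfft(block)                 # time -> frequency
        spectrum[np.abs(spectrum) < threshold] = 0.0  # per-bin, independent
        return np.fft.irfft(spectrum, n=len(block))   # frequency -> time

    out = spectral_gate(np.random.randn(1024), threshold=1.0)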

But again, parallel processing is only interesting for speed. And we mostly have plenty of speed these days.


Parallel processing isn't "interesting" except as a way to do things more quickly.

But for 90% or more of the things people do in DAWs (and currently want to do in DAWs), current processors can already do it fast enough.

So the sort of innovation you're dreaming of/imagining isn't going to come from new algorithms - it's going to come from people wanting to do new things.

This is already happening to some extent with things like timbral transfer, but even there, the most important part of it is well within current processing capabilities.

> Come on, we need some new algorithms

If you don't have a "why", that doesn't make much sense. Start with "Come on, we need to be able to do <this>" and then (maybe) the algorithms will follow.

Necessity is the mother of invention, but so is desire. What do you desire?


The desire would be to work with sound artists, acoustic designers, product designers, and engineers to make certain types of audio spaces.

A lot of the cutting edge of acoustic research is from people wanting to make materials and spaces do certain things with sound.

For example, this Scientific American article describes a system that lets sound through only one way ("an acoustic circulator"): https://www.scientificamerican.com/article/a-one-way-street-... - the need will come from things like this.

And for musicians, artists, and instrument makers to make use of materials, devices, and spaces like this - that would be my answer to your question of where the need/"why" will come from.

Imagine needing to write a DAW module to deal with "one way sound" that results from using an acoustic circulator in a musical production or a song.



