A GPU is useful, but DSPs are still useful too. There's a compelling case for keeping frameworks like AudioFlux and JUCE around: they support portability and competitive real-time analysis, which matters in this domain, where things like Qualcomm's ADK are quite literally being put inside people's ears.
That's not to say audio analysis isn't a compelling application for big-AI; rather, until the chips arrive, in-ear AI is less of a specification/requirement than in-ear DSP.
We don't need AI to isolate discrete audio components and do things with them in-ear. Offline/big-AI is still compelling, but we don't yet have GPU neckbands.
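To make that concrete, here's a minimal sketch (my own illustration, not taken from any of the frameworks above) of the kind of per-sample DSP that can isolate one component cheaply enough to run on an in-ear chip: a plain RBJ-cookbook biquad band-pass in C. The sample rate, tone frequencies, and Q are arbitrary example values.

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define SAMPLE_RATE 48000.0

    /* Biquad coefficients plus filter state (transposed direct form II). */
    typedef struct { double b0, b1, b2, a1, a2, s1, s2; } Biquad;

    /* Band-pass coefficients from the RBJ audio-EQ cookbook (0 dB peak gain). */
    static void biquad_bandpass(Biquad *f, double f0, double q)
    {
        double w0 = 2.0 * M_PI * f0 / SAMPLE_RATE;
        double alpha = sin(w0) / (2.0 * q);
        double a0 = 1.0 + alpha;
        f->b0 =  alpha / a0;
        f->b1 =  0.0;
        f->b2 = -alpha / a0;
        f->a1 = -2.0 * cos(w0) / a0;
        f->a2 = (1.0 - alpha) / a0;
        f->s1 = f->s2 = 0.0;
    }

    /* One sample in, one sample out: a handful of multiply-adds, no model. */
    static double biquad_tick(Biquad *f, double x)
    {
        double y = f->b0 * x + f->s1;
        f->s1 = f->b1 * x - f->a1 * y + f->s2;
        f->s2 = f->b2 * x - f->a2 * y;
        return y;
    }

    int main(void)
    {
        Biquad f;
        biquad_bandpass(&f, 440.0, 8.0);   /* isolate the 440 Hz component */

        const int N = (int)SAMPLE_RATE;    /* one second of audio */
        double in_rms = 0.0, out_rms = 0.0;
        for (int n = 0; n < N; n++) {
            double t = n / SAMPLE_RATE;
            /* mixture: a wanted 440 Hz tone plus an unwanted 4 kHz tone */
            double x = sin(2.0 * M_PI * 440.0 * t)
                     + sin(2.0 * M_PI * 4000.0 * t);
            double y = biquad_tick(&f, x);
            in_rms  += x * x;
            out_rms += y * y;
        }
        printf("input RMS  %.3f\n", sqrt(in_rms / N));
        printf("output RMS %.3f\n", sqrt(out_rms / N));
        return 0;
    }

The point is the cost model: a few multiply-adds of state per sample, no matrix math, no weights to page in. That's the budget an in-ear DSP actually has, and it's enough for this class of task.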