How does granular analysis differ from PCM representation, Fournier transformation, and sampling? Or is it a different name for the same thing? I think this question is natural to anyone who has worked with sound on a PC.
It's probably debatable, but I don't agree with the statement that shortening the "sound" changes pitch. It depends on your representation of the sound: if you represent it as a function of amplitude vs. time, then scaling the time axis does change pitch, but other representations need not behave that way.
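For concreteness, here is a minimal sketch of that time-domain view, assuming Python with NumPy (the 440 Hz tone and sample rate are just illustrative): compressing the time axis by discarding every other sample halves the duration but also raises the pitch an octave.

```python
import numpy as np

sr = 44100                            # sample rate (Hz)
t = np.arange(sr) / sr                # 1 second of time points
tone = np.sin(2 * np.pi * 440 * t)    # 440 Hz sine, 1 s long

# Compress the time axis: keep every other sample.
# Played back at the same sample rate, this lasts 0.5 s
# but is heard at 880 Hz -- shortening changed the pitch.
faster = tone[::2]
```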
This strikes a sensational tone about a fallacy. No instrument plays a sound faster or slower to make it shorter or longer... it just stops playing it, or doesn't. If you think about the phenomenon this way, it becomes natural why you cannot simply compress time to play shorter sounds.
You don't seem to have a very good grasp of this subject and don't appear to have read the article very carefully. The only viable alternative to PCM is DSD, which failed to gain any traction for good reasons. So for all practical purposes, sampling and PCM are the same thing. You also throw in Fourier (not Fournier) transformation for good measure, which is relevant to additive synthesis, but not to granular synthesis, which is the topic of this article.
> I don't agree with the statement that shortening the "sound" changes pitch. It depends on your representation of the sound: if you represent it as a function of amplitude vs. time, then scaling the time axis does change pitch.
The only relevant "representation" is digital audio, which by definition is encoded as amplitude over time, regardless of encoding technique. To lengthen time without changing pitch, or to change pitch without changing time, requires manipulating the audio data. That manipulation is done either by granular synthesis or by using a Fast Fourier Transform to decompose the audio into its component waveforms, changing the frequencies or durations of those components, and recomposing them into a composite waveform. This article is about granular synthesis, which requires far less computation than the FFT approach.
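To make that concrete, here is a minimal sketch of granular time-stretching via overlap-add, assuming Python with NumPy; the function name, grain size, and overlap are illustrative choices, not anything specified by the article. Grains are read from the source at a rate scaled by the stretch factor but written out at the original rate, so each grain keeps its pitch while the overall duration changes.

```python
import numpy as np

def granular_stretch(signal, factor, grain_size=2048, overlap=0.5):
    """Time-stretch `signal` by `factor` without changing its pitch."""
    hop_out = int(grain_size * (1 - overlap))   # hop between grains on output
    hop_in = int(hop_out / factor)              # input hop, scaled by factor
    window = np.hanning(grain_size)             # fade each grain in and out
    n_grains = max(1, (len(signal) - grain_size) // hop_in)
    out = np.zeros(n_grains * hop_out + grain_size)
    for i in range(n_grains):
        grain = signal[i * hop_in : i * hop_in + grain_size] * window
        out[i * hop_out : i * hop_out + grain_size] += grain  # overlap-add
    return out

# Example: a 1 s, 440 Hz tone stretched to ~2 s, still heard at 440 Hz.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
stretched = granular_stretch(tone, factor=2.0)
```

The Hann window at 50% output overlap keeps the summed grains at roughly constant amplitude, which avoids clicks at grain boundaries.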
> No instrument plays a sound faster or slower to make it shorter or longer...
Irrelevant. We aren't dealing with physical instruments, but with digital audio.
There is nothing in the least fallacious or sensational about this article.
Fourier-based techniques (FBT) and sample slicing (SC) may produce similar results when doing "raw" transformations, but FBT can potentially be cleaner, or at least easier to clean up. If you use raw "bit-maps" for FBT, yes, it will be choppy like SC, but you can use regression or regression-like curve-fitting to give FBT smooth time/frequency curves to synthesize against, which sounds more natural. There are downsides to using regression, but for typical voice and music they won't matter much.
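A toy illustration of that curve-fitting idea, assuming Python with NumPy (the frame size and polynomial degree are arbitrary choices): fit a low-order polynomial to one frame's FFT magnitudes, yielding a smooth spectral curve to synthesize against instead of the raw, jittery bins.

```python
import numpy as np

def smoothed_spectrum(frame, degree=12):
    """Return raw FFT magnitudes and a smooth polynomial fit over them."""
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    x = np.linspace(0.0, 1.0, len(mags))        # normalized bin positions
    coeffs = np.polyfit(x, mags, degree)        # least-squares regression
    smooth = np.clip(np.polyval(coeffs, x), 0.0, None)  # magnitudes >= 0
    return mags, smooth

# A noisy 440 Hz frame: the fit rides over the bin-to-bin jitter.
sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(2048)
raw, smooth = smoothed_spectrum(frame)
```

A real system would more likely fit amplitude and frequency tracks per partial across frames, but the regression principle is the same.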
One rough area for curve-fitting is white-noise-esque sounds (WNES), like the letters "s" and "h" or tambourines. The processor could perhaps detect when WNES content exceeds a threshold and fall back to other techniques such as SC.
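One concrete way to build such a detector (my assumption, not something the parent specifies) is spectral flatness, the ratio of the geometric to the arithmetic mean of the power spectrum: it approaches 1 for noise and 0 for tonal material. The threshold here is an illustrative guess, not a tuned value.

```python
import numpy as np

def is_noise_like(frame, threshold=0.3):
    """Flag white-noise-esque frames via spectral flatness."""
    power = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    power += 1e-12                                  # avoid log(0)
    flatness = np.exp(np.mean(np.log(power))) / np.mean(power)
    return flatness > threshold                     # near 1 = noise-like

# Tonal vs. noisy test frames:
sr = 44100
t = np.arange(2048) / sr
print(is_noise_like(np.sin(2 * np.pi * 440 * t)))   # False: tonal
print(is_noise_like(np.random.randn(2048)))         # True: noise-like
```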
It's roughly comparable to JPEG versus GIF images: JPEG is better (more faithful) at gradual shades, while GIF is better at hard edges. A better compression algorithm might use each where it does best for a given image, though at the cost of greater algorithm complexity and compression/decompression time.