
What’s wrong with a digital AA filter?

Also, aliasing as a concept doesn’t show up in purely digital systems.




The problem is that if you sample a mathematical waveform such as a sawtooth wave, it contains infinitely high frequencies, so the act of sampling creates aliasing.

There are a lot of solutions to this in the literature, such as wavetables that are band-limited, but for simple virtual analog synthesizers, there's a technique that is close to magic in its simplicity and quality: PolyBLEP. Check it out.
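For the curious, here's a minimal PolyBLEP sawtooth sketched in Python (a rough illustration with my own function names, not production code): the naive sawtooth is corrected by a polynomial band-limited step in the two samples surrounding each discontinuity.

```python
def poly_blep(t, dt):
    """Polynomial band-limited step correction near a discontinuity.
    t is the phase in [0, 1); dt is the phase increment per sample."""
    if t < dt:                  # just after the discontinuity
        t /= dt
        return t + t - t * t - 1.0
    elif t > 1.0 - dt:          # just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def polyblep_saw(freq, sr, n):
    """Sawtooth oscillator with the discontinuity smoothed by PolyBLEP."""
    dt = freq / sr
    phase, out = 0.0, []
    for _ in range(n):
        out.append(2.0 * phase - 1.0 - poly_blep(phase, dt))
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out
```

Compared to a naive sawtooth, the high partials that would fold back below Nyquist are strongly attenuated, at the cost of two slightly rounded samples per cycle.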


Complexity.

Digital AA filters are very hard to design; the mathematics involved are far harder to grasp than in the analog case.

Aliasing shows up ESPECIALLY in purely digital systems. Sample a periodic signal like a sine wave once every half period: your output will be constant, or zero.
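The half-period case is easy to reproduce (a quick Python illustration of the claim above; the frequencies are arbitrary):

```python
import math

f = 1000.0            # tone frequency, Hz
fs = 2.0 * f          # sampling exactly twice per period
samples = [math.sin(2.0 * math.pi * f * n / fs) for n in range(8)]
# every sample lands on a zero crossing, so the tone vanishes entirely
```

Shift the sampling phase and you instead get a constant-amplitude square-ish pattern; either way the original sinusoid is unrecoverable right at the Nyquist limit.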

Take a digital sample (render a frame) of a wheel in a 3D game, drawing the wheel at a specific position every 30th or 60th of a second. At some speeds the wheel will appear to spin backwards (the stroboscopic effect, a form of aliasing).


There's nothing wrong with digital AA filters today, but they weren't as good or as computationally cheap in 1982, when the Juno was released. Most digital audio gear used analog AA filters back then. Also, the Juno was polyphonic, so handling six sawtooths at the same time plus anti-aliasing would have been too hard on the poor chip!

And aliasing does happen in digital oscillators when you're using a fixed clock and sample rate. With a flexible (per-voice) clock you can just increment the output value in equal steps (and then reset it to zero at the end of the "sawtooth") and you won't get aliasing. But with a fixed clock you have to use a (very simple) equation to calculate the position of the waveform at a given sample number. You then get aliasing as you approach the Nyquist frequency, and you need the anti-aliasing filter to fix it.
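The fixed-clock "position from sample number" approach is one line of (illustrative) Python; the function name is mine:

```python
def naive_saw(freq, sr, n):
    # position of the ideal sawtooth computed directly from the sample number
    return [2.0 * ((k * freq / sr) % 1.0) - 1.0 for k in range(n)]
```

The ideal sawtooth's harmonics continue above sr/2, and this direct evaluation folds all of them back into the audio band, which is exactly the aliasing the comment describes.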

And since it was 1982, the sampling rate of the chip was probably not that high, so you would get aliasing well below ~22 kHz.


That's an amazing claim.

Try drawing what would appear to be a perfect square wave as digital samples. Up, flat, down, flat. Guess what: you've just introduced enormous amounts of aliasing.


That's not what aliasing means.

"Lollypop" sampling vs. real DAC output: https://mk0soundguyshosprmrt.kinstacdn.com/wp-content/upload...

As you've observed, the output of a real digital circuit includes the flat parts, unlike the infinitely thin "lollypops" that you see in DSP textbooks.

But the jaggies themselves are not aliasing, they're actually the result of anti-aliasing. The sample-and-hold reconstruction filter that practically every DAC uses is an anti-aliasing filter, albeit a crude one, but if you've chosen your sampling frequency correctly, the harmonics from the jaggies exist entirely outside of your passband. A simple RC filter removes them almost perfectly.

This is aliasing: https://upload.wikimedia.org/wikipedia/commons/thumb/2/28/Al...

Digitally sampling an analog input signal is usually where you need to actually put some design effort into the AA filter, because you don't necessarily know whether the analog signal at your input has a lot of energy just outside your passband. Such an input can cause you to see an alias that's comparable to or louder than your signal of interest, and there's no way to get around that with direct sampling except for arbitrarily sharp AA filters.

But with a DAC, you know exactly what you're putting into your AA filter (your sampled signal convolved with a box), and you can easily arrange things so that the AA filter both works perfectly (sampling > nyquist) and is easy to design (sampling >> nyquist).


A naive digital square wave will absolutely generate aliasing; there is an entire area of audio DSP research aimed at generating alias-free waveforms (another poster mentioned BLEP, which is a popular technique). You can trivially see this by looking at a spectrogram of a naively synthesized waveform: all kinds of junk will appear, and it is clearly audible.

This is because the "naive" approach to e.g. a digital square wave is to sample a logically continuous function defined as:

  if (less than halfway through the cycle period) -1
  else 1

Firstly, this is almost always going to be out of tune, because the transitions from -1 to 1 and from 1 to -1 necessarily fall across a digital sample boundary. For any frequency that doesn't evenly divide the sample rate, the transition will land slightly early or late.

Secondly, the sharp rising edge of the square wave, from a "sum of sines" perspective, requires sinusoid components that are not part of a conventional square wave to represent them digitally. If you were to digitally sample an analog square wave through an ADC it would look quite different from the result of this function.

The naive digital sawtooth wave suffers from similar issues due to its discontinuity.
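The "naive" function described above can be written out concretely (a hypothetical Python sketch; the name and parameters are mine):

```python
def naive_square(freq, sr, n):
    # -1 for the first half of each cycle, +1 for the second half,
    # sampled directly with no band-limiting
    return [-1.0 if (k * freq / sr) % 1.0 < 0.5 else 1.0 for k in range(n)]
```

Because the -1/+1 transition can only happen on a sample boundary, its timing jitters from cycle to cycle for any frequency that doesn't divide the sample rate, which is where the non-harmonic partials come from.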


You and I are in agreement.

An ideal square wave is not a signal that is band-limited to the audio range, so attempting to sample it directly (the naive approach) doesn't work. Instead, you want to sample a band-limited approximation of the square wave (which is what BLEP, etc. do).

If you're synthesizing, the naive approach doesn't work because the interaction between the square wave transitions and the sample rate introduces energy into the passband as a beat frequency. Yes, this interaction is aliasing, which is what we expect when sampling a signal that is not band-limited to our passband. That's why BLEP oscillators and other polyphase techniques are needed in order to synthesize a correctly sampled, band-limited approximation of a square wave.

Similarly, if you're sampling an analog square wave with a high slew rate, you absolutely must have an analog anti-aliasing filter in front of your ADC, or you will end up with exactly the same beat frequencies in your sampled signal.


An ideal square or saw wave contains infinitely high frequencies because of the discontinuities.

When you represent an ideal square or saw wave as a series of digital values, you are effectively sampling it at a much lower rate than the infinite rate required for perfectly sharp edges. This introduces aliasing, which is why a naive implementation of a square or saw wave will sound terrible.

There are a whole bunch of techniques you can use to solve this problem without excessive CPU usage but they are non-trivial.


A list of things that don't respond to infinitely high frequencies:

Your ears

Your speakers

The op-amps in your fancypants analog synthesizers

--

More critically, signals having high frequency components is. not. aliasing. Neither is distortion.

Out-of-band signals being erroneously remapped into the passband via the sampling process is aliasing.

That's why it's called an alias. The remapped out-of-band signal is indistinguishable from a different in-band signal.
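The indistinguishability is easy to demonstrate (a Python illustration with arbitrary example frequencies): a tone above fs/2 and its folded in-band counterpart produce the same sample sequence.

```python
import math

fs = 48000.0
f_in = 30000.0            # out-of-band tone, above fs/2 = 24 kHz
f_alias = fs - f_in       # 18 kHz: the in-band alias it maps onto
a = [math.sin(2.0 * math.pi * f_in * n / fs) for n in range(64)]
b = [-math.sin(2.0 * math.pi * f_alias * n / fs) for n in range(64)]
# a and b agree to floating-point precision: once sampled, the 30 kHz
# tone is indistinguishable from an inverted 18 kHz tone
```

No amount of post-processing on the samples can tell the two apart, which is why the out-of-band energy has to be removed before sampling.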


I think you need to revise your understanding of what aliasing is, because you are - to be blunt - wrong.

Any process that generates or distorts a waveform at any sample rate will generate aliasing in the digital domain - unless it has no >Nyquist components.

And most waveforms do have >Nyquist components. Sine waves <Nyquist don't, but ramps, steps, and virtually all forms of distortion do.

If you don't understand this go back and reread the basics, because hundreds of DSP engineers have spent countless hours devising ways to handle this problem.


You miss a lot of nuance when you're busy being blunt.

An ideal signal that you want to approximate may contain infinite frequency components, but this property alone is not what gives you aliasing artifacts. Sampling incorrectly does. You can absolutely approximate this signal without aliasing artifacts, but you need to sample a band-limited approximation of the signal, not the signal itself. That's what BLEP does.

Attempting to sample the ideal signal directly (the naive approach) will indeed give you aliasing artifacts. But we already know that the ideal signal isn't band-limited, so we should also know that sampling it directly (at any frequency) is always folly.

Do you see what I was trying to highlight? "Oh no, the signal has high frequency components!" is not the problem. Incorrect sampling is the problem. Aliasing is an effect, not a cause. You don't "get aliasing" just because the ideal signal has high frequency components, you get aliasing because you sampled the wrong thing.

You can call it aliasing if you want... you wouldn't be wrong. It's just not particularly instructive, and referring to everything as aliasing without being more specific about where it's coming from is what leads people down rabbit holes like running a DAC at 2+ MHz because nobody showed them how to use a polyphase filter. Sure, it works, it's just wasteful, and you can accomplish the same thing by sampling better rather than faster.

Then again, your condescending response leads me to believe that you're not actually interested in being instructive.


Aliasing occurs when you try to sample frequencies above half your sampling rate; this is the famous Nyquist theorem. This might help you understand why it can happen in naively implemented digital oscillators:

http://www.martin-finke.de/blog/articles/audio-plugins-018-p...



> That's not what aliasing means.

?? Aliasing doesn't just happen when you're converting an analog signal to a digital one. It can also be introduced when you're manually building or converting a digital signal with the intention of it sounding like a given analog signal, or like the ultimate output of another digital signal. It was this usage I was referring to. For example, if you downsample a digital signal from another digital signal without filtering out frequencies above Nyquist. Or in the example I gave, if you draw a square wave with the expectation that it sound like an analog square wave, not realizing that the sharp corners you've drawn didn't add in high-frequency odd harmonics, but rather partials with Nyquist-reflected aliased frequencies. It's this second case which causes problems for naive approaches to digital wave-design: you can't filter out the incorrectly-introduced reflected frequencies after the fact, but rather must build the wave in the first place to not have them.


Sorry, I misunderstood your original comment. When you said:

> Up, flat, down, flat. Guess what: you've just introduced enormous amounts of aliasing.

I thought you were suggesting that the jagged, stair-step output of a DAC is what causes aliasing, which is a common misconception.

I was trying to point out that stair-step output is just the reconstruction filter of the DAC operating as intended, and that a reconstruction filter is an anti-aliasing filter. In the case of a DAC's sample-and-hold, it's an easy way to select the baseband alias. [1]

I think one reason this thread went off the rails is that there are two anti-aliasing filters in a digital synthesizer. I was initially referring to the hardware anti-aliasing filter at the output of the DAC, which is also called a reconstruction filter or an anti-imaging filter. Everybody else was referring to the digital anti-aliasing filter that you need in order to create a band-limited approximation of an infinite bandwidth ideal signal so that you can actually sample it, because attempting to sample the ideal signal directly is always incorrect.

Given that the article is about the precursor to digital synthesis, I suppose I should have realized that people were going to be more interested in discussing the latter. However, now we're getting to the second reason this thread went off the rails (and the reason I usually regret participating in audiophile threads): Differences in terminology, points of focus, and "Well, Ackchyually" audiophile-grade condescension cause people to read past each other and continue to argue despite largely being in agreement.

If you read my downstream responses (https://news.ycombinator.com/item?id=25601970 and https://news.ycombinator.com/item?id=25602267), I hope it's clear that I both understand and agree with the actual argument you wanted to make.

[1] Aside: If you were to insert sinusoids at twice the sampling frequency instead of flats in between each sample, you'd be selecting the alias centered around twice the sampling frequency instead of the one at baseband. This is (roughly) how a mixer works.


> I thought you were suggesting that the jagged, stair-step output of a DAC is what causes aliasing, which is a common misconception.

Oh, heavens no. Just describing the naive way to draw a square wave in PCM.


It's not possible to get aliasing without a reference signal or reference frequency. A single purely digital signal, perhaps clocked or perhaps not, has nothing to alias against. So the claim has something to it!

Once you introduce a master clock, especially one clocking an ADC, then aliasing can and does indeed show up.


In the digital domain Nyquist is a normalised ratio. It is not tied to a specific hardware clock rate.

So it's perfectly possible - in fact dangerously easy - to generate waveforms with components that are >Nyquist.

It doesn't matter if the hardware runs at gigahertz frequencies or subsonic frequencies. In fact it doesn't matter if there's never any hardware at all.

Because there is always an implied sample rate of 1 x fs, and any signal which generates components of more than 0.5 x fs will alias.


I don't consider a "digital signal" to have any sort of implied clock or sample rate whatsoever. This appears to be a terminology mismatch between audio people and electrical engineers, if the rest of this thread is any guide.

Here is my view of things: along the time axis, signals can be either continuous-time or discretized/discrete-time. Along the intensity axis, for example voltage, signals can be either continuously-variable or discretized/discrete-valued.

I consider an "analog signal" to be one that's continuous in time and in value. I consider a "digital signal" to be one that's continuous in time but discrete in value; for instance, the output of an (asynchronous) logic gate.

Your definition of "digital domain" seems to be discrete-time, discrete-valued signals; for example, the readings out of an ADC or commands into a DAC.

The difference between the two would nicely explain a lot of the confusion here.


Aliasing only appears when you try to make an analog signal. A digital signal is not beholden to channels. It is a pure mathematical entity.


All right, mister fancypants. I decide to create a sawtooth signal by repeatedly drawing a diagonal line, then a vertical line in my digital PCM sample array. It looks like a sawtooth when I plot it, but the sound that comes out doesn't sound like a sawtooth: it has lots of reflected harmonics caused by those sharp corners. What is your term for the artifact that I had unintentionally introduced?


Aliasing. That isn’t a digital signal.


Of course it's a digital signal. It's a PCM array.

If what you mean is "the artifact isn't 'aliasing' until it finally goes through the DAC", my answer to you is: that's a useless and counterproductive distinction. I inadvertently introduced an artifact into my digital signal that I cannot easily remove. That this artifact magically gains a label only once it passes through the DAC is pedantic nonsense that helps no one.


This is not some useless academic distinction. You are measuring an analog signal. You do not worry about aliasing in all digital systems, even if they are representing something you might typically make into an analog signal. Why would you want a digital square wave? Who knows. But aliasing is not in the digital signal.



