I think that comes down to organization. We can't expect teachers to be the experts they are in X and also be experts in HTML, CSS, JavaScript, SVG, graphic design, interaction design, and animation.
I'm imagining Wikistration. Not necessarily based on Mediawiki, but it enables collaboration on interactive illustrations. Start with SVG, implement Inkscape in the browser, add the usual version control. Then level up with shared JavaScript scripts and SVG snippets so the community can build a library of templates like in other projects. With enough elaboration, a next generation editor could formalize some of the most commonly used stuff to accelerate experienced contributors and make it easier for beginning contributors to get in the game. Also, CSS libraries can probably get mixed in to make illustrations skinnable.
If enough momentum builds up, and people keep looking forward far enough, it might be feasible to merge it into Wikimedia Commons, though that would be a major scope change for the project.
What I find really intriguing is how versatile and powerful Fourier transforms in the discrete setting are. In a finite field you can use them to interpolate or evaluate polynomials quickly, to generate the Lagrange basis polynomials, etc.
I'd be interested to read more about this if someone has a link. Not just how it works, but why it works.
The Fourier transform of the length-N coefficient vector of a polynomial (degree less than N) is the evaluation of that polynomial at the N powers of a primitive Nth root of unity. The inverse Fourier transform is the same, except with the order of the coefficients reversed and an extra scaling factor.
You can say that the Fourier transform is a way to convert between the coefficient representation and the evaluation point representation of polynomials - with the caveat that the evaluation points must be powers of a primitive Nth root of unity.
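A quick sketch of that correspondence over the complex numbers, using numpy's FFT convention (the polynomial here is just a made-up example):

    import numpy as np

    # p(x) = 2 + 3x + x^2, coefficients stored lowest power first,
    # zero-padded to length N = 4 (a made-up example).
    coeffs = np.array([2.0, 3.0, 1.0, 0.0])
    N = len(coeffs)

    # numpy's FFT uses the kernel exp(-2j*pi*k*n/N), so output k is
    # p evaluated at w**k, where w = exp(-2j*pi/N) is a primitive
    # Nth root of unity.
    w = np.exp(-2j * np.pi / N)
    direct = np.array([np.polyval(coeffs[::-1], w**k) for k in range(N)])

    print(np.allclose(np.fft.fft(coeffs), direct))  # True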
Luckily, the multiplicative group of the finite field GF(q) is cyclic, so every nonzero element is going to be some power of a primitive (q-1)th root of unity!
In other words: in finite fields there's no restriction on the evaluation points; the (inverse) Fourier transform is always applicable for mass-evaluating polynomials (Fourier transform) or doing Lagrange interpolation (inverse Fourier transform).
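Here's a toy-sized sketch of both directions over GF(7), using naive O(N^2) loops rather than an actual FFT (the polynomial is again made up; pow(x, -1, q) needs Python 3.8+):

    # GF(7): the multiplicative group has order 6 and is generated by g = 3,
    # so g is a primitive 6th root of unity.
    q, N, g = 7, 6, 3

    def dft(coeffs):
        # Mass evaluation: output k is p(g^k).
        return [sum(c * pow(g, k * n, q) for n, c in enumerate(coeffs)) % q
                for k in range(N)]

    def idft(values):
        # Lagrange interpolation at the points g^0 .. g^(N-1).
        n_inv, g_inv = pow(N, -1, q), pow(g, -1, q)
        return [n_inv * sum(v * pow(g_inv, k * n, q)
                            for k, v in enumerate(values)) % q
                for n in range(N)]

    coeffs = [2, 3, 1, 0, 0, 0]   # p(x) = 2 + 3x + x^2, zero-padded to length 6
    values = dft(coeffs)          # p evaluated at every nonzero element of GF(7)
    assert idft(values) == coeffs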
I meant GF(q) to be the general finite field, i.e. GF(p^m) for prime p and m >= 1.
The multiplicative group of a finite field is always cyclic of order one less than the order of the field.
What's more interesting to ask is what happens when you're working over GF(q) and want to evaluate polynomials of length N, if N does not divide q-1? You see, the Fourier transform needs a primitive Nth root of unity to work, but in GF(q) there's no primitive Nth root of unity if N doesn't divide q-1.
Maybe if we extend the polynomial with zero coefficients we can upgrade our Fourier transform from length N < q-1 to length q-1? It sounds workable. But is there a similar trick for Lagrange interpolation?
Surprisingly, I understood some of that (not a strong math background), but I asked Bing AI to explain it at the level a smart high schooler would get it. How did it do?:
Let me try to restate this in a simpler way. A polynomial is a mathematical expression that involves adding and multiplying powers of a variable, such as x. For example, x^2 + 3x - 5 is a polynomial of degree 2. A coefficient is the number that multiplies a power of x, such as 3 in 3x. A coefficient vector is a list of all the coefficients of a polynomial in order from highest to lowest power of x, such as [1, 3, -5] for x^2 + 3x - 5.
Sometimes, we want to know what happens when we plug in different values of x into a polynomial. For example, what is the value of x^2 + 3x - 5 when x = 2? This is called evaluating the polynomial at a point. We can also evaluate the polynomial at many points at once, such as x = 1, 2, 3, 4. This is called evaluating the polynomial at a set of points.
The Fourier transform is a technique that helps us do this faster and easier. It takes a coefficient vector of a polynomial and transforms it into another vector that contains the values of the polynomial at a special set of points. These points are called powers of a primitive Nth root of unity. This sounds complicated, but it just means that they are numbers that when multiplied by themselves N times give 1. For example, when N = 4, one such number is i (the imaginary unit), because i^4 = 1. The powers of i are i^0 = 1, i^1 = i, i^2 = -1, i^3 = -i. These are the four points that we can evaluate any polynomial of degree 3 or less using the Fourier transform.
The inverse Fourier transform does the opposite. It takes a vector that contains the values of a polynomial at some points and transforms it back into the coefficient vector of the polynomial. This is useful when we want to find the polynomial that passes through some given points. This is called interpolating the polynomial from the points.
The Fourier transform and the inverse Fourier transform are very similar, except that they use different orders of the coefficients and the points, and they have to divide or multiply by some number (the scaling factor) to make sure everything works out.
The nice thing about finite fields is that they always have some numbers that can be used as powers of a primitive Nth root of unity for any N. A finite field is a set of numbers that has a fixed size and follows certain rules for adding and multiplying. For example, GF(7) is a finite field with 7 elements: 0, 1, 2, 3, 4, 5, 6. In GF(7), we add and multiply by taking the remainder after dividing by 7. For example, 4 + 5 = 9 mod 7 = 2 in GF(7). In GF(7), one primitive Nth root of unity for any N is 3, because 3^N mod 7 = 1 for any N that is divisible by 6. The powers of 3 are 3^0 = 1, 3^1 = 3, 3^2 = 9 mod 7 = 2, ..., and so on.
This means that we can use the Fourier transform and the inverse Fourier transform to evaluate and interpolate any polynomial in any finite field without worrying about finding the right points to use.
I hope this helps you understand better what the original text was saying. If you have any questions or want more examples, feel free to ask me.
Almost correct, except for the fictional part about 3 being an N-th root of unity in GF(7) for all N. In reality 3 is a primitive element of the field - i.e. a generator of the multiplicative group, aka a primitive 6th root of unity. So it's confusing the concepts "primitive element" and "primitive N-th root".
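Easy to check in GF(7):

    q = 7
    print([pow(3, k, q) for k in range(1, 7)])  # [3, 2, 6, 4, 5, 1]
    # 3's order is exactly 6, so it's a primitive 6th root of unity,
    # but 3^N mod 7 is certainly not 1 for every N (e.g. 3^2 = 2).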
Is there a variation of FFT that works with square wave functions instead of sine waves?
It seems like for DSP it might be worthwhile to build something that works on discrete signals out of the box instead of approximating them with sine functions.
The reason the Fourier basis works so well is that it forms an orthogonal basis for all smooth real-valued functions. We'd need a similar way to define square waves which form an orthogonal basis. Found one paper doing that: "A Review of Orthogonal Square-Wave Functions and Their Application to Linear Networks" [1]
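As a quick numerical sanity check of the orthogonality idea, here's a toy family of sign-of-sine square waves (my own choice, not necessarily the family from the paper):

    import numpy as np

    # Square waves at frequencies 1, 2, 4, 8 over [0, 1), sampled at midpoints
    # so sign() never hits an exact zero crossing.
    t = (np.arange(1024) + 0.5) / 1024
    waves = [np.sign(np.sin(2 * np.pi * 2**k * t)) for k in range(4)]

    # Gram matrix of normalized inner products: the identity means orthonormal.
    gram = np.array([[np.dot(a, b) / len(t) for b in waves] for a in waves])
    print(np.round(gram, 3))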
If I'm not mistaken, the square wave case should in essence be the limiting case of the general case you get when looking at "trigonometric" functions which arise not as the parametrisation of x^2 + y^2 = 1, but of x^n + y^n = 1 for n > 2. It turns out that people have looked at that, and the study of those functions has been given the funny name "squigonometry". There's a Springer textbook out there.
Typically the mathematical treatment of DSP is based on the discrete-time Fourier transform and the Z transform, which differ from the Fourier transform and the Laplace transform, respectively, not in using square waves, but in using an infinite series rather than an integral to capture the idea of discrete sampling.
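For concreteness, the standard textbook definitions (my addition, usual conventions):

    X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}    (discrete-time Fourier transform: series over samples)
    X(\omega)      = \int_{-\infty}^{\infty} x(t) e^{-j\omega t} dt   (Fourier transform: integral)

    X(z) = \sum_{n=-\infty}^{\infty} x[n] z^{-n}                      (Z transform)
    X(s) = \int_{-\infty}^{\infty} x(t) e^{-s t} dt                   (Laplace transform)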
Huh. In all of those years when people were talking about wavelets being the ultimate solution to JPG artifacts (on square boundaries), nobody ever mentioned that they were built on a square wave function. They just talked about being able to move the positions and adjust the diameter arbitrarily. I have to say I'm a little annoyed.
That first audio example just blew my mind! I can't believe I've lived my whole life to learn only now that our ears essentially just pick out all the sine waves in a complicated wave. Thank you!