I think the sound is generated locally. Check out the page source, which contains the JS I think is responsible for generation (I haven't checked in detail).
IMHO this is the most important thing that should be noted, maybe someone can add this link to the description/make it more prominent.
A whole bunch of components were missing from the @next version. Now they finally seem to be here, which means there's no longer a blocker for people who need to move to the new version. That's awesome, with things like grids:
https://material-ui-1dab0.firebaseapp.com/layout/grid
and such that were/are not present in 0.17.x and below.
The docs site also just "feels" snappier to me in Firefox for some reason. I'm not sure if it's actually faster, but clicking around feels more responsive.
So, mp3s add a bunch of silence to the beginning of the file, and ogg files start to "chirp". I never got around to putting this info in a consumable, easy-to-understand format, though. The videos in these folders just continuously re-encode a source file with a given lossy format.
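The mp3 leading-silence effect (encoder delay) can be observed by decoding the file back to PCM and counting leading near-zero samples. A minimal sketch using only Python's stdlib `wave` module; the file name, padding length, and threshold are arbitrary, and a synthetic WAV stands in here for real decoded output (in practice you'd decode the mp3 to WAV first, e.g. with ffmpeg, and run the same count on that):

```python
# Synthesize a WAV with known leading silence, then measure it by
# counting leading samples whose magnitude stays under a threshold.
import math
import struct
import wave

RATE = 44100
PAD = 1105  # samples of silence to prepend (arbitrary choice)

def write_test_wav(path, pad_samples):
    samples = [0] * pad_samples
    # 0.1 s of a 440 Hz tone after the padding (cosine so the
    # first non-silent sample is at full amplitude)
    for n in range(RATE // 10):
        samples.append(int(20000 * math.cos(2 * math.pi * 440 * n / RATE)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit signed PCM
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

def leading_silence(path, threshold=100):
    # Count samples from the start whose magnitude stays under threshold
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    count = 0
    for s in samples:
        if abs(s) >= threshold:
            break
        count += 1
    return count

write_test_wav("padded.wav", PAD)
print(leading_silence("padded.wav"))  # prints 1105
```

Running the same `leading_silence` count on successive decode/re-encode generations would show the padding accumulating, which is presumably what the videos make audible.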
One of the cofounders of the Echonest (acquired by Spotify) created this back in 2004:
"A Singular Christmas" was composed and rendered in 2004. It is the automatic statistical distillation of hundreds of Christmas songs; the 16-song answer to the question asked of a bank of computers: "What is Christmas Music, really?"
By my understanding, yes, in the sense that certain parts of each algorithm are looking to minimize something. The former model used principal component analysis, which is linear in the sense that you apply transforms that pick out the least-correlated components of a huge chunk of data, whereas neural networks use a combination of linear and non-linear layers, chosen by the user, to minimize "errors."
What's interesting is that the former model sounds so much "better." I wonder if anyone could chime in on how our ears, auditory nerves, or perhaps auditory cognition work, and whether they are somehow more "principal-component-analysis-y" than "error-minimization-y", or something relating to the actual math, which may explain why this new neural-network Christmas song sounds like absolute crap to us, whereas the older version sounds pretty amazing. Also, whether my understanding of the underlying math is correct or not.
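One nuance on the PCA-vs-error-minimization framing: PCA is itself an error minimizer. The first principal component is exactly the single direction that minimizes squared reconstruction error, so the difference is more about the linear-vs-nonlinear model class than about whether errors are minimized. A toy sketch in pure Python (the dataset and helper names are made up for illustration):

```python
# Toy illustration: the top principal component is the one direction whose
# projection best reconstructs the data in a least-squares sense.

def mean(xs):
    return sum(xs) / len(xs)

def covariance(data):
    # 2x2 covariance matrix of centered 2-D points, plus the centroid
    mx = mean([p[0] for p in data])
    my = mean([p[1] for p in data])
    c = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in data:
        dx, dy = x - mx, y - my
        c[0][0] += dx * dx
        c[0][1] += dx * dy
        c[1][0] += dy * dx
        c[1][1] += dy * dy
    n = len(data)
    return [[v / n for v in row] for row in c], (mx, my)

def top_eigenvector(c, iters=200):
    # Power iteration: repeatedly apply the matrix and renormalize
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [c[0][0] * v[0] + c[0][1] * v[1],
             c[1][0] * v[0] + c[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

def reconstruction_error(data, direction, center):
    # Squared error of projecting each centered point onto the direction
    err = 0.0
    for x, y in data:
        dx, dy = x - center[0], y - center[1]
        t = dx * direction[0] + dy * direction[1]
        rx, ry = t * direction[0], t * direction[1]
        err += (dx - rx) ** 2 + (dy - ry) ** 2
    return err

data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
        (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]
cov, center = covariance(data)
pc1 = top_eigenvector(cov)
pca_err = reconstruction_error(data, pc1, center)
# Any other direction reconstructs the data worse:
other_err = reconstruction_error(data, [1.0, 0.0], center)
assert pca_err < other_err
```

A neural-network autoencoder minimizes the same kind of reconstruction error, just over a nonlinear family of maps, which is why the two approaches can land in very different-sounding places.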
https://github.com/steadicat/eslint-plugin-react-memo
Does anyone know of any other eslint plugins that help enforce this?