I wonder if this is the plugin they are using:

https://github.com/steadicat/eslint-plugin-react-memo

Does anyone know of any other eslint plugins that help enforce this?
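
For context, here's a sketch of what such a rule would enforce (going by the plugin name, not its source — the identifiers below are just illustrative): function components wrapped in React.memo so they skip re-rendering when their props haven't changed:

    // Hypothetical example of the pattern the rule requires:
    // every function component wrapped in React.memo.
    import React from 'react';

    const Row = React.memo(function Row({ label }) {
      return <li>{label}</li>;
    });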


The author of the article wrote the ESLint plugin (https://github.com/steadicat), so they probably use(d) it.


That's great!



The best version I remember hearing about is a janitor pitching the idea of Flaming Hot Cheetos:

https://news.ycombinator.com/item?id=20227175

https://www.washingtonpost.com/news/morning-mix/wp/2018/02/2...


See also: An interactive floating point visualization: https://evanw.github.io/float-toy/
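
If you'd rather poke at the bits in code, here's a quick JS sketch of what that toy visualizes:

    // Decode the IEEE-754 bits of a float32: sign | exponent | mantissa.
    const buf = new DataView(new ArrayBuffer(4));
    buf.setFloat32(0, 0.1);
    const bits = buf.getUint32(0).toString(2).padStart(32, '0');
    console.log(bits.slice(0, 1), bits.slice(1, 9), bits.slice(9));
    // -> 0 01111011 10011001100110011001101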


Good job. Very cool!


Glad you liked it.


Not meows, but here are some purrs: https://purrli.com/


It's a shame that the meows in the "meow-y" setting are just regular meows, not purry meows (as in, meowing and purring simultaneously).



OP's post was amazing, but can someone explain how this was probably made?


I think the sound is generated locally. Check out the page source, which contains the JS that seems responsible for the generation (I haven't checked in detail).
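
For the curious, purr-like sounds are easy to approximate with WebAudio. A minimal sketch (my guess at the technique, not Purrli's actual code) is low-passed noise, amplitude-modulated at a cat-like ~25 Hz:

    // Rough purr: band-limited noise, amplitude-modulated at ~25 Hz.
    const ctx = new AudioContext();

    // 2 seconds of white noise, looped.
    const buf = ctx.createBuffer(1, 2 * ctx.sampleRate, ctx.sampleRate);
    const data = buf.getChannelData(0);
    for (let i = 0; i < data.length; i++) data[i] = Math.random() * 2 - 1;
    const noise = ctx.createBufferSource();
    noise.buffer = buf;
    noise.loop = true;

    // Keep only the low rumble.
    const lp = ctx.createBiquadFilter();
    lp.type = 'lowpass';
    lp.frequency.value = 400;

    // Gain driven by a 25 Hz LFO gives the purr "flutter".
    const gain = ctx.createGain();
    gain.gain.value = 0.5;
    const lfo = ctx.createOscillator();
    lfo.frequency.value = 25;
    const lfoGain = ctx.createGain();
    lfoGain.gain.value = 0.5;
    lfo.connect(lfoGain).connect(gain.gain);

    noise.connect(lp).connect(gain).connect(ctx.destination);
    noise.start();
    lfo.start();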


captive feline


Meow-generator when the last slider is set to Meow-y ;-)


The new material-ui@next (version 1.0) docs are here:

https://material-ui-1dab0.firebaseapp.com/


IMHO this is the most important thing to note; maybe someone can add this link to the description or make it more prominent. A whole bunch of components were missing from the @next version, but now it seems they are finally here, which means there is no longer a blocker for anyone needing to move to the new version. That's awesome, with things like grids (https://material-ui-1dab0.firebaseapp.com/layout/grid) that were/are not present in 0.17.x and below.
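
For anyone who hasn't clicked through: it's a flexbox-based 12-column grid, used roughly like this (from memory of the beta docs, so the exact props and import path may have shifted):

    import Grid from 'material-ui/Grid';

    // Two cells: full width on phones, side-by-side halves
    // from the "sm" breakpoint up.
    const Demo = () => (
      <Grid container spacing={16}>
        <Grid item xs={12} sm={6}>left</Grid>
        <Grid item xs={12} sm={6}>right</Grid>
      </Grid>
    );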


...and I'm excited to try out the new grid. I had been using https://github.com/rofrischmann/react-layout-components in most material-ui projects, but I'm glad they decided to add layout features.


The docs site also just "feels" snappier to me in Firefox for some reason. I'm not sure if it's actually faster, but clicking around feels more responsive.


I tried out the first version and really didn't like working with it, but @next really addressed all my concerns.

If you've used material-ui, you should really check out @next.


I had always wanted to do a webaudio port of:

http://www.cs.princeton.edu/~prc/SingingSynth.html

but never got around to it. This is great!
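
In the meantime, the basic source-filter idea is only a few lines of WebAudio — a crude sketch, nothing like the real physical model behind SingingSynth:

    // Crude formant synth: a sawtooth "glottal" source fed through
    // parallel bandpass filters at a vowel's formant frequencies.
    const ctx = new AudioContext();
    const voice = ctx.createOscillator();
    voice.type = 'sawtooth';
    voice.frequency.value = 140; // rough singing pitch

    // Approximate formants for an "ah" vowel.
    [700, 1200, 2500].forEach((freq) => {
      const bp = ctx.createBiquadFilter();
      bp.type = 'bandpass';
      bp.frequency.value = freq;
      bp.Q.value = 10;
      const g = ctx.createGain();
      g.gain.value = 0.3;
      voice.connect(bp).connect(g).connect(ctx.destination);
    });
    voice.start();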


Not really related, but a while back I wrote a script to visualize/hear audio generation loss with different file formats:

https://github.com/skratchdot/audio-generation-loss/tree/mas...

So, mp3s add a bunch of silence to the beginning of the file, and ogg files start to "chirp". I never got around to putting this info in a consumable, easy-to-understand format though. The videos in these folders just continuously re-encode a source file w/ a given lossy format.
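
The loop itself is tiny — roughly this, in Node (a sketch of the idea, not the actual script; it assumes ffmpeg is on your PATH, and `input.wav` is a placeholder):

    // Re-encode a wav through a lossy codec N times to hear the loss pile up.
    const { execSync } = require('child_process');

    let src = 'input.wav';
    for (let gen = 1; gen <= 50; gen++) {
      execSync(`ffmpeg -y -i ${src} -codec:a libmp3lame -b:a 128k gen${gen}.mp3`);
      execSync(`ffmpeg -y -i gen${gen}.mp3 gen${gen}.wav`);
      src = `gen${gen}.wav`; // next generation starts from the decoded output
    }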

See also: https://en.wikipedia.org/wiki/Generation_loss


Kinda sounds like "I Am Sitting in a Room":

https://en.m.wikipedia.org/wiki/I_Am_Sitting_in_a_Room


Thanks for the link! I hadn't heard of this before.


One of the cofounders of The Echo Nest (acquired by Spotify) created this back in 2004:

    "A Singular Christmas" was composed and rendered in 2004. It is the automatic statistical distillation of hundreds of Christmas songs; the 16-song answer to the question asked of a bank of computers: "What is Christmas Music, really?"

https://soundcloud.com/bwhitman/sets/a-singular-christmas


Very interesting reference! Deep learning is statistical, so this is sorta one of the spiritual predecessors.


By my understanding, yes, in the sense that certain parts of each algorithm are trying to minimize something. The former model used principal component analysis, which is linear in the sense that you apply transforms that pick out the least correlated components of a huge chunk of data, whereas neural networks use a combination of linear and non-linear layers, chosen by the user, to minimize "errors."

What's interesting is that the former model sounds so much "better." I wonder if anyone could chime in on how our ears, auditory nerves, or perhaps auditory cognition work, and whether they are somehow more "principal-component-analysis-y" than "error-minimization-y," or something else about the actual math, which might explain why this new neural network Christmas song sounds like absolute crap to us while the older version sounds pretty amazing. Also, whether my understanding of the underlying math is correct.


There's a good interview on YouTube with the main author, "FLOSS Weekly 340: VeraCrypt": https://www.youtube.com/watch?v=rgjsDS4ynq8

