The Making of Nefertiti 1kb (2018) (romancortes.com)
195 points by wensley on Jan 9, 2019 | 20 comments



> Find 2 or more evaluation functions that share a global minimum for your problem. Choose one of those functions randomly at each step of the hill climbing.

This reminds me of the Pandora strategy where you don't ever upvote anything, you only tell it no, to encourage it to wander the search space instead of orbiting tightly around a handful of songs.

Never tell people your favorite, or that is all you will get.


When I first started using Netflix, I thought, oh, these recommendations are good, but then I watched one Star Trek Voyager episode and now all the suggestions are about Star Trek.


This explains a lot about why YouTube recommendations are so bad.


YT is much worse; it keeps recommending "popular" clips it deems worthy of pushing: conspiracy theories (still!), "top 10 you won't believe number 9" garbage, politics, etc.

Just now, out of 20 related/recommended slots: one 9/11 conspiracy, one Nuremberg trials(?) thing about Nazi experiments, 6 videos I had already watched and liked weeks/months/years ago, a "top 10" gimmick artillery guns list, a Belarus drunken-driving compilation, 3 totally random videos in my native language despite the YT UI being set to English, 4 videos actually related to the clip on the page, and finally 2 videos somewhat related to my subscriptions.


I worked with Javier (hi!) at Montage Studio, where we built a web app framework and an IDE inspired by the Apple development stack. Javier is an amazing mind and a wizard at CSS. Fun fact: he used to sketch out code in good ol' Notepad! Here are a few more js1k entries by him:

http://www.romancortes.com/blog/furbee-my-js1k-spring-13-ent...

https://js1k.com/2012-love/demo/1100

https://js1k.com/2010-xmas/demo/856


> And I came up with a block compressor based in sinusoidal circular waves.

That's actually not too far off what the DCT used in JPEG does, in that you're similarly trying to represent sampled data using a series of sinusoidal functions.

I wonder how well wavelet-based compression (like JPEG2000) would work for this data, since it's been used in a demo before too:

http://www.iquilezles.org/www/articles/wavelet/wavelet.htm
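
For intuition, a minimal sketch of that sinusoidal-basis idea in plain NumPy: a 1D DCT-II over 8 samples, with the small coefficients zeroed out in the spirit of JPEG's quantization (the sample values and the threshold are made up; real JPEG works on 8x8 blocks with quantization tables):

    # Represent 8 samples as a weighted sum of cosines (DCT-II), the same
    # idea JPEG applies per 8x8 block, just in 1D and without the tables.
    import numpy as np

    N = 8
    samples = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)

    k = np.arange(N)                      # cosine frequencies
    n = np.arange(N)                      # sample positions
    basis = np.cos(np.pi * (2 * n[None, :] + 1) * k[:, None] / (2 * N))

    coeffs = basis @ samples              # forward DCT-II
    coeffs[np.abs(coeffs) < 20] = 0       # drop small high-frequency terms

    scale = np.full(N, 2.0 / N)           # DCT-III (inverse) scaling
    scale[0] = 1.0 / N
    reconstructed = (scale[:, None] * basis).T @ coeffs
    print(np.round(reconstructed, 1))     # close to the original samples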


Is https://github.com/victorvde/jpeg2png analogous to a deblocking filter for jpeg?

>JPEG encoding loses information. But it is JPEG decoding that introduces artifacts by filling the missing information with noise.

>jpeg2png is smarter and fills the missing information to create the smoothest possible picture.


The underlying model only exists because a couple of artists snuck into the German museum where the Nefertiti bust is displayed and clandestinely made a 3D scan. Nice to see it getting used for this.

https://web.archive.org/web/20160220090011/https://hyperalle...


That story is probably not true: https://news.ycombinator.com/item?id=11238921


Pure genius. Even with his explanation of the problem domain, I have no idea how he arrived at the 'magical idea' of random double losses.


I don't understand this. Could someone give a quick explanation? Specifically, why does changing the cost function help?

Is the point that the cost functions have incompatible gradients around local minima/different local minima?


> Is the point that the cost functions have incompatible gradients around local minima/different local minima?

I think that is part of it: the different cost functions can have different local minima and also different saddle points; ideally even different ridge/valley configurations.

In machine learning there is a well-known technique called Stochastic Gradient Descent (SGD) [1]. There the cost function is the sum of a very large number of terms reflecting how well each element of the training set has been reproduced.

With SGD the optimisation steps use randomly chosen cost functions which are obtained by choosing a random subset of the training set.

I had thought the advantage of SGD was purely in saved computation: by computing the approximate cost function on only a small batch, you pay only a tiny fraction of the computational expense. I assumed that if you could use larger batches, it would always help convergence.

This demo writeup makes me realize there may be a benefit from the randomness itself. Different cost functions may have different local minima, different saddles, different ridges. That helps you avoid getting stuck, or even slowed down, at these points.

[1] https://en.m.wikipedia.org/wiki/Stochastic_gradient_descent
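
A minimal sketch of that mini-batch idea in plain NumPy, fitting a line to a made-up toy dataset (batch size and learning rate are arbitrary): each step descends the gradient of a cost built from a random subset of the training set, so the effective cost function changes from step to step.

    # Mini-batch SGD: the full cost is a sum over all training examples,
    # but each step uses the gradient of a randomly chosen subset.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy training set: y = 3x + 1 plus noise
    X = rng.uniform(-1, 1, size=500)
    y = 3 * X + 1 + rng.normal(0, 0.1, size=500)

    w, b = 0.0, 0.0                      # model parameters
    lr, batch_size = 0.1, 16

    for step in range(2000):
        # Random subset -> a different (approximate) cost function this step
        idx = rng.choice(len(X), size=batch_size, replace=False)
        xb, yb = X[idx], y[idx]

        err = (w * xb + b) - yb          # residuals on this batch
        grad_w = 2 * np.mean(err * xb)   # d/dw of the batch mean squared error
        grad_b = 2 * np.mean(err)        # d/db of the batch mean squared error

        w -= lr * grad_w
        b -= lr * grad_b

    print(w, b)                          # should land near 3 and 1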


I gather that's the idea, but mean squared error and mean absolute error are fairly correlated, so I'm not sure if that would be an advantage or disadvantage.

I'm running an MSE hill-climbing thing at the moment; I might give it a go and see if it helps.
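
For what it's worth, a rough sketch of what the article's trick looks like bolted onto a generic hill climber; the target, mutation size and iteration count here are made up for illustration:

    # Hill climbing that randomly picks one of two loss functions (MSE or
    # MAE) per step. Both share the same global minimum (candidate == target)
    # but behave differently away from it.
    import random

    random.seed(1)
    target = [random.random() for _ in range(64)]   # thing being approximated
    candidate = [0.5] * len(target)                 # current best guess

    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def mae(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    for step in range(50_000):
        loss = random.choice((mse, mae))            # the article's trick
        i = random.randrange(len(candidate))
        mutated = candidate[:]
        mutated[i] += random.uniform(-0.05, 0.05)   # small random tweak

        # Accept the tweak only if the randomly chosen loss improves
        if loss(mutated, target) < loss(candidate, target):
            candidate = mutated

    print(mse(candidate, target))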


My guess is that it's the randomness that helps. Noisy hill climbing is an established technique (simulated annealing) for escaping local optima.


Ah, now that makes sense.


That was also my understanding.


> The other option available in browsers is H.264. I thought in using a single frame video stored in H.264 but I was unable to generate a video file little enough to fit in 1kb. It might be possible, but my knowledge about video compression is limited.

Even though support isn't universal yet, I wonder how HEIC would fare here.


Fabrice Bellard submitted a 2018 IOCCC entry that uses H.264-like techniques to compress a 128x128 Lena (RGB) into 1220 bytes, but the decoder is >2KB of obfuscated C:

https://www.ioccc.org/2018/bellard/hint.html


Does anybody remember Second Life?

They used a similar technique of creating an object out of an image.

The maps were called sculpts and consisted of a 64 by 64 image. Each pixel's RGB value was mapped as an XYZ coordinate onto a grid. The grid's base mesh was either a sphere or flat, which allowed for a "closed" or "open" object surface, so one could simulate holes (it also influenced the physics).
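
Roughly, decoding one of those maps would look like this (the function name and the 0..255 -> 0..1 scaling are my assumptions, not Second Life's exact spec; Pillow is used just to read the image):

    # Decode a 64x64 "sculpt map" into a grid of 3D vertices,
    # one vertex per pixel, reading R,G,B as X,Y,Z.
    from PIL import Image

    def sculpt_to_vertices(path):
        img = Image.open(path).convert("RGB").resize((64, 64))
        pixels = img.load()
        vertices = []
        for v in range(64):            # rows of the grid
            row = []
            for u in range(64):        # columns of the grid
                r, g, b = pixels[u, v]
                # Map each channel from 0..255 into a unit-cube coordinate
                row.append((r / 255.0, g / 255.0, b / 255.0))
            vertices.append(row)
        return vertices                # 64x64 grid of (x, y, z) tuples

Whether neighbouring rows and columns wrap around (sphere) or not (flat) is what gives the "closed" vs "open" surface mentioned above.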

The users figured out a lot of funky tricks.

Like having multiple connected objects described by only one such map, by having the connection be an ultra-thin line that didn't get rendered. This lowered the "costs" Second Life calculated, since it all counted as only one primitive.

Or exploiting a bug to blow these things up to giant sizes to create houses and other big objects.

On top of that, automatic LOD was quite easy: just halve the number of grid vertices and there's your lower-poly object.


1kb != 1kB.



