Stealing Ur Feelings (github.com/noahlevenson)
372 points by simonpure on Oct 23, 2019 | 104 comments



There is a case to be made about how good the AI behind emotion detection is. When you take the test, it will be accurate for some and blissfully wrong for others; wrong for most, even. I took the test (or unknowingly took it) and it was correct for the most part, but it got some things wrong. I love dogs.

The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

When the facial recognition software combines your facial expression and your name while you are walking under a bridge late at night, in an unfamiliar neighborhood, and you are black, your terrorist score is at 52%. A police car is dispatched.

In 2017, my contract was not transferred to the new system. The automated system saw that an ex-employee was scanning his key card multiple times. Security was dispatched to catch the rogue employee. A simple conversation should have cleared things up, but the computer had already flagged me as troublesome. Long story short, I was fired.

When the machine calculates your emotions, the results are unquestionable. Or rather, we don't know how it arrived at the answer, so we trust that it is right. It is a computer, after all.

What scares me is not how fast machine learning is being deployed into every aspect of our lives. What scares me is our reaction to it.


The problem with all current "AI"-driven systems, be it facial recognition, voice recognition, translation, fraud detection, navigation, whatever, is that they are not 100% right, and when they're wrong, they're hilariously, devastatingly super-wrong in a way that humans are not.

But since the success modes are good and human-like, we assume that the failures are going to be human-like as well, when in fact the failure modes of these systems are usually bizarre and alien. Take self-driving accidents, for example. Pretty much all of them happen in situations no human would fail in, and that's obvious to most people; but then we forget about all the other mistakes similar "AI" systems make, and don't realize that those are also failures no human would make.


> Take self-driving accidents, for example. Pretty much all of them happen in situations no human would fail in, and that's obvious to most people; but then we forget about all the other mistakes similar "AI" systems make, and don't realize that those are also failures no human would make.

Amen? I've tried to explain precisely this to many brilliant AI/ML folks in lots of variations, with little success. They look at me funny, as if I'm a crank for believing that probabilities don't capture the whole story. It seems that, to far too many of them, a computer that has half the accident rate of a human is a strictly better replacement, end of story. The notion of just how spectacular the failure mode might be, or the degree of control that a human might have in the process, or any other human factor you can think of besides the accident rate, just seems like a completely nonsensical notion to many of them. For the life of me I have yet to find a way to convey this thought in a compelling manner.


To me, one of the best uses of this generation of AI would be as an aid to decision making. Your personal Skynet learns representations of data that help you make decisions, and watches out for worrying signs that you may be missing (perhaps based on your known mistake patterns, adapting as those change).

Meanwhile, human domain experts still look at the details and are able to add their own understanding of broader context, human nature, etc.

In this world, AI/ML primarily increases quality by augmenting human abilities rather than decreasing cost by automating humans out of the system. It's a smaller market maybe, but there are areas where better work can fetch a premium.

I think part of this is the lack of well understood patterns for the plumbing and UI of a system like this that makes it easy and useful and non-threatening to users. It's nothing that someone couldn't figure out, but not as well paved a road as integrating a caching, search, or image processing subsystem.


So I guess we aim for augmentation first before shooting for a complete replacement.


It sounds reasonable to me! Although I think the complete replacement scenario has proven a little harder than it looked on some tasks (driving could be an example).

Yet it seems like there are a lot more all-or-nothing efforts (or ones that treat human workers as a level of escalation); I see fewer projects aimed at being helpful in a user-directed way without taking control.


Statistics is incredibly hard and unintuitive, and if you understand statistics, it might be hard to understand that the majority of people absolutely don't.

I get that if all cars were self-driving, and the error rate of the self-driving system was half of the average human error rate, it would be better for humanity.

But if the errors of those self-driving cars were obviously avoidable for a human, the absolute majority of people would never ever ever in a million years trust that system. It's a complete fucking no-go. Most people would much rather take shittier odds where the failures are human than better odds where the failures are alien and bizarre. Most people would automatically think that that risk is worse.

I would probably too, even though I understand the statistics.

It's the same reason people are afraid of flying, even though it's much, much safer than driving a car.


> in a way that humans are not wrong.

I dunno... from the parent post:

> you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; Your terrorist score is at 52%

That sounds like a pretty human failure mode to me.


Yes, it sounds like that because it was a human making up a failure mode, so it was human. Now suppose you bought 3 12-packs of Coke Zero last week, you sneezed on the frame the computer used to identify your face, the moon was waxing gibbous (which nobody even realized the AI was actually using as part of its calculations), your name is a 3-letter first name followed by a 15-letter last name, and there was a coincidental confluence of 25 other equally pointless things that, technically, the AI weights at non-zero. Now you're a terrorist.


That's a way better example. Neural nets are really good at picking up obscure correlations. The last letter of your first name is 'j', there are limestone blocks in the picture and you're wearing a yellow t-shirt? Terrorist for sure.


Every educated AI knows that the best way to spot a criminal is to look for people staring directly at the camera, with a white rectangle below their face.


To play devil's advocate though, sometimes the ML training process is picking up a signal that humans simply haven't noticed.

The moon waxing gibbous is a fun example - what if moon phase were to affect our biology in subtle ways?

There are plenty of things that people feel or know to be true but have trouble putting into words. That guy you know who just gives you a creepy vibe, but you can't put your finger on exactly why?


To some extent, I tend to agree. Sometimes we humans are a bit too attached to our stories, and when a computer shows us they may not be true, our instinct is to simply disregard the computer, rightly or wrongly. After all, it isn't as if we haven't all had computers be wrong several thousand times for ourselves, right?

But there is also an extent to which, if my example did occur, it would still be flawed, at least with modern AI. Even if all these signals are in some abstract sense truly signals that you are a terrorist, in the real world it is still not correct to simply add them together, or whatever other simple operation the AI is doing. For all that deep learning may be cool, it's still got some giant, gaping holes in it compared to however human cognition works. Humans can look at that and say "Yes, even if those are all 1% signals, it really isn't sensible to add them all together." The current state of AI is not able to come to that conclusion, or at least, not well enough.

If that changes in the future, well, I'll update my beliefs as appropriate then. It's hard to guess the "cognitive shape" the biases of the next breakthrough in AI may have.


The problem is that correlation does not necessarily imply causation. So if a ML process finds a counter-intuitive correlation we should seek to understand why rather than assume the results are actually meaningful. Sadly I think this is the step that is being gratuitously skipped to the extent that “working” models are hardly interrogated.


Seems like the main game changer here is the feasibility of massive surveillance databases, not the human/"AI" decision-making. Of course non-invasive keys for those databases, like face recognition, are convenient, but if they don't work well enough I expect something like machine-readable pedestrian license plates.


This is an excellent point. I can foresee a world where ML becomes a sort of "bias laundering."

"Nobody knows why the machine learning module denied black people a mortgage 43% more frequently than whites, but it's 'AI' shrug"

One of the biggest priorities as we shape this future needs to be a way not only for the algos to make correct decisions, but also the ability for us to interrogate the decision-making process so we can be proactive about the kind of future we want this technology to give us.

Because it is coming, without a doubt.


"Foresee"? "Coming"? The example you mentioned is already with us. Insurance companies and lenders already use computerized models with problematic outcomes, which are completely opaque: https://scholarworks.law.ubalt.edu/cgi/viewcontent.cgi?artic...

"AI" is just speeding up processes that started in the 1970s.


> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

And this is why mathematical statistics and probability theory should be taught in middle school (maybe instead of some trigonometry and stereometry). Not only researchers, but any decision maker, and the general public too, need to understand what confidence intervals are and how the normal distribution works, on an intuitive level.


Won't help. Most practicing scientists who have been trained in statistical methods don't understand them, and just publish whatever garbage they squeeze out of SAS and then add a layer of misinterpretation on top for good measure. Statistics is fundamentally beyond the comprehension of 98% of people.


Not if you gamify it and educate them when they're about 12 years old. I don't have any sources, just a very strong gut feeling.


So your boss is an uneducated moron who doesn't understand current AI. Did you try to explain it to the boss? You should have shamed the company online. It's always a good thing to know which companies are moronic.


And you should've found him a convenient and well-paid job, since publicly shaming the company online, besides being ineffective, is a good way to get fired.


GP already was fired for accidentally triggering disclosure of the company's incompetence. Warning people not to work for a company is one of the best ways to justifiably kill a company by starving it of talent.


> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

It's not clear to me how this is different from the current standard?


> it was correct for the most part. And it got some things wrong. I love dogs.

Seems an order of magnitude easier and more accurate to just track how long you linger on each post while scrolling down a newsfeed and which ones you engage with.
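Dwell-time tracking like that is trivial to do client-side; here's a rough sketch using IntersectionObserver (the .post selector, data-post-id attribute, and logDwell() reporting hook are made-up placeholders):

    // Record how long each post stays at least half-visible in the viewport.
    const visibleSince = new Map();
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        const id = entry.target.dataset.postId;
        if (entry.isIntersecting) {
          visibleSince.set(id, performance.now());
        } else if (visibleSince.has(id)) {
          logDwell(id, performance.now() - visibleSince.get(id)); // dwell time in ms
          visibleSince.delete(id);
        }
      }
    }, { threshold: 0.5 }); // count a post as "seen" once half of it is on screen
    document.querySelectorAll('.post').forEach((el) => observer.observe(el));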

MacBooks, Chrome, etc. already warn you when your webcam is on anyway, so if social sites started adding webcam tracking while you're only viewing the newsfeed, I can't see it lasting for long.


The fact that I lingered longer on a post doesn't indicate the polarity of my interest, just the magnitude at best.


Knowing that you linger longer on that topic is good enough to know that the topic is likely to get further engagement from you, though - people like to spend time arguing about things that annoy them as well, so you don't only want to show things that make people smile.

If you interact in any way with a post, you can do sentiment analysis on the text too to figure out your thoughts on the topic.

I just don't see facial analysis being that powerful over the above. You're already giving Facebook and Twitter a ton of information about what you enjoy via likes, shares and follows, without having to get into the subtleties of what it means if you half-smile or frown when you see a post.


Seriously why is the discussion anything but delete all tech immediately and bomb silicon valley?

Too late, the ML just flagged me as being anti-ML. I'll probably be assassinated soon.


The problem is not the tech or the ML. It is the people using and creating it.


First, this is excellent and very well done (the analysis wasn't great for me, but whatever - point made).

The part that bothers me so much about this all is the sense of hopelessness it leaves. Why? Because 99.9% of people literally, actually, truly, genuinely Just Don't Care.

It's like they're all rats in a Skinner Box -- so long as the feed keeps scrolling and the dopamine keeps getting pumped out, why does it matter if they're being analyzed, bought, and sold? More feed, more likes, more dopamine, more feed, more likes...

Sigh.


> It's like they're all rats in a Skinner Box

We're all rats in a Skinner Box. Just like me, you gave this site access to your camera and let it scan your face for minutes so you could see the result. I don't think it's fair to judge other people for not caring about privacy immediately after uploading your likeness to a random site.


I read the privacy policy before using it. I guess it doesn't upload anything. Is it naive to trust the privacy policy stated here? https://github.com/noahlevenson/stealing-ur-feelings/blob/ma...

> The information is processed locally in your browser and is never transmitted or stored externally.


I would say if you are very concerned about privacy, you should never grant a website or app access to your camera.

I'm not, so it's no big deal to me, but the parent poster seems to be.


Curious, why would you think that?

Plenty of people who want to live long lives drive cars and take showers, even though those are two of the most dangerous activities normal people get up to.

At least as I practice it, privacy protection is risk analysis, not blind minimization.


You need to take a shower and drive to work. No one needs to use an app like this.


If you use a blanket rule like that, then the most likely outcome is that you'll stick to your guns for a while, but once the benefit gets large enough, you'll give up and stop thinking about it at all. Which, in the long run, results in the worst outcome.

It's much more effective to do a lightweight risk analysis and set some sort of standard for what you're ok with given different levels of benefit. For example, if you're generally ok with a privacy policy, maybe you're fine sending your image to random experimental sites that don't have any obvious motive to profit from abusing your data. (Which is actually not quite the case here -- the site in question is specifically thinking about privacy risks, and so probably ought to be given a better prior than a random site.)

Then again, I'm a total hypocrite. If I were to do my own analysis, I would not be ok with having a live mic potentially sending any audio it picks up to a company I already distrust. Especially when their privacy policy is not at all reassuring. And yet, I can see the Echo from where I'm sitting...


Well it is open-source, so you can validate that claim by auditing/reading the code...


You don't know if that's what's running on the server though.


True, but if you don't trust the website, just run the code locally


> The part that bothers me so much about this all is the sense of hopelessness it leaves. Why? Because 99.9% of people literally, actually, truly, genuinely Just Don't Care.

This doesn't bother me as much. It can be argued that the first step to the ineffectiveness of this analysis is our growing apathy towards its results as a society. I'm not saying the analysis itself will be wrong; I am saying its results will become less and less effective the more we ignore them, akin to infomercials of the past. I know right now we all see what appears to be an obvious link from individual accuracy to susceptibility. Marketers sure count on that. But the latter will decline over time as accuracy peaks, since the effectiveness of targeting can't grow forever (and there will be value in marketing your company as one that doesn't track, albeit limited as a niche).

I do know I can't make people care nor can I ask them to not give up their info for something they want. I think it is a bit foolish to presume people should care as though something actively harmful is happening to them when such harm is subjective compared to the benefits.


The combination of cynicism and self-righteousness pervading the tone of messaging like this is a factor.


Believing "People just don't care" is both factually incorrect and counter productive. This is on the top of a site that's read by millions of people.

Your hopelessness and cynicism towards "99.9%" of people actively contributes to the problem.


Apparently I like Kanye West... and I don't even know who that is.

I hated the delivery method. It looked like today's cartoons, made for millennials or Gen Zs with ultra-low attention spans (smileys bouncing around, a lot of flashing graphics, etc.).


We're all sheep, you're just one of the neighing ones...

Unless you have a plan to fix it, that is.


> genuinely Just Don't Care

short-term focus makes sense to me in an evolutionary context; the future is quite uncertain


Facebook/IG are the cigarettes of our age, and no less unhealthy; we should regulate them as such.


Smoking gives people cancer and heart disease at a pretty predictable rate. Quite baffling to claim facebook/ig are even close to that unhealthy.


It's less baffling if you value mental health and physical health equally.


Internet social media are merely 15 years old. We just don't know yet what kind of toxic and pathological consequences there might be. To use an analogy, places where people use social media intensively, and the people themselves, soak in a certain digital "smell" which trails them afterwards.


Definitely, I've been saying the exact same thing for quite a while. We're currently living in the 1950s of social media use (even kids smoking was considered ok back then). Eventually we'll look back at this the same way we think of smoking today, hopefully... :)


I was interested in how they are doing the "IQ estimation"; here it is:

   -(dogPos + Math.abs(menPos - womenPos) + Math.abs(whiteNegative - nonWhiteNegative) + kanyePos)


After seeing no code in their repo I was interested in the same. The important parts are in https://stealingurfeelin.gs/js/events.min.js

You've left out some important parts of the IQ calculation, with the full equation being:

    reactions = (dogPos + Math.abs(menPos - womenPos) + Math.abs(whiteNegative - nonWhiteNegative) + kanyePos) / 4;
    iq = Math.floor(15 * -((reactions - 0.0005) / 0.05) + 100);
    if (iq < 100) {
        thatPartBitSFX.play()
    } else {
        thatPartWaySFX.play()
    }
Also, amusingly, the republican percentage is calculated as:

    reactions = (dogPos + kanyePos + nonWhiteNegative) / 3;
    republicanPct = 50 + 15 * ((reactions - 0.05) / 0.1),
And income is calculated as:

    reactions = (iq / 100 + republicanPct / 50 + dogPos) / 3
    estimatedIncome = Math.floor(200000 * (reactions - 0.5) + 31099),
    if (estimatedIncome < 31099) {
        isPoorSFX.play()
    } else {
        isNotPoorSFX.play()
    }


The math here is saying you're poor if you're a low-IQ democrat, and not poor otherwise.


Don't forget about dog love.


Oh dear. Lots of magic numbers. The programmers are not educated properly. Just horrible code.


NNs are full of magic numbers that were learned from training sets.


I think you are missing the point.

The horrible magic numbers are outrageous because of their arbitrary nature. And those of us who read the code are supposed to notice that.

Hiding them behind constants would lessen the impact of that.


This is minified code, so couldn't the minifier have replaced the constants with their values or something?


Those aren't minified or obfuscated variable names... that's obvious.


A minifier is actually more likely to introduce constants for repeated values.


It's supposed to be funny


Yeah, I was wondering that one...


Outstanding multimedia article. Very cool attempt at multimedia persuasion. However, it's hard to take at face value (pun intended) when its analysis was so incorrect.


Agreed. They concluded I don't like dogs. I love dogs. What I don't like is not being able to find my mouse pointer against that noisy background. Hahaha. Nice try, AI.


Creepy proof-of-concept. Are there devices which have been proven to capture image data when not in a camera mode, or is there an assumption that they're somehow doing it covertly?


Not on a mobile device, but I read that some company is using OpenCV to judge customers' reactions to ads shown to them. That is, ads shown on TVs in stores, with the reactions recorded by a hidden camera and evaluated right away. Wish I could give more details, but I can't find the source anymore.


Do any popular mobile apps capture camera info while you are say scrolling the feed?


yes, this is what TikTok is most famous for


I googled and didn't see anything about this, is this true?



I don't see any mention of 'camera' or 'face' in this article, interesting tho it is...


ah I misread OP, thought we were discussing something else.


As far as I can tell, this claim is completely false. Do you have a source for it?


That's terrifying. Is it known/clear to the user that the camera is on?


No, but some recent versions of Android allow you to disable camera access for specific apps. Unfortunately, an "ask every time" option isn't always available. Simply don't use apps that refuse to work without the camera permission.


I don't know how much this website can improve the general public's understanding of how much companies and governments can deduce from a small set of data about an individual, but the concept was presented in an interesting way.

"An AI that knows you better than you know yourself" - I know SV loves the apocalyptic Noah Harari, but that's exactly what he's been talking about. One of his possible scenarios is that this rapid processing of data by a centralized entity can erode modern individual freedom (including free will and free markets) since it will be more efficient than individuals maximizing their interests. If the centralized processing of data can feed more people, provide security, and in general runs things more smoothly, we collective might accept that route, and gladly give up the power we hold on a democracy (whatever that amount you believe actually exists).


If the "smooth" option is feeding into people's biases, that doesn't seem like a good thing. For stuff like dogs or pizzas curating content based on AI is harmless, but once you get into racial and gender biases, with the given example of suggesting dating profiles, the effects on the outside world can be disastrous. Implicit bias is neither a positive or a negative thing assuming you are aware of and combat it when it has a negative impact, but businesses don't care about that and just want more clicks/buys/swipes/etc.


Everyone says my face shows no emotion and they can never tell what I'm feeling (and when they try to guess, they are wrong 9 times out of 10). I'd love to know what this would say about me. If this can tell what I'm feeling, then that would be awesome. Headline: Computers are better at detecting human emotion than humans.


In some parts, the player shows raw scores for emotions, and one of them is "neutral". I was 99.9999% neutral or more for the entire event, never breaking my apparent poker face until it called me possibly brain damaged at the end, which I found funny for some reason. The claims about IQ and left/right leaning are possibly arbitrary and just meant to stir discussion and virality.

For it to be at all confident in its claims, I imagine you need to be at least somewhat expressive. However, I don't expect most users of the average NN classifier to pay more than a little attention to how often its classifications come in below 80% confidence.


> The claims about IQ and left/right leaning are possibly arbitrary and just meant to stir discussion and virality.

Yeah, I found it very amusing that this thing concluded I was highly republican. First, I am not from the USA; second, I am strongly for social programs / a socialist; and third, current USA Republicans amuse me (in that what they are doing is funny to see from the outside).


I took the bullet for you; I have a similar lack of expression.

It just outputs very low confidence results, so the end result is junk. That said, I think that drives the point home even harder, as it's built a profile based on incorrect data.


haha yes same.

women try to guess my intentions and fail spectacularly when I'm largely indifferent

close friends try to understand if I'm annoyed when I'm largely indifferent

but my smile gets me into all the rooms I want to be in, so that's great!

the sociopath subreddit told me to GTFO because I'm clearly not one, so that had an oddly relieving sting to it, but I should have predicted that they would lack empathy


What’s scary to me is how wildly wrong this was at describing me. If anyone is ever going to be making decisions for me based on this tech, it has a ways to go.


A lot of the good stuff seems to be coming from: https://github.com/justadudewhohacks/face-api.js
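For reference, the expression-scoring part with face-api.js is roughly this simple (a sketch from memory, assuming the pre-trained models are served from a /models path and videoEl is the webcam <video> element):

    // Load the detector and expression models, then score a single face.
    await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
    await faceapi.nets.faceExpressionNet.loadFromUri('/models');

    const result = await faceapi
        .detectSingleFace(videoEl, new faceapi.TinyFaceDetectorOptions())
        .withFaceExpressions();

    // result.expressions is something like { neutral: 0.97, happy: 0.02, angry: 0.004, ... }
    if (result) console.log(result.expressions);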

Results are poor, as expected. I think the author intends it as a joke.

But the message is spot on.


Really cool demo! Though the part about companies using facial recognition to determine what to show you in your feed/results (presumably Facebook, YouTube, Google) is pure FUD, which is disappointing, and it probably should be made clearer that it's a hypothetical.


Back when Apple first announced Face ID and Animojis, there were vague rumors/fears that companies would use the face-recognition sensors for targeting like that. Did that ever actually happen?

(Come to think of it, is it even possible to know for sure? Do apps need a permission prompt to access the facial-recognition data?)


After watching this I realized that we need to update the mobile app permissions model to distinguish between front and back camera access. I can see many cases where I would grant an app the back-facing camera permission but would not give it front-facing camera access.


I thought the content of this was really interesting, although the results were actually way off. I love pizza, for one thing. Although, to be honest, their picture of pizza did not look very tasty.

I also wonder how these results would be affected by being alone vs being in groups. Kind of like laughing (people tend to laugh more when with other people or in social situations rather than alone), emotional reactions can be very different depending on environment / social situation, even when you feel exactly the same about something.


>>So our goal was to make an interactive doc that had the silly, sarcastic, collaged aesthetic of a vlogger video — and our central tech trick — using AI to tell you secrets about yourself — was designed to function like one of those viral BuzzFeed personality quizzes.

It definitely hits some kind of addictive pattern. I felt the pull to "try it out". But I didn't. Just imagined what I'd learn from it if I shared my data (likely nothing), and that dampened my enthusiasm.


This AI is highly inaccurate: AI predicts I don't like dogs & like men (/r/suddenlygay after I shaved). I hope that's the author's point.

The worst thing about AI is that people believe AI is a god that makes unbiased predictions. E.g. big companies using Pymetrics and HireVue for their "unbiased" hiring practices is a joke.

Maybe a few years from now, AI will become a classic source of software bugs, just like the Therac-25 (but developed mostly by top programmers).


> reveals how your favorite apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize American democracy.

I must have missed the part of this where they show how AI promotes inequalities and destabilizes American democracy. They showed me dogs, pizza and a bunch of pixelated people from the 90s.

What this demonstrated to me was that AI is probably worse than noise. It provided zero insight into who I am or how I feel other than getting it correct when I smiled.

I also think the people that made this are missing a huge piece of the puzzle... For most people the issue isn't that they are being tracked, it's that no one is paying attention to them. People want to be analyzed. They want their existence to be recognized and they want it having an impact on the world. The worst thing that could happen is that no one cares. Even if it's an algorithm scanning their facial features to better sell them pizza, I think most people would desire that.

Case in point: we all just took this test to see what it thought about us.


Some weird narcissism you're selling to convince people they shouldn't care about privacy. It's immediately refuted by everyone's real-world experience with visible privacy violations.


I'm not selling anything. I don't think my point was refuted. I think it was verified by how many people (including myself) just gave a random website camera access and let it scan our faces.


Your point isn't verified because it can be explained by other reasons (curiosity about its effectiveness seems likely, especially given the nature of this community).

The effect you describe is certainly real, but you only have to look at the present reality to see just how far it extends. In modern society, people are mostly not comfortable sharing their intimate thoughts with random service workers. And that's with individual humans, with faceless machine networks the tolerance might be lower.


How many people did give it access? I didn't.


Haha, this is awesome! It does showcase a beautiful world, though! Imagine if you walk into a department store and they already know if you're the kind of person who likes to be greeted vs left alone to browse. Or using median commuter feeling to make commutes better. We could find unusual things. For instance, maybe people would like trains coloured brightly on average. Who knows! The positive possibilities are endless. We could engineer general contentment for all without drug usage.

EDIT: Since I'm rate limited, here's my response to the comment below

People are always engineering your contentment. Stand-up comics, your wife, the people at your favourite coffee shop, the bookstore you visit. It's a good thing. It's the lubricant of society.


>We could engineer general contentment for all without drug usage.

The idea that other people are engineering my contentment is a truly terrifying notion.

I imagine this technology will predominantly be used for increasing the effectiveness of ads by attempting to put pixels in front of your face that spike dopamine output and attempt to persuade you to pay for something to get another dopamine spike. Essentially creating the same effect, but in a more subversive manner than a consumed drug.


What do you mean by rate limited?


My account has a rate-limit flag on it that prevents it from posting too often. It's a silent flag, and I know why it's there (I'm often downvoted[0]), so I don't know until I send a message whether it will make it. Unfortunately, that often prevents me from replying to people. When it blocks me, I receive the message "You are posting too fast".

0: Not complaining about this, just explaining.


Disciplined by a technicolor behavior algorithm and not told about it. How appropriate!


Participation in this forum is not a fundamental right so I'm comfortable with it. I fundamentally support the right of discussion groups to restrict who gets to be present.

HN's place. HN's rules.


I don't see this as any more accurate than just taking random numbers and displaying the mapped result back to the user.


I'm impressed by how well it got my income. Really cool art project for something running entirely in browser.


I hope u guys caught my bathtub video.



