Project Aria 'Digital Twin' Dataset by Meta (projectaria.com)
180 points by socratic1 on July 20, 2023 | 90 comments



First thing I notice while browsing from EU is that there's only one option regarding cookies. Accept all! Even if I click "Learn more" there's no "accept necessary cookies" or "Reject cookies". First time I encounter something like this.


This is common on many, many sites like this because they do not have any tracking cookies or anything else that they would need consent for, but they're still required to display a cookie banner "notifying" you that cookies are "in use" as per the terms of the old 2009 ePrivacy Directive. In this case, it appears that projectaria.com sets 1) one cookie for the user's DPR (1 or 2) so that the backend can serve optimized images, 2) one cookie for the user's locale, and 3) one cookie for a CSRF token for form submission.
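
For illustration, this is roughly what such a purely functional setup looks like server-side. A minimal sketch assuming a Python/Flask backend; the cookie names mirror the ones described above but are my guesses, not Meta's actual implementation:

    import secrets
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.get("/")
    def index():
        resp = make_response("...page body...")
        # Device pixel ratio (1 or 2), so the backend can serve optimized images.
        resp.set_cookie("dpr", "2", samesite="Lax")
        # The user's locale, used only to localize content.
        resp.set_cookie("locale", "en_US", samesite="Lax")
        # Anti-CSRF token for form submission; a security cookie.
        resp.set_cookie("csrf_token", secrets.token_urlsafe(32),
                        httponly=True, samesite="Lax")
        return resp

None of these identify the user across sites, which is what makes the "strictly necessary" argument at least plausible.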


> but they're still required to display a cookie banner "notifying" you that cookies are "in use"

Common misconception, but this is not true. If you use cookies only for functional purposes (not for tracking, for example), you do not need to show any cookie banner. For instance, if you have a shopping cart and a cookie keeping track of what's in it, that cookie serves a functional purpose for the user and hence needs no notice.

The UK's ICO made a handy summary for people who are curious about what the directive actually says: https://ico.org.uk/media/for-organisations/documents/1545/co...

Specifically:

> Exceptions from the requirement to provide information and obtain consent

> Activities likely to fall within the exception: [...] Some cookies help ensure that the content of your page loads quickly [...] Certain cookies providing security that is essential to comply with the security requirements [...]


> Common misconception, but this is not true. If you use cookies only for functional purposes (not for tracking, for example), you do not need to show any cookie banner. For instance, if you have a shopping cart and a cookie keeping track of what's in it, that cookie serves a functional purpose for the user and hence needs no notice.

Personally, I would not put a cookie banner of any kind on my website. However, given this text:

    The term 'strictly necessary' means that such storage of or access to information should be essential, rather than reasonably necessary, for this exemption to apply. However, it will also be restricted to what is essential to provide the service requested by the user, rather than what might be essential for any other uses the service provider might wish to make of that data. It will also include what is required to comply with any other legislation the person using the cookie might be subject to, for example, the security requirements of the seventh data protection principle.

    Where the setting of a cookie is deemed 'important' rather than 'strictly necessary', those collecting the information are still obliged to provide information about the device to the potential service recipient and obtain consent.

I think it's clear why a more risk-conscious organization like Meta might take a more conservative reading of "strictly necessary" that does not apply to e.g. bandwidth optimizations keyed to a device's DPR.


You'll notice those last three words "and obtain consent".

Either the cookies are strictly necessary, in which case there is no need to display a banner, or they aren't, in which case you have to ask the user for consent.

"List non-necessary cookies, but don't ask for consent" isn't an option.


They are not necessary.

> One of the ways we use cookies is to show you useful and relevant ads on and off Project Aria.


But it's easier and less risky to just always put in the standard language that everyone ignores and mindlessly clicks through anyway. Which is why this was very silly legislation. People helping develop future legislation (in the EU and elsewhere) should be aware of this as a cautionary tale of incentivizing theater with only cost and no benefit.


Then why does the banner say this?

> We use cookies to personalise and improve content and services, deliver relevant advertisements and increase the safety of our users


It’s probably the default language for the company. Technically, that allows them to have tracking cookies even if they don’t have them now.


They explicitly say

> One of the ways we use cookies is to show you useful and relevant ads on and off Project Aria.

Also no, it doesn't let them do that, because that's not how the law works. There must be an opt-out.


Doesn't it have to be opt-in? You need to give consent. You can't give consent until they've told you what cookies there are and you click a button. If you don't click, those can't be added if you're in the EU.


Right, I'm sure this is boilerplate language provided by a law firm.


Deceptive


Language is all that matters when it comes to law. It's a blatant violation of GDPR.


What if the banner language suggests they might break GDPR, but in reality they are not doing those things? If my SaaS forces you to select a checkbox that states you're agreeing to allow me to set fire to your house (which is illegal), would the sign-up itself be breaking the law? IANAL, but I don't think it would be; I won't be breaking any laws until I commit arson.


It's breaking the law because you're essentially being forced to consent to something that they legally must give you the option to opt out of. It's not about them doing it; it's about the validity of their request for consent.

If I hire a hitman to murder somebody, but the hitman chickens out, I'm still guilty of having hired a hitman, even if nobody died.


AFAIK, that is not true. Cookie banners are only required if cookies are used for tracking purposes.


I have to admit, for technical folks like ourselves, I don't understand why you care what options the dialog presents. Just use a "Kill Sticky" plugin to nuke the stupid dialog so you can read the page, or even Accept them, and then instruct your browser to do whatever you like with the cookies the site creates (i.e. delete them). It's all in your hands, the popup dialog doesn't do anything you can't do yourself.


Learned helplessness. Teach the citizens that the only thing protecting them from the Big Scary Internet is their benevolent government. Meanwhile the politicians collect millions from the mass media companies that lobbied for the god-awful implementation of the law we all ended up with, and as everyone gets worn down into Accepting All always, they simultaneously forget that their User Agent holds all the cards (or in this case, cookies), and it can be instructed to do anything the User wants with them.


Did you just take a problem that free market tech created and blame it on the government? ;)

User agent sovereignty would be nice... except the most used browser and 1/2 of smartphones are controlled by Google, the largest ad tracking company on the planet.

We're way past the 90s.


This particular problem with annoying cookie dialogs is actually a government-created problem though...

But I do agree with you that "free market tech" created the problem of "tracking cookies are ubiquitous and users don't know how to control them". But then regulators just layered another annoyance on top of that, instead of solving that actual problem.


Google, the same company that provides a litany of fine-grained cookie retention options in its User Agent's options page? Except, thanks to the government's antagonistic policies (read: big media's lobbying), those are useless, as you need to keep clicking the damn pop-ups every time you visit a page "anew".

Also, never forget that we already had a perfectly good solution in the form of the Do Not Track header, which a benevolent governing body would simply have mandated abiding by. Instead we have this shithole.
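
For context, honoring DNT would have been trivial, since it's a single request header. A sketch in Python; the header is real, the helper name is made up:

    def may_track(headers: dict) -> bool:
        # Browsers with Do Not Track enabled sent the header "DNT: 1".
        # A server that respected it would simply skip all tracking logic.
        return headers.get("DNT") != "1"

    assert may_track({"DNT": "1"}) is False   # user opted out
    assert may_track({}) is True              # no preference expressed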


DNT was ignored from the time it was implemented.

The only way it ever would have been respected is if users had been required to cryptographically sign an acceptance of cookies, with the server then required to retain that attestation as proof of acceptance (sketched below), subject to legal liability if it were found in possession of tracking data without a valid attestation.

Absent enforceability, even when the server actively and maliciously decided to ignore it, it was a toothless solution.
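
To sketch that hypothetical scheme (my illustration, not anything in actual law): the user agent holds a keypair and signs each consent, and the server must retain the signed attestation. Using the Python `cryptography` package's Ed25519 API:

    import json
    import time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    user_key = Ed25519PrivateKey.generate()   # would live in the user agent

    # The user agent signs exactly what was consented to, and when.
    attestation = json.dumps({
        "site": "example.com",
        "purposes": ["ads", "analytics"],
        "ts": int(time.time()),
    }, sort_keys=True).encode()
    signature = user_key.sign(attestation)

    # The server stores (attestation, signature, public key) and must produce
    # them if a regulator finds tracking data tied to this user. Verification
    # raises InvalidSignature if the record was forged.
    user_key.public_key().verify(signature, attestation)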


The EU just should have said "if you don't respect DNT, go directly to jail and don't pass Go".


How would not respecting DNT have been proven?

The rub here is that everyone wanted to ignore DNT, because it made them lots and lots of money.


How can you prove that they respect your preferences in those consent-theater pop-ups?

I think those pop-ups are the worst thing that ever happened to the web, because they eliminated the moral authority anyone had to say "it is user-hostile to use pop-ups". Once the EU made them appear "required" and even "laudable" or "prosocial", there was no basis to say "you shouldn't put in this other popup that will make users feel harassed".

So now we get pages where the popups get in the way of the other popups.


Point.

I suppose the formalism around popups, and specifically the EU's decision to start levying fines on entities that used dark patterns to dodge the spirit of "accept/reject must be equally easy to click", convinced me that a user-visible mechanism was a better way to win the fight.

Granted, it's not a technically optimal solution, but it may be a politically optimal one. Vis-a-vis the people vs the advertising industry.

I'm unconvinced that DNT would have ever garnered the same support as something that people, and specifically politicians, can see. Which would have led to ad money quietly carrying the day.

I'm hopeful that after we've chiseled "Thou shalt respect user decisions" in stone deeply enough, we can flip back to enabling a user agent to automatically respond to that question for us.


It does make visible how absurd the situation is. You might imagine an advertising system would require one or two cookies, but it's shocking to see that some ordinary site has 40 third-party cookies. Some of that is the use of embeds from the likes of Facebook, Twitter and YouTube, and some of it is the "knives out" situation where nobody trusts anybody in the adtech universe, so the answer is to have 10 different authorities collecting information on the assumption that they can't all be colluding with each other. (E.g. everybody has a reason to understate or overstate views or clicks, and naturally there is attrition in the pipeline, so the numbers won't add up perfectly.)


GDPR doesn’t involve any of your crypto-nerding so I don’t see how this is relevant in any way.


Is it that time of the day when I make my Brave advocacy again?

Brave has shown that we can have a user agent that is aligned with the user, even if the browser engine is made by Google.


The same Brave that is selling data to AI companies, which may or may not include personal data?

I'd say it's probably not a good day for Brave advocacy. Neither is any other day.


That's a very good example of "argument-by-Google". You know what conclusion you want to achieve, so you just go around looking for statements that are either taken out of context or misunderstood and that can be backed by a (shallow) google search.

For the record: go back to the article that you are (wrongly) alluding to [0] and see how much the author has retracted. Also, see the response from Brave's Chief of Search.

I "have" to keep advocating them because all the opposition that is presented is always based on false information, biased and prejudiced and clearly made by people who never used the browser or tried to understand the value proposition.

There are tons of things to criticize about Brave (their "partnerships" with Binance and Solana, their complete lack of interest in making BAT an actual currency for payments online, their completely missing the train on decentralized social media), but none of that ever comes up from the detractors, only this kind of bullshit that you bring up.

[0]: https://stackdiary.com/brave-selling-copyrighted-data-for-ai...


I'm just going to quote the updated, follow-up article:

The Brave Search API does not respect the site's licensing, and Brave is under the assumption that 1) because they are a search engine and 2) because they attribute the URI of data - this puts them in the clear to scrape and resell data word-for-word.

Brave steals data and resells it, and is not to be considered a trustworthy entity.


The article went from "selling personal data to AI companies" to "selling results in the API search with a longer summary than Google which might be a violation of fair use policy", and yet you still don't want to back down.


Aside from what ethbr0 said, with which I agree, that you're blaming the victim, I want to address the "learned helplessness" idea.

I heard recently that this is actually wrong. We are born helpless and learn to take control. The helplessness is innate, and we learn to overcome it.

In a democracy, the innate helplessness of citizens is overcome by learning to participate in governance: activism, elections, public functions and so on.

The people who say "government does nothing good ever" are the ones who want to keep people in their natural helpless state. It's like telling a student, "you're doing it all wrong and can never be good".


You are definitely not born helpless. Babies keep screaming until they get what they want. They also tirelessly try to master mobility. Learned helplessness would mean they wouldn't even try crying or moving.


I think you're arguing beside my point. I am referring to the psychological concept of "learned helplessness", and to how that concept is wrong. It doesn't imply that helpless people fall into a total coma, just that they don't attempt certain things.


Speaking from experience, babies and toddlers will attempt to do absolutely everything, including flying (which will fail) and using smartphones (where they succeed remarkably).

Learned helplessness can, by definition, manifest only after you have tried something and failed. Hence you cannot have learned helplessness at birth, because you have not yet had an opportunity to try anything.


Please don’t tell me you’re unironically arguing “learned helplessness” doesn’t exist because you “learned” you’re born “helpless”, and that the only path to actualizing change as an individual is through official, government-sanctioned mechanisms… because if that is your honest argument… wow.

For reference, in my experience, the public works projects that get front page news coverage with tons of anecdotes from locals about how incredibly helpful and long-needed the installation was, are those that were completely unsanctioned.

And the only path to substantial policy change in all of history has always been violent revolution.


I am not sure what your argument is. I explicitly list activism as one of the options, so I don't claim you have to always go through official channels.

But especially in a democracy, you have a lot of opportunities to use official ways to institute change, like voting or being elected.

I also think there is plenty of positive social change that happens non-violently.


I have to admit, for the non-technical folks like not-ourselves, I don't understand why people don't know about such "simple lifehacks" -->

Your knowledge is sound, but rather than condescendingly relegating people to your 'simple' workaround: the ENTIRE premise of cookies and tracking against one's implicit desire to be private is asinine.


You should care, because often this agreement is not about the technical detail of cookies, but about allowing the company you are interacting with to share data about you with third parties.


If you think which HTML div element you click on to dismiss a sticky banner has any bearing whatsoever on how your data is handled server-side, I've got a trip to the Titanic in an OceanGate submersible to sell you.


Legally it does, at least if you're in the EU. The only reason websites are slow on the uptake here is that the legal gears grind slowly; however, we have seen many considerable fines in this space in the last few years, and things are improving.


It's also in direct violation of GDPR.


Or just use a private window in the browser and then close it.


That assumes that the entity ignoring the law is operating within it to the extent of requiring your consent. GDPR goes much wider than just cookies.


It’s amazing how much worse browsing the web has gotten with all these pop-ups everywhere.


You mean how much worse surfing the web got once it became visible how much companies are tracking us? The problem is the tracking, not the banner. If you don't sell your readers' data, you don't need a banner.


Noticed the same thing as I always select ‘necessary only’ or something like that.

Instead, I just closed the page and clicked on HN comments to see what it was about.


This one is tame compared with a whole industry of "data privacy" popups which hide "Legitimate Interest" opt-outs within Vendor lists containing hundreds of entries. Someone needs to slap some serious lawsuits on these gangsters.


This is a perfect example of how GDPR can present challenges to innovation. The fact that the top comment on this announcement revolves around GDPR compliance and associated fines raises questions about whether companies will be motivated to share research and open-source datasets in the future.


Isn't that a GDPR violation since they're not allowed to prevent access if you refuse to share data not necessary for the service to function? Since it's from Meta I suspect regulators would enjoy another thing to add to the list for future fine calculations.


It depends. If everything stored & tracked is genuinely necessary (as defined by the regulations) then consent is not required. If they are just telling you that they are storing & tracking necessary things like session information, then all is good (unless the things they store & track are not, in fact, strictly necessary and non-privacy-affecting).

Of course, if they don't need consent, then why are they making a song and dance about it by having us click an accept button?


They explicitly say a use is for ads on and off this site.


Not at all. You can have a single prompt with a single accept button if you are only using functional cookies on a site.


In that case, you don't need anything, no notice nor consent required at all, get rid of the entire modal/popup/banner.


> First time I encounter something like this.

Facebook itself used to have this exact banner, with no alternative, until it was strong-armed into properly complying with GDPR.


That's a common dark pattern.


It’s not really a dark pattern, because they’re not tricking you into accepting or coercing you into doing so; they’re just not allowing you to decline. Which means that if not accepting cookies is a dealbreaker for you, no problem: just move along.


"Dark pattern" is a term normally used for a legal but immoral user experience. In this case the legality is questionable, since they redirect you to external sites such as https://optout.aboutads.info to manage your "consent" (at least from a European GDPR perspective).


Yep, closed it down immediately. IANAL, but that might not be legal in Europe.


While acknowledging the utility of this kind of object/environment mapping in AR applications, the privacy obliteration is stark.


Yes, and it scares the shit out of me.

Facebook, the company that nobody trusts to do this sort of thing, is going to have to work really hard to demonstrate that it can be trusted with this data. Apple, and to a lesser extent Google, don't have to.

A cool startup could get away with lots of things, so long as people like the product.

Fortunately for us, AR glasses are limited by power consumption, which means they can't really do always-on realtime streaming of data to the backend for mining. Sure, you could have always-on, millimetre-accurate location, but you can't have video recording at the same time. If you want facial recognition, you'll have to stop the music playing.

Now, what would help is a decent set of privacy laws, e.g.:

Any camera smaller than X must only allow recording of data from persons that expressly allow it, unless in the public domain. People attempting to re-create personally identifiable data from such sensors will be liable to 5 years in jail and/or an unlimited fine. (Insert carve-outs for legitimate research and persons working towards providing evidence for court cases.)

This isn't perfect, but it's a lot better than what we have now.


I think you underestimate the power consumption of streaming video compared to what a VR headset already does. I mean, they already Chromecast the feed if you so choose.

I will say that Meta is fairly aware of their reputation. The TOS is clear that they do not upload or share any video capture and they seem committed to it. As it is now, all the scene understanding is done on device.


> I think you underestimate the power consumption of streaming video compared to what a VR headset already does

Ray-Ban Stories have a 167 mAh battery [1]; the Quest 2 has a 3640 mAh battery [2], and even then it only lasts about 2 hours rather than an entire day.

[1] https://beckystern.com/2023/04/30/ray-bans-stories-teardown/

[2] https://www.meta.com/gb/legal/quest/product-information-shee...
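
A back-of-envelope comparison using those figures (assuming a nominal 3.7 V cell voltage for both devices, which is a guess on my part):

    glasses_wh = 0.167 * 3.7   # Ray-Ban Stories: ~0.6 Wh
    quest2_wh = 3.640 * 3.7    # Quest 2: ~13.5 Wh

    quest2_watts = quest2_wh / 2   # ~6.7 W average draw over its ~2 h runtime

    # At a Quest-2-like draw, the glasses' battery would last only minutes:
    print(glasses_wh / quest2_watts * 60)   # ~5.5

Whatever the exact efficiency differences, a roughly twentyfold battery gap is why always-on capture and streaming on glasses-sized hardware isn't plausible today.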


Genuine question: what is an egocentric dataset? ELI5 please. What can it be used for?


Egocentric means that the sensor's frame of reference is the same as the ego (self). The person walking around is wearing the sensors/glasses themselves, and their pose is given as "ego pose".

This dataset is clearly targeted at research on AR/XR/VR applications.
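
To make "ego pose" concrete: a sample is essentially a timestamped position and orientation of the wearer's head. A minimal illustrative shape in Python; the field names are my assumptions, not the actual dataset schema:

    from dataclasses import dataclass

    @dataclass
    class EgoPose:
        timestamp_ns: int   # capture time on the device clock
        translation_m: tuple[float, float, float]          # x, y, z in the world frame
        rotation_wxyz: tuple[float, float, float, float]   # unit quaternion

    # One sample: standing at the origin, head ~1.6 m up, facing forward.
    pose = EgoPose(
        timestamp_ns=0,
        translation_m=(0.0, 0.0, 1.6),
        rotation_wxyz=(1.0, 0.0, 0.0, 0.0),
    )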


+1

I’ll “yes, and” here… beyond AR/VR, a more powerful use case is multi-modal learning (with RL), which Meta is probably the leader in, IMO.

Example paper here: “Towards Continual Egocentric Activity Recognition: A Multi-modal Egocentric Activity Dataset for Continual Learning”

https://arxiv.org/abs/2301.10931

This, IMO, is the pathway to AGI, as it combines all sense-plan-do data into a time-coordinated stream (see the sketch below) and mimics how humans transfer learning to children via recorded demonstration and behavior authoring.

If we can create robots with locomotion and dexterous manipulation, egocentric exploration, and a behavior authoring loop that uses human behavior demonstration and trajectory reinforcement, well, we'll have the AI we've all been talking about.

Probably the most exciting area of research that most people don’t know or care about.

That’s why head-mounted, all-day egocentric AR is so important: it gives eyes, ears and sense perception to our learning systems, with human-directed egocentric behaviors guiding the whole thing. Just like pushing your kid down the street in the stroller.
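
A toy sketch of what "time-coordinated" means in practice: independently timestamped sensor streams merged into one causally ordered stream for the learner. The streams and rates here are invented for illustration:

    import heapq

    # (timestamp_ms, modality, sample) tuples; each stream is already time-sorted.
    rgb = [(t, "rgb", f"frame_{t}") for t in range(0, 100, 33)]       # ~30 Hz video
    imu = [(t, "imu", (0.0, 0.0, 9.8)) for t in range(0, 100, 5)]     # 200 Hz IMU
    audio = [(t, "audio", b"\x00" * 160) for t in range(0, 100, 10)]  # 100 Hz chunks

    # heapq.merge yields every sample in global time order without
    # materializing the combined stream in memory.
    for ts, modality, sample in heapq.merge(rgb, imu, audio):
        pass   # feed (ts, modality, sample) to the learner in causal order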


Just to make sure I understand your excitement: we need guinea pigs, ahem, people to wear 'head-mounted, all-day egocentric AR' with who knows how many integrated sensors for long stretches on end, so we can finally get to our fabled AGI?

That is some B.F. Skinner-level future we're aiming for; only this time around, humans become the fully surveilled 'teaching machine'.


Well, no... not guinea pigs. But conceptually correct, if it's opt-in only and it's perfectly transparent to everyone what is happening, which in this specific case of Aria it absolutely is.

If we want to make machines with capacity equivalent to or better than humans', we have to transfer to them the process of scientific discovery, including the sum of our cognitive capacity and knowledge.

If you quantize human adult-infant interactions, then it boils down to human adults introducing learning trajectories, labeling input data, and biasing weights with reinforcing behaviors for new reinforcement agents. If we can re-build the infrastructure to do precisely that, where the agent is in the place of the infant and society is in the place of the "human adult", then we will have re-built, at scale, the process of human development.

The best way we know how to do this today is implementing transfer learning approaches from the basic human developmental research. I started down this road back in 2010 trying to follow the work of Frank Guerin out of the University of Aberdeen [1] [2].

[1]https://www.surrey.ac.uk/people/frank-guerin

[2] https://scholar.google.co.uk/citations?view_op=view_citation...


But what about observer effects? People act differently when recorded, and rarely do we catch humans acting naturally when they know they're observed (some of the early 24h/day Twitch streamers come to mind). And what happens once trials are done? How would people feel about their actions becoming part of a technology potentially able to replace them?

Even when this barrier can be overcome (i.e. people become accustomed to wearing these devices), I worry about the opt-in nature of it. We've yet to see a disruptive technology adhering to this principle through-and-through, and if current learning efforts are anything to go by, training data is not something companies want to willingly let go or lose out on.

Taking both together, this path has the potential to be quite coercive if no strong guarantees or safeties can be upheld, especially if early exciting trials generate an interest boom similar to the one we're seeing right now in the LM space.


This is a great point, and it's why I advocate so vociferously that all of these systems, and the future organizations going in this direction, should be cooperatively owned, based on mutual, voluntary, democratic principles, rather than owned by a small subset of wealthy individuals in your standard business construct.


That would be a welcome future, indeed. And hopefully, not just upheld in some regions of the world, but everywhere where AR-backed AGI gets off the ground. And this governing structure would need to work for some decades at least. Which would be quite a feat.

That still leaves my first question regarding observer effects and how people would respond to such a technology on an individual level. It would have the capacity to reshape behaviour towards preferential and/or optimal interactions, would it not? Seeing as we do not want to reinforce models with 'erroneous' interactions.


TBH I don't know, and I think there's a real chance that there are going to be actual changes in how people behave as a result, which, if integrated like many other social changes, will become another layer in the fabric of society, displacing another. For better or worse, I think it's just an exposure thing.

You are persistently surveilled in London and Shanghai and New York City, yet people act just as unhinged as they did before cameras were installed.

I'm not sure what other data acquisition/technology arc is possible, though, and I'm open to ideas.


> You are persistently surveilled in London and Shanghai and New York City, yet people act just as unhinged as they did before cameras were installed.

Unhinged people do, but ordinary people? I'd be willing to bet that normal people who are in areas where they are aware they're on camera don't behave as their normal selves. It's hard to see how it could be otherwise.


Is that model (parents giving labeled input and affecting some weights in the child’s head with reinforcement) really a good fit for the reality of how people learn to do things?

It’s my understanding (though I haven’t looked at the primary sources myself) that one of the facts that inspired Chomsky’s language theories and work for instance, was that when you quantify the information communicated by parents to language learning children, there’s actually not very much of it. Not nearly enough to support that what’s going on is anything like the kind of learning embodied by machine learning models.

If that’s true, and there is something about how to act intelligently/humanly already encoded in children (maybe genetically?) and not communicated by this sort of training, wouldn’t ignoring that and trying to get to it purely in this machine-learning way be... at least not at all informed by evidence or examples of it working in nature?


So this is extremely complicated and nuanced with respect to intelligence acquisition, and I don’t think there’s a definitive right or wrong answer.

I certainly acknowledge my own bias here. However, with respect to what Chomsky discusses, I make the distinction that most of the “code/data/information” that you need in order for the language capacity to develop is actually embedded in our biological mechanical systems. That is to say, if you were to take a human infant and never expose it to another human with respect to generating sounds for language, the infant would still develop some sort of sound-based communication system. We see this with feral children, mute children, deaf children. They still have a verbal function, even if it’s not connected to any semblance of coherency.

So in that sense, it’s like you’re given all of the building blocks for language out of the gate biologically, and then the people around you tell you how to assemble them into something that is functional. This is why different languages have different rules, yet language acquisition is consistent across cultures.

This is why I am insistent on holistically understanding the computing infrastructure and systems, because the sensors, processors, etc. are the equivalent of our cells, genes, muscles, bones, etc. Most people don’t think about computing systems and generally intelligent systems this way.

If you go back and look at the work of Wiener and early cybernetics, it does discuss a lot of this; however, after cybernetics was absorbed into artificial intelligence, which was in turn absorbed into computer science, the field stopped looking holistically at systems of systems, unfortunately, in the general case.

And I would argue that all of machine learning is currently moving in the direction I am describing, where it is exposure to the frequency of correlated data that gives you your effective understanding of the world and the ability to predict future states. That’s what I mean when I say multi-modal is “sequential and consistent in time” with respect to causal action.


As with most technology, there are plusses and minuses.

If used correctly ("if" is doing a lot of heavy lifting here), this type of system, with eye gaze, IMU and microphones, would provide much, much better hearing aids than the current state of the art, at a much cheaper price (go look up the price of hearing aids, it's _extortion_).

Using gait analysis, it would be possible to predict when someone is prone to falls, allowing much longer independence for older people.

Assuming it's possible to understand who you are talking to and what they said, you could mitigate dementia and support those who have it much more than we can now.

However.

You also have a vast network of headsets with highly accurate, always-on location, able to see what you are looking at, who you talk to, what you say, and in some cases what you feel about things.

Add in some basic object/facial recognition and you have an authoritarian's wet dream.

Now is the time to regulate, but alas, that won't happen.


+1

Applications of embodied AI are very interesting. Additionally, a lot of hard problems are increasingly being solved in simulation like this; see Wayve's GAIA world model.


> a behavior authoring loop

A behaviour that authors loops?


I would read it as “a loop that authors behaviors”.


As you know, to make machine learning work, it needs loads of data.

Most pictures and videos are taken from a camera at arm's length, not attached to someone's face.

So if you want to make AR glasses "see" and "understand" the world from the point of view of a human (i.e. navigation, where is X, etc.), then you need to make a dataset with that sensor configuration.

https://www.youtube.com/watch?v=6vnZCwf5_QE has a simple example from CMU, rather than Facebook.


This will be complementary to the SecondSelf Skinset Pro: https://samkriss.substack.com/p/how-to-enjoy-your-secondself...


Non-commercial licence :[


"I can't use this to make money". Sad face.

If I had a company, which is a for-profit entity, I wouldn't want competitors to use my company's tech for free either.


I wonder: let's say one uses it in an open-source project that accepts donations. Is that non-commercial enough? Or is it a no-no because money is involved? I'm just curious.

I'm glad such free datasets exist even if they are restricted to educational/hobby use.


You can't use it in an open-source project. Open source licenses don't restrict commercial use.


Love Meta. They fell face-first into the "Metaverse" and now are just releasing all their AI-related work to juice their stock and look "innovative".


> All sequences within the Aria Digital Twin Dataset have been captured using fully consented researchers in controlled environments in Meta offices.

The requirement for, and boastful tone of, this heading is a frightening tell about the company's and the industry's perceived practices.



