Hacker News

You can't "delete" anything from the Internet. Despite Snapchat saying you can. Or a big button that says "Delete". Furthermore, Facebook doesn't need you to have an active, visible profile for it to collect data on you. They still have tracking pixels. "Erasing" comments and likes doesn't do anything or get to the heart of the issue.

The best way I know of to combat Facebook is to poison their data. Which means taking the effort to "friend" strangers across the world, "like" random things, visit websites you don't care about and basically blur your profile to the point that Facebook can't tell who you are or what your actual interests are. It's quite an effort over years.

But no. Everyone wants a button. Which is how we ended up in this situation to begin with.



I don't understand why more people don't understand this. Each "post" is a record with a field called "deleted". You can make that field True. You can't remove that record.

"Deleting" only gives the database owner one more piece of information about you: that you wanted to delete that record.
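A minimal sketch of that pattern (a hypothetical schema, not Facebook's actual one), showing that the "Delete" button typically just flips a flag and records when you flipped it:

```python
import sqlite3

# Hypothetical "soft delete" schema: rows are never removed,
# a flag is flipped -- and the flip itself is new information.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        body TEXT,
        deleted INTEGER DEFAULT 0,   -- 0 = visible, 1 = "deleted"
        deleted_at TEXT              -- when the user asked for deletion
    )
""")
db.execute("INSERT INTO posts (body) VALUES ('hello world')")

# What the "Delete" button typically does:
db.execute(
    "UPDATE posts SET deleted = 1, deleted_at = datetime('now') WHERE id = 1"
)

# The record -- and the fact that you deleted it -- are both still there.
row = db.execute("SELECT body, deleted FROM posts WHERE id = 1").fetchone()
print(row)  # ('hello world', 1)
```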


I assume they did some sort of paranoid delete. I was thinking of getting rid of my Facebook account (once I deal with OAuth), and I was wondering whether it would be better to delete my account or write a script to flood it with garbage.

I was thinking of having it run as a cron job on my laptop: log in via chromedriver, accept a couple of random suggested friends, download some random images and upload them tagging myself, grab some random comments off Twitter and post them as my own, like random posts, remove a few real friends. Repeat.
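As a sketch, the rotation above might be driven by a weighted action picker like this (action names and weights are invented; each action would then be executed through chromedriver or similar):

```python
import random

# Hypothetical mix of noise actions; the weights keep the bot's
# behavior from looking uniform (and thus trivially machine-like).
ACTIONS = [
    ("accept_suggested_friend", 0.15),
    ("upload_and_tag_random_image", 0.10),
    ("repost_random_tweet", 0.30),
    ("like_random_post", 0.40),
    ("remove_real_friend", 0.05),
]

def pick_actions(n, rng=None):
    """Sample n noise actions for one cron run."""
    rng = rng or random.Random()
    names = [name for name, _ in ACTIONS]
    weights = [w for _, w in ACTIONS]
    return rng.choices(names, weights=weights, k=n)

# One nightly run might do, say, 3 random actions:
print(pick_actions(3, random.Random(0)))
```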

Maybe even Facebook Munging as a Service. :D


I've been entertaining this idea for a while.

Train a NN/markov chain on enough of my own and publicly available facebook posts and comments that I can make it spew out realistic-looking posts and comments on demand.
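A minimal word-level Markov chain along those lines (a sketch; a real version would want a much larger corpus and some punctuation handling):

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    """Map each pair of consecutive words to the words seen after it."""
    model = defaultdict(list)
    for post in corpus:
        words = post.split()
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=20, rng=random.Random(42)):
    """Random-walk the model to spew out a realistic-looking post."""
    state = rng.choice(list(model))
    out = list(state)
    for _ in range(length):
        followers = model.get(tuple(out[-2:]))  # assumes order=2
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on your own posts, the output statistically resembles your writing style while carrying no real information.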

Make another script that likes and unlikes pages, joins and leaves groups.

Make another that uploads random photos (or randomly generated image content) and tags me in them.

Stitch them all together into something that runs all of those on my account, generate so much noise that it becomes a tedious exercise to identify real comments and posts simply by virtue of volume.

For bonus points, run it from my home computer.

Would it be perfect? Probably not, there's probably way more you could do to simulate a real user, but it'd make a fun project if nothing else.


A lazy implementation of this idea might be to just have it post content from https://www.reddit.com/r/SubredditSimulator/


You say lazy, I say making use of existing resources :P (or not re-inventing the wheel).

Seriously though, Subreddit Simulator is probably actually a pretty good idea: a lot of them come with a handy image/gif attached and occasionally you get results that could pass as being written by a human.


That sounds human, albeit confusing. Very cool!


I was thinking the same thing ... but I was going to steal random content from the Twitter firehose. You'll want a bit of smarts behind it so that it's not obviously you (e.g. don't post much when you'd normally be sleeping). It's relatively easy to automate a browser to do this these days (giving you a realistic user-agent and behavior). You'll also want to avoid registering key-presses at an unrealistic (or constant) rate. Apparently you can even be identified by your typing. I guess you could train it on your actual typing but that might be overkill.


You will probably always lose.

It is someone's daily job to try and enrich the data they are getting from you, while you work on your bot(s) in your spare time. You simply cannot spend as many resources to keep ahead of the algorithm.

EvilCorp™ will always gain more benefit from adapting the algorithm to include/exclude your bot, since they get profit gains across the board.

Moreover, you need to spend a lot of time on your bot(s) to make them blend in. I'm not saying "don't do it"; I'm saying "it is going to be hard to get real benefits".


It's asymmetric; it is significantly easier to make a clear image harder to read than to make a fuzzy image clearer, and the same applies to data sets.

There might be swaths of teams all focused on enriching the data, but you eventually can't derive more information out of a deliberately flat information graph, and all it takes is a few extreme data points to distort the average.


It certainly would. I don't know what a Markov chain is, but it sounds cooler than me .rand()'ing my way through random Twitter posts.

Yeah, OAuth and events are the only things keeping me on FB. Although I believe they opened up their event system to non-FB users, probably so they could build shadow profiles.


Once Facebook's algorithms detect that your account is managed by bots, they can simply discard whatever you did since you started the bots, rendering your efforts useless. I am guessing it wouldn't be that hard for Facebook to detect account anomalies: any abnormal activity, such as liking and unliking pages several times a day, sending random people friend requests, etc.


Great idea, and hilarious, BUT: you're going to end up spamming the crap out of your REAL friends with all the nonsense you post, so I think step one would be to block or de-friend anyone you care about in your social graph, first.


That’s perfect! Because then they would start removing me as a friend and the whole process wouldn’t be one-sided :-)


Someone went and did this :D

https://gitlab.com/danso/derivejoy


Even that doesn't guarantee your previous data is gone; nothing is preventing them from maintaining a history of changes, meaning they would have both your new, munged, data AND your older data, pre-munge.

Not sure what you are gaining with this.


My “data” would be a cluster fuck, technically speaking.


If you are in the EU and press delete and the database owner (data controller) does not delete the post then the database owner is now breaking the law. As of ?may? this will carry significant fines.


I don't think it's the case yet. GDPR will enforce the right to be forgotten and as such truly delete data about you, but other than that, I don't think there are other regulations forcing you to delete data.


This is why I said as of ?may?. The GDPR comes into force this year.


I'm not familiar with the specifics of the law, but my guess would be that it does not require that something be deleted upon any delete button click. I would assume it requires full deletion when requested through proper channels.

There are plenty of cases where a system would not function correctly if you are actually erasing DB entries when a user clicks the delete button. In many cases, even the users themselves might expect to be able to undo or go to a delete list and see entries that were deleted.


At my company, I heard them discussing deleting the data the user requested within 1 month.


Oh, my bad, I misunderstood your sentence.


Let's wait and see, I would guess that we are going to see the first cases within the coming five years.


Why be the test case? Your company will be severely impacted by a fine.


No, they're required to delete data if you specifically ask them to (they most probably will change "delete account" to "deactivate" and you'll have to send them a snail mail request if you want a complete removal) and they're not obliged to delete data partially - it's either this (deleted=true) or whole account.


I suggest that you consult a lawyer.

EDIT: In my layman's opinion pressing that button is an explicit request to delete the data. Also the kind of behavior you are suggesting is not private-by-default and goes against the spirit of the law. I don't think that a judge will look kindly on it.

I am not a lawyer. I am not your lawyer. I suggest that you consult a lawyer.


Yes, but right now the law does not apply. In the future they probably will change 'delete' to 'hide' or even get rid of this option and then the rest of what I said applies. I incorrectly said that "they're" instead of "they'll be".


How does the EU propose to enforce this?


Fines of up to 4% of worldwide revenue or €20 million, whichever is larger. Proving that the data is actually deleted is another matter, but the potential upper bounds of the fines alone place a pretty big incentive on treating all data as if it's tainted.
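Concretely, the "whichever is larger" rule makes the cap scale with company size. A sketch (the revenue figure is purely illustrative):

```python
def gdpr_max_fine(global_revenue_eur):
    """Upper bound under GDPR Art. 83(5): 4% of worldwide annual
    turnover or EUR 20 million, whichever is higher."""
    return max(0.04 * global_revenue_eur, 20_000_000)

# For a small firm the EUR 20M floor dominates; for a giant with,
# say, EUR 37bn of revenue, the cap is roughly EUR 1.5bn.
print(gdpr_max_fine(100_000_000))
print(gdpr_max_fine(37_000_000_000))
```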


Many smaller tech companies aren't even chasing the European market yet. My guess is that this will make it an even less attractive market.

That being said, the jury is still out on whether the EU can successfully collect fines from exclusively US based companies.

Large corporations like Google or Facebook have a presence in the EU and can be fined directly. Good luck in the courts enforcing the GDPR against US-only companies that have no servers or physical presence there. I imagine it will be a difficult process and not worth the effort in most cases.


I suspect this is EXACTLY why Cambridge Analytica is located in the UK, and EXACTLY why, they promoted the Brexit campaign: so that they would be exempt from EU privacy laws.


This is why I'm waiting until the GDPR kicks in before deleting my account!


How does the database owner verify that they deleted you?


IANAL, but isn’t "true deletion" legally required under GDPR?

Wouldn't leaving personal data lying around in your database like this technically leave you liable to massive fines?


AFAIK GDPR mandates that deleted data is no longer processed (used for marketing, analyzed, etc). I don't remember it saying anything about the data "existing"; nor could it given how backups work in our industry.


Furthermore, I'd imagine that the host might be required to retain those records for law enforcement purposes (imagine if someone made death threats, etc and law enforcement requested those records for an investigation).


GDPR does have exceptions for legal compliance, yes. If someone does something illegal and then requests deletion, you can (in my opinion; not a lawyer) retain the records if you know there is an investigation or court case ongoing, strongly suspect one, or initiate it yourself.


You may even be able to keep comments/posts for longer for general compliance. You will need an audit explaining why and to be prepared to defend it. You won't be able to use such data for analytics.

This is based on my personal layman's understanding. I am NOT a lawyer. I am NOT your lawyer. If you need legal advice consult a competent lawyer in your jurisdiction.


The legal/regulatory-compliance exception is not there for Facebook or Amazon.

It is to prevent people going to their bank, where they have a credit card, a consumer loan, or a mortgage, and saying "Can I please be forgotten/wiped from your systems? Oh, and the loan too!"


> I don't remember it saying anything about the data "existing"; nor could it given how backups work in our industry.

You just store the keys separately, assuming one-way encryption (losing the key) counts as deletion.
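This pattern is sometimes called "crypto-shredding": encrypt each user's data under a per-user key kept outside the backups, and treat destroying the key as erasure. A toy sketch (the XOR keystream is a stand-in for a real cipher such as AES-GCM, and is NOT secure):

```python
import hashlib
import secrets

def _keystream(key, n):
    """Derive n pseudo-random bytes from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, plaintext):
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

# Per-user keys live in a separate key store, never in the backups.
keys = {"user42": secrets.token_bytes(32)}
backup_blob = encrypt(keys["user42"], b"old profile data")

# Erasure request: destroy the key. Every backup copy of the
# ciphertext, on every tape, instantly becomes unreadable noise.
del keys["user42"]
```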


IANAL - no idea what "erasure" as in the "Right to erasure" means http://www.privacy-regulation.eu/en/article-17-right-to-eras...

But... if deleted data is no longer processed, does that mean it can no longer be used for targeting etc.?


It means deleted as in gone.

Your backups should be cycled after a few months and the deleted data will eventually not be present in any backups.


There’s a bunch of cases where a business should not delete data even if requested by a user; the biggie is crime/fraud prevention. However, if that data was later disclosed in a breach the company would be subject to penalties. The request for removal does count as rescinding consent, so you would no longer be able to do anything that required consent anyhow.

Data could be thought of as radioactive so long as it has a unique identifier. If you can aggregate your fraud-detection data in some way to remove the pseudonymous/personal identifiers, then you should. If you can't, then your use case needs to justify the risk of keeping that radioactive material around.

Watch out that your aggregates can't be reverse-engineered, though. There's a reasonableness test around how easy it would be to recompile a user's profile, etc. As technology advances, things that were once unreasonable become reasonable (think MD5). I find it helpful to think of reasonableness as being connected to the best 10 people you recently interviewed, or the actions of any competitor in the space. If a prosecutor can point to the competitor and ask why you didn't do what they did, you need a very good reason to pass reasonableness.


Prove we didn't delete it.


It would be extremely risky for such a huge company to pull a stunt like that. All it would take is for one employee to become a whistleblower to put the company into a legal and PR quagmire. And all that for what? Losing 0.1% of their data because of people deleting their account? Not worth it IMO.


In which event Huge Company finds the lowest level employee they can reasonably pin the blame on.


But they still get fined 4% of their global revenue. A few $1.5bn fines and they'll learn how to delete.


I haven’t read the relevant (proposed?) legislation, but aren’t these sorts of penalties usually written as ”fines up to x”?


The EU has recognized that a 200 million fine won't hurt facebook. Instead it says "fines up to X€ or Y% of global turnover, whichever is higher."

I also strongly suspect that in cases like FB or Google, the judges will happily go for the 4% mark if possible.


Ok, great, thanks for clearing that up.

Yes, this is a good thing, in my opinion. There was a case where a Finnish man was fined 54,000 euros for speeding; the fine is calculated based on income[1]. I think this seems like a reasonable way of meting out penalties.

It seems reasonable to me that companies should be treated in a similar manner.

1. http://www.bbc.com/news/blogs-news-from-elsewhere-31709454


At least for regulations like these, yeah. Big corps like Facebook don't care if they get hit with €200M. 4% of Facebook's global turnover, however...

I'm sure the EU will find some way to spend €2 billion on something useful. (13% of their net income, btw.)


Yes. A minor technical cockup when the company is attempting to meet the spirit of the law wouldn't be treated as harshly as a systematic failure, or a deliberate attempt to work around the spirit by following the letter.

If facebook discover a backup from 2012 on a tape that hasn't had items deleted, and adapt their processes so it doesn't happen again, they won't be hit with a $1b fine. If they deliberately refuse to delete people's data as a policy, they will be.


This one is at least...


Maybe I'm naive but I'm not sure how you could spin something as fundamental as "deleting does not actually delete anything" as the work of one rogue employee. What would said employee have to gain from it anyway?

I'm sure Facebook would do anything it could to work around these regulations and lobby as much as possible to have them overturned but I doubt they're reckless enough to simply ignore them and hope for the best.


Pretty easy. Develop code to delete. Code doesn’t work as “designed” and just makes the qc/qa think data was deleted. Automated analysis keeps using it.

No human ever directly accessed data so they don’t “know” it exists.

Ad targeter or whatever needs the data keeps working.

This seems like a trivial programming problem. I’m confused by the confusion.


You need people to develop and maintain this code, and it touches many aspects of the architecture. Keeping it contained to a small team and avoiding any leaks doesn't seem trivial to me. What if some dev not in the know notices the issue and submits a fix or a bug report; what happens then? You introduce a pretty big liability if you need to protect yourself against your own employees. And what if the employee(s) you chose to throw under the bus, in case the trick gets noticed, decide to rat you out anyway, for fame, money, or to avoid repercussions? How do you even find engineers willing to do something so unethical without raising suspicions in the others?

And again, all that for what? Keep profiles on the small minority of users who bother to scrub their profiles, even though these same users are probably not very good targets for ads in the first place? I don't think it would be very rational for Facebook to try something like that unless they're trying to be evil for the sake of being evil.


It would require a whistleblower, which could definitely happen.


Especially if they give a percentage of the fine to the whistleblower.


1% of 4% of Facebook's annual revenue sounds like a good deal to me. Who wouldn't be a whistleblower for $15MM


Prove a negative?

The onus should be on you to prove you did.


Yes.


It's because most people are not computer scientists/hobbyists/engineers. They are laypeople whose knowledge of the internet, and computers in general, is very limited.


One anecdote from my life: in university, one of my best friends at the time found the browser function "view source" and was convinced he could now hack the website.


I’ve probably had to explain at least 1000 times that our software being open source does not mean that people can edit the functionality of the software running on our servers.

One thing that is cool about all these pushes to teach every kid programming is that it might help instill “data values understanding,” or whatever you call the source->object->runtime distinction, and how data works.

Too many people confuse software with data (“the data is in the app”) and confuse it all with a book in a library: you just burn the book and it’s gone, or you transfer the book from one library to another. This is a good metaphor for usage, but it falls apart if you have privacy or security concerns.

Logic that seems natural and instinctive to whatever you call people who program/use tech is incorrectly understood by others, especially “digital natives” who have always had computers and the Internet.

My kid has all of his files in google from forever. This breaks down in situations where Google’s interests are out of sync with his own.


I mean, depending on the quality of the website, he now can.


In the EU you can.


Most often true, luckily for European citizens GDPR is changing that.


>I don't understand why more people don't understand this

Because of the round-trip fallacy: people equate the absence of evidence with evidence of absence, i.e. they confuse P(E|A) with P(A|E).


This could be used against them by uploading incompressible Gaussian white noise over and over... if what you said is true.


Somewhat related to your comment, I wanted to post a link to this extension:

https://adnauseam.io/

What it does is basically hide the ads from you while clicking all of them.

This is so messed up for ad companies that Google removed it from the Chrome Web Store.


I think that such a strategy of data poisoning will not be effective, because based on current data your activity will be marked as an anomaly and will not go into any dataset. So this strategy is the same as doing nothing.

But I agree that it's pointless to delete visible data about yourself, because statistics are already stored somewhere else and you don't have power over it.


> based on current data

It might well be useless to start today with your existing accounts. But it's a principle worth knowing about nonetheless. It's certainly possible to poison data from the beginning of your contact with a service - especially for ones less pervasive than Facebook. An account that never sees entirely true data may not be able to build up an accurate profile.

Having said that, providing false information does violate the ToS for most sites. Which probably isn't a federal crime, given US v Lori Drew, but it's certainly not a good idea for an account you can't afford to have suspended.


I wouldn't say it's pointless. What if someone gains access to your account through social or technical means? Sure, Facebook/NSA/etc. have a copy, but at least customs agents, hackers, and crooks wielding a rubber hose can't access it.


Now there's a business idea.. have an app running on your phone and PC that watches your "normal" facebook activity to establish a baseline, and ever so slightly perverts it over time using the same connection so the IP/location information still lines up. Slowly and methodically so the information doesn't get flagged as bogus. Start with overtly partisan political stuff since that's the easiest to classify.

Call it Social Chaff as a Service (SCaaS).


I have been fantasizing about a combination of data poisoning and intervention where the poisoning is actually real: make groups to swap profiles for a while, without even going back to the original, just jumping to the next one. As long as you agree through a different medium and swap credentials with someone you somehow trust, it should work.


Why go through all that? Who would go through all that? Making your own life more difficult to spite a company? It's not sustainable.

Note: I know that some people actually would. But philosophically speaking, I just don't get it; it's almost like it's missing some kind of deeper point.


> I think that such a strategy of data poisoning will not be effective, because based on current data your activity will be marked as an anomaly and will not go into any dataset. So this strategy is the same as doing nothing.

You give them too much credit. This would require some PLM being concerned with such activity enough to add the anomaly detection. Unless they are way proactive over theoretical issues.


Why not build a browser plugin that visits 5 random websites for each site you visit?


https://adnauseam.io/ There you go. Not on the Chrome Web Store, but it works on Chrome (according to their site; I can only vouch for Firefox). I had this question a while ago and debated building my own, but laziness won out. Edit: I'm an idiot, I didn't read that you specified random websites. AdNauseam just clicks on random ads to pollute marketing data.


Why are these people not creating spaceships, exploring the deep of the oceans, or curing cancer?


There is no advertising money in those fields.


Lest you think this is an exaggeration: the graduate students and postdocs who would be doing the cancer research make peanuts.

Starting salary for a postdoc is a bit less than $50k, and this is after 5+ years of grad school, during which you make even less (often $20-30k). One can expect to be a postdoc for 3-5+ years before becoming remotely competitive for an independent (faculty) position, and another 5 years before the job is relatively secure. The competition for these jobs—and the remaining industrial ones—is savage and unpredictable.

This is bad for so many reasons: frantic people don't do careful experiments or good science, very smart people opt out altogether and figure out how to make you click on ads instead, and a lot of time, money, and effort gets wasted in the churn.

If you want more cancer research and less ad optimization, nag your reps to improve how we fund research!


I feel like you could pull a 'natural flow' subset out.


I feel like searching for natural flow will not give me the results I want.


Not if you have a natural flow for all the random visits.


I mean, at that point you're having an arms race against Facebook's machine learning. My money's on them.


Data poisoning doesn't work. See:

https://www.youtube.com/watch?v=1nvYGi7-Lxo&app=desktop

The gist is social media posts can be used as a unique fingerprint to correlate your whole browser history. Unless you are willing to spam your friends with useless links....

Researchers discovered a politician had a certain medical condition and a judge had "interesting" habits, even in privacy centric Germany. All from publicly available apis and data for sale.

They specifically say their algo is immune to data poisoning.


The 4th paragraph from TFA:

> I know I cannot close the barn door on any data of mine that's already out in the wild, but I can control any further scrapes of my Facebook data by manually removing as much of my Facebook Activity as I can. Unfortunately, and not unexpectedly, Facebook do not give you a simple way to do this.


There is an excellent book called "Journey into the Whirlwind" about Stalin and the Soviet-era purges. One of the strategies of those arrested was to name two more. While a great concept in theory, it just made the purges worse.

https://www.amazon.com/Journey-into-Whirlwind-Eugenia-Ginzbu...

I always come back to this: structural regulation works best. At the end of the day, we just need to make it very costly to keep too much information. We set rates for personally identifying attributes (e.g. storing a birthdate costs X, a partial SSN Y, etc.). The charge is per attribute per person per day. This should incentivize the tech companies to develop and store broad scores (e.g. scores high for "likes sports and classic rock", low for "opera") rather than personally identifying info.
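A back-of-the-envelope sketch of that rate idea (all rates and attribute names are invented for illustration):

```python
# Hypothetical per-attribute, per-person, per-day storage rates (EUR).
DAILY_RATE_EUR = {
    "birthdate": 0.001,
    "partial_ssn": 0.01,
    "coarse_interest_score": 0.0,  # aggregated scores stay free
}

def annual_cost(attributes, n_users, days=365):
    """Cost of retaining the given attributes for every user."""
    return sum(DAILY_RATE_EUR[a] for a in attributes) * n_users * days

# Keeping birthdate + partial SSN on 10M users costs real money,
# while a broad interest score costs nothing:
print(annual_cost(["birthdate", "partial_ssn"], 10_000_000))
print(annual_cost(["coarse_interest_score"], 10_000_000))
```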


So your skepticism seems to be aimed not just at Facebook but at "the Internet". So you believe that Google is lying/deceptive in its privacy policy?

https://support.google.com/accounts/answer/465

> When you use Google products and services, we keep some data with your Google Account, like when and how you use certain features. We keep this data even if you delete activity or other items.

> For example, if you go to My Activity and delete a search you did on Google, we'll still know that you did a search, but not what you searched for. What you searched for will no longer be stored with your account.

> We keep this data as long as it's relevant to meet uses like those above. If you delete your account, we remove this data from it.


I've always wondered about Google Drive. If you right-click on a file, you'll notice it says "Remove" instead of the more common "Delete". Perhaps this is a Material Design spec, but I always felt like that was their way of keeping a copy of the file to add to your personal data file.


Remove and delete are distinct actions in shared file systems. “Remove” gets it out of your Google Drive. But if it’s a shared file, it’s not yours to delete, or if your company has a retention policy, you may not have the permission to delete it.


On the web UI they have a "Bin" option, which is sort of like the recycling bin on Windows.

In the Bin folder, right clicking a file gives the option "delete forever"...whether that is actually the case or not I don't know.


A more practical strategy would be not to have a FB account. If you can't beat them, leave them.


> Which means taking the effort to "friend" strangers across the world

Easy enough. More than half of the people that Facebook recommends to me as "friends" are people whose language I don't speak, in countries to which I've never been.

Nice AI, Zuckerbean.


Have you considered that the AI is simply trying to greedily close the points on its knowledge graph?

Perhaps having a dense graph is simply more profitable for FB?

Sure, they won't know who you know, but they can send their messages ostensibly "on behalf of" that person you know/friended.

Sure, poison your personal data (get your Fakebook on), but going out and spamming connections may not exactly hurt FB.


> Have you considered that the AI is simply trying to greedily close the points on its knowledge graph?

No, all I've thought is that the AI sucks and is broken and for a company that's supposed to have the smartest people and unlimited money it should know better.

Beyond that, no.


Sucks for you <> sucks for FB. Facebook (and its pet AI) likely cares more about its revenue than it does about you.


Yup, and then some other proportion are actually people you know/are friends with. You probably occasionally reach out to the ones you know/are friends with and ignore those whose language you don't speak in the countries you've never been to.

All of this is of course a designed test to see how accurate the model is, to see how ordering of suggestions influences behavior, to see effects of noise, etc.

They aren't stupid and they aren't bad at their jobs. It's just a test.


Some years ago I tried to get Tinder to work on my phone. I needed a Facebook account for it. No problem, I had several; I created new accounts each time I logged into Facebook for some reason. Tinder still refused to work. Somebody suggested that my Facebook account needed at least 50 friends. I found groups on Facebook created specifically for the purpose of gaining friends. The next day I had 50+ friends on Facebook. That account was later locked by Facebook (too many random friends :-P). I got Tinder to work years later, after they dropped the Facebook account requirement.


That's the whole point of doing a "simple background check" to determine if an account is real, isn't it? At least I think it is more useful for Tinder users.


Article 17 of the GDPR, The Right To Erasure, states:

Data Subjects have the right to obtain erasure from the data controller, without undue delay, if one of the following applies:

The controller doesn’t need the data anymore

The subject withdraws consent for the processing with which they previously agreed to (and the controller doesn’t need to legally keep it [N.B. Many will, e.g. banks, for 7 years.])

The subject uses their right to object (Article 21) to the data processing

The controller and/or its processor is processing the data unlawfully

There is a legal requirement for the data to be erased

The data subject was a child at the time of collection (See Article 8 for more details on a child’s ability to consent)

If a controller makes the data public, then they are obligated to take reasonable steps to get other processors to erase the data, e.g. A website publishes an untrue story on an individual, and later is required to erase it, and also must request other websites erase their copy of the story.


I like this idea and wonder whether it would be possible to automate this. For it to work, it would have to make random tracking pixel requests 24/7 to prevent Facebook from filtering out the noise. Even then, the individual fake requests would have to be indistinguishable from real ones.


What you could do is create a user ring in which you exchange identities between users, trusting one another. That way, for example, a family would blur into one huge identity for the advertisers, losing all distinctiveness.


What’s the advantage in being associated with a Facebook profile that doesn’t represent your true interests, friends, etc? I suppose it may be nice to see poorly targeted ads and thus be less likely to be convinced to spend money? Or if they’re trying to influence your mood, political views, etc., I suppose it might be preferable to receive essentially random influence rather than influence correlated with your actual attributes.


>>The best way I know of to combat Facebook is to poison their data.

Some strategies for poisoning data sets are described here: https://iotdarwinaward.com/post/improve-your-privacy-in-age-...


Unless I am missing your point: if you just add this to your /etc/hosts file, Facebook can't track any pixels:

https://github.com/jmdugan/blocklists/blob/master/corporatio...


Unbundle your real identity from your online identity. The problem was solved already with handles, until FB came along.


Was it solved?

Just because the marketers don't have your name as "John Argano", they still know everything about your handle "pishpash".

Even if it's not your name, it's not anonymous. The same profile can be targeted by political parties and manipulated in ways that we are coming to see as problematic.


> They still have tracking pixels.

Add-ons like Privacy Badger or Ghostery can block those though, right?


yes


You know I've heard this over the history of the internet. But there have been lots of things on the internet that have been lost over the years.


Here's a possibility: legislation is passed prohibiting FB (and other websites) from using data that the user has marked as deleted.


Haha, good one... and even if that happens, they'll follow it by the letter at best: „We don't use the data, we just have to search it for illegal activities, and therefore our AI needs to parse it... but use it? Noooo!"


Alternatively, you can block Facebook's tracking (poisoning data might be somewhat easy to spot automatically).


>They still have tracking pixels

You can block that with e.g. a hosts file blocklist that blocks all Facebook related domains.
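For example, a hosts-file fragment along these lines (an illustrative subset; real blocklists cover many more Facebook-owned domains, including Instagram and WhatsApp):

```
# /etc/hosts -- route Facebook tracking domains to the null address
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
0.0.0.0 connect.facebook.net
0.0.0.0 pixel.facebook.com
0.0.0.0 graph.facebook.com
```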


Or sign in only in an incognito mode.


I use Firefox Focus on Android when I don't care about sessions. Works an absolute treat.



