
3 raises a very interesting and somewhat concerning possibility. In the near future, as AI develops further in the crime-detection arena, you may become a suspect for crimes occurring in your geographical area based on things you are not doing, or based on some set of states your smart devices are in that matches a statistical model of a suspicious person. The idea of becoming a suspect because you didn’t touch your smartphone during a particular timeframe is chilling.



You can become a suspect for all sorts of random things you have no control over, like being near a crime, knowing certain people, etc.

I don't see how it can work any other way.


The problem is that one is explainable and the other is not. "He was near the scene of the crime and has no convincing alibi," is very different from, "The computer said he's a suspect and we don't know why, but we still want a warrant." People being targeted for being anomalous is bad, but centralizing and scaling it up is worse.


I doubt that is something to be much concerned about, as I doubt there would be much use of anything that produces lists of suspects without explanation. What would one do with such a list? It would be like getting a bunch of anonymous tips all saying that a different person did the deed, without any clues to follow up on.

Getting a list of suspects is rarely a problem for law enforcement; the difficulty is in winnowing it down to the actual culprits. When a body is found, for example, family and acquaintances are all initially suspects, and experience has shown that summarily dismissing any of them, merely on intuitive grounds, will eliminate some fraction of actual culprits.

If a system did start suspecting the actual culprits with a significantly higher success rate than people achieve, there would be much reason to reverse-engineer the process in order to figure out how this was accomplished, as doing so would provide clues (and, ultimately, evidence) that otherwise could only be found by an independent process.

This assumes that due process exists, such that unsupported accusations are not taken as evidence, but if due process has been abandoned, we would have a much greater problem than that posited here.


> produces lists of suspects without explanation.

No. They will come up with some fancy sounding term to describe these situations. Some experts will agree this is a sound method and it will be used to get warrants.

They will just put the cause as "suspect by unattended abnormalities" (aka phone not used during crime)

Everybody will smile. Warrants will be issued. Lives will be turned upside down. And likely false arrests and convictions will follow.

Even if we program rules into the AI to avoid this, there is a real chance that the AI could work around them and add people to the list anyway: it might suspect someone is involved in one crime but be unable to list them due to the rules, and then a new crime opens a way for the suspect in crime one to be listed on crime two.


> The computer said he's a suspect and we don't know why, but we still want a warrant.

Mission critical ML systems (G/FB ads, crime forensics, medical decision support, financial algorithms) do not work like this.

The designers know exactly which features are causing responses/predictions in the model, their respective perturbation sensitivities, and have clear bounds regarding adversarial inputs/outliers.

“Not knowing how the model works” is definitely true for deep learning, though.
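
To make "perturbation sensitivity" concrete, here is a minimal sketch of one common way to estimate it, permutation importance in scikit-learn. The toy dataset and "feature_i" names are invented for illustration and are not drawn from any of the systems mentioned above.

```python
# Minimal sketch: estimate how sensitive a trained model's predictions are to
# each feature by permuting that feature and measuring how far the score drops.
# The toy dataset and "feature_i" names are invented for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permute each feature in turn; a large drop in test accuracy means the model's
# output depends heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: drop in accuracy = {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```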


This seems... incorrect.

First of all, deep learning absolutely is used in mission critical systems: https://www.techrepublic.com/article/intel-and-ge-healthcare...

Second, simply because designers use systems that are formally interpretable, that doesn't mean that designers know exactly what causes a given response in the model. Formal interpretability means you know what function the model is approximating -- there's a long way from that to a human-readable interpretation.

Finally, even if the designers can tell you what features cause a certain response in the model, they can only tell you that for features that are encoded in the data! There are plenty of features that aren't encoded in your data that can nevertheless affect the outcome of a model!


I don't think the Intel/GE tool counts as mission critical. There are already radiologists reviewing scans. The tool just flags features they may have missed. Remove the tool, and you're back to business as usual.

I don't know much about ML. Are you saying that we don't know how to interpret any ML models at all?

Regarding your final point, isn't that true with or without ML? Any mission-critical design process should scrutinize the solution to see if it's complete enough and correct enough.


That tool has the potential to systematically reduce the likelihood that a certain disease/certain class of people receives extra scrutiny from a radiologist. That's maybe not mission critical on a small scale, but on a large scale it absolutely is.

Deep learning models are black box models, which means they are not formally interpretable. You can still sometimes get interpretations out of these models using various methods, but they are fairly underdeveloped, and without actual theories of neural network behavior I don't see that improving anytime soon.

You're right, it's true with or without ML. In fact, it's true with human-run systems, too. Consider a police officer who is more likely to pull people over in a certain neighborhood, and that neighborhood is 95% AfAm. If they were asked why they pulled over more people in that neighborhood, they could say they were discriminating against the neighborhood, which, in and of itself, is not racial discrimination. Of course, further inspection of that police officer's records could show that they are biased towards neighborhoods with a high AfAm population, which would be racial discrimination.

The same scenario can easily arise in an ML context, but interrogating a machine is a very different context from interrogating a human. First of all, people believe that computers are innately 'unbiased' because they are computers, so making the case that an algorithm is biased is already more difficult. Second, going back to the point I made before -- interpreting a model is not the same as providing a human-readable explanation. Asking a question about racial bias in a model which doesn't even encode for race (as many do, in an ill-conceived attempt to be 'neutral') requires skilled people to understand how to ask the question and how to interpret the answer. There's no plug and play process that one can follow to "scrutinize the solution to see if it's complete enough and correct enough".
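
To make the proxy-variable point concrete, here is a minimal synthetic sketch (all data and numbers invented) of how a model that is never given the protected attribute can still reproduce a disparity through a correlated feature, which is why auditing requires joining the protected attribute back in:

```python
# Synthetic illustration: a model trained WITHOUT the protected attribute can
# still produce outcomes that track it, because a correlated proxy feature
# (a made-up "neighborhood" code here) carries the same information.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # protected attribute; never shown to the model
neighborhood = group + rng.normal(0, 0.3, n)   # proxy strongly correlated with group
income = rng.normal(50, 10, n)                 # unrelated feature
# Historical decisions were biased against group 1:
approved = (rng.random(n) < np.where(group == 1, 0.3, 0.7)).astype(int)

X = np.column_stack([neighborhood, income])    # group is NOT a column
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# The model never saw `group`, yet its approval rates differ sharply by group,
# and you can only see that by joining `group` back onto the predictions.
print("approval rate, group 0:", round(pred[group == 0].mean(), 2))
print("approval rate, group 1:", round(pred[group == 1].mean(), 2))
```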


> The tool just flags features they may have missed.

This is how it functions at first.

Fast forward a few decades and you'll have doctors saying "The computer model flagged this spot as a concern. Our human review can't find anything, but we know that studies show the computer model has a 95% confidence in locating problems so we recommend surgery anyway. Surgery is lower risk than assuming the model is wrong."

Tool dependency changes over time.


I agree that it will eventually become mission critical. Until then, they have some time to make the implementation interpretable.


I'd like to throw a thought out there: It is a problem if the government uses algorithms or ML systems that are secret and/or cannot be independently vetted by the citizenry.

Judges cannot just sign off on warrants and sentencing because "the computer says so"; for all we know the computer may be programmed to say "poor and non-conformist = guilty" or some other nonsense, and/or reprogrammed after every election. We need to be able to trust this stuff.


Is that what's happening? I thought "computer says so" triggers an investigation, which in turn produces reasoning for the judge, based on which the judge decides?

Is there any source that confirms the belief about judges deciding based on computer output in the past or in the future - relevant cases, law, etc?


The worst I've heard of along these lines currently in practice is predictive policing [0]:

> First the Sheriff’s Office generates lists of people it considers likely to break the law, based on arrest histories, unspecified intelligence and arbitrary decisions by police analysts.

> Then it sends deputies to find and interrogate anyone whose name appears, often without probable cause, a search warrant or evidence of a specific crime.

> They swarm homes in the middle of the night, waking families and embarrassing people in front of their neighbors. They write tickets for missing mailbox numbers and overgrown grass, saddling residents with court dates and fines. They come again and again, making arrests for any reason they can.

> One former deputy described the directive like this: “Make their lives miserable until they move or sue.”

It seems that police departments are happy to buy tools for stuff like this, and private companies are happy to make a buck. But are these systems vetted by anyone on behalf of the public? Some, like Stingrays and breathalyzers, have been hidden behind non-disclosure agreements, kept out of open court, etc.

Other than the above I've also heard of computer systems in NJ advising judges whether a suspect is a risk of not appearing in court (as they are reforming the bail system). I wonder if the criteria are published, how they are reviewed and modified, etc.

I'm just concerned about the increasing influence of hidden algorithms on our society, and very concerned in general that the government is going to hook all of its databases together and do more of this, a la Chinese social scores, etc.

[0] https://www.techdirt.com/articles/20200907/12212945257/flori...


>> Mission critical ML systems (G/FB ads, crime forensics, medical decision support, financial algorithms) do not work like this.

Are you speaking from experience or from intuition?


To me "being a suspect" is a long way from a warrant being issued.

If cops can get a warrant on mere hunches, then I agree we have real problems. But a separate problem.


> "The computer said he's a suspect and we don't know why, but we still want a warrant."

We are not talking about black box models here. "Phones that pinged near the Capitol" is a very specific query. They want to identify those who trespassed and those who aided the trespassers. I believe it's pretty fair.
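
For a sense of how mechanical that kind of geofence query is, here is a rough Python sketch. The data, column names, coordinates, and time window are all made up for illustration; this is not how any carrier or provider actually stores or serves location data.

```python
# Illustrative geofence-style filter: keep device IDs whose pings fall inside
# a bounding box around a location during a time window. The data, column
# names, and coordinates below are invented for illustration.
import pandas as pd

pings = pd.DataFrame({
    "device_id": ["a", "a", "b", "c"],
    "lat":       [38.8897, 38.9500, 38.8901, 38.8899],
    "lon":       [-77.0090, -77.0300, -77.0085, -77.0091],
    "timestamp": pd.to_datetime([
        "2021-01-06 14:05", "2021-01-06 20:00",
        "2021-01-06 13:30", "2021-01-05 09:00",
    ]),
})

# Rough bounding box around the area of interest and the relevant time window.
LAT_MIN, LAT_MAX = 38.888, 38.892
LON_MIN, LON_MAX = -77.012, -77.006
start, end = pd.Timestamp("2021-01-06 12:00"), pd.Timestamp("2021-01-06 18:00")

in_area = pings[
    pings["lat"].between(LAT_MIN, LAT_MAX)
    & pings["lon"].between(LON_MIN, LON_MAX)
    & pings["timestamp"].between(start, end)
]

# Every device that pinged inside the box during the window becomes a lead.
print(sorted(in_area["device_id"].unique()))   # ['a', 'b'] but not 'c'
```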


It means the person is a suspect. It means we might have to ask them a few questions. Doesn't mean they did anything.


Being near a crime when it occurred is entirely different than your phone being at your house when a crime occurred somewhere else.


Which means there will be 100 suspects where usually there would be maybe 2. Which means the data is useless and therefore won't be used.

Contrary to what some people seem to think, law enforcement doesn't want a system that incorrectly flags hundreds of people. They want systems that reliably flag potential suspects, because that reduces work instead of increasing it.


That is only true if you assume they truly care about having the correct suspect rather than having a suspect they can get convicted.

However, there is a long history of wrongful convictions and police and prosecutors using bad data to get them. Check out the Central Park Five for a big name one.


This. To put it simply, they like Precise data, not Accurate data. They want a bunch of data-points saying one thing and couldn't care less if it's objectively the right target, not a bunch of data points saying a bunch of things, one of which is actually right.

Precision data gives them convictions and convictions give them promotions. As they say, any metric that becomes a target... This is why giving the police invasive surveillance tech is a terrible idea, they will only focus on the data that fits their narrative and discard the rest. The defense doesn't even need to know that any other data exists.


> They want a bunch of data-points saying one thing and couldn't care less if it's objectively the right target

If your system produces data points with 8 significant digits of wrong data, we have two problems: the system and whoever approved the purchase.

Any convictions will have to be corroborated by other evidence.


Until that other evidence is just the product of some other hellish surveillance system. What I'm really saying is that, broadly, we need fewer of these systems, as they will only be abused, especially in concert. If you have one system that says the suspect had their phone off at the time of the bombing, and another saying they searched for "how do bombs work" on Google a few years ago, the jury will eat that shit up even though in actuality it provides almost zero evidence of anything.


When the crime in question is a headline crime they're happy to go through hundreds of "leads" in order to find someone they can pin it on.


Unless they can use it to target the people they already "know" committed the crime, maybe?


All metadata being collected for the lifetime of all our phone numbers also includes all relationships, with varying degrees of separation.


Or even data from genetic testing, literally showing relationships.


Eventually everyone is a potential criminal and guilty until presumed innocent.


I recall (likely pre-covid) seeing a story posted here of some unsuspecting schmuck who became a suspect in a crime because his smart watch showed him circling near the area on his usual bike route.

Scary times.


Perhaps this story about a Florida man and his fitness app data that "placed him" at the scene of a burglary because he had the misfortune of having ridden by the crime site three times during his ride?

https://www.theverge.com/2020/3/7/21169533/florida-google-ru...


Being a suspect does not mean you are guilty. Being arrested and charged does not mean you are guilty.

This whole line of "what could go wrong!?" starts from a much deeper sickness in the way Americans view the criminal justice system.


Being arrested, even if never convicted, even if never charged with a crime and released with a hug and an apology, will have long-lasting negative effects on your life. For example, a citizen of a Visa Waiver Program country cannot travel to the US visa-free for the rest of his or her life if they have ever been arrested. They have to apply for a visitor visa and convince an immigration officer that they deserve a visa. Just a simple mistaken identity arrest can make the rest of your life difficult and there is nothing you can do to fix it, you just have to endure it.


> This whole line of "what could go wrong!?" starts from a much deeper sickness in the way Americans view the criminal justice system

The sickness is in how the criminal justice system works. Jury trials are a rare edge case; most determinations of guilt are made by DAs. For most people in this country, being arrested and charged means you will be pushed into a plea deal, and those who fight for their innocence face retaliatory harsh charges and sentencing.


It's almost gotten to the point where being arrested is conflated with being guilty.


For a professional, being arrested for a major crime in America is a life ending event.

Proving one's innocence once the powers that be have determined one to be guilty will cost hundreds of thousands of dollars, will almost certainly mean loss of job, family and friends, etc. And you might still go to prison for the rest of your life anyway.


That’s already a thing, not a future possibility. Jogging too close to a crime can get your location subpoenaed from Google or whoever has it. Stories crop up on HN occasionally about this.


ML is really good at picking up anomalies, and that is scary if, say, law enforcement or prosecutors are doing dragnet surveillance for anomalies in order to drum up charges.


What is even scarier is the likelihood of law enforcement agencies buying tech from fly-by-night companies who will use all of the buzzwords in their sales pitches (AI, ML, big data, etc) but who have no real knowledge of such things and are just selling a shit product that will entrap innocents.


You will end up with "Computer says no" situations if there are no knowledgeable humans who review the decisions from the models.

It's really no different with banks, but in my country they are regulated and must provide proper reasoning to denied customers for the models they use. It's not enough to say "your score is too low".

It's very easy to create artificial stupidity!
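
For what it's worth, one common way a lender's scoring system can generate that reasoning is with "reason codes": rank the features that pulled a given applicant's score below some reference point. A minimal sketch for a linear scoring model, with all feature names, weights, and values invented for illustration:

```python
# Minimal "reason code" sketch for a linear scoring model: rank the features
# that pushed this applicant's score below the average applicant's score.
# Feature names, weights, and values are all invented for illustration.
import numpy as np

features  = ["years_of_history", "utilization", "recent_inquiries", "on_time_rate"]
weights   = np.array([0.8, -1.5, -0.6, 2.0])     # model coefficients (made up)
applicant = np.array([1.0, 0.9, 4.0, 0.85])      # this applicant's values
average   = np.array([7.0, 0.3, 1.0, 0.97])      # average applicant's values

# Each feature's contribution to the gap between this applicant and the average.
contributions = weights * (applicant - average)
print(f"score vs. average applicant: {contributions.sum():+.2f}")

# Report the features that lowered the score, worst first.
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    if c < 0:
        print(f"reason: {name} lowered the score by {abs(c):.2f}")
```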


They already do that with their human brains. I've been stopped by the police for:

- Sitting in a parked car in a suburban street for too long.

- Going shopping at 2am.

- Walking under a bridge at night.

It's OK as long as they quickly realize you're harmless and leave you alone.


Policing in the USA, to a nearby outsider, appears to be more about enforcing societal norms than safety or laws.

It seems to me they harass anyone they perceive to be abnormal.

As a delightfully abnormal human, I'm very glad I was born just north of the 49th parallel. It's not perfect, but I do feel free to be myself.


As a weird person who grew up in the USA (and eventually left for just this reason), I can attest to the accuracy of this analysis.

I even wrote about it recently: https://sneak.berlin/20200628/the-problem-with-police-in-ame...


That font on your article looks awful on my phone


This is just wrong on so many levels. You even included the widely debunked (even by her own colleagues!) Nikole Hannah-Jones 1619 project lies. Cops do not care if you are “weird”.

Police do in fact exist to enforce the law. It really is that simple.


That's a nice fantasy. They're supposed to enforce broken laws. It isn't unusual for them to abuse their power with mental gymnastics to violate the civil rights of law abiding citizens.

I've had two illegal stops in the past year. In one, I took a swig from a soda bottle going by a speed trap and the cop wanted to see if it was alcohol. In the other, a cop in an unmarked car tried to run me out of my lane to follow a speeder.

Some years ago I had to deal with a small town gumshoe in Indiana who had staked out an intersection of IN route 55. This was late at night in pitch black darkness. Indiana highway signs look like speed limit signs from a distance and IN55 in particular is hard to distinguish from the 55MPH speed limit signs. The local cops know this and abuse their power to make illegal stops.

https://www.google.com/maps/@40.2361263,-87.2433936,3a,75y,1...

Indiana requires signals 200 ft from an intersection in rural areas rather than the normal 100 ft. The county couldn't be bothered to have an advance sign for the intersection so I didn't know when to signal for the turn. He and I were the only two people on the road but clearly I was a major threat to public safety.


What does any of that have to do with the true fact that cops exist to enforce the law? I don’t think anyone made the claim that no cops are abusive.


Well that's just blatantly not true. They aren't even required to know the laws and can enforce laws that they think exist even if they do not. See Heien vs North Carolina.

Police exist to "keep the peace." The question is whose peace are they keeping. Well, I suppose the reaction of police at the Jan. 6 insurrection vs that of the BLM protests tells us a little. Or the union busting that police did in the past. Or blocking the polls for minorities.

But it absolutely isn't "that simple."


> Well I suppose the reaction of police at the Jan. 6 insurrection vs that of the BLM protests tells us a little.

They shot a woman in the neck. What’s the difference you’re trying to portray?


Police murdered this kid for being weird: https://en.wikipedia.org/wiki/Death_of_Elijah_McClain


Not just policing. I've found that in the USA, society in general wants to meddle in people's business.

My buddy and I were standing on the sidewalk of a bridge, making a time lapse. Within one hour, someone called the police on us twice.


>I've found that in the USA, society in general wants to meddle in people's business.

Get away from the wealthy areas on the coasts and "noneya business" quickly becomes the societal norm.

Only in places that are rich and population dense do the police have the spare shits to give about what other people are doing.


How can anyone possibly draw conclusions about the USA based on this? Like, really stop and think about all of the fallacies here.

Do that 10 more times on 10 different bridges, and I’ll be shocked if that happens even one more time.


My bad. I was speaking in too general a tone.

I should have said that my interaction with the US society has been that people tend to meddle with others' business more often than in other countries that I've been to.

But, yeah, I have many more anecdotes.


There are many other countries where I have experienced more societal meddling in neighbors' business. Ignoring the obvious authoritarian countries, many Western European countries have a culture of making sure you are doing the normal or right thing. Germany is the easiest example.


>> Do that 10 more times on 10 different bridges, and I’ll be shocked if that happens even one more time.

Likely it would happen another 8-10 times. Happens all the time, the bigger the bridge, the more likely.


Broken windows policing holds that abnormal behavior creates disorder, so all minor infractions must be heavily punished and "problematic" neighborhoods have to have a heavy police presence.


Haha, having spent time above and below 49, I would say you couldn’t be more wrong with that perception. In any American city, especially big liberal ones (which is most of them) you can forget about societal norms. There are weird and crazy people everywhere. Nothing is “weird” to anybody who has spent any amount of time in them. Everyone walks right past a circus on their way to work. Cops pay no attention. Ask a New Yorker about their favorite subway stories. It’s hard enough to get a response to actual crime, let alone a cop waste their time on someone acting weird. North of 49, there is way less for them to focus on. Traffic enforcement as a means of revenue generation is huge up there.

It’s important not to confuse media narratives and cherry picked videos with trends. It’s also important not to generalize across such a diverse set of individuals and jurisdictions. You’ve got everything from LAPD to an elected backwoodsy county Sheriff.


Have you considered that your viewpoint might be so wildly opposing to other opinions because you’re part of an “accepted group” that gets their way...while others like me [an individual of color] get the brunt of enforcement — with or without cause?

Heck, I’ve been stopped in NYC for “walking with a limp” (bike accident, fractured knee)...while wearing a $500 shirt and designer suit.

What you refer to as “cherry picked videos” are just the instances that happen to have been filmed. Seriously, after Central Park Karen and Minnesota, can we at least acknowledge there might be an issue?!


Do you not understand the problem with extrapolating trends from cherry picked anecdotes? And no, those videos are absolutely not evidence of problems. They aren’t even evidence of isolated cases of racism. But it wouldn’t change the situation if they were.

More importantly, parent wasn’t even commenting about race.


>In any American city, especially big liberal ones (which is most of them) you can forget about societal norms. There are weird and crazy people everywhere. Nothing is “weird” to anybody who has spent any amount of time in them. Everyone walks right past a circus on their way to work.

> It’s also important not to generalize across such a diverse set of individuals and jurisdictions.

Pick one.


> You’ve got everything from LAPD to an elected backwoodsy county Sheriff

Did you try to make a joke? Both of those are racist to the core. They are the same.


We have had the RCMP racially profiling Native Americans and picking on people of color. That is exactly why they are not supposed to pull you over for offenses like the ones you mentioned, but instead need a reason such as speeding or failure to follow traffic laws. It's NOT OKAY and should not be considered normal for them to do.


Human brains tend to be stateful; they can be talked to. And they're only deployed at small scale. AIs don't update and can be deployed at scale.


> AIs don't update

Currently.


How do you train for anomalies, given that the data is widely different every time? Or in other words, how do you obtain a dataset that is representative?


I'd argue that it is often difficult to impossible, but that doesn't stop companies from making AI/ML products for law enforcement. I'm someone who regularly trips up my bank's weird behavior detection system, and have my debit card frozen more often than it should be because of my purchasing patterns and travel.
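
For context, these systems typically don't train on anomalies at all; they model "normal" behavior and flag deviations from it, which is also why legitimately unusual customers keep getting flagged. A minimal unsupervised sketch using scikit-learn's IsolationForest on synthetic data (feature names and numbers invented):

```python
# Unsupervised anomaly detection: fit only on "normal" activity, then score new
# points by how isolated they are. No labeled anomalies are needed, which is
# also why legitimately unusual behavior gets flagged. All data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Typical card activity: amount around $40, distance from home around 5 km.
normal = np.column_stack([rng.normal(40, 15, 5000), rng.normal(5, 3, 5000)])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_points = np.array([[45.0, 4.0],      # ordinary purchase close to home
                       [60.0, 800.0]])   # modest purchase, but far from home
print(detector.predict(new_points))      # 1 = looks normal, -1 = flagged as anomalous
```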


It's inherently scary if one is somewhat anomalous in behavior.


Indeed. Witness all the folks who insisted Amanda Knox was guilty because she didn't act and look as they thought she should were she innocent.


This is such a general bs statement


This already happens, most of us just don't live in the neighborhoods where it happens. That's the unfortunate reality of today's America. It's also the reason that these techniques will be employed in a ubiquitous fashion in tomorrow's America.

If we wanted to stop this, the time to complain was years ago when the practice was started in the drug war. Or even recently, when the practice was employed during the BLM riots. Any attempt to stop it now brings howls of racism, causing police departments and other law enforcement agencies to double down on insisting that they use it on everyone in their attempts to prove the people screaming racism wrong.


Like with any other form of surveillance, if it's a bad practice, then it shouldn't be used on anyone, regardless of race.


That's his point: this was being done years ago, but white Americans were not part of the demographic surveilled this way, so they didn't care. Now that they are, if it gets stopped because of that, then the people accusing the police and government of racism are correct.


Point is, the time to stop that particular practice was in its infancy. Waiting until the practice is both widespread and normalized is a losing strategy. The use of cell phone data is well understood, well litigated, and for all intents and purposes, settled law. To come along and unwind all that now is Sisyphean. We need to start getting out in front of the issues. Not reacting all the time. And certainly not allowing privacy violations that affect others, then trying to prevent the very same privacy violation from affecting our own.

The people doing the violating are going to double down. They don't want their critics being proved right. They don't want to be accused of being hypocritical. So what are they going to do?

As privacy activists, we need to make it easy for potential partners to cooperate with us. Right now, we're making it very difficult for potential partners to cooperate with us. We're putting potential partners in very difficult positions, and then asking why they won't support us.


> We need to start getting out in front of the issues. Not reacting all the time.

It's kind of difficult when the privacy violations in question begin in secret. Consider police trying not to disclose their use of Stingrays, for example.

I somehow doubt police departments and intelligence agencies are going to agree to run all future uses of tech by a privacy watchdog, so how do you suggest getting ahead of the problem?


Well maybe we need to go over their heads then. That's supposed to be the purpose of the legislative branch - make rules about how the government is allowed to operate.


> The use of cell phone data is well understood, well litigated, and for all intents and purposes, settled law.

I don't have strong views on the right policy outcome, but it is not accurate to call this issue well litigated and/or settled law.

Just yesterday, NYTimes ran an article about DIA claiming a "commercial availability" exception to the only Supreme Court case addressing cell phone location data (Carpenter). If that is indeed DIA's rationale, they are going to have some problems. For example, it is unlawful for the state to use commercially available thermal optics to surveil the interior of a dwelling without a warrant. I think DIA may be relying on dicta from Kyllo about devices in "common use", but their rationale is secret so we won't know until it is... litigated.

[1] https://www.nytimes.com/2021/01/22/us/politics/dia-surveilla...


How can it be litigated if it is secret? The sorts of lawyers allowed to know of it are not the sorts of lawyers who file suit in the public interest.


> but their rationale is secret so we won't know until it is... litigated.

Presumably, this statement means that litigation will necessarily reveal the rationale by presenting it.


This is unnecessarily defeatist. GDPR proved that you can get enough support for a large-scale walkback of thoroughly entrenched practices.


The US isn't Europe, and in the US, law enforcement has undue influence in government and significant lobbying power. PBAs successfully lobbied to keep marijuana illegal all over the country, and it took public referendums to get it legalized in states that it is legal in. Even then, PBAs had undue influence in crafting legislation so that municipal police could still ticket and jail people in order to still generate revenue from marijuana possession and sale violations.


Yet another reason to not let them normalize mass surveillance.


You are a suspect because you do not own a smartphone.


You don't need a smartphone to get tracked; even your flip phone reports its location. Here's a Wired article from 1998: https://www.wired.com/1998/01/e911-turns-cell-phones-into-tr...


https://www.youtube.com/watch?v=UOA75NAoN6Q

There is a low tech solution to every problem.

For a real world example: https://en.wikipedia.org/wiki/Operation_Igloo_White#Conclusi...


I would think a warrant would still be required, so it's not as Minority Report as all that.


There's nothing wrong with that as long as the law enforcement is trustworthy and competent. We do want to catch criminals. That's the whole point of having law enforcement. It's funny that in America, people seem to have so given up on the idea of the police being honest and competent that all they want is to reduce their power. They don't seem to want reform and instead would rather suffer from crime than be investigated by the police.

Becoming a suspect shouldn't be a scary thing to avoid. You should be able to just ignore your status and wait for the police to exclude you. But somehow it's a problem in America that people just accept.


It's generally well accepted in the criminology field that you can reduce crime, and that the most effective ways to reduce crime happen well before policing and are not related to policing anyone. No one wants to suffer from crime through inaction.


>There's nothing wrong with that as long as the law enforcement is trustworthy and competent

"there's nothing wrong with the death penalty if you can trust the legal system implicitly" Neil Gaiman, American Gods


Because basing the system on the assumption that police will always be honest and competent is what got us into the situation we're in. Once bitten, twice shy.


I'm talking about accountability, not blind trust.



