The crisis of American forensics (thenation.com)
74 points by zeveb on Feb 9, 2018 | 15 comments



I think the Genrich case is just as much a story about prejudice against the mentally ill as it is a story about the failures of forensic science.

> Martinez understands that Genrich comes off as “weird.” But, he insists, “He didn’t have a violent bone in his body. He never hurt anybody. He was the one getting hurt.”

From what I've read, this is pretty much the norm for people with serious mental illness. They're believed to be aggressors, but in reality they are vastly more likely to be victims, because their alienation and unstable lives render them highly vulnerable to the tactics of abusers (which are often similar to the tactics that law enforcement officers use against suspects). People tend to assume they're lying or delusional when they tell anyone what happened.


Precedent is a terrible principle to apply to science. In fact, it may be the worst possible way to approach science. Science, almost by definition, is about challenging and upending established conventions and "truths".

Imagine if we continued to treat the spread of disease as a product of bad smells, and insisted on doing so rather than focusing on hygiene, because there is PRECEDENT for it.

*head shaking*


Until DNA, forensics had always been pseudoscience voodoo. Detectives I've known guesstimated the fingerprint error rate to be much higher than 1 in 24--and fingerprints used to be the gold standard.

I've even heard a few judges stress the importance of informing jurors before a trial on how most forensic evidence is nowhere near as solid as crime dramas make it out to be. These were sharp guys though, so that is probably not common practice.


Conceptually, pattern matching isn't wrong. It's the implementation, the presentation, and the gross exaggeration of efficacy that are the real problem. DNA matching is still often pseudoscience voodoo the way it is used in courts, for many of the same reasons that tool-mark matching is. Just look at Massachusetts, where they are having to retry tens of thousands of cases because tests were botched by a malicious technician and then passed off in court as foolproof, even when the prosecution knew them to be suspect.


It must be stressful knowing your entire profession is BS with a glossy veneer over it, and the walls could all come crashing down in an instant.

In some ways, we have come a long way from employing ducking stools to detect witches, but in other ways, we have not advanced at all.


I suspect that in at least some cases, when the actual perpetrator discovers that law enforcement has focused on the wrong suspect and is expending great resources to prove that suspect's guilt, they simply take their get-out-of-jail-free card and move on. I have no background in psychology, but I'm confident that statistics will support that even the most deranged actors will be able to suppress their antisocial impulses to take advantage of such opportunities. Personally, I would rather have a statistical algorithm prioritize suspects than a human detective with a feeling (and the influence to set an entire law enforcement agency after a weak suspect).


> I have no background in psychology, but I'm confident that statistics will support that even the most deranged actors will be able to suppress their antisocial impulses to take advantage of such opportunities.

You would be wrong. You have clearly never worked with people with dementia.

Many forms of dementia impact your impulse control. Such people cannot control impulses--antisocial or not.


I was thinking more of high-functioning sociopaths than of people who have lost some mental capacity. I appreciate the benefit of your experience on the topic.


Until your statistical algorithm decides that black = crime due to the data sets it's been fed, and nobody dares question the wisdom of the computer.

Before you reply that detectives might/would have done that anyway, the point is that at least that is identifiable and addressable. How do you determine how a neural net makes its decisions?


> Before you reply that detectives might/would have done that anyway, the point is that at least that is identifiable and addressable

I think you're handwaving the difficulty of identifying and addressing bias and discrimination. I'm not sure it's any easier to do for people than for computers. Computers have a huge advantage here because we can open up their brain and examine how they think, and once we create one that suits our tastes, we can replicate it faithfully. Humans are much more of a black box than a neural net would ever be.


That's a very real danger. As much as I can side with the desire to have a human in the loop to observe the system, I'm not sure I trust any particular human to be free of bias or immune to emotional reactions that result in incorrect intuitions.


A statistical model will tend to measure bias in the data.

If some parameter is well correlated with the bias, the model will discount that parameter.

It's not magic; it isn't going to just erase the bias, but a reasonable model will reduce bias rather than reinforce it.
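
To make that a bit more concrete, here is a rough sketch of what measuring that correlation can look like in practice. It assumes scikit-learn and NumPy, and the feature names and data are invented for illustration; the point is only that you can measure how strongly each input tracks a protected attribute and how much weight the fitted model assigns it.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic data: 'group' is a protected attribute, 'zip_code_score'
    # is a proxy correlated with it, 'prior_arrests' is independent signal.
    group = rng.integers(0, 2, n)
    zip_code_score = group + rng.normal(0, 0.5, n)
    prior_arrests = rng.poisson(1.0, n)
    # The label is driven by prior_arrests plus a biased term tied to group.
    y = (0.8 * prior_arrests + 1.5 * group + rng.normal(0, 1, n) > 2)
    y = y.astype(int)

    X = np.column_stack([zip_code_score, prior_arrests])
    model = LogisticRegression().fit(X, y)

    # Audit: how strongly does each feature track the protected attribute,
    # and how much weight does the model put on it?
    names = ["zip_code_score", "prior_arrests"]
    for name, col, coef in zip(names, X.T, model.coef_[0]):
        corr = np.corrcoef(col, group)[0, 1]
        print(f"{name}: corr with group {corr:+.2f}, weight {coef:+.2f}")

Whether the model then discounts the proxy or leans on it depends on the setup, but the correlation and the weight are both sitting there to be inspected, which is more than you can say for a detective's hunch.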


There are many algorithms already that render ANNs into decision trees and other interpretable forms. Nothing to fear here, citizen.


Show me some. That approach is at best experimental as far as I'm aware, and unable to create a 1:1 correspondence with how an ANN classifies.

A far cry from "many algorithms," as if it's some kind of well-established practice and fact.


https://arxiv.org/abs/1711.09784

https://arxiv.org/abs/1708.01785

https://arxiv.org/abs/1710.00935

https://arxiv.org/abs/1702.04595

https://arxiv.org/abs/1801.01693

https://arxiv.org/abs/1503.02531

https://arxiv.org/abs/1712.03781

https://arxiv.org/abs/1710.07535

https://arxiv.org/abs/1702.02540

https://arxiv.org/abs/1603.02518

http://ieeexplore.ieee.org/document/977279/

etc

It is admittedly a far cry from established practice, but how far from established deployment are the abuses you fear? We're not nearly at the point where paranoia is warranted. An ANN with bugs before debugging is no different from the ordinary software already used in forensic research, and nobody in their right mind will deploy networks that are impossible or prohibitively difficult to debug for critical tasks. Irrational people can abuse power even without any software or ANNs.
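
For a flavor of the simplest version of this, here is a minimal surrogate-model sketch, assuming scikit-learn and toy data standing in for whatever the real features would be: train a small net, then fit a shallow decision tree to the net's predictions and measure how often the two agree. It is an approximation rather than a 1:1 correspondence, but the fidelity is something you can measure and report.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy data standing in for whatever features the real model would use.
    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "black box": a small neural net.
    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                        random_state=0).fit(X_train, y_train)

    # Surrogate: fit a shallow tree to mimic the net's *predictions*, not
    # the original labels, so the tree approximates the net's behaviour.
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(X_train, net.predict(X_train))

    # Fidelity: how often the interpretable surrogate agrees with the net.
    fidelity = (tree.predict(X_test) == net.predict(X_test)).mean()
    print(f"surrogate agrees with the net on {fidelity:.0%} of test inputs")
    print(export_text(tree, feature_names=[f"f{i}" for i in range(8)]))

The papers above go well beyond this, but the basic move is the same: trade some fidelity for a representation you can actually read and argue about.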



