I wonder if all they did was parallel reconstruction. None of what was in the article seemed doable or feasible. Predicting crime the way they claim is probably just another form of profiling, only automated. In my experience as an undergraduate researcher, figuring out influencers in a social network is hard; community detection can be done, but predicting people's behavior is quite another thing. I wonder if other Palantir customers will face more fallout when people find out what they are doing?
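To make concrete what community detection and "influencer" identification involve, here's a minimal sketch using networkx on a stock benchmark graph. The centrality-as-influence proxy is an illustrative assumption on my part, not anything Palantir is known to do:

```python
# Minimal sketch of community detection and "influencer" scoring on a
# toy social graph (networkx's built-in karate club benchmark).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()

# Community detection: partition nodes by greedy modularity maximization.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")

# A crude "influencer" proxy: betweenness centrality (who bridges groups).
centrality = nx.betweenness_centrality(G)
top = sorted(centrality, key=centrality.get, reverse=True)[:5]
print("highest-betweenness nodes:", top)

# Note: this describes structure that already exists in the data; it
# says nothing about predicting what any of these people will do next.
```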
First, it's "parallel construction", not "parallel reconstruction".
Second, that term refers to the use of SIGINT (or, more likely, aggregates) collected by intelligence agencies to inform law enforcement but not add to evidence available to prosecutors.
Despite the scary name, Palantir is not in fact a signals intelligence agency.
There is a lot to be concerned about with police department gang member databases, but police departments predict crime routinely. It's a core part of what it means to run a large city police department. You don't allocate patrols uniformly across the city; that makes no sense.
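At its simplest, that kind of routine prediction is just trailing averages plus proportional allocation. A hypothetical sketch, with all precinct names and counts invented:

```python
# Hypothetical sketch: naive per-area crime forecasting and patrol
# allocation. All data invented for illustration.
history = {  # area -> monthly incident counts, most recent last
    "precinct_1": [30, 28, 35, 31, 29, 33],
    "precinct_2": [5, 7, 4, 6, 5, 6],
    "precinct_3": [18, 22, 17, 20, 24, 21],
}

# Forecast next month as the trailing 3-month average.
forecast = {area: sum(c[-3:]) / 3 for area, c in history.items()}

# Allocate 20 patrol units proportionally to the forecast.
total = sum(forecast.values())
patrols = {area: round(20 * f / total) for area, f in forecast.items()}
print(forecast)
print(patrols)
```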
Seems to me the major ethical problem with "parallel construction" isn't whether or not the information used to identify and detain the suspect comes from a signals intelligence agency, it's the intentional deception by law enforcement investigators and prosecutors of how and why they acquired and discovered evidence.
If the NSA illegally (or "legally" as they'd no doubt claim) intercepts private communications, and tells local traffic cops to find a pretext to pull over a particular car and search it - telling the defence and the court that the drugs were discovered in a routine traffic stop is parallel construction.
If the tip off comes from Palantir instead of the NSA, and the investigators and prosecutors deceive the court and defence about that involvement - I'd argue it's still parallel construction.
No, the problem with parallel construction is that it involves the introduction of tainted evidence into the law enforcement process. The fundamental protection we have against the abuse of surveillance and searches is the Exclusionary Rule, which dictates that the entire chain of evidence that begins with unconstitutional search is off-limits in prosecution. Parallel construction sidesteps that by avoiding the introduction of tainted evidence into cases at all, while still taking advantage of it during investigation.
But Palantir is a mechanism for collecting and analyzing evidence, not generating new evidence. It's database software. Policing has been predictive for decades; without Palantir, the police just use even dumber predictive methods.
Your argument assumes absolutely none of the information Palantir uses is tainted. That's a surprisingly difficult thing to demonstrate, and, given the amount of data analyzed, highly unlikely.
Thus, the impetus to use parallel construction to sidestep these issues.
We know that parallel construction has, at various times and in various places, been heavily used by some law enforcement elements. We don’t need to know anything about the software used by law enforcement to know that, and to suspect that this may be happening now. Systems such as those provided by Palantir are just a more sophisticated way that tainted evidence could be used.
It seems reasonable that, given a credible suspicion that such evidence is being used, we should fear, and want to prevent, more widespread and systematic use of that illegal evidence.
No, this is more specific than just using a DB. If they use Postgres to handle HR then it's irrelevant. This is specific to using large quantities of potentially tainted information and then not disclosing the use of that information. Which is more or less the definition of parallel construction.
Further, the problem with parallel construction is not that it hides tainted evidence; it's that the courts never get to decide whether the evidence was tainted. Even if all the information was fine, the technique inherently creates problems.
> Second, that term refers to the use of SIGINT (or, more likely, aggregates) collected by intelligence agencies to inform law enforcement but not add to evidence available to prosecutors
It's not that narrow. Stingrays have been used to create a map of behavior that is later acted upon.
Regardless of whether IMSI Catchers constitute according-to-Hoyle "parallel construction" or not, Palantir is database software. It's not a SIGINT source.
OK, my experience with Palantir: I saw them at Reflections Projections when they were hiring for a software engineering position. They gave me the impression that not only did they perform data collection using an ontology, but they also performed inference and analysis on that ontology.
When I interacted with their software as a trial user, their marketing material made it seem like they not only stored semantic data but also provided insight into that data. My impression was that they not only created databases but also aided in the analysis of the resulting data.
I must have been mistaken in my analysis and interactions, and they fleeced me. Thank you for correcting me (and for the typo correction). I appreciate your insight in this matter.
> police departments predict crime routinely. It's a core part of what it means to run a large city police department. You don't allocate patrols uniformly across the city; that makes no sense.
Predicting crime at the geographic level is much different than predicting it at the individual level, in terms of people's freedom and rights. EDIT: And even on the geographic level, it leads to many abuses including 'stop and frisk'.
I think most opponents of stop and frisk disagree with you: it's both. Police routinely "predict" that black and brown people are criminals, and stop and frisk allows them to act on these predictions without evidence. I think the argument here is that systems like Palantir quickly and easily become a technological proxy for these "predictions" and makes permanent the biases and racism used to create the data sets that drive them.
If NYPD set up patrols on wall street to stop and frisk bankers on their way home from work searching for cocaine or evidence of fraud it would be a very different issue.
I'm not sure if I have parsed this correctly. My understanding of stop and frisk is that there is a significant racial bias involved. To me this seems like part of the prediction. In my mind both aspects of stop and frisk are problematic.
maybe "parallel construction" is not the right term for this but I think the suggestion is, the investigators may have found the evidence through an illegal method that would not stand up in court and then made it seem like they arrived at the evidence using Palintir or some other legal method.
Isn't that _exactly_ how parallel construction works?
"No, your honour, we didn't use any of those grey-area or outright illegal surveillance technologies, we just got lucky in a random traffic stop. Again. Amazing how all these gang bosses drive around with tail lights out, isn't it?"
I could believe that they used Palantir illegally and retconned a search warrant. I could believe they did an illegal search and retconned a Palantir insight. But I'd like to know which claim is being made. Is Palantir the crime or the coverup?
"Hickerson, who was sentenced to 100 years in prison, has accused prosecutors of suppressing analytic evidence obtained through the use of Palantir, arguing he had a right to view the evidence if his name surfaced as being affiliated with a gang, or if his name was absent from any analytic data related to 3-N-G."
and
'Ken Daley, a spokesman for the Orleans Parish District Attorney's Office, said in a statement Wednesday, however, that Palantir "played no role whatsoever in Mr. Hickerson's indictment and prosecution."'
The defence is claiming the investigators used "analytic evidence" obtained from Palantir but are refusing to allow him to view it. The prosecution are claiming Palantir played no role - which is exactly what you'd expect if parallel construction were being used.
Related: cynical me considers 'no role whatsoever in Mr. Hickerson's indictment and prosecution' to be exactly the sort of weasel-worded denial you'd use if you had used Palantir information to construct a story explaining how you'd indicted and prosecuted someone based on _other_ evidence, evidence you wouldn't have known about without the unacknowledged Palantir-sourced information you'd want to hide from judicial scrutiny...
Not only that, but there's no indication that use of Palantir would have been illegal in any way in this case, only that if analysis leading to the conviction was obtained with Palantir, then it was withheld from the defendant. Conversely, they argue, if Palantir held no such analytical evidence, then its absence is proof of innocence. That latter part is very much a stretch, though, as it implies a level of sophistication, if not omniscience, that is not in reality possible.
Just because you weren’t able to identify influencers in social networks as an undergraduate researcher doesn’t mean that Palantir hasn’t figured it out.
Link analysis of criminal organizations is something literally every detective knows how to do. I don't understand the difficulty here unless they think finding good marketing candidates on Instagram is anything like a criminal investigation.
Cops keep track of your associations (hence why they can no longer force you to show them the contents of your phone; your contact history is juicy intel that used to reveal lots of links!). Modern technology just makes it easier.
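For a sense of how mechanical basic link analysis is, a toy sketch with invented names and contact records:

```python
# Toy link analysis: build a graph from (hypothetical) contact records
# and ask how two people connect and who the go-betweens are.
import networkx as nx

contacts = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),  # one group
    ("dave", "erin"), ("erin", "frank"), ("frank", "dave"),  # another
    ("carol", "dave"),                                       # the bridge
]
G = nx.Graph(contacts)

# How does alice connect to frank?
print(" -> ".join(nx.shortest_path(G, "alice", "frank")))
# alice -> carol -> dave -> frank

# Who sits on the most paths between others (likely go-betweens)?
scores = nx.betweenness_centrality(G)
print(sorted(scores, key=scores.get, reverse=True)[:2])  # ['carol', 'dave']
```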
This article suggests that the Palantir software, regardless of method, was in fact instrumental in identifying a gang leader who was subsequently sentenced to 100 years in prison.
And that the reason it was cancelled was largely the public's discomfort with the program, as raised by a previous Verge article that laid out the potential for civil liberties violations and the potential macro-level ineffectiveness of Palantir's identification methodology.
With that in mind, it seems like Palantir's largest risk going forward is running afoul of due process in their criminality/policing divisions, not that municipalities won't fall over themselves to hand them billions in fees in the name of efficiency.
The article suggests the opposite: the prosecutor contends that Palantir was a non-factor in the conviction, hence its lack of inclusion in material disclosed to the defense. The reason for the contract getting cancelled was also not confirmed or commented on by the city; the article only mentioned "some" who were "leery" of its use because it could be used to connect gang members to others. That is very vague wording of any concern about the use of Palantir. The article was very light on content here, using words that indicate trepidation without connecting those loaded terms to any explanation.
From prior stories about Palantir's lack of efficacy outside of well-resourced intelligence and military venues, my guess is that lack of efficacy was the cause of the contract going belly up.
I don't see parallel construction here. There's no reason to think that any information in Palantir was inappropriately obtained without a warrant when one should have been required. Without that breach, then Palantir helped generate leads, it didn't result in parallel construction. That's not even what the defense contends in the criminal issue cited in the article: There, they simply maintain that a Brady violation occurred by not disclosing that Palantir was used, or that Palantir showed negative evidence that was favorable to the defendant. I really don't understand where any issue or suspicion of parallel construction arises from the use of Palantir. It's basically a mashup of a social network and CRM with predictive modeling on top.
There's no reason to think it happened either way. But I think there's reasonable suspicion that Palantir might have been involved.
> information in Palantir was inappropriately obtained without a warrant when one should have been required
That's not the only issue, or even the main one. Government investigating and collecting information on private citizens en masse, rather than individuals for cause, is very dangerous. Doing it based on decisions made by Palantir's software developers, seems even worse.
> It's basically a mashup of a social network and CRM with predictive modeling on top.
How do you know how it functions? And wouldn't that describe any government system used for surveilling (and sometimes oppressing) its citizens? The Stasi had the same thing, just with far less powerful tech. Using banal buzzwords doesn't make Palantir or government surveillance any more banal.
These are all reasonable concerns about Palantir, but none of them are parallel construction. And I know how Palantir functions because there's plenty of information available on it. Heck, call their sales team and chat them up about a use case if you want to learn more (it's what I did, about 10 years ago). My point is their Gotham product isn't top secret; at one point they even had some sort of demo client.
I'm sure their tech, like any other, can be misused. But parallel construction isn't even hinted at here; it's the potential Brady violation that is the issue.
The problem as I see it is not just whether the Palantir evidence was inappropriately obtained; it's also the refusal to disclose to the defence all the evidence used against them.
If you're OK with "using ubiquitous surveillance as evidence gathering then cherry picking other evidence discovered based on the original evidence set and never needing to disclose in court that you used the original evidence as a starting point" - how can there be checks and balances in place to ensure that original evidence was obtained legally? What's to stop every ambitious/crooked cop from starting every investigation with ‘the fruit of the poisonous tree’?
It is relevant because that specific conviction caused the public discomfort. (The defendant is claiming that the group he's affiliated with is not a gang, simply a group of acquaintances, and Palantir's analysis caused police to believe it was a gang.)
It's amazing how companies can get so big predicting the future. The process is a glorified dart throw, especially in social systems. The variables are always changing, and the effect of reacting to a prediction (like increasing police presence) changes the expected outcome, so predictions become elusive even if initially correct.
Also, when the variables like homicides are relatively low numbers, there are huge percentage swings that happen naturally.
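That's easy to check in simulation. Assuming a constant "true" rate of 20 homicides a year (a made-up number for a small city):

```python
# Even with a constant underlying rate, low yearly counts produce big
# percentage swings by chance alone. The rate of 20/year is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
years = rng.poisson(lam=20, size=10)
pct = np.diff(years) / years[:-1] * 100

print(years)         # counts typically bounce around in the mid-teens to high 20s
print(pct.round(0))  # "homicides up/down 20-40%" headlines, from noise alone
```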
Macro indicators like weather and per capita income have long been known to be correlated with crime, but trying to predict and proactively reduce it is much harder.
The promise of AI and predictive analytics is huge, but it misses the mark in non-closed systems.
I doubt the actual effectiveness or usefulness of the system mattered much to the police department. Most likely this was used for one of two purposes: a) as mentioned above, parallel construction and b) an excuse to arrest people they wanted to arrest anyway, with the system as cover.
I don't see the parallel construction here. Parallel construction would have to involve use of material that should require a warrant, but one was never obtained. Just generating valuable leads that result in investigation that leads to a warrant isn't parallel construction.
All of those things are true x10 in the stock market, but people still seem to be able to consistently make money. I think you're overstating the difficulty of the problem a bit. There's a lot you can do with simple models to predict crime - they're not perfect, but as long as their users understand that, that's fine.
Also, certain crimes like drug use are common to the point where even models highlighting "links" or "suspects" purely at random might yield increased prosecutions, if law enforcement believed in the model enough to investigate more thoroughly than they would when following a "hunch", carrying out random checks, or conducting a "sweep of the area". And if a model is largely uncorrelated with actual (rather than revealed) propensity to commit crime, and rather more closely linked to demographics, then its effect on whom police prioritise chasing with their resources could be a big problem. Certainly a far more likely problem than a model failing to surface any evidence of any criminal activity.
For other reasons, it would also be problematic for a defendant to cite a false negative from a limited social-network profiling tool (its failure to identify him as a gang member) as a case for the defence, as the article suggests one New Orleans defendant hoped to do.
I'm not saying it's trivial, or anything close to it. I'm saying that people do it. There are a number of quant firms that are and have been consistently profitable (two sigma, renaissance, etc..).
Two Sigma hasn't been all that consistent. First, their main run of success only began around 2011, after the financial collapse, when the recovery was well under way. Second, they had a pretty bad 2017. They might not be the best example here. I'm not familiar enough with Renaissance to comment on them.
> The owners’ earnings are also driven by Renaissance’s fabled Medallion Fund, which is closed to outsiders. It has earned an estimated annualized return of 35 percent since 1982.
The fact that a coin that comes up heads 75% of the time came up tails once does not mean it's not a biased coin.
Thanks for the Renaissance link; interesting firm. I still don't think Two Sigma was a good example for a counterclaim, though; their primary success has come post-financial collapse. Renaissance, on the other hand, has consistent performance over the decades, though it isn't really representative of the field. But I take your point nonetheless.
Not all stock markets have the "consistent" growth the US has. I put the word in quotes because that exact notion, the past predicting the future, is what's inherently wrong with many of these companies.
It's super easy to create a model that could beat the market when run against historical data. It's a whole other thing to beat the market in real time.
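A synthetic demonstration of that asymmetry: search enough random "strategies" over historical noise and the best one looks brilliant, then evaporates out of sample.

```python
# Backtest overfitting in miniature: pick the best of many random
# strategies on past (synthetic) returns, then test it on fresh data.
import numpy as np

rng = np.random.default_rng(42)
past = rng.normal(0, 0.01, size=1000)      # 1000 days of pure noise

best_perf, best_signal = -np.inf, None
for _ in range(500):                       # 500 random long/short signals
    signal = rng.choice([-1, 1], size=1000)
    perf = float(signal @ past)
    if perf > best_perf:
        best_perf, best_signal = perf, signal

future = rng.normal(0, 0.01, size=1000)    # fresh, unseen "market"
print(f"in-sample:     {best_perf:+.2f}")                     # looks impressive
print(f"out-of-sample: {float(best_signal @ future):+.2f}")   # ~zero
```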
Evidence of this is that it's possible to leverage stock positions by multiple factors. If anyone could accurately predict the future and profit from it, it would be easy to turn $5k into $5M with enough leverage and compounding profits. The problem is no one can do it consistently at that level of accuracy.
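The arithmetic behind that: $5k to $5M is a 1000x gain, about ten doublings. With a hypothetical 80% return per period (say, a 20% edge run at 4x leverage, ignoring costs and blowup risk):

```python
# How many compounding periods to turn $5k into $5M at a (hypothetical)
# 80% return per period?
import math

periods = math.log(5_000_000 / 5_000) / math.log(1.80)
print(f"{periods:.1f} periods")  # ~11.8 -- which is exactly why nobody actually does it
```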
Of course there are. They do it by figuring out how to get material non-public information. That's been the scam for as long as there has been a stock market, the rest is noise.
If enough people flip enough coins, there will be outliers. It’s not proof of the ability to manipulate the falling coin, it’s just how probabilities work. Take the same group and keep them flipping coins, and that becomes clear, but in this case the rewards of luck are persistent, in the form of investment.
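That's easy to simulate: hand 10,000 hypothetical managers a fair coin for 20 periods and count the apparent stars.

```python
# Survivorship in miniature: with 10,000 fair-coin "managers", roughly
# two hundred will "beat the market" in at least 15 of 20 periods.
import random

random.seed(0)
stars = sum(
    sum(random.random() < 0.5 for _ in range(20)) >= 15
    for _ in range(10_000)
)
print(stars)  # ~200 from chance alone, since P(>=15 of 20) is about 2%
```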
Many quant firms beat the market with a consistency that puts their p-values well below the threshold that would be required to make this claim, even given the multiple comparisons problem you bring up.
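To put rough numbers on that (the 33-of-35 record below is hypothetical, loosely inspired by the Medallion figures quoted upthread):

```python
# A (hypothetical) 33-wins-in-35-years record against a 50/50 null,
# corrected for the multiple-comparisons problem raised above.
from scipy.stats import binomtest

p = binomtest(33, 35, 0.5, alternative="greater").pvalue
print(p)           # ~2e-8
print(p * 10_000)  # ~2e-4: still significant after Bonferroni over 10,000 funds
```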
The question you're begging is whether the computer oracle is accurate or transparent at all, let alone more so than traditional police work. The point isn't that the whole thing is a bad idea, it's that the comparison to wall street should not be encouraging. Frankly from the article, it sounds like the system they had just wasn't that useful, so this is moot.
From the outside, it's reminiscent of the LA school system's iPad fiasco. A flashy Silicon Valley purchase that hits the top three requirements for a government program:
1) Appear to be useful by spending a lot of money
2) Without addressing actual structural problems
3) While funneling large amounts of public money to well connected contractors.
My point is that there is nothing wrong with this technology in principle. I'm not saying anything about this technology in particular. The comparison to wall street should be extremely encouraging - a small number of extremely smart firms consistently beat the market. If a small number of firms can produce policing tools that make it more efficient, that'd be great.
It's unacceptable in principle. If a wall street model is right more than it is wrong then it wins. If police are wrong half the time we have a big problem on our hands.
Nobody makes all the right calls on wall street. Some just win more than they lose.
I don't see how that's relevant. As has been pointed out to you many times, "it's as good as Wall Street" isn't an acceptable standard for criminal investigation.
Sigh. Convictions are the only place you need a 100% success rate. How do you think a criminal investigation gets conducted? Do you think they only investigate a lead when they are 100% sure it will lead directly to a conviction? If a piece of software generated 5 extra leads, and 1 of those leads led to actionable evidence, that's useful. You're right, "as good as Wall Street" isn't the standard; much worse than Wall Street is a perfectly acceptable standard for software that acts as an additive, informational aid to policing.
There are a number of quant funds that consistently make money. Most of them are not trying to take anyone's money, because they're not open to the public, because they don't need more AUM.
> Of particular concern to some was the use of Palantir as a tool that could aid investigators both in connecting suspected gang members to others in the community, and in identifying people deemed at high risk of either committing gun violence or being the victim of it.
??? Isn't that precisely what the tool is meant to do?
and mr palantir got up from his desk, steaming envelopes, and walked down the street, into the future. "i'll be a postman", he thought. "won't have to spy on anybody".
Since we are talking about a technology that basically speculates about people, it's kind of funny that according to the article the "New Orleans Police Department's Director of Analytics" is called Ben Horwitz. Hmm...
http://leitang.net/presentation/Community%20Detection%20in%2...
https://www.researchgate.net/publication/260598010_PREDICTIN...