Palantir’s User Manual for Cops (vice.com)
152 points by ericzawo on July 12, 2019 | 150 comments



> "Palantir software is instrumental to the operations of ICE, which is planning one of the largest-ever targeted immigration enforcement raids this weekend on thousands of undocumented families. Activists argue raids of this scale would be impossible without software like Palantir."

Once and for all, it is the policy makers, not the tech industry, who are responsible for these operations. Tech enables people to do things easily, which can be good or evil. It is people who should decide, be concerned, and push the lawmakers toward the right/justified path.


I used to think the same. I now think that this reasoning is faulty.

Cars can be used for good or evil. For the most part we leave it up to the users to do the right thing, however we don't just leave it up to the users. If a modern car manufacturer were to make a vehicle with no safety features whatsoever, we would rightly call them out as a bad actor. Seat belts, air bags, bumpers, crumple zones, defoggers, windshield wipers, and on and on. We have laws that require a basic level of safety for these potentially dangerous machines.

There is no reason we shouldn't expect tech and software firms to include basic "safety" considerations of their own.

Remember all those companies that helped implement censorship and spying systems for authoritarian regimes? Still think it should be up to "the lawmakers"?


At the same time, a lot of those safety features are enforced by the governments of the people they're manufactured for. Depending on the country you buy a vehicle in, it may have more or fewer safety features.

Of course, for cars, safety is a marketable feature so it behooves them to make them somewhat safe. In this instance, it seems like these precautions are against the user's interests, which complicates things.


I think this is a good analog for what Palantir does, actually. The thing to note is that Palantir is not in the data collection industry, or even the data storage industry. They are in the data analysis industry. One way or another California cops are going to build or buy tools that allow them to access their data, it might as well be from a company like Palantir which is security-minded.


I'm an American citizen who is happy to see the law enforced. I don't live in Atherton or Palo Alto. There were two Central American gang-related shootings near my family's home last week; there were two MS-13 murders in the broader area earlier this year. I grew up here, and it didn't used to be like this. I want to see those who come here illegally, or who have been denied refugee status as the result of due process, deported.

I don't care what race, nationality, or religion they are. I also want to see families and young kids kept together, and treated humanely. But at the end of the day, we [the U.S.] are a nation of laws, or we are not. I'm proud of Palantir's efforts to help enforce the (democratically-enacted, internationally-conventional) law of the land.


Much of what they are doing stretches and very likely breaks constitutionality. The fact that they can track you without any probable cause for any amount of time and then store that information means that essentially they no longer have to get warrants; there is no oversight on anyone they want, as long as they claim it's related to a case. That's absolutely wrong.

No one wants gang members running around shooting people up. We are very clearly not a nation of laws, as laws apply differently depending on the very things you claim not to care about. Sentences are harsher and applications of justice are different depending on what race, nationality, or religion people are, or even how much justice they can afford. This isn't an affront against the police; I'm good friends with officers. But at the end of the day, as citizens it's our responsibility to hold our police accountable and establish the limits imposed upon them. Our founding fathers lived under a police state that was unaccountable to the people, and we are drifting closer to that line. Our character as a nation is not defined by what we do when it's easy, but by how we act when the decisions are hard.


> very likely breaks constitutionality... they can track you without any probable cause... they no longer have to get warrants.

Does anything in Palantir's database require a warrant? Law enforcement can absolutely follow your car without a warrant, check if you've come up on a license plate reader, call another department and see if they have your phone number, list of associates, etc. Point being, police don't need a warrant to investigate you. I don't know if they even need reasonable suspicion or probable cause. You might be able to claim harassment if they interfere with your life. Otherwise, constitutional protections apply to restricting your freedom (e.g. detaining/arresting you), seizing/searching your property, and monitoring your private life (e.g. wiretaps, bugs, etc.). It seems like what Palantir's doing is making it easier for departments to share information that would normally require lots of digging.

I'm not defending the practice at all. I despise this kind of mass surveillance, and am particularly concerned that it's being done in secret. My opinion is that we don't have enough, or practically any, privacy protections in the USA; at least on public property. Might sound like a contradiction, but we spend much of our lives in public, and shouldn't have to hide in our homes to prevent mass surveillance. I don't think the constitution goes far enough to protect us anymore. The founding fathers could not even imagine the world we live in today. Even George Orwell couldn't conceive of the capabilities we have today when he wrote 1984. Yet we've done close to nothing to rein in these capabilities, and all the movement is in the wrong direction.


The fact is that human rights violations and child abuse are happening now. Here is one recent example from DHS' own inspector-general: https://www.oig.dhs.gov/sites/default/files/assets/2019-07/O...

Take a look at the pictures and pretend it's your family members. Still proud?


Palantir doesn't build holding cells. They've worked with the government for many years, long prior to this recent and ongoing debacle. I'm ashamed of DHS for providing unhealthy facilities, and I'll agitate for their rapid improvement. Family separation was quickly curtailed after the initial public outcry, and I expect a similar result here. None of this would diminish my goodwill towards Palantir for improving government I.T. systems.


They don’t build the cells. They just help fill them.

If you’re helping put people into inhumane conditions, you’re at the very least an accessory to these crimes.


As someone that has been incarcerated, I would be instigating a riot in those conditions.


The problem is most people that want undocumented immigrants deported don't actually want them deported. They just want them to be an underclass available for cheap labor and keep their mouths shut. Undocumented workers are essential to the economy and the same people that hate them hire them.

If you want rule of law, provide work permits and a path to citizenship. Immigrants of all variety aren't going away. We need them.


>The problem is most people that want undocumented immigrants deported don't actually want them deported. They just want them to be an underclass available for cheap labor

Do you have a source backing this up?


They are essential to the economy because American employers can pay below-market wages for back-breaking labor. I'd prefer to make such practices illegal, rather than continually subjecting undocumented immigrants to awful working conditions with no healthcare or legal recourse in cases of abuse.

If a concerted effort was made to improve the lives of such workers, I think you'd be surprised how quickly Americans would be willing to do those jobs.


I think you'd be surprised how fast the companies hiring all those workers would go bankrupt, leaving the United States without an agricultural industry (which is as critical to national security as the rule of law).

Americans are, on the whole, unwilling to pay the prices required to provide people with a living wage and proper labor protections.


>Americans are, on the whole, unwilling to pay the prices required to provide people with a living wage and proper labor protections.

I would frame it as, "Americans are, on the whole, unwilling to pay higher prices for ethically-sourced food when lower prices can be had elsewhere." The solution is to eliminate the cheaper, unethical alternative.


Labor is only a small percent of agricultural costs. They could profitably pay pickers far more without a substantial increase in food prices.


US citizens never stay past lunch at any of the farms doing picking. The work is too hard, nobody here wants it.


The work is so back breaking that no US citizens want the work. They get easier jobs. I heard interviews with a couple of farmers, no US citizens had ever come back after lunch.


Do you know if they were paid a wage worthy of destroying your back for? Or is it in our best interest to subject brown people to slave tier labor for little pay?


I think the job is for people without more opportunity. Paying more would help, but I still think it would mostly be immigrants doing the job. The other issue is that paying more would result in more automation for crops that can be automated.


If paying more money for back-breaking work is going to result in more automation, we should welcome it. There's no way that I will stand on the side arguing that unemployment is worse than slavery. But yeah, if they raise the wage for these slave-tier jobs and still only immigrants do them, why the hell not? (As long as they aren't here illegally of course).


I agree. A minimum wage for everyone, with no underclass, would be ideal. As would a work permit program.


I believe a [citation needed] is required for the claim that US immigration policy is "internationally conventional".

There is of course enormous variance in how countries handle immigration, but the ethical and conventional policy could be summarized as:

1. Border enforcement. Ie, try to prevent people from entering illegally

2. Tolerance for those who are illegally present

(2) means that a person who emigrated illegally should be able to exist outside the shadows. That is, they should be able to get ID, work, drive, etc. while their immigration case is decided, often indefinitely. As well, parents of children born in-country can apply for residence in the EU, for example.

This obviously makes an enormous amount of practical sense. All of the "problems" with illegal immigration ('job stealing', not paying taxes, etc) are exacerbated when people cannot legally work. Distinguishing between MS-13 members and hotel room cleaners is harder when the latter cannot integrate.

Lastly, there is an enormous moral cost to the "crackdown" immigration policy enacted by the US Government. Undocumented immigrants live in constant fear, poverty and exploitation. This is a real ethical and moral cost for an act which is, at the end of the day, pretty benign.

The US is a nation of laws, but the punishment (Can't complain when you're exploited! Can't sign a lease! Get deported to a country you left when you were two weeks old!) for illegal immigration is completely disproportionate to the crime.


Mexico is deporting illegal migrants at the highest rate since 2006, and has deployed the military on their southern border:

https://www.teledoce.com/telemundo/internacionales/mexico-au...

The E.U. tightened their border control policies after a wave of terrorist attacks and upsurge in far-right political activity. We ran the experiment, and it turns out that's what happens when you let in millions of people with little job skills from cultures that broadly have, among other things, very regressive views on women's civil rights.


1) I think if you re-read my comment you'll notice that the issue with the US policy isn't loose borders, it's unethical treatment of people who make it _past_ the border.

2) Mexico's deportations are just as unethical as the US ones.


The main issue with the US policy is that states don't follow it, leaving millions of undocumented immigrants in the country. The correct course of action is to document undocumented immigrants and then keep track of them while you decide if they are allowed to stay or not, not to try to hide them like many American states do.


> 2. Tolerance for those who are illegally present

I don't think this is true in any other country than USA, and it is only true in USA due to States not being allowed to enact their own immigration laws. California is very tolerant of illegal immigrants since according to them they are not really illegal.

In most countries it works like this: people arrive in a country; as soon as it is noticed that they are illegal, they get contacted by authorities and they get to seek asylum, etc. If they don't, then they are not tolerated. If they do, then during the asylum process they are not illegal any longer, and hence tolerated. As soon as a decision is made to deport them, they are illegal if they stay and will not be tolerated no matter what they do.


> I don't think this is true in any other country than USA
Are you unaware that a majority of refugee applicants in Sweden (for one example) are ultimately deported?


I am from Sweden, so yes, I am aware how it works. Sweden doesn't tolerate undocumented immigrants. What happens is that people arrive, they get documented, and then after a while they either get deported or get allowed to stay. Having undocumented immigrants never makes sense, in my opinion.


> (2) Means that a person who emigrated illegally should be able to exist outside the shadows.

This doesn’t make much sense to me. You are saying people who win the lottery of illegally getting into a country without detection should be rewarded. People who figure out how to beat a country’s systems of legal entry should be handed awards of permanent residency. Why? If citizens don’t want to punish illegal immigrants then they should just eliminate the relevant laws. But if laws exist, then they should be applied uniformly, equally, and with due process to everyone.


Most countries in the EU (if not all) don't give birthright citizenship, so I don't know if your argument holds true. The USA is one of the few countries that does.


4 incidents in an unspecified broad region within a year's time isn't really significant unless you're talking about an extremely underpopulated region.


What specific region are you talking about?


Something tells me you’d have a very different reaction if the same piece of software was used to track “legal” firearms crossing us state borders, for example.


I don't accept this. This is shirking responsibility for what one has brought into the world. Technology has a moral component that can't be ignored in the name of progressive idealism.

It's both/and.


Still, if someone crafts a knife and someone uses it to hurt another I'll blame the user, not the crafter.


Sure. But if I sell you a knife, and you come back and ask for a bunch of features that make it easier to hide, and a special curve to make it easier to get underneath the human ribcage, and a wider blade that makes wounds heal more slowly, etc., at some point you have to accept your complicity.


That’s not compelling. Knives have practical and constructive uses outside harming people. Palantir is being used as designed.


A better analogy would be someone with a knife asking Palantir to draw up a list of people that can be stabbed with impunity.


The time has passed to blame the knife maker, unfortunately. I'd like to give him a piece of my mind. Kidding aside, I see where you're coming from. I'm not trying to excuse lawmakers or the people who fire the gun, operate the drone, etc. I'm just saying the knife inventor is a part of the chain, he or she is not a neutral party.


You can't have it both ways. Police can easily hold someone as guilty of a crime for being in the same car as someone committing a crime. If you can be charged for that then a creator can be held complicit for any actions done using their software as intended.


Within reason. I am no apologist for a lot of police misdeeds and dual standards, but whilst you _can_, often you won't, depending on context:

multiple high speed chases around where I live have resulted from police lighting a car up, and the driver failing to stop. Passengers in the car were "released on scene", because there was a recognition that they did not commit the failure to stop (and other associated offenses), nor could they safely extricate themselves from proximity.


What if somebody discovers two feuding tribes in the jungle and decides to sell them deadly knives by advertising their killing ability?

Note: My comment is responding to the general idea of disclaiming responsibility. I don't know enough about this company to judge.


There is a difference between crafting a knife and crafting a tool specifically for harmful purposes. The IBM engineers who helped Germany facilitate the holocaust are probably a bit more involved in evil than a knife crafter.


Hell. They forward deployed engineers (just like Palantir!)

> Particularly powerful are the newly-released copies of the IBM concentration camp codes. IBM maintained a customer site, known as the Hollerith Department, in virtually every concentration camp to sort or process punch cards and track prisoners.

https://www.huffpost.com/entry/ibm-holocaust_b_1301691


Does the crafter still avoid blame if they were knowingly contracting to craft knives for a gang that has a history of hurting others?


Remember that in the Nuremberg trials a prevailing opinion was that merely obeying illegal orders can be criminal.

Similarly, if the technology being built can reasonably be used for great harm (and the one building it is supposed to differentiate between 'knife' and 'Death Star' levels of potential threat), then building the technology may itself become a responsibility.


> Once and for all, it is the policy makers, not the tech industry, who are responsible for these operations.

Prefacing your opinion with 'once and for all' is not much better than saying 'mine is the only valid opinion' and doesn't make for a good discussion.

Policy makers certainly create the demand for such commercial services, but it is a choice to supply that demand or not, and those who find such demands morally questionable have the capability to frustrate them by refusing to participate or obstructing the supply.

Individuals can of course abdicate the moral decision-making if it makes them uncomfortable, but acting without regard to the good or ill of the outcome is an implicit endorsement of it. If people later decide that certain outcomes are a moral ill, indifferent participants may find themselves considered culpable.


I'm sorry, but your response is not based at all in historical fact. A huge part of forcing a government's hand to do the right thing is having corporations and other businesses not work with them (one example is the fall of apartheid in South Africa, among other things).

Bottom-up direct action is the only real way to fight injustice; lawmakers move far too slowly to effectively be a barrier to all injustice. Additionally, building a weapon is not simply building a tool (note that I'm not saying weapons aren't ever necessary), and it's foolish to suggest otherwise.


I don't think it makes a lot of sense to say the tech industry isn't responsible. At least as engineers I think there should be some ethical considerations.

I think there is a difference between a truly neutral tool being used for harmful purposes, and a tool specifically made for harmful purposes.


Someone in the tech industry decides to build such product for the lawmakers. These people in the tech industry know how this software is being used and they continue supporting it.

Yes, lawmakers are responsible for it, too. But let's not justify the "we just built and sold a weapon, we didn't fire it" mentality.


Let us remember that Peter Thiel founded Palantir to give himself backdoor influence on policy.


Peter Thiel would make an excellent Bond villain. You can feel empathy for him for being genuinely wronged and at the same time be disgusted by how he's harmed others in the pursuit of his private justice. He should never be underestimated and just when you think you've found a weakness he can expose it as a secret strength. If he weren't using his tremendous skills for so much ill in the world he'd be extremely admirable and hard not to cite as a role model.


While yes the final decision lies with someone else, I am very uncomfortable with this idea that the creators of these tools are completely blameless.

Tools are abused all the time, but not every tool is as destructive and likely to be abused. The use and abuse should be considered, and mitigation undertaken as part of any basic engineering ethics standard.

Tools aren’t natural, and certainly not these tools. They were made very intentionally. To use an extreme example, you don’t get to make the proverbial genetically targeted bioweapon and then throw up your hands when it’s used and say, “Hey, I’m blameless. I just made the thing.” The outcome of its use and its appeal to bad actors are obvious; it wouldn’t have existed if you didn’t build it.

But let’s look at Palantir in particular. It’s not just software; it is also a consultancy via its “forward deployed engineers”. This means that it’s not just a value-neutral tool, but actively aiding customers to do whatever.


>Once and for all, it is the policy makers, not the tech industry

This is what people working on horrible things say to themselves to justify collecting a (giant) paycheck. Many parties are at fault, whether they write the laws or the software.


This is the tech equivalent of saying they were just following orders.


Seriously want to understand: what exactly is wrong with equipping cops with better tech to catch people who blatantly violate laws? I am not talking about tech that itself violates constitution/laws but rather the tech that works within the boundary of constitution/laws while helping cops do their job better.


They can use it to skirt the law completely and you’d never even know.


> it is the policy makers, not the tech industry, who are responsible for these operations.

Organizationally responsible. Ethically, if you provide means to do X and can reasonably predict that someone (e.g. government) will apply X to do something unethical, then you are at least partially responsible for the final unethical result.


"I am innocent of this man's blood; see you to it."


Since not everyone will get the reference: according to the Christian Bible's New Testament, this is what Pontius Pilate (the Roman administrator who sentenced Jesus Christ to death) said when he reluctantly allowed the angry mob to decide JC's fate. https://en.wikipedia.org/wiki/Pontius_Pilate


Sounds reasonable, but when I look around...


I'm mixed on this.

I hired a former CIA analyst a few years ago. Part of his job at the CIA was to contribute to the daily briefings provided to president Obama. I asked him how he felt about the ethics of spying on American citizens. He said I wouldn't believe the number of threats that were prevented on a daily basis because of the tools and data they have access to. He said they don't share the information about what or how many crimes they prevent because it would cause panic. They also don't share that information because it would give away how they were uncovering the plans.

It seemed like a viable explanation and I don't think he had any incentive to embellish...but it also seemed like something big brother would say to get me to give up freedoms.


> it also seemed like something big brother would say to get me to give up freedoms

Or to keep up morale at the agency.

The one indisputable fact is that actual attacks are very rare. So there are only two possibilities:

1. Law enforcement is exceptionally effective at stopping them, or

2. There are very few attempted attacks to begin with.

Here's a data point to help you decide which of these is more likely:

https://abcnews.go.com/US/tsa-fails-tests-latest-undercover-...


I don't think a datapoint about a bunch of airport security guards paid minimum wage with minimal training is relevant to the intelligence community.

At least I hope not.


Law enforcement (of which TSA is a subset) is a customer of the intelligence community. If they aren't, what could possibly be the point?

(You might also want to re-read the headline.)


You should've just stopped after number 2. I trust the intelligence community as little as the next techno-privacy-nut but obviously no one can come to any decision on a complex question with ONE data point.


That depends on the data point and the hypothesis. For example, the claim that the Titanic was unsinkable was pretty definitively falsified with a single data point.


Keep in mind that internally lots of stats/etc are posted around things like cyber attacks, where a 'port scan' is counted as 65k different attacks (all of the tcp ports getting a 'malicious' SYN == an attack).

It's the functional equivalent of calling satellite overflights attacks, every time they fly by.

'We are under constant attack!' also helps the internal justification of working for/with agencies like that.
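The inflation described above can be made concrete with a toy tally (purely illustrative; the counting scheme is an assumption drawn from the parent comment, not any agency's actual methodology). A single full-range TCP port scan from one host, counted per-port, becomes tens of thousands of "attacks":

```python
# Toy illustration: one full-range TCP port scan, counted per-port,
# inflates into ~65k "attacks"; counted per-source, it is just one event.
def count_attacks_per_port(scans: int, ports: int = 65535) -> int:
    """Tally every SYN to every port as a separate attack."""
    return scans * ports

def count_attacks_per_source(scans: int) -> int:
    """Tally each scanning host as a single event instead."""
    return scans

one_scan = 1
print(count_attacks_per_port(one_scan))    # 65535 "attacks" reported
print(count_attacks_per_source(one_scan))  # 1 event by a saner metric
```

The same underlying traffic supports either headline, which is the point: the metric choice, not the threat, drives the number.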


I don't think that we can compare cybersecurity to government intelligence agencies. Cybersecurity is a field that can only grow by making people fear for their computer security, which is why you often hear such exaggerated numbers for "attacks".

You could argue the same thing about the CIA, but there's no direct financial incentive like there is for growing your cybersecurity business.

I believe the real crux of the problem is intellectuals won't trust intel agencies because there's no evidence that they're being used exclusively for good, and the intel agencies can't provide that evidence without burning their methodology.

There's absolutely no way for us as outsiders to evaluate whether mass intelligence collection tools like PRISM are being used for good or bad, or whether they're even necessary. So instead we have to judge them based on what we know.


I was using that as an illustration, not a direct comparison. Govt employees are graded and promoted based on two primary things: # of subordinates and $ managed. There is a tremendous incentive to grow your career through a mindset that you're always under attack. And, depending on your personal point of view YOU MIGHT NOT BE WRONG.

I didn't say they weren't doing that, I was just suggesting that that frame of mind is fostered for a variety of reasons inside those agencies.

And 'intellectuals' can look at history and see that, categorically, the intel agencies have not used their powers exclusively for good at any point in history under any government. That doesn't mean they're not necessary, worthwhile, or even honorable in intention. But the facts are the facts.


You may be onto something, but I am not certain it is just a quality of govt employees. BSA depts seem to house a lot of people, who would love nothing more than to expand those laws even further. I am not saying you are wrong. I think it is just a sad reality that people push in the direction that benefit them most.

edit. added not. completely changes the ending. sry


It doesn't seem to pass the smell test, does it? Presumably these attacks would be thwarted by raiding and arresting the conspirators... the feds might be able to snuff out a story that so-and-so was plotting to assassinate the president, but I don't think they'd be able to hide it if they were kicking in that many doors on a daily basis. That would get out eventually if only through rumor and word of mouth. Maybe they plant bags of coke and call it a drug bust?


"According to an old story, a lord of ancient China once asked his physician, a member of a family of healers, which of them was the most skilled in the art.

The physician, whose reputation was such that his name became synonymous with medical science in China, replied, "My eldest brother sees the spirit of sickness and removes it before it takes shape, so his name does not get out of the house.

My elder brother cures illness when it is still extremely minute, so his name does not get out of the neighborhood.

As for me, I puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords."

Sun Tzu - Art of War translator's intro


Being generous: The CIA isn't generally in the business of arresting people. They're a little more hands on in the dark corners of the world, traditionally.

Maybe they are simply leaning on financiers via other means, and preventing attacks by re-routing or intercepting cash.

Remember: they are very explicitly not law enforcement. Read 'The Black Banners' by Ali Soufan of the FBI to get a good sense of how not-law-enforcement the CIA is.


That does make sense, but the problem is it's not provable. Citizens just have to take the gov'ts word that all of this spying is keeping us safe. That's just ripe for abuse.

And also keep in mind that the people "preventing" these threats have a vested interest in making these "threats" sound as sinister as possible. It's similar to drug busts by cops - if they seize a kilo of cocaine, they don't say they took "$10,000 worth of drugs off the street", they say they took "$200,000 worth of drugs off the street", by making the most favorable calculation (selling individual grams at an inflated price).


In communities like HN there has always been a belief that compromising privacy is not actually required to keep people safe. While I generally agree with this, I don't think it is set in stone. The threat could evolve to the point where this kind of tech is the only realistic method of detection at scale. That would be a scary world to live in. We would lose our privacy and eventually our security as attackers evolve. This could happen, but probably won't, because terrorists are still very limited in numbers and talent.


Think how many more attacks could be prevented if everyone had to wear an ankle monitor at all times.


Nah, smartphones have much better data logging especially with ML sensor fusion.


> He said they don't share the information about what or how many crimes they prevent because it would cause panic.

You always hear people say this, but they don't seem to worry about the panic that line might cause.


Would this be all that shocking to the general public? Watch any detective show and the fictional tools there are at least as capable as this real life tool.


Not any more than it should be shocking to a forum of computer scientists that police agencies have tools to query a database and produce visualizations.


"Truth is stranger than fiction, but it is because Fiction is obliged to stick to possibilities; Truth isn't."


One way to explore that question would be to make a game that exposes people to these ethical questions. I am thinking of something like "Papers, Please" but recast as ICE rounding up undocumented immigrants and their families.


I feel so ignorant that I didn't know such a company was behind tslint. I wonder what Palantir uses TS for specifically


Building web applications, I'd imagine. Why is that surprising?


fwiw tslint is soon to be deprecated https://github.com/palantir/tslint/issues/4534


It blows my mind that Google and Facebook have been demonized by the wider media as "surveillance capitalism" while Palantir, who for all intents and purposes is literally a surveillance tech giant, has largely escaped public consciousness and criticism.


One piece of data they cannot access is firearm ownership. It is illegal to use a computer to store information about guns in America.


This is genuinely the most amusing joke I've heard today. How do you think background checks for purchases work?


I'm not sure why it's amusing — the interpretation is correct:

The background system is called NICS [1], and all information submitted to it is destroyed after 1 day. The system only outputs PROCEED or DENY to the FFL when someone is being checked for a firearms purchase.

1 - "Per Title 28, Code of Federal Regulations, Part 25.9(b)(1), (2), and (3), the NICS Section must destroy all identifying information on allowed transactions prior to the start of the next NICS operational day. If a potential purchaser is delayed or denied a firearm and successfully appeals the decision, the NICS Section cannot retain a record of the overturned appeal."

https://www.fbi.gov/services/cjis/nics/about-nics


What about FFL? Can FFL entities keep data about gun trade and share it with third parties?


Background checks do not contain information about gun ownership. It is federally illegal to have a registry of gun ownership.


https://www.gq.com/story/inside-federal-bureau-of-way-too-ma...

> There's no telling how many guns we have in America—and when one gets used in a crime, no way for the cops to connect it to its owner. The only place the police can turn for help is a Kafkaesque agency in West Virginia, where, thanks to the gun lobby, computers are illegal and detective work is absurdly antiquated. On purpose.

> That's been a federal law, thanks to the NRA, since 1986: No searchable database of America's gun owners. So people here have to use paper, sort through enormous stacks of forms and record books that gun stores are required to keep and to eventually turn over to the feds when requested. It's kind of like a library in the old days—but without the card catalog. They can use pictures of paper, like microfilm (they recently got the go-ahead to convert the microfilm to PDFs), as long as the pictures of paper are not searchable. You have to flip through and read. No searching by gun owner. No searching by name.


I am guessing the parent is referring to the “Acquisitions and Dispositions” bound book, which until recently had to be kept on paper. It has been permissible to keep these records in software as of the last year or two.

These books are kept by firearms sellers (FFLs) and must be presented to the ATF upon request.


They check against data for crime. How do you think background checks work?


spinlock seems mutex


This is not true. California does register and track all guns sold and transferred in the State.


The tool tracks where you have been, your bank account, etc. So while it may not tell what guns were purchased, it can tell if they have visited a shooting range.


What motivates extremely talented tech workers to devote their limited working years to making the world a worse place to live in?


I almost guarantee that these engineers have a very different value set than you. They could possibly see this as granting law enforcement legitimate and authorized access to collect evidence needed to prosecute real criminals. In their minds abuses of the technology they make is likely the illegal part not the existence of their tool itself.

Most of the responses I'm seeing here are about money and while that is probably a factor at some level, I like to believe that people need to have a lot of pressure put on them by society for other reasons to give up their personal morals in exchange for money.

"No one is the villain in their own story".


I asked the question in earnest, so I appreciate your reply. I think you're probably right.


So far every job orientation I've done has been 10% information useful for doing the job and 90% making me feel good about the job. You probably didn't take the job if you didn't feel at least a little good about it. If you don't feel good about the job anymore, then you probably quit. This pattern makes sure that the organization is mostly full of people that support it.

Most of your feelings aren't rational conclusions based on sound analytical reasoning, and if your day-to-day is just making widgets for other widgets, you probably aren't often forced to update them.


I will add to this.

If you look at us biologically speaking, we are herding creatures.

That means the majority will go along with what the leaders want.

While the Milgram experiment was skewed, the reality is most people will shock you to death just because an authority told them so. And if you ask them at the end if what they did was right, they will say yes, or they wouldn't have done it.

Being a descendant of German Jews, I normally think: "Who of the people that I know wouldn't turn me in?"

The sad reality: very few. It's by design. Herds don't work when everyone is trying to go their own way. There will always be individuals who eschew the herd. Biologically, they generally die, but a few find new environments or start their own new herds. Of course the herd problem then starts over.


What leads well-off Americans, whose safety and comfort is guaranteed by the constant efforts of law enforcement, to conclude that legal efforts to make law enforcement more efficient would make the world a worse place?


Ethics. The amount and scope of information collected by things like Palantir Gotham is beyond invasive. The fact that a police officer can type in your name and know where you were driving is disconcerting. It is about as Big Brother as you can get and not the kind of world I would like to live in. This question is a slippery slope, where people become more and more accustomed to less privacy. If you think this is justifiable, I would highly suggest reading up about the Stasi in East Germany or reading 1984.


This exact issue was pointed out many times when automated license plate scanning came into existence. Is it really Palantir's fault for making software to aggregate the data? In my view it is our fault as citizens for allowing the government to collect such data in the first place.

Any other company with enough developers and data scientists can do what Palantir does. It's the massive amount of information their software uses that is the concern. But that data is collected by our government, and so we have legal recourse to it, either under current law or by passing new privacy laws.


That’s like asking if it’s really a murderer’s fault for pulling the trigger of a gun manufactured by someone else. Yes, it absolutely is their fault and no, “if they didn’t, someone else would” is not a valid moral excuse, ever.


Who is the murderer in this analogy? To me it's the government misusing such a tool.


Palantir. The government is the gun manufacturer collecting all the needed data in various places.


Why though? Palantir isn't the one using the tools for ill. The government is collecting the information and using it, possibly, to violate civil liberties and constitutional rights.

What can Palantir do to you?


That's not what Palantir Gotham does. It's much closer to a sophisticated database and Tableau than some sort of big-brother invasive nightmare.


Paying attention to world history, for one.

For another, I've personally seen my work get turned from a human safety product in one market into a direct instrument of human oppression in another market. Lots of tech is just a tool, which can be used for good or evil. The thing is, once good people see what is possible, it provides them an easy route to doing evil which they might not have had before.


I'm sorry to hear about your personal experience; I've been in a similar situation myself, related to voice assistant technology. However, I don't think either of our experiences necessarily imply that Palantir is evil. For example:

Typewriters can be turned to evil. IBM showed this to the world with their WWII-era Nazi collaboration. That doesn't mean the world should abandon typewriters, or that no self-respecting engineer should work on typewriters, or that they would be necessarily used for evil by the law-enforcement arm of a democratic society.

Should police agencies collect and store real-time location data for every private car? Probably not. I'd support legislation to restrict such practices. Should Palantir help police officers sort and present the data they collect, in general including not only LPR data but also criminal records, known associates, and vehicle information? I'd say yes.


> Typewriters can be turned to evil. IBM showed this to the world with their WWII-era Nazi collaboration. That doesn't mean the world should abandon typewriters, or that no self-respecting engineer should work on typewriters, or that they would be necessarily used for evil by the law-enforcement arm of a democratic society.

IBM's relationship with the Third Reich was not based on the mere supply of typewriters; such a suggestion reduces your credibility on this topic to zero.

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust


>IBM's relationship with the Third Reich was not based on the mere supply of typewriters; such a suggestion reduces your credibility on this topic to zero.

"Punchcards can be turned to evil."

The specific type of dual-use technology is irrelevant to my point.


Misrepresentation of facts corrodes discourse.


Defending American liberties often results in uncomfortable situations. For instance, defending free speech. To defend free speech, we often find ourselves having to seemingly defend truly awful things (depending on your personal ethics), because the principle of free speech overrides our feelings on a particular bit of speech; hate speech aside.

It’s not like the people against mass surveillance all want their kids to live in unsafe neighborhoods, or be blind to terrorist plots. But everything must be balanced within our set of liberties — and the American way of due process, etc, is purposefully skewed to protect individual rights and liberties.

This is why gun rights is such a hot button issue — some would like to see stricter regulations, because they feel unsafe sending their kids to school in a world of mass shootings. Others feel those restrictions place an undue burden on individual liberties afforded by the constitution.

When it comes to law enforcement, there is a lot of abuse of power present in the system, which disproportionately affects people of color. Thus, there’s a very valid argument to be made that some of this tech is too dangerous in the hands of law enforcement, despite how much people of all political stripes want their families to be safe.

My aim here is to illuminate the differences in the arguments. I’m not trying to force my (admittedly liberal) point of view on the conversation, because HN isn’t really for politics. My goal here is to show how, despite politics, people can all desire safety for their family, and personal liberties, but have honest disagreements about the best way to achieve that.


Law enforcement is not without bias, and just because they’re using discretion for you today doesn’t mean they won’t use it against you later.


Quite simply the belief that:

1) these efforts do not guarantee their safety and comfort (though others might, these don't) and

2) that even if they did, the ethical price of hunting down undocumented immigrants is inexcusably high. That in the same way that you wouldn't behead someone for speeding, you shouldn't separate families and cause people to live in the shadows simply for having the privilege of cleaning your hotel sheets.


Given the persistent patterns of unprofessional and unethical behavior amongst law enforcement, it's understandable that people are against giving them access to more powerful tools that have great potential for abuse.


An understanding of the inevitability of abuse of unchecked power.


$, like someone else has said, but I also would guess that many of the engineers do not have a clear idea of the overarching functionality of the application.

I'm sure internal communication about the product is extremely positive, with phrases like "improved law enforcement accuracy by XX%, decreased customer costs and time by YY," so some may truly believe they are doing something that everyone would agree is good.


Somehow I have a feeling that nearly all the engineers working on it have a clearer idea of the overarching function of the application than you do.


Would you rather tech companies not be trying to help law enforcement or our military, and just leave them to their own devices?

I have my answer, but I don't think it's nearly this black and white. Palantir probably does a lot of good, just like all the big tech companies, even though it's the bad stuff that mostly makes the news.


> Would you rather tech companies not be trying to help law enforcement or our military, and just leave them to their own devices?

Yes, I'd rather not aid law enforcement in spreading disinformation and targeting journalists, for example [1].

Though I do agree that it's not as cut and dry as "Palantir is evil".

[1] https://www.thetechherald.com/articles/Data-intelligence-fir...


I mean, how many of us work at Microsoft or Amazon? Companies currently in a bidding war to build the "War Cloud".


Why a worse place? In this particular example, Palantir is used by the cops to run investigations; it is just a more convenient user interface and analytical engine to already existing data sources. I don’t see anything particularly unethical here.


> just a more convenient user interface

This. The difference between a rock in a sling and a nuclear bomb can, hyperbolically, be viewed as a better UI for a rather similar goal.

Does that explain things? Do you see how quantitative differences become qualitative?


I'm sure if you had a shred of empathy in your whole body, you could think for about five seconds and realize that maybe not everyone else had the same system of values as you.


Is the work-life balance at Palantir better? I've read on HN that defense companies have much better pacing.


Most defense contracts require 40-hour work weeks. A company can't ask you to work more than 40 hours typically or be in breach of contract (being unfair to the competition who bid on the contract). So yes, when I worked on DoD contracts I was in at 7am, out at 3:45pm.


In general, yes, defense contractors are better. But they also have an old-school mindset about compensation and equity, so it’s a trade-off. Not sure about Palantir specifically, though.


$


Surprisingly, at least for the Bay Area, the money doesn’t even seem to be that good: https://www.levels.fyi/?compare=Palantir,Google,Facebook&tra...


It's making it worse until something bad happens to someone you value.

If Obama's daughters (or Trump's if you are on the other side) were killed in some horrific racist incident, wouldn't you approve the usage of such software if it was the only hope of getting the perpetrator? Or would you say "as much as I am saddened, I cannot approve such massive invasion of privacy".


> Or would you say "as much as I am saddened, I cannot approve such massive invasion of privacy".

Yes, I would take that stance every time, and twice on Sunday.


Funny, you seem to be working on systems very similar to Palantir (twitter handle on HN profile):

https://twitter.com/FogbeamLabs/status/1086757478312960002

What safeguards do you have in place to make sure that personally identifiable information of customers that companies using your tech have is not aggregated and pulled together for nefarious use? For example very targeted lead sourcing or targeted advertisement?

Oh, none, you actually advertise that you are mining all the databases for:

> Prospect and identify leads

But I understand that it can be hard to see certain things when your income depends on you not seeing them.


At the end of the day, the only aspect of what we do that is really anything like Palantir is that we build a search index, using Lucene. That's it, an inverted text index. There really is no meaningful way to know that the data being put in is PII, or to regulate how the orgs that use it, do so. And even if we did put in any such safeguard, all our stuff is Open Source, meaning anybody could rebuild it without the safeguard, and we'd be none the wiser.
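For what it's worth, the inverted index idea really is that simple — here is a toy sketch of the concept in Python (illustrative only, not Lucene's actual API; document ids and text are made up):

```python
# A minimal inverted text index: map each term to the set of
# document ids that contain it, then answer queries by
# intersecting those sets.
from collections import defaultdict


def build_index(docs):
    """docs: {doc_id: text}. Returns {term: set of doc_ids}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index


def search(index, query):
    """Return ids of documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results


docs = {
    1: "open source search index",
    2: "knowledge management and collaboration",
    3: "search and collaboration tools",
}
index = build_index(docs)
print(search(index, "search"))                 # docs 1 and 3
print(search(index, "search collaboration"))   # doc 3 only
```

The point being: the data structure itself has no notion of whether a term is PII — it just maps strings to postings.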

The differences between us and Palantir then:

1. We don't pitch our software to intelligence agencies, law enforcement, etc., or encourage its use for these kinds of ends. But we can't specifically block those uses, or we'd be in violation of the OSD.

2. We've been very public with our unwillingness to embrace working with intelligence agencies and the like. See, for example: https://www.wraltechwire.com/2014/04/30/why-a-triangle-tech-...

3. Everything we do is Open Source, meaning that at least the public can take a look inside and see what's going on... modulo any changes a given end user organization makes and keeps private.

4. Our technology is positioned primarily for internal knowledge management / collaboration use inside organizations. But, again, we have no means, legal or technical, to stop somebody from using it for other purposes. And even if we did, they could just download Lucene, ManifoldCF, blah, blah, etc., and build up their own Nefarious Indexing System.

> But I understand that it can be hard to see certain things when your income depends on you not seeing them.

There is nothing in this regard that we "don't see". Taking your argument to its logical conclusion, even a worker mining sand somewhere, to use to fabricate silicon chips, which can be used to power computers, which can be used to run privacy violating software, is "guilty". I don't think I need to point out the absurdity of that position. Furthermore, if we really just cared about "get all the money at any cost" we would have immediately jumped at a chance to talk to In-Q-Tel and the possibility of juicy, rich contracts supplying the CIA and their brethren with technology.


> There really is no meaningful way to know that the data being put in is PII, or to regulate how the orgs that use it, do so. And even if we did put in any such safeguard

Since you advertise that your tech is suitable for lead sourcing, you obviously don't see anything wrong with mining and linking databases of PII information, as long as it's done by "the good guys".


> Since you advertise that your tech is suitable for lead sourcing

We don't. The line you quoted above is from my LinkedIn profile where it's describing my responsibilities as founder of the company. So, of course, part of what I do is prospecting and identifying leads. All companies do that, I don't think anything we do falls into the "nefarious" range. We aren't using retargeting or buying user information from data mining companies, etc. Our prospecting basically starts and stops with Twitter and LinkedIn... Not exactly spook stuff.

And while I can respect that some people take things to such extremes that they can even find mundane things like advertising unethical; I'm pretty comfortable with my own sense of ethics and our attempt to do the right things. In either case, attempting to draw any parallel between us and Palantir is an exercise in absurdity. Notice that there's no HN top-page stories titled "Fogbeam Lab's Secret User Manual For Cops", etc. :-)


I apologize for reading the linkedin line in the wrong key.


No worries. I understand where you're coming from. And believe me, I've spent a not inconsiderable amount of time thinking about these issues. I tend to look at raw technology as being "ethically neutral" but it does bother me that there doesn't seem to be any way to truly ensure that tech is only used for noble / beneficial ends. But I don't feel like I can let that stop me from working on tech in general... the only other alternative seems to be to turn Amish or something. And somehow I'm not quite comfortable with that.


>If Obama's daughters (or Trump's if you are on the other side) were killed in some horrific racist incident, wouldn't you approve the usage of such software if it was the only hope of getting the perpetrator? Or would you say "as much as I am saddened, I cannot approve such massive invasion of privacy".

The latter, every time. The nature of principles is that they do not change with changing circumstances. I find your example extremely unpersuasive.


Are you alleging that Palantir enables or encourages "horrific racist incidents"? Could you be more specific?

EDIT: I see what you mean. I had mistaken the meaning of OP. Please ignore this comment.


I think the reference here is to the problem of fatal shootings of minorities by police in the US. The OP is attempting to draw attention to the problem via an exaggerated situation in which the former first daughters are involved in such an incident.


They're saying the opposite. They're saying that Palantir is helping catch violent racists after they commit crimes.


I was under the impression "Gotham" is the name of their NYC office, not a product.


Por que no los dos? (Why not both?)



