
There are a number of cynical comments here about how they weren't making money on the technology and are just announcing this for PR reasons. Well, maybe, but isn't that sort of cynical response even worse?

I'm rabidly against the use of facial recognition on unwilling subjects, whether it's a government actor (by far the most oppressive use) or a corporate actor. I'm rabidly against public space cameras. I want to see this technology die and never return.

We all love to pounce on companies for doing things we don't like. Why don't we celebrate this as the victory it is? Of course there's a PR component here. Why wouldn't they make an announcement? Why wouldn't they do it now when the audience might be more receptive to the idea? The fact that they weren't making money off it is unlikely to be the only reason they're canceling it. IBM plays the long game, and there's absolutely a market for this technology. A huge and profitable market. They could have kept at it and turned a profit.

So, yeah, they're trying to make some hay, but not every corporate action is purely cynical and evil. Let's appreciate that they've made a positive change, and let's hope that it increases awareness of a horrible technology, and puts pressure on the more egregious actors like Amazon and the defense industry. We don't have to pat IBM on the back, but we can cut them some slack.




Amen. Even if they're doing it for purely selfish reasons, it's still great that they used it as an opportunity to set an example. And I hasten to add, it's not exactly an unmitigated marketing win. It'll certainly raise some eyebrows in some circles to see IBM taking a moral stance on something.


It's just software; if an actor wants to use facial recognition, they can write it themselves. Isn't regulation the correct attack vector, rather than having a company pull its products? If IBM isn't doing it, someone else will, and there will just be less competition, meaning a better funded product for the industry winner.


The primary customers for this technology are governments. They want it. They're not going to regulate it away from themselves. At best they'll regulate the far less dangerous civil use so as to pay lip service to concerns and amplify their "think of the children" misdirection. They can't write it as quickly or as effectively themselves. Governments don't write software. They pay IBM, Amazon, and Lockheed to do it. If most vendors grow a conscience (or fear consumer retribution), then no one writes it. Or it becomes prohibitively expensive. Or it doesn't work as well (which isn't necessarily always good, but is in this case, because it will reduce public support).


Governments are already all using the best in class, from a German company, Cognitec. Not much chance for Northrop or IBM to make an impact there.


> They're not going to regulate it away from themselves.

Hmm... you're speaking for the US here. The EU already did that, Brazil (my country) already did that, and I think our neighbors Chile and Argentina did too.

Besides, the government software that works is all written by the government. Too bad 99% of government software is contracted out.


Assigning morality to facial recognition, a task most humans can do, is more than a bit bizarre in my opinion. I tend to operate on a "free extended thought" model when it comes to processing publicly available inputs.

I am certainly not oblivious to the abuses of it, or to it being a Morton's fork in practice: if it is inaccurate, you get many innocents harmed by idiots treating a screening tool as a unique identifier; if it is 100% accurate, you can trace anyone perfectly.

Personally, I see the cynicism as an acknowledgment that their apparent capabilities are lacking. Claiming the moral high ground for something you aren't capable of isn't exactly meaningful. A dyscalculic janitor boasting about their morals for not getting rich designing missiles for the military-industrial complex doesn't mean much, because they would be incapable in the first place. Assigning morality to it just gets silly.


> Assigning morality to facial recognition, a task most humans can do, is more than a bit bizarre in my opinion. I tend to operate on a "free extended thought" model when it comes to processing publicly available inputs.

The problem with that is that while in theory you could use a million people to do things like facial recognition at the scale allowed by technology, in practice this would be incredibly expensive, so it doesn't happen.

If I can do it to my neighbor, that is vastly different from being able to do it to my whole city. It has very different consequences and should be weighed on its own.


>The problem with that is that while in theory you could use a million people to do things like facial recognition at the scale allowed by technology, in practice this would be incredibly expensive, so it doesn't happen.

Rather, I would say that if you used a million people to do this, it would be morally abhorrent as well. Just look at the Stasi informants in the DDR.



