Who's working on technology like this, and why? And why isn't it self-evidently bad to the people working on it?
I think a code of professional ethics around software engineering is long past due. Journalists started doing this in the 1920s[1] after a series of events, including the Spanish-American war, made the awful potential of ethical lapses in journalism obvious to everyone.[2]
We can't continue to maintain the reflexive belief that technology is neutral and is only dangerous depending on how it's used. At some point people have to be willing to refuse to work on certain things because of the obvious social implications those things would have.
I don't know how anybody could be working on things like lethal drones, facial recognition, locked bootloaders, deep packet inspection, or other freedom-reducing technology without considering the consequences of their work.
And I recognize that not everybody thinks the technologies I mentioned above are categorically wrong, but it'd be cool to start a conversation to draw lines about which ones are.
1. http://www.spj.org/ethicscode.asp
2. Spare me, I know the profession isn't perfect and ethical lapses still abound, but at least we have some way of knowing when an ethical standard has been broken.
My own experience in the software industry is that professional society membership and conference attendance are relatively rare, especially when I compare it to other fields I have exposure to, like the library world, where membership in at least one professional society is de rigueur. I wonder if the problem is not the lack of a code of professional ethics but rather a lack of exposure to such codes?
What's wrong with facial recognition? It has numerous positive uses; automatically annotating family photo albums is one trivial example.
Even "lethal" drones--it's not like there's one software developer who makes LethalDrone OS. There are many components to it that have very positive possibilities, for example auto stabilizing flight controls, which can and will end up being used in search and rescue drones.
I'm also a little perplexed about why anybody needs tagging in photos. If you take photos of your family, you already know who they are. Same thing with pictures of your friends. It seems like the only reason anybody uses tagging on Facebook is to alert their friends that they're in a photo.
To me, that use case for facial recognition always felt like a front. It's an edge case, used as a justification for technology where the base case is surveillance agencies using it to identify whoever they might be after in public.
And you're telling me at no point did any software engineer write the code that allows the pilot to fire a gun, or do calculations to account for the impact of "kickback" from firing a projectile on a drone's flight path, or...
Sure, they probably did and I'm not trying to argue that it's impossible to use technology for evil. In fact, I'm someone who has protested and left a project because of the ethical implications it posed.
My position, and the only point I'm trying to make, is that these technologies aren't themselves sinister. It's their uses that we need to cast light on.
Saying that no one really "needs" facial recognition or photo tagging is obvious. For that matter, no one needs an iPhone, or a computer, or the internet, or books. But having those technologies work for us does open up new possibilities. Before now, there was no practical way you could query your family photo album for all photos of aunt Beulah from 1994-1998. Is it needed? No. But neither is having any recollection of your relatives.
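To put the aunt Beulah example in code: once photos carry tags and dates (however they got there, face recognition or by hand), the query is trivial. The Photo record and the tags below are made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class Photo:
        path: str
        year: int
        people: frozenset

    # A hypothetical tagged album.
    album = [
        Photo("beach.jpg", 1994, frozenset({"Beulah", "Ed"})),
        Photo("xmas.jpg", 1997, frozenset({"Beulah"})),
        Photo("wedding.jpg", 2001, frozenset({"Ed"})),
    ]

    # All photos of aunt Beulah from 1994-1998.
    hits = [p.path for p in album
            if "Beulah" in p.people and 1994 <= p.year <= 1998]
    print(hits)  # ['beach.jpg', 'xmas.jpg']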
I'd be interested in seeing what kind of criteria you can come up with for dichotomizing good and bad technology purely on technical grounds. That is, without taking into account usage and motivations for usage. I mean that sincerely and not in jest.
I do agree with you that there should be some sort of "guild ethics" we adhere to, but not that we should ostracize certain technologies. We should refuse to work on certain projects or for certain organizations when we know they will be used for wrong. Firewall technology? Good. Great Firewall of China? Bad.
I don't think this is creepy at all - in fact, this is how human vision works. Besides, the fact that the article's on a website called "Macgasm" doesn't lend much credence to the opinions expressed in it.
When I notice a stranger wearing a certain brand of shirt I don't walk up to them and start asking them if they want to buy an array of similar shirts from me and whip out a credit card reader. In what way is this "how human vision works"?
Google has explicitly stated that they do not intend to, and will not use Glass for advertising. This feature is similar to "Find My Friends" and is clearly opt-in (as it requires prior training) -- have you even read the article?
"Google has explicitly stated that they do not intend to, and will not use Glass for advertising."
If you believe that, I've got a bridge to sell you. Google makes its money with advertising. Its products exist in order to gather information about you and your environment, so that it can more effectively manipulate you to buy its clients' products, services or ideas.
Glass is linked to your Google account. The device has GPS, a camera and a microphone. Everything you capture, send and share goes through Google and will be of great interest to them.
EDIT: I searched for a Google statement; the only one I could find is from last year, when the Google Glass lead said there are no plans to display advertisements through Glass "at the moment."
Google Glass has the advantage that you can't tell if it's recording. "Don't worry, I'm not recording a video," they'll say. You can't very well accuse someone of wearing glasses that may or may not be recording.
Similarly, when people use smartphones, they could be recording with the back camera, but people are okay with that. They assume no recording takes place simply because it's the more likely situation.
Even with a red LED, many people will not recognize Google Glass as a video capturing device. And if this product takes off, what's a person to do if he doesn't want to be recorded and tracked? Imagine sitting in the subway or at Starbucks around 10 Glass users.
I don't think video capture is the problem, it's bigger than that. Google Glass has hardware for not only capturing video (like CCTV does), but also audio, timecode information, and GPS coordinates. All that data combined, linked to the user's Google account, makes for a data mining wet dream.
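To spell out what "all that data combined" could look like, here's a hypothetical record; every field name is invented, but each stream it joins corresponds to hardware the device actually has:

    from dataclasses import dataclass

    @dataclass
    class GlassMoment:
        account_id: str   # ties everything to one identity
        timestamp: float  # timecode
        lat: float        # GPS
        lon: float
        frame: bytes      # video still
        audio: bytes      # microphone sample

    # Joined across many wearers, records like this describe not just
    # the user but everyone who happens to walk past them.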
I wasn't suggesting humans will be looking at all the footage like with CCTV monitoring; Google has millions of servers to do that. It already analyzes the content of each and every YouTube video: it provides automatic closed captioning and translations, recognizes soundtracks and links to music stores, displays ads depending on the video content, and so on.
All right, I hear you. If the concern is that there will be automated algorithms looking through the video/GPS/etc. data, that is a much more plausible concern.
So let's suppose the best case scenario for Google. Suppose they have all the access to 24/7 video, GPS data, facial tracking, etc. basically as much information as can potentially be gathered. And suppose they have all the computational power needed to process it in any realistic way they desire.
Can you suggest in what ways that might be bad for me? So perhaps Google can target me with the most relevant ads out of all ads. Is that a bad thing? I wouldn't mind seeing relevant ads rather than irrelevant ones anyway.
But what other hypothetical problems could come out of this?
I can imagine if I were a criminal and tried to hide something from others, then this would be a concern. But suppose I don't have much to hide, only personal private stuff (which, if exposed, wouldn't be much different from any other person's personal private stuff).
I'm interested in hearing what people have to say about this.
Google controls much of what web users see. Not only through its general search engine, but also through YouTube, Google News, Blogger, and its many other sites. Google decides which results are shown and in which order. They have the power to change public opinion, one person at a time, without anyone noticing it's happening. The more they know about you, the easier it is for them to manipulate you.
"The problem with social search and personal results is that it biases the results based on the perspective of your friends. If I had a lot of friends who worked for Chrysler and I asked them to name the best car on the road, chances are they’d pick a Chrysler car. But if I asked the general public, I’d probably get a different response. It’s like that old joke Democrats use to tell after the 1972 election, ” I don’t know how Richard Nixon got elected, all my friends voted for George McGovern.” I’m sure many Republicans felt the same way after the 2008 election." [1]
Oh, and "I have nothing to hide" is a well known fallacy. [2]
But if Google does try to manipulate people in significant ways without them knowing, eventually it will get out and people's trust in Google will be broken. That's why I don't think they'll risk doing it.
I mean, for instance, if they tried to steal your credit card info by reading your emails and used it to take money from you. If they pulled something like that off, it might work in the short term but definitely not in the long term.
If they do, they can only get away with manipulating people once.
Of course they can manipulate on a smaller scale, in small and harmless ways (by showing you more ads for competitor A vs. competitor B), but I think we all accept that; it's already true today. By helping me find what I'm looking for faster, they're manipulating me into saving time.
So if Google ever betrays my trust on a large scale, I will stop using them.
I tried reading the [2] article and despite its length, I couldn't find very good arguments. I agree it's bad if the government knows more about you without you knowing about them or what they know. But if everyone knows everything, that's something I'm less concerned about.
If you really want to take a naked photo of me, I would ask "why would you want to? Seriously?" Even if you put it online for everyone to see, chances are no one will care. There are 6 billion other people who also have naked bodies.
You seem to think that all criminals will be caught eventually, that the truth will prevail (in a timely fashion), that information wants to be free, that people are smart and that they care, and that they will revolt if they see injustice. Perhaps you even think that governments and publicly traded corporations are benign in nature.
I believe none of that (anymore).
I've organized and participated in dozens of protest rallies (for instance: I joined 1 million protesters at a rally in Florence [1]), joined boycotts and picket lines, was a member of a labor union, served in the city council, organized political campaigns, wrote for activist magazines.
I still try to spread the message of equality, solidarity, human rights, pacifism, education, health, and safe food. The sad truth is that most people just don't care very much. And it's not just that they don't care about others or society at large; they don't even care much about their own basic rights.
How else to explain that we still have the same broken financial system that caused our current economic crisis (six years running!)? Why do we still have the PATRIOT Act, the TSA, Guantanamo, the War on Drugs, and various wiretapping laws? Why is there no mass outrage when a corporation lies and cheats, leaks your personal information, or refuses to pay its taxes?
I'm saddened by all of this, and I see it getting worse, not better. People are signing their liberties away, bit by bit.
If Google put a light on the glasses that lights up whenever the glasses are recording, would you people stop acting like it's a big deal? A wearable computer with a heads-up interface is more than a spy camcorder.
I understand the sentiment, but not really the direction of it. Why is Google Glass special here? We spend most of our public lives on CCTV already (seriously, we really do). Why is it more upsetting that normal people have access to this now instead of restaurants and stores and workplaces and public venues and...
I'm not saying there's no privacy issue here. I'm saying this ship has sailed. Why pick on Google and not your local convenience store?
There is not one company that has access to all the CCTV cameras placed in restaurants and stores and workplaces and public venues.
If there were, that would be way creepy in and of itself. However, this is worse: the world's most powerful advertising company is turning consumers into spies.
It was bad enough for Google to track users' interests and whereabouts using its online services. Then with Streetview and its apps on Android, it started massively gathering physical-world information as well. With Glass, it'll have a cheap workforce that gathers information on others too.
Even if you've never consciously used a Google product or service in your life, if you live in a developed nation, Google knows about you and it already uses that information to influence your behavior [1][2].
Imagine how your life will change if only 1% of the people around you start using Google-sponsored video cameras that are stealthy, have high-quality imaging, are always on, location-aware, and always connected to the Internet: that's what Glass is.
The big difference is that CCTV is (generally) used to record video and store it for X months until the store tapes over it. Google, on the other hand, is a datavore and will quite happily use the information in a completely different way than CCTV does.
CCTV is (and has been!) subject to virtually every abuse I can imagine Google Glass being used for. Glass will "generally" not be used for privacy invasion either, so I don't see how this logic works. (And the "won't be stored" point seems silly given that you can trivially store a surveillance camera record...). Obviously both can be abused. So why the one-sided outrage?
I'm on CCTV all day, but my insurance company and employer don't use a mosaic from that data in quoting rates or doing background checks. Glass, or anything similar, can change all of that.
The point is that valuable privacy silos are lost due to centralization by a sophisticated party, that makes money by selling that privacy.
Of course CCTV has flaws, but for Ma and Pa shops with their own setups it's unlikely to be pwned by someone running facial recognition trackers or whatnot.
The outrage is still about the whom, not the what. Most people are completely used to being filmed; what they're not used to is the data being sent to an organisation that is genuinely very, very good at processing data. I'd be surprised if most governments had data analysts and storage systems half as good as Google's.
I believe you're invoking the fallacy of grey here.
The Sophisticate: "The world isn't black and white. No one does pure good or pure bad. It's all gray. Therefore, no one is better than anyone else."
The Zetet: "Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view..."
I'm sorry, is this the same New Scientist that suggested (arguably hyped) a piece of technology that has no hope of working in the real world as a plausible replacement for other modes of transportation? OK, just thought I'd clear that up. Thanks for playing... next.
This is no different than Google's "Try these too" feature when browsing for images.
When you're digging into NS for panic fodder, you know you're desperate. You know what else is creepy, follows us around, but we all take for granted? Voice recognition.
As an expat living in France I just discovered this thing called "droit d'image." I wonder how Google is going to reconcile their glasses with people's right to privacy (or right to not be recognizably recorded) in public. It's one thing to drive a few hundred Map cars around and scrub the images. It will be something else when thousands of these are deployed and everyone on the other side of the glass will need to have consent.
I feel like Google is losing a PR war around Glass, almost as badly as Adobe lost its around Flash.
Glass will probably be adopted among adventure sports enthusiasts and niches of skilled laborers, but if they want to see them become accepted in everyday life, they need to do a better job assuaging people's concerns around privacy, fashionability, and information addiction.
You do understand that the reason most people think it's creepy isn't necessarily that it's recording, but that it's recording and sending video to a company that's had so many run-ins with privacy laws and wants to store as much information on a person as possible? Right? Not everything is about Apple vs Google.
Why's the name a problem? Aside from assuming it's a witty zing from an Apple loving fanboy that Google's new product is doomed, I can think of quite a few other sources that have been discussing Glass in terms of privacy in the same terms. Not everyone who likes Apple products hates Google, and vice versa.