Apple is struggling to become an AI powerhouse (washingtonpost.com)
224 points by ghosh on June 8, 2017 | 252 comments



On a technical point, I was impressed by their CoreML model format. The specification is open and optimized for portability; the tooling can convert models from Keras (with TensorFlow), scikit-learn, and others; and their Python model converter is also open source: https://pypi.python.org/pypi/coremltools

Once you have your model as a CoreML file, it is stupidly simple to incorporate it into an app (the live demo with the flower model was very impressive), and Xcode will convert it to a machine-optimized version.

I was skeptical when I saw the announcement, but honestly it seems like a game changer -- to be able to drag-and-drop models that other people have trained into an app and use them with virtually zero boilerplate is just great.

Video here: https://developer.apple.com/videos/play/wwdc2017/703/


Apparently you can only view that video in Safari or the wwdc app. That's awful.


I believe it's a HLS video. If you're on Windows, Edge has support for HLS. Otherwise, you'll have to resort to using a media player.


I'm in Firefox on Android.


There's a Chrome extension that allows you to see the videos: https://chrome.google.com/webstore/detail/wwdchrome/lohmkcog...


Apple's streams are playable by mpv, mplayer2, VLC, and any ffmpeg-based player. You don't need Safari, just a player that supports HLS.


I'm in Firefox on Android. That I can't play a streaming video is ridiculous.


In what sense? caffe2 has been able to run models trained elsewhere for a long time already. Do you mean it in the sense that the mobile platform has a standard that will rally a larger community behind common models?


I am an Apple fanboy but I am also a privacy fanboy. As AI, IoT, and consumer tracking continue to invade our daily lives, I think Apple will continue to do quite well if they maintain their pro-privacy heading. Privacy is a feature, it's something people want, and unfortunately it's becoming a luxury. That is something that attracts Apple's customers.


I'm one of the few in the "tech world" that isn't so much pro-privacy. Google can read my email, Apple can track my movements, my Amazon Echo can even listen in when it wants. The end result is a huge benefit to me. How is any AI supposed to know what I want, if I try to be an anonymous user? I prefer targeted ads. I prefer submitting my information to "robots" in the hopes that my user experience is tailored to what I want. I see all of these articles on HN on how to break free from Google or some other eco-system because they're "evil." I dunno, I guess I don't have a CIA-level job that most of these commenters have where they need to be off the grid.

"Alexa, play me that song I like. And while you're at it, order me a case of my favorite beer. Thanks."


You are making two assumptions that aren't necessarily true: first, that you have control over what information is being collected and shared; second, that the collection is always good for you.

Here's a fairly innocent scenario: if Amazon Echo can learn everything about you, Amazon may discover that you're crazy about Paris and have a good income. Then they can sell this info to every travel website, which will put you into a pricing bucket that all but guarantees you'll overpay for your next trip.

But generally speaking, we take advantage of information asymmetry all the time: buying real estate, getting a new job, planning a vacation, etc. Sometimes you have the leverage, sometimes you don't. With massive information gathering by a handful of big players you'll most likely end up on the losing side in the future, especially when dealing with businesses/governments who can afford to buy a profile on you before engaging. Imagine if your religious beliefs and whether you voted for Trump were available to anyone via an API call that takes your face image as input and costs $0.02 to make.


That's a really good scenario that shows the potential risks of a lack of privacy, without being susceptible to "just don't have anything to hide". Thanks for sharing, I'll have to remember that.


I think we're arguing about semantics here. I can see a valid reason for giving away my information if I know it will be used responsibly. I would like to let Google have my information if it helps turn my phone into a better assistant. I would like to extend segments of this information (namely location) to Uber and Lyft so that I can get rides more easily. I would NOT like Google to sell my information to third parties that I have not agreed to, and this extends to Uber and Lyft as well. I want to choose which services get to use my data, and I want there to be consequences if those services leak my data.

Now you may say that you can't trust corporations with this responsibility, but what if there WAS something we could do to enforce that responsibility? How much control I have over my information should be left up to me, and it would be great if an entity I trust could enforce that. As of this moment, we don't have that option. However, it doesn't mean that such a future is impossible. If we argue about it enough, maybe the technology and the courts can find a way to make it happen.


If you have knowledge and control over who gets your information and how they use it, well, that's effectively the definition of good privacy.


But you don't. You only have control over the former. History has shown us time and time again that you don't have control over the latter.


I agree. My point is that it doesn't make sense to say "I'm not really concerned with privacy issues, I give lots of personal data to several services and it improves my life." Because that's effectively saying "I'm not concerned with privacy issues as long as my privacy is really good."


How do you have control over the former?

Data is being gleaned about you even if you aren't using their products, and it's being used by companies you haven't authorized all the time.


Our legal structure needs to catch up and punish companies who use data that we did not authorize. Data should be seen as personal property.


That will never meaningfully happen, because it is contrary to third party doctrine and would make it impossible to do business with people.

If you tell me that you like bananas, and I bring you bananas, I'm not being unethical. The problem with online services is that you're baring your soul to the Eye of Sauron without realizing what it means. The only way to win is not to play.


You may want to look into the EU GDPR, which requires all companies to strictly manage all personal information, or face a penalty of up to 4% of worldwide turnover.

This is set to activate in a year, and it is a huge problem for software used in the EU because almost nobody is ready for it.

https://en.m.wikipedia.org/wiki/General_Data_Protection_Regu...


I didn't say complete control. That's... well... not actually possible (ever, since time began really, if someone can see you they can track data about you).

But you can have reasonable control over the former; you can't with the latter.


I would strongly dispute the term "reasonable".


>But generally speaking, we take advantage of information asymmetry all the time:

Sure we do. But this is generally regarded as a bad thing and indicative of a failure in our commercial system to engender fair outcomes. Informational asymmetries aren't things we should be actively involved in exacerbating.


I think that's just something we tell each other so the children can sleep at night.

Of course we exacerbate information asymmetries. That's what we do. That's why the neanderthals are in our museums, instead of the other way around.

Saying we shouldn't is like saying birds shouldn't fly, because it's not fair to the fish.


There's a difference between survival and luxuries (like a trip to Paris). Ethics and politics should be about finding the boundaries between those things: provide the basics (shelter, education, privacy) and let people build less-than-100%-fair businesses out of everything else.


If it's so ethical, why "provide" and "let"? Can't you trust people to do the right thing?

If not, how do you know it is in fact the right thing?

If you prescribe and enforce your ethics, I think you're wandering close to another definition of politics.


Because there are people whose behavior does not conform to the agreed-upon norms.

My ethics? I never said mine.

And yes, there are inherent dangers (like tyranny of the majority), but there are trivial ethical issues where there is no good solution for allowing the [or any] minority to self-determine. (Like rules regarding sex with children, just to throw in the classic think of the children. But of course the age of consent is not a magical line somehow creating an overnight transformative event for every resident of that state.)


You make a good point about the pricing asymmetry.

But the point remains that most people are voluntarily, enthusiastically, giving up personal data in exchange for features that they find valuable.

It's hard to see that as a privacy issue.

Being part of the world involves being known. In the proverbial "olde days," when people lived in much smaller communities, everyone knew everyone's business.

Government spying on its own citizens is more problematic. But again, being a citizen, by definition, means aspects of an individual's life are known, and are public data.

This notion of "privacy" is a special, legal construct. Its meaning in everyday language is provisional.

What is more problematic, IMO, is the excessive use of classification by the Federal Government. This practice is vulnerable to abuse, as it enables legally enforced secrecy to cover up misconduct.


Well, according to a study conducted a few years ago, most people actually don't give up their information enthusiastically.

https://techcrunch.com/2015/06/06/the-online-privacy-lie-is-...


> But the point remains that most people are voluntarily, enthusiastically, giving up personal data in exchange for features that they find valuable

No, that's demonstrably false. Most people have no idea how exactly they are tracked and what information is being collected about them. How would they? It's not always as simple as "Uhh, so I guess if I write an email in Gmail Google can see my email?!" And even then, I think the majority don't actually think that Google does that. "I mean, email is private is it not?!" - is what some people likely think.

Do you actually think that most people know that Facebook and Google track them across the web and know most of the sites they visit when they aren't even on Google or Facebook's websites, through Google Analytics code and the Like button (without having to press on it)?

How long before technology and bandwidth become advanced enough that it's actually possible to capture, 24/7, all conversations from the mics around a device, and store them for analysis in the cloud? Do you think people expect that to happen right now, or are "enthusiastic" about it? No, they don't and aren't.

In fact some call this idea a "conspiracy theory." But ten years from now we'll probably discover Facebook or Amazon has already been doing it for a few years, and a few more years later the privacy invasion apologists will start saying "well yeah, of course they listen to your microphone, that's why it's there! To analyze your context and give you even more targeted products and ads!"

Companies first abuse new ways to track people (I guess they would call that "innovation"), and maybe 10 years later, regulators catch up and force them to pull back their tracking a little bit, long after the information has been stored and used in their databases.


I'd say it's voluntary in the same way a frog is voluntarily boiling itself.


Are we imagining a future where sellers can track our every want but consumers lose the ability to shop around? Why wouldn't market forces and competition protect us from this?


To maximise profit, a corporation must find the highest possible price that customers are still willing to pay. If a company finds a way to do this separately for each customer, then the other market players need to follow, otherwise they will lose their capital (i.e. shareholders jump ship). Information asymmetry is a very devious market hazard: it makes the market appear to be working perfectly well, while from the consumer's point of view it looks like fraud. There's nothing you can do other than protect your data.


Companies sell at lower prices than the maximum acceptable price of the customer all the time, for a variety of reasons. In order for what you say to be true there has to be a monopoly; otherwise competition will drive prices below that point (and this happens all the time).


There doesn't have to be an absolute monopoly, just a monopoly on a very specific service that you are looking for. Like a direct flight on a less frequented route. Or a delivery of baby wipes that arrives by tomorrow lunchtime at your home address. Oh, you also searched for "what to do when out of baby wipes"? Better jack up that price then...


Because of companies like Amazon. Most specialized shops in many countries have died because they can't compete with Amazon's leverage and scale, and it will only get worse.


This already happens and has been going on for a decade. https://www.theguardian.com/money/blog/2010/aug/07/computer-...


Monopolistic behavior. Lots of products can only be bought from a couple of companies. Competition need not apply.

Even in a competitive market place they can collectively decide to overcharge the same customers.


As an aside, you should try checking airfares in incognito mode to ensure this is not happening. Even refreshing the page for the same fare can affect the pricing model.


Afaik Amazon already does that with book pricing, or did.


You say "overpay" but if the user pays for it then that would have been an efficient price - making the entire transaction beneficial.

Economically it would be the market clearing price so would be beneficial to the market generally. Sounds like a great thing.


Actually, no. Such a price isn't "efficient" because of how efficient markets are defined [1]: "...prices fully reflect all available information." But in this case one of the market participants (you) is not aware that others are getting a better deal.

In fact, this would lead to widespread pricing discrimination [2] based on what they learned (or bought) about you. Imagine seeing 20% markup on virtually everything available to you online, because every retailer can see your higher-than-average income before their page loads in your browser.

Personalized content (which everyone seems to love) is just an inch away from personalized pricing.

[1] https://en.wikipedia.org/wiki/Efficient-market_hypothesis

[2] https://en.wikipedia.org/wiki/Price_discrimination


Yes, note the "all available information" part.

It's not perfectly efficient because there isn't perfect information, however it's more efficient than it otherwise would be.

More information in the system = greater efficiency.


> More information in the system = greater efficiency.

More information does not always increase market efficiency. In fact, it can have just the opposite effect when there is information asymmetry.

https://en.wikipedia.org/wiki/Information_asymmetry

http://www.economicsonline.co.uk/Market_failures/Information...


For sure, but only in the short term. History shows that there is a push and pull here and that on aggregate more information is being input to the market from all sides and available for price discrimination. Inefficiencies are great for enterprises to attempt to capture, and by doing so exposing the inefficiency.


Suppose a company invented an AIDS vaccine, and then charged gay folks and injection drug users 20x, because hey, those at higher risk are willing to pay more for protection. Would that be a great thing?

Or let's go back to the Paris example. If everyone is paying the most they can afford, that leaves less money on the table for restaurants, cheese and wine when actually in Paris. It's 'great' if you only look at the market dynamics of flights and ignore the economic/social benefits of cheap travel.


Depends on whether you believe healthcare should be treated the same as other products in the marketplace, or if you think they are a public good.


I am trying to understand what you mean by "public good". It's not the formal economics meaning (https://en.wikipedia.org/wiki/Public_good), because healthcare clearly is excludable and rivalrous.

The notion of a "public good" also isn't about how things should be treated, but more about how things necessarily are.

It sort of seems you're making an argument unrelated to economics here, even though it's using words also used in the jargon of economists.


Nope, I'm using it exactly as intended.

If one is to believe that healthcare products should be subject to market forces, then their prices should be variable in the same way other products are. In which case the answer to the question is, yes someone with a "niche" health problem would pay a substantially high price.

If however one believes that healthcare should not be subject to market forces, that is, it should be available to all equally (non-excludable) and cannot incur asymmetric costs between users (non-rival) then the answer is no, it would not be good.


The important thing about public goods is that you couldn't exclude people from them if you tried. By saying we should pretend that healthcare is a public good, you are saying we have a choice in the matter, which on its face makes it a non-public good.

This is a fine opinion to have, but you do yourself no favors by misusing jargon like this.


The problem with healthcare is that it's often not negotiable. Let's say you're a billionaire in a world with zero privacy. You get cancer. One company owns the patent on the drug you need to survive. Is it okay for them to charge you a billion dollars for treatment?


> You say "overpay" but if the user pays for it then that would have been an efficient price

This says nothing of the possibility that one party has a lot of information, and the other party has relatively little information, and the guy with very little information is getting screwed.


The implicit assumption is that the service is charging your air fare or whatever based on your ability to pay. So in that sense, in aggregate, it drives equilibrium between supply and demand - and thus is comprised of market clearing prices, as I said.


There is a difference between competitive single price and monopolistic pricing.


Market clearing prices can strongly favor one party or another, depending on the information asymmetries present.


Wouldn't that price be the consumer surplus price (above the market clearing price)? And wouldn't that transaction have been a result from an inefficiency in the market (lack of perfect information)?

Genuinely wondering. My economics is a bit rusty so not sure if what I said is remotely valid


In fact, in a monopolistic market, perfect price discrimination (every consumer pays exactly their individual marginal utility) results in allocative efficiency.

The consumer surplus is zero, but the producer surplus is the maximum possible value.

In other words, the monopolistic supplier would otherwise charge a higher price for everyone if it is unable to price discriminate. If price discrimination is successful, more consumers can afford the product.
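A toy numeric sketch of that point (all numbers invented for illustration): a monopolist with constant marginal cost facing five consumers, comparing the single profit-maximizing price against perfect price discrimination.

```python
# Five consumers with different willingness to pay (WTP); constant unit cost.
wtp = [50, 40, 30, 20, 5]
cost = 10

def profit(price):
    """Profit if everyone with WTP >= price buys one unit at that price."""
    buyers = [w for w in wtp if w >= price]
    return (price - cost) * len(buyers)

# Single price: the monopolist picks the profit-maximizing price.
best_price = max(wtp, key=profit)
single_price_buyers = [w for w in wtp if w >= best_price]

# Perfect discrimination: each consumer is charged exactly their WTP,
# so everyone whose WTP covers cost is served; consumer surplus is zero.
discrim_buyers = [w for w in wtp if w >= cost]
producer_surplus = sum(w - cost for w in discrim_buyers)

# With these numbers, discrimination serves 4 consumers vs 2 at the
# single price, and the producer captures the entire surplus of 100.
```

The allocative-efficiency claim shows up as more consumers served, at the cost of the consumer surplus going entirely to the producer.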


I'm ceaselessly amused by the folks fearing Amazon Echos or Google Homes as though they're constantly listening microphones sent to the NSA. As though we don't already carry devices on our person everywhere we go that have mics (smartphones).

Also, while we trade our privacy to Google (or Amazon, or whomever) in exchange for customization and convenience (a social contract I'm generally happy to sign off on), those companies have even more incentive to keep our data safe.

Google is one of the most valuable companies in the world precisely because it, and only it, has the AdSense knowledge (and whatever other knowledge Google collects about me) to target me.

Insurance companies go to Google and say "show our ads to people Googling insurance companies" – that's how Google makes money. It's not as though Google says "here you go State Farm, here's everyone who's been looking up car insurance." Its business model is based on proprietary customer knowledge. It can't give away this data; it's incentivized to limit it to its own ad-targeting tech.

Are there still problems with this model? Sure. If the government decides to subpoena Google on me, they'll turn over my Gmail. But is it a hell of a lot easier getting access to Google services (e.g. Google Maps knowing where I generally go and what the traffic is like) versus using, say, Duck Duck Go on a VPN (let alone Tor or Tails)? For me, and I'd assume most people, yes.

EDIT: I would also point out that we've long been facing the privacy vs. convenience issue. It used to be that merely signing up for a landline meant getting your phone # listed in the White Pages. Paying utility bills makes your name and home address a matter of public record (unless you choose to shield them by owning and paying through a corporate shell). Ditto real estate transactions involving your name/address. All public records, unless you choose to hire attorneys to set up shell corps for the sake of privacy. Not so expensive to do this now in the age of LegalZoom, etc., but this used to cost quite the pretty penny.

This debate is nothing new; it's merely evolving.


> those companies have even more incentive to keep our data safe.

Then why aren't they doing it, and why aren't they informing us when breaches happen?

I absolutely think providing that data should be voluntary. If you want to send it to them, go ahead! But in many ways it's not. I can't remove my data from those companies, nor can I control what they do with it, and that's a serious problem.


You really don't get it, do you? Most of us aren't worried about Amazon and Google per se, but rather about what happens to our data when (a) they are compromised and (b) government surveillance increases in scope.

There have been countless incidents (and these will only increase in frequency) where people's sensitive information has been stolen and used for blackmail and identity fraud. There is also the increasing use of private data by governments, for example in deciding visa entry or immigration cases. The use cases for criminals and governments are only going to increase in scope and sophistication and will be applied not just to future data collected but to current data.

These are all legitimate situations which are completely unprecedented and only possible because of the increased data collection policies of site like Google or Facebook.


The problem isn't really privacy, it's privacy asymmetry.

Would Facebook agree to make all of their employee web searches public? Would Google? How about all phone traffic? Emails?

Thought experiment: imagine a world where everyone can see what everyone else is doing all of the time.

Assume absolutely no exceptions or restrictions. You can eavesdrop on anyone in the world. Anyone can eavesdrop on you.

How many "I am fine with no privacy" advocates would be happy with this?

It's an extreme thought experiment to highlight how asymmetric the current model is. In the current model privacy is becoming a privilege that is available more and more selectively.

To eliminate the privilege, you either need user controls and permissions for specific profitable use cases, or you need full openness - which I think most people would find terrifying, for all kinds of reasons.


Exactly.

I would have far more respect for no privacy advocates if they made public a daily ISO of the contents of their computer.

Would they really hold the same position once their identity had been stolen, their credit cards maxed out, and everything they have said taken out of context and made available to their friends, family, boss, and the TSA?


> It's an extreme thought experiment to highlight how asymmetric the current model is. In the current model privacy is becoming a privilege that is available more and more selectively.

Slightly tangential to this topic, your comment reminded me of this short talk titled "Your smartphone is a civil rights issue" by Christopher Soghoian. [1] It truly is a great privilege to be able to control one's privacy in today's world (to whatever extent it is possible).

[1]: https://www.ted.com/talks/christopher_soghoian_your_smartpho...


> I'm ceaselessly amused by the folks fearing Amazon Echos or Google Homes as though they're constantly listening microphones sent to the NSA. As though we don't already carry devices on our person everywhere we go that have mics (smartphones).

> ...

> Are there still problems with this model? Sure. If the government decides to subpoena Google on me, they'll turn over my Gmail.

It seems to me like you've read about the Snowden revelations but don't see any issues with warrantless tapping and mass surveillance. As I said in another comment, privacy is not just about you or me. It's about all humans and the rights that we have granted ourselves in many countries around the world.


Prepare to be downvoted to death, but I think some people just want attention and feel important by crying doom.


I'm unclear why I must forfeit my right to privacy because others (you) think it's a non-issue.

You haven't defined privacy: the ability to control what is publicly known about oneself. I'd also add the ability to assess what is known about oneself.

You also ignore identity vis a vis privacy. A completely transparent world (a la Brin) has made identity theft a booming industry.

Lastly, everyone ignores property rights. I am my data, my data is me (see identity above). At the very least, if someone is making a buck at my expense, I want my cut.


"Who you are" is not simply a function of what you want.

And really, Alexa is not made much more powerful by knowing what your favorite beer is; you already know that information. Until you consider Alexa an actual friend, any personal details it keeps about you are of marginal use unless it can function autonomously, e.g. by ordering your favorite beer without even asking. That is not 'intelligent' behavior; it's just maintaining a list of preferences. To be honest, it sounds like you are conflating voice commands with AI proper. You can certainly have one without the other, and the former seems to be the thing you're valuing here.

I'm all for the concept of powerful AIs that know everything about me - so long as they know me as an actual person to care about rather than merely as a consumer to squeeze money out of. Which means understanding my boundaries and knowing what I consider offensive and acceptable behaviors.

What we have today are AIs that learn about you only what is useful to their owners, not what you would find useful for them to know.


Providing AI while respecting privacy is a difficult problem but it must be done. Apple has even published a white paper describing how they'll do it[1].

We should not have to sacrifice privacy for usability. There's a reasonable tradeoff, and I believe Apple is taking the right approach.

1. https://www.google.com/amp/s/www.recode.net/platform/amp/201...


> "Alexa, play me that song I like. And while you're at it, order me a case of my favorite beer. Thanks."

The problem is that nerds are allergic to advertising because they are allergic to "being sold to" because they (well, we) like to think we're above such petty foolishness.

As such we are not the customer. Therefore nobody cares what we think.

And privacy will go away.

The problem with this is: what happens if privacy goes away too much and somebody abuses it? What happens when a malevolent dictator takes over and, oh, look at all these devices so conveniently set up to spy on all the people. How wonderful. What if a benevolent dictator takes over but their idea of good doesn't align with your idea of good? What if somebody decides that drunkenness is a problem, people do terrible things when they're drunk, so let's round up everyone who buys more than 3 six-packs of beer per week and put them in a holding facility and fix them.

You know all those futuristic utopia movies that turn out to be a dystopia under the hood? We're living in one of those. The question you're answering is whether you're inside the utopia or outside. Those of us who embrace the lack of privacy and being sold to are on the inside. Those like Stallman are outside.


Who says that aversion to ads has something to do with believing I'm above being sold to? That sounds like nonsense. The reason I don't want to see ads is that they tend to be obnoxious and distracting, some actively disrupt the content I'm intent on viewing (interstitials, TV commercials, preroll video ads), and many end up serving malware.

It would be ludicrous to claim that I'm above being sold to. I'm not even sure what that would mean.


Perhaps you have nothing to hide, but other people do. The danger in your accepting these "robots" as they are is that those other people might end up not having a say in the matter.


The problem is not the utility these services provide -- which I agree scales with the amount of data you can give them! -- but rather the use of this private information in a number of ways (legal or not) that provide no benefit to the user, and in fact can be considered downright dangerous to the user.

This, I believe, opens the door for a company like Apple or otherwise to create a machine learning company with security at the forefront -- in essence, what if these useful machine learning models you use everyday were able to be locked down, so the personal data used by the model was only viewable by you, and NOT an employee of the machine learning company who produces said model?

(EDIT: A few word choice changes)


I see it this way too. For way too many people, privacy seems to have turned into a kind of religious end in and of itself. I think evaluating privacy in real, material terms ("what actually happens to me if I give up my privacy here, and what do I get?") makes more sense than treating it as quasi-sacred.


In countries that are signatories to the ECHR (European Convention on Human Rights) it is _sacred_, given it enshrines a right to privacy in Article 8 for some 800 million people. The Court is pretty judicially active in enforcing the rule as well, even in member states that have governments with crappy human rights records like Russia.

America and a few others are arguably outliers in the western world for failing to recognize privacy, and the right to private family life, as a fundamental right in some form of constitutional legislation.


What if you found out today that your neighbors have been watching you undress for the past year? In material terms, nothing has changed and there is no material harm to you, but knowledge of that level of intrusive exposure without consent is something most people object to.


I genuinely don't really care about things like this. I mean, sure, it'd be a little odd if my neighbour stared at me, but I generally don't mind. As you said, no harm done.


> I see all of these articles on HN on how to break free from Google or some other eco-system because they're "evil." I dunno, I guess I don't have a CIA-level job that most of these commenters have where they need to be off the grid.

As a highly pro-privacy person, I find it ridiculous when people build up arguments like "I guess I don't have a (sic) CIA- level job that most of these commenters have...". Seriously, privacy is not just about you or me! It's about every human in this world, and the balance of power between layperson/citizen and state/corporation. Without protections like privacy (which is one among many), we'd never enjoy all the freedoms we take for granted every single moment. I'd strongly recommend reading this short and succinct article titled "Privacy protects the bothersome" [1] by Martin Fowler.

[1]: https://www.martinfowler.com/articles/bothersome-privacy.htm...


Except Apple shows that you don't have to sacrifice privacy. Just about all of their ML stuff happens on the device. So you can have things that are quite similar to what Google gives you, without giving up that privacy.


You gotta train on something though


I'm also not so hung up on privacy.

Think about the massive data gathering we could do in the medical field for example, to test the effectiveness of drugs and treatments if you could wear a "smart" bracelet that put its data in the cloud - how many more diseases could we cure?

Not that I'd feel comfortable sharing my bank account # with you...


It's always that way--the surveillance state grows up around you, all the while assuring you that no harm will come to you as long as you don't resist or subvert. But here's the thing about that: At some point, if we end up in a tyranny, your acquiescence and silence won't protect you. You will be targeted, perhaps randomly, perhaps by a neighbor with some petty vendetta. The point is, you have no idea what devils you are playing with. The Germans happily murdered the elderly, the retarded, homosexuals, gypsies, on and on. None of those people were actively fighting the regime, they were just simply useless eaters who got in the way.

The same story has played out dozens of times in history. Stalin, Pol Pot--tyrants kill for irrational reasons.


> How is any AI supposed to know what I want, if I try to be an anonymous user?

Store the data locally?


I used to feel incredibly frustrated at not being able to respond to this with words, based off of what I feel. This was until I read this post: https://medium.com/@FabioAEsteves/i-have-nothing-to-hide-why... - which almost perfectly encapsulates the emotion - "I have nothing to hide, Why should I care about my privacy?"


20 years from now you'll still be hearing the same songs, and you'll still be drinking the same beer. You'll be left in an echo chamber of things that are similar, never different. If you're perfectly comfortable with that, no amount of me trying to provide a counter-argument will do. I, however, think being left to my own devices, without things being curated from my past, is more enriching.


"Alexa, play me that song I like. And while you're at it, order me a case of my favorite beer. Thanks."

Wow, now I'll never have to leave my chair!


How much do they pay you?


Can't claim to be a fanboy, but I am a longtime user[1], and this is the primary reason I stick to them for my phone. I frankly can't imagine using Android the way I use my iPhone - the privacy invasions considered acceptable are well into the "no go" range for me, before even getting to the malware that passes for "free apps". If I were somehow forced to use Android, I'm pretty sure I'd treat it as an adversarial device, mostly like a feature phone.

Trust is vital in devices like this.

[1] I'm becoming less of a fan, as MacOS becomes "cloud aware" and less of a Unix workstation.


Not sure how you use your iphone or what you would need from a smartphone, but I don't understand the distinction.

Apps like Lyft, Yelp and Uber already know where you are so you can't use those.

You're left with maps and a browser. People tend to use Google Maps and Waze on iPhone anyway, so that point is gone as well. If they use Apple Maps, I am not convinced that Apple doesn't store location history.

So then you're left with a browser. Android lets Firefox run its own browser engine and you can install all the privacy extensions there.

On the other hand, with Android it's possible to take privacy a lot more seriously. Lineage OS lets you have much more fine-grained controls over all the apps and you can forego the whole google services privacy menace.


You've reduced the problem to only GPS location, which is just wrong. Fact is, Android apps targeting 5.0 or lower can still read anything on my phone - SMS, location, photos, contacts, and well, everything else. And any root- or ROM-based add-ons to stop them are an impractical usability nightmare, whereas I could just own an iPhone.


Weren't such permissions made explicit when the app was installed?

I'm not worried about SMS because everyone and their mother can see it now.

I don't think apps that didn't require contacts during installation could access them.

To me all Apple products are a usability nightmare. I am forced to do simple things the way Apple wants me to do them instead of the way I want to do them.


Or you could say:

To me all Android products are a usability nightmare. I am forced to do simple things the way Google wants me to do them instead of the way I want to do them.

That line of reasoning just doesn't scale.

Like only learning to type on a Dvorak keyboard, doing things "your way" can have consequences. Rampant, arbitrary customisation of interfaces is, in my estimation, more often than not an exercise in painting yourself into a corner.


Completely agree with you on the interface front.

I gave up on interface consistency though because every app now has its own interface and vendors like Samsung have their own for each product.

I miss the old days of windows apps with consistent menus and obvious ways to use the programs. Now it's like playing some sort of text RPG where you have to try everything to figure out where some setting is.

The problem I have with Apple is that things are not there because "art". Google is also very much guilty of it. But at least they don't copy all the Apple problems and usually expose more of the internal workings of things than Apple does.


>Apps like Lyft, Yelp and Uber already know where you are so you can't use those.

Only when you use them.

You can either not install them at all (problem solved).

or only allow them to "know your location" when you explicitly launch them and only for the duration of you actively using them on screen (problem lessened).


I don't know if they've fixed this, or if it was just me not working the phone right, but Disney's park maps kept tracking when the app was closed.


From the "Privacy and Your Apps" presentation on Tuesday at WWDC:

> in iOS 11, users will be able to choose the “when in use” option at all times [when asked for permission to use their location]


> Disney's park maps kept tracking when the app was closed

Closed or in the background?

I was under the impression (in iOS 10 and earlier), that "Always" meant anytime the app was open, and "While in use" is only when the app is in the foreground. So if an app has "Always" on for its Location Services, it will stop tracking when I swipe up and quit it.

If I've been incorrect this whole time, please pass the tinfoil.


Additionally, with iOS 11, it seems Apple has forced the "while in use" option on apps that didn't include it on their own for location permissions.


You're correct. Manually killing an app from the recent apps list will stop it from doing background processing (e.g. tracking location). I'm not sure if they'll be allowed to start again if e.g. they receive a silent push, and I'm also not sure if they're allowed to handle geofences, but besides those two questions, killing the app stops it tracking you.


You can do similar things on Android. You can also turn off your gps in 2 seconds as well.


With the crazy malware and opaque permissions, nobody really knows what's fully going on on Android though.

You could turn off the GPS, but that's too coarse: you might want it for some apps.


But that's how permissions work on Android though. You can grant them to an app when it's requested, or not. Or you can turn it off at the system level with the big switch in the settings shade. I have an iPad, and the permissions models are basically identical at this point.


For an Android alternative, check out CopperheadOS [0]. I'd argue it's better than iOS in privacy, since you can compile it yourself (excluding a few binary blobs).

[0] https://copperhead.co/android/


All these Android alternatives really make me wish I had someone other than Verizon as a carrier.


Could you elaborate on Android's weaknesses when it comes to privacy and contrast this with Apple's approach? Asking genuinely to update my understanding.


The glib response is to point out the difference between Apple's and Google's business models. Apple makes money by selling devices, Google sells ads (i.e. sells their customers' data to advertisers).


>(i.e. sells their customers' data to advertisers)

They take the ads and distribute them to their users based on user taste; that's their secret sauce. Giving user data to advertisers would be giving away their secrets, I think.


> Google sells ads (i.e. sells their customers' data to advertisers).

They sell targeted ads, they do not sell customers' data to advertisers. i.e. Advertisers get to advertise on Google's platforms, but they do not get access to Customer data.


And that Apple is the OEM behind the device that runs their OS, and Google (usually) isn't.


Also that Android only gained the ability to enforce iOS-style fine-grained app permissions (prompting on first use for access to camera, location, etc., instead of a blanket grant at install time) one major version ago, which means that many if not most users can't actually get the benefit without buying a newer phone.

I gather there are some questions around the strength of its app sandboxing as well, but I can't speak to that at all. Perhaps someone better informed will do so.


Those permissions are also opt-in for devs at the moment (by choosing certain OS versions at build time you can still use the old permissions system).

I'm an Android dev and to me the biggest problem with Android from every aspect (development, security, monetization) is how poorly new versions of Android are supported on devices.

Apple's dig at Android 7 usage vs iOS 10 usage was spot on.


> (i.e. sells their customers' data to advertisers).

This is not true


Couple of examples:

1) iOS has had lazy permissions since the very early days, e.g. when an app wants to use the microphone, it asks the user right then instead of when the app is installed. This feature is still not implemented on all Android devices, nor is it as seamless, nor is it mandatory.

2) Apple is providing more and more frameworks e.g. CoreML which run only on the device. This means that developers don't have to run their own server side apps to process data.


I don't really see that as lazy. I see that as considerate/beneficial to the user. I much prefer to be asked in context what/how the app is going to use the microphone for rather than on initial install, when I might be like "why the heck does this app need to use the microphone?"


It was being used in the programming sense, not as an insult. When some code passes around a "placeholder" instead of a value and waits until it is used to bother calculating it, that is commonly described as "lazy".
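A rough sketch of the idea in Python (my own toy example, not anything Apple or Android actually ships):

```python
# "Lazy" in the programming sense: wrap an expensive step and
# only run it the first time something actually asks for it.
class Lazy:
    def __init__(self, compute):
        self._compute = compute   # the deferred work
        self._value = None
        self._done = False

    def get(self):
        if not self._done:
            self._value = self._compute()  # runs once, on demand
            self._done = True
        return self._value
```

Lazy permissions are the same shape: the "ask the user" step is deferred until the app actually touches the microphone, instead of running up front at install time.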


Got it! Thanks for the explanation and not assuming I'm a programmer, because I'm not!


"[1] I'm becoming less of a fan, as MacOS becomes "cloud aware" and less of a Unix workstation."

How is it becoming less of a Unix workstation? It can still do all the Unixy stuff it could before.


What I mean is twofold:

- They're paying much less attention to the "Unixy stuff" than they used to. Old versions of common things take forever to be updated. Compatibility has always been shit, but it is getting worse. SIP means a number of things I tend to modify can't be modified without hassle[1].

- With a workstation, I expect to be able to tailor it specifically to my work requirements. This is becoming increasingly difficult as the proliferation of "cloud" daemons increases and they lock down the system - the documentation for most of the daemons ranges from "sucks" to "nonexistent", so it isn't clear if any given one is needed for stable operation or if it is yet another thing related to trying to sell me music or some such[2]. Reverting the machine to a state where "root" means "root" is rather difficult (see SIP).

Apple's focus is on chasing the flavor-of-the-month social features and keeping up with the GOOGs. Which makes sense - the set of people buying that stuff is much larger than the workstation market. But it means they don't care so much about it, and make choices like letting the core OS get as buggy and unstable as it has become.

It is mostly fine for me. I built a nice king-hell PC running Debian that I'm very happy with. I didn't think I would, but after my last MBP died unexpectedly, I ended up buying a bottom-rung used Air because I'm hopelessly dependent on OmniFocus and a couple other OS X-only productivity tools, and I sync my phone local-only. So it tends to live in the dining room, where I work on my schedule and read mail via ssh over breakfast.

So yeah, Apple still has a place for me, but it is peripheral. I spent mid-4 figures on building a PC this year, and mid-3 on that MBA. And I'm thinking about what I really need out of OmniFocus, and how much of that I can build myself.

[1] This is a good thing for many people, but I expect full control of my machines, and I do occasionally sketchy things like patching Safari. The problem with SIP is that I only need full control sporadically, and not for long, but it is a serious PITA to disable/enable. So it just stays disabled.

[2] One example of many is `parsecd`. No man page. Googling about shows that a lot of people think it to be related to location-based lookups for Siri data. Despite the fact that Siri is disabled on my box, it ran and asked for network access all the time, until I disabled it. Now, WTF is 'LaterAgent', and why is it in my process table?


I really cannot agree with you. I still don't see anything that affects using OS X as a Unix station. It works just as well as it did before. And I've never had an issue with SIP.


I'm glad it works for you.


That resonates well with my own (similar) experience.

The very same OmniFocus problem, and my solution:

* short term, for migration: OmniFocus on iPad

* long term: Trello - it will be slightly different, and you have to come up with your own GTD system for it, BUT surprisingly the differences allow me to take a fresh, new approach to GTD. It's worth it.

As a bonus: you can check out taskwarrior instead.

I tried every other piece of software, but it sucked, meaning it had problems. IMO only Trello or taskwarrior _might_ (still in the process) do.


parsecd is apparently "Used for Suggestions in Spotlight, Messages, Lookup and Safari" according to a Reddit comment (https://www.reddit.com/r/mac/comments/54870l/what_is_comappl...).

LaterAgent is used for system updates. Specifically, it appears to be what handles reminding you again later when the system prompts you to install an update and you select the option to remind you again later.


I wish I could get the equivalent of 'xset r rate 160 80' on a Mac. And about that Alt-Tab behavior...


I seriously doubt that there is a significant number of customers that Apple attracts by being pro-privacy. On HN and similar environments, sure, plenty of people here value privacy, as do I. However, don't think for a second that the community on HN is in any way representative of Apple's customer base at all. Apple customers buy Apple products because they work well, look nice, are popular here in the US, etc., but it's NOT because Apple has a strong privacy stance. The majority of Apple customers couldn't care less about privacy (and probably don't know much about it either) and given the chance, would choose Siri working well and not returning search results from Bing over more privacy any day.

So, I agree with your sentiment, but I do not agree with your assessment that Apple's stance on privacy is anything even remotely close to a significant factor in attracting customers.


I'm not so sure. It becomes a lot harder to explain Apple's very public focus on privacy features if you don't accept that it is a significant factor in smartphone buyers' decisionmaking.


In the past, Apple has marketed certain products towards enthusiasts and power users, which gave those products a certain "prestige" that elevated Apple's image more generally. (I'm thinking about the Pro products here, as they were marketed towards arts professionals.) Perhaps Apple seeks to take a similar marketing angle here -- by appealing to computer enthusiasts' desire for privacy, maybe they gain some prestige more generally? I'm not sure.


Makes sense to me. Apple's sweet spot for marketing is people who don't care very much about tinkering with their stuff, but are fussy enough about how it looks and works to pay a premium for good design that works out of the box.

Prioritizing privacy and security is one of those things that fits with being fussy about things working well but not wanting to put the time into configuring them to be so. If that crowd likes your stuff and recommends it, then the less technically savvy folks will pick up on the preferences of that crowd, and it builds Apple's brand as making premium products.


For their younger demographic, I would agree they don't care about privacy yet. For ~30+ year olds, I think it is a big pull, especially those that have ever had their identity stolen (even though it's not really relevant). The average person isn't going to understand how or why Apple's products are better at privacy, but if they get the message out there, I am certain a good chunk of consumers will see it as a plus.


>For their younger demographic, I would agree they don't care about privacy yet.

Maybe it depends on who you're dealing with, but in my experience teenagers these days are a lot savvier about these things than the typical people in my (30+) cohort are. (They kind of have to be, no? They actually have parents and teachers snooping on them in their real lives so it's a more concrete concern for them.)

This is one of the reasons kids gravitate to Snapchat more than Facebook or Instagram after all: posts on there expire. They have a 'right to be forgotten' by default that they don't get with the other services. So I'd say they still care about keeping personal information personal, it's just motivated by different concerns than it is for the 30+ demographic since they don't have as much sensitive health and financial information floating out there.


I originally moved to Apple because they had amazing products which were light years ahead of the competition in terms of build quality and features. These days I feel that they have fallen behind in this space; perfect examples would be the new Galaxy S8 for mobiles and the Dell XPS series for laptops...

A major reason for me as to why I continue to stick with Apple is exactly that, the privacy and security focus they have.


That's probably true in the US, though I wonder if it is more important internationally.

There are a lot of people living under potentially abusive dictatorships.


I don't think the "average consumer" sticks with Apple because of their privacy stances though. They stick with Apple because of the well polished "cool" devices. If Apple can't keep up with the cool features of less privacy focused companies I don't think the average consumer is going to stick with them in the long run. The Apple coolness factor will slowly wear off and some other devices will become the new hotness. The privacy focused consumers won't be enough to sustain Apple if that happens.


While I believe Apple will also eventually succumb to the idea of making more money by incorporating tracking and selling out users, I really liked their initiatives for privacy-preserving data collection, like the differential privacy approach they promised last year.

I hope they publish more in that direction.
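For anyone curious, the core trick behind that kind of approach (randomized response, the classic mechanism that differential privacy generalizes) fits in a few lines of Python. This is my own toy illustration, not Apple's actual mechanism or parameters:

```python
import random

def randomized_response(truth, p_honest=0.75, rng=random):
    """Report the true yes/no answer with probability p_honest,
    otherwise report a fair coin flip. No single report reveals
    the user's real answer with certainty."""
    if rng.random() < p_honest:
        return truth
    return rng.random() < 0.5

def estimate_true_rate(reports, p_honest=0.75):
    """The collector inverts the noise in aggregate:
    E[observed] = p_honest * true_rate + (1 - p_honest) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_honest) * 0.5) / p_honest
```

Each individual report is deniable, but the population-level statistic still comes out accurate - which is exactly the trade Apple was advertising.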


> it's something people want

I feel the same as you do, but are we sure that the mainstream public actually WANTS privacy, especially when compared to the utility companies are providing by taking it away?


I don't think the tech community gives the general public enough credit when it comes to privacy. If you asked someone whether they wanted a phone that has more privacy risks but also more features, or a phone that has fewer features but more privacy, I don't think the former would win hands down. Maybe more would pick the former, but a big chunk would pick the latter, and that is OK for Apple. They've never needed the majority of the market.


If you blatantly told the consumer, yes. However, most of the privacy-risking utility barely reminds us of what we're giving up, so really it feels like we're losing nothing and gaining utility.

The lack of privacy has to feel tangible in consequence for consumers to care, IMO. Right now it doesn't.


Completely agree. I switched between Android and iOS for years because I enjoyed aspects of both, but the privacy concerns around Google (and by proxy, Android, thanks to the Google Play Store monopoly on app distribution) have seemingly permanently tipped the scales towards iOS at this point.


"(and by proxy, Android, thanks to the Google Play Store monopoly on app distribution) have seemingly permanently tipped the scales towards iOS at this point."

Do you even know what you are talking about? On an Android phone you can install other app stores and sideload apps. You can't do any of this on iOS.


I think the argument is that app distribution is controlled by Apple on iOS and by Google on Android, and GP is happier ceding control to the former but not so much the latter. As Google Play Services become more entrenched in the basic functioning of the OS and apps on the platform, it becomes harder to extricate Google out of the Android experience, even if one has the ability to use other app stores.

I think it's a very reasonable position.


Exactly. Recently Android has been developed in a way that makes it impossible to disable some core services directly associated with Google services, so AOSP as such can hardly exist separately from Google.


You can side load apps to which you have the source code on iOS, for free.


> Six years later, the technology giant is struggling to find its voice in AI.

I know that this is the most common approach to the situation but the article -as many other articles that have argued this before- provides no proof other than analysts speaking in broad strokes.

I use Siri, Alexa, and Assistant several times a week, if not every day, and I'd hesitate a lot before saying that Siri is behind. Alexa is more responsive, but its domain is also more narrow. Siri does a pretty good job when I dictate a message or want to set a timer. It's limited, sure, but so are the other assistants I use. I might be the odd case, but speech recognition of Siri vs. Assistant is pretty comparable in my experience.

Moving onto services. Google Photos is outstanding, yes. But I fail to see any other example that proves the big advantage other players have over Apple in the ML or AI field.

Am I missing something? I feel like a lot of these articles argue their view from the experience they got from Siri 5 years ago.


I don't know if that's your case or not, but I feel like English-speakers living in English-speaking countries dealing with only other English-speakers may never experience the misery of using Siri in other languages.

If I set it to, say, Portuguese or French, its capabilities are greatly diminished. Even simple tasks as "play a song by Pearl Jam" are impossible because Siri will try to find some Portuguese phrase that sounds remotely like "pearl jam". If I set it to English, then I can't use it to send a text to my wife (in Portuguese) or get directions to local streets (named in French). Also people's names.

Google Assistant at least allows me to tell it that I speak those other languages so if I speak in any of them, it will understand as such. I can say "play Pearl Jam" in English, "envie uma mensagem para minha esposa" in Portuguese or "trouve-moi le boulevard laurier" in French and Google Assistant will get what language I'm using. It's a big help. Not perfect, but at least it's usable.

EDITED: grammar


This is "The Texas Problem". If I set the phone to english I can't say "give me directions to 'Taqueria Numero Uno'" and if I set it to spanish I can't say "donde esta el 'Wal-Mart' mas cercano" ... at least that was the case 2-3 years ago. Just this 5 de Mayo it got tripped up with "walk a moly" instead of "guacamole"


I have used Siri faithfully from 2011 til about the start of this year where I bought a google home.

Siri is flat out dumb and frustrating to use comparatively in terms of only understanding 85 percent of my queries vs. Google Home understanding 97 percent.

It seems to me Apple has not taken all their hordes of cash and used it to up their AI game. Looks like they are solely focused on the present while competitors are building the future, which Apple will then foolishly try to catch up with.

Like last week's WWDC: they debuted ARKit but did not bother to create an amazing/innovative AR app to showcase it and get the masses excited for AR and their possible upcoming AR gear.


>> Like last week's WWDC: they debuted ARKit but did not bother to create an amazing/innovative AR app to showcase it and get the masses excited for AR and their possible upcoming AR gear.

Why would they announce an app for supposed upcoming AR product before announcing the actual product? You announce the developer kit 6 months before that happens, get them working on apps, and then when you announce your AR product there are lots of apps, not just an in-house one, ready to go. Even if an AR specific product doesn't launch soon they've brought a good AR API to devs a few months before iOS 11 - and when it does (if developers take advantage of the API's) millions of people will be using AR daily on their iOS devices. Right now I know absolutely nobody using AR on any platform. That's going to change very quickly and an in-house app won't make a bit of difference.


> Like last week's WWDC: they debuted ARKit but did not bother to create an amazing/innovative AR app to showcase it and get the masses excited for AR and their possible upcoming AR gear.

It's a developer conference not a product launch, so of course it focused on the code rather than "getting the masses excited". The point is that developers can now spend the next few months creating those AR applications to impress people before iOS 11 with ARKit is launched to the public.


Yeah, but they showed new features in iMessage, the camera app, etc. - all copycat ideas.

They could have built an AR feature into the camera app to show they are still innovators, not just followers of all the other companies who innovate - and further, to showcase AR and get the masses excited about it.


I feel this fundamentally misunderstands how each of their AIs works.

You're describing an interface problem. Google's AI is extremely good at recognizing speech in comparison to Siri but that's a wholly different thing to deep understanding.

Siri is a more advanced AI in many ways due to its understanding of intents. Google's is much more command-based (under the hood). However, Apple's speech recognition is letting it down here. Intents are a much more sophisticated way to interact with an AI than Google's command-based structure, but they are correspondingly more complex to integrate, and you realistically need to infer more from the user (which Apple needs to get better at).

I don't disagree that Apple needs to step up their game with how people interact with Siri, but it's a perceptual issue with the interface not with the underlying AI.

(source - have discussed these exact issues with one of the founders of Nuance).


This runs contrary to the experience of most people.

See http://www.businessinsider.com/siri-vs-google-assistant-cort...

Siri is dead last. Surprisingly, Cortana did quite well (I didn't realize Microsoft was catching up in this space).

Those responses require a lot more than just voice recognition - they require an understanding of context and "intent".


That study isn't very useful.

> A recent study […] asked the major voice assistants 5,000 general knowledge questions

General knowledge questions is just one of the many things you can use voice assistants for. It's well known that Siri is the worst at general knowledge questions, but that doesn't mean Siri is the worst digital assistant.


This morning I told my Google Home "Can you turn your volume all the way up?" and it did it. It maxed the volume, not just turned it up a little bit. To me, this indicates that the Google Assistant is actually pretty good at understanding intent and not just repeating exact commands. Or maybe _I'm_ just misunderstanding your definition of 'commands' and 'intents'.


I mean.. that's a command for sure.

This is where a good UI can obscure the underlying thing though. Google is trying to find intent via the text of speech, and that's one way to go about it, for sure. It's totally valid. It's more limited, but it'll work great if there is a command it can match to.

But there has to be an underlying single command that is "turn volume up"

With an intent based system you can chain complex intents together and the result is the behavior of those intents interaction.

It's... ugh, I'm explaining this poorly.

Think of intents as building blocks to create something from whereas commands are just specific things. I wish I could explain this better.
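Maybe a toy sketch helps. All the names and structure here are mine, purely to gesture at the contrast being described, not how either assistant is actually implemented:

```python
# Command style: each recognized utterance string maps to exactly
# one canned action. Unrecognized phrasings simply fail.
def run_command(utterance, device):
    commands = {
        "turn the volume all the way up": lambda: device.update(volume=10),
        "mute": lambda: device.update(volume=0),
    }
    if utterance not in commands:
        raise ValueError("no matching command")
    commands[utterance]()

# Intent style: the recognizer emits a structured intent plus slots,
# so many phrasings ("max it out", "crank the volume") can resolve to
# the same small building block, and blocks can be combined.
def handle_intent(device, intent, **slots):
    if intent == "SetVolume":
        device["volume"] = max(0, min(10, slots["level"]))
    elif intent == "AdjustVolume":
        device["volume"] = max(0, min(10, device["volume"] + slots["delta"]))
```

In the command style the mapping from words to behavior is the whole system; in the intent style the words only have to land on an intent and its slots, and the behavior is composed from there.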


I don't really understand your distinction between intents and commands. I've created apps that leverage a variety of bot frameworks, and most of them seem to fall under your "command" criticisms, but are labeled as intents. I think I understand the heart of your argument which is akin to saying google handles intents / actions as if it were filling inputs on a web form that ultimately goes to an api for response generation.

Having said that, I don't know how you can say that Siri's backend is much better from an intent perspective when you can't leverage it properly because of the shortcomings of the UI. From what I've seen, it doesn't even handle context well. Now it sounds like Siri will be used to do proactive things, which is certainly new and different from Google Assistant. Yet I suspect that logic is just being branded as Siri because there is a push to label Siri as your intelligent assistant, as opposed to the weird robot thing you can use to check the weather.


So what would be an advantage of Apple's system (assuming they improve speech recognition) and how would that contrast with Google's approach?


The power to create connections between things of the underlying system is greater.

Basically if Apple actually manages to get the voice recognition part down (so it parses intent/structure rather than just pure words) it'll be very powerful.

Right now Apple has a frontend that is more suitable for a command-based system, and Google has a frontend that is better for an intent-based system.

It's easier to improve the frontend than create a better backend though. So that's the competitive trade-off that Google has made.

I actually think it was smart of Google, and quite un-Apple like of Apple to do it the way they did. Usually they are very focused on the interface as well.

Then again, at the time, Siri was quite good at that. It's just Google focused on it (clearly).


Oh I see, thanks!


[citation needed]


Ultimately Siri is a chatbot, and the pushback against Siri is a pushback against expectations with chatbots. The movie "Her" is still utter fantasy to mankind.

It's a massive stretch to call voice recognition AI, and the recognition leap that happened recently is not so much better than what was built into the early OS X versions. Apple's largest AI project that I've seen so far has been music recommendations on Apple Music.


Siri is obviously behind Google Assistant on general knowledge queries. It seems like if Siri doesn't understand something, it falls back to unhelpfully searching the web, something that is not useful if you can't look at your phone. Assistant on the other hand attempts (with decent accuracy) to extract a useful answer to your question, and reads it back to you.


Assistant also uses your previous queries and the semantics of your question to guide the search, as google search does.


For me, Siri is horribly behind. I've gone to the point of installing Google on my iPhone and using it even though it's a struggle to activate it compared to Siri.

My big problem with Siri is that I have to time when I start talking. Also, her voice recognition is awful in noisy settings.

I have an iPhone 7


> My big problem with Siri is that I have to time when I start talking.

What do you mean? You can just say "Hey Siri, blah blah blah" and Siri will pick up the "blah blah blah" just fine.

If you're adding an artificial delay after "Hey Siri", waiting for her to respond before continuing, then you're just making it harder. Just speak conversationally.


>If you're adding an artificial delay after "Hey Siri", waiting for her to respond before continuing, then you're just making it harder. Just speak conversationally.

I actually find this frustrating and I'm surprised Apple went this way given their usual commitment to humanistic interaction design. When I'm addressing someone in real life, I call out their names and wait for acknowledgement that they're listening before rattling off questions at them.

The fact that Siri doesn't acknowledge it's listening or even give you a visual indicator that its attention has been caught (like a person does by making eye contact) makes interaction with it feel strange and unnatural.

It's especially weird when you're in a room with multiple "Hey Siri" capable devices. If I call out "Hey Siri" I'd like to know whether I should be speaking at my iPhone, my iPad, my girlfriend's iPhone, or a hypothetical HomePod. Which ones should I direct my voice at and expect a response from? All? None? Some? Can I tell my girlfriend's iPhone to go away because I'm talking to my phone right now? I'd like to. . .


> The fact that Siri doesn't acknowledge it's listening or even give you a visual indicator that it's attention has been caught

Yes it does. It makes a very distinct sound when it starts listening, and the screen itself transitions to the Siri screen and shows the animating waveform.


It's weird. It takes several seconds after saying "Hey Siri" before it starts doing that. It works fine if you say "Hey Siri blah blah blah." But if you wait for the indication, then it actually messes up. It's listening before it's brought up, but it doesn't seem to listen to what you say immediately after it's brought up, it takes a second. It's very strange.


And it doesn't start listening until about a second later.

Which is a big problem.


Tangent: wouldn't it be nice if you could say "Siri, ask Google to..."

I wonder if implementing the feature of calling 3rd party assistants from the flagship assistant would pay off strategically.

On one hand, you appeal to people who like other assistants better, or like different assistants for different things. On the other hand, you give up on forcing people to get used to your assistant.

Also, could an isolationist provider stop other assistants from calling their one? Technically? Legally?


Lol, you can't even ask Siri to play a song on Spotify so you can forget about that. Anyway at that point why do you want to play telephone with chatbots? Just get an android phone.


>I wonder if implementing the feature of calling 3rd party assistants from the flagship assistant would pay off strategically.

It'll never happen, but I can definitely see some utility in building this in. For example: "Hey Siri, have Alexa order some paper towels from Amazon." I can see Amazon allowing this since they're going for a more generic approach. But I don't think Google, Apple, MS, or Samsung will.


>Also, her voice recognition is awful in noisy settings.

I figured that was more of a hardware problem?


Talk to the top of the phone. There is a separate mic up there for Siri. Took me a while to work this out.


I've been rather surprised how well Siri works.

Sometimes I'll be watching a baseball game in my living room with the volume pretty dang loud. The sound of the crowd in the background is almost like a static noise, and then you have the yellers, as well as the commentators speaking clearly, distinctly, and loudly. Then I use Siri to set a timer for something I'm cooking, or to "Call X person on speaker." and it picks up my voice without any hesitation or anything, and gets the words exactly right.

Definitely accurate.


Siri is still frustratingly hard to activate though. Once activated, it does a pretty reasonable job for what I need it to do. Alexa is much easier to activate (but the Echo has hardware specifically for it).


This article ticks me off. Apple announced some really good features on Monday (CoreML and a bunch of ML-driven features). Having this kind of article after WWDC is unfair. As a consumer, I applaud Apple's approach. The so-called AI as pursued by Google & other tech companies isn't AI at all. True AI won't need as much data. You do need data for Machine Learning (i.e. pattern matching). That's what other tech firms are pursuing. Apple, being a hardware, systems, and frameworks company, is doing the right thing by respecting my privacy. They leave data mining and personalization to 3rd party apps. If and when true AI happens, there's a good chance Apple will be leading that. (Why? True AI won't happen overnight, and true AI won't require access to a billion customers' emails or browsing history).


> The so-called AI as pursued by Google & other tech companies isn't AI at all. True AI won't need as much data.

DeepMind/AlphaGo, Watson, and others like them are about the best thing we have nowadays when it comes to AI. Let's not even talk about True AI, which under any of the definitions can only become reality with so much data that we're barely scratching the surface of it now. Saying True AI won't need as much data is failing to understand how much "data" it takes to make a human brain function, and while that is a whole other can of worms, one thing is certain: it's a huge amount accumulated over time.

> If and when true AI happens, there's a good chance Apple will be leading that.

Apple is not even in the same arena when it comes to AI. And they shouldn't be; with the cash piles they have, it's better for them to wait for somebody else to make the breakthrough and pull a Siri: buy a company that does what you want and integrate it as needed for your product/consumers.

Also, don't confuse AI/True AI with clever algorithms.


I spent the last two days in IBM's Watson Innovation Centre, working with the Watson Conversation APIs. To say I was not impressed is putting it mildly. Can you name any examples where Watson is good?


>> The so-called AI as pursued by Google & other tech companies isn't AI at all. True AI won't need as much data.

>Deepmind / AlphaGo, Watson, and other like them are about the best thing we have nowadays when it comes to AI. Let's not even talk about True AI, which if you take any of the definitions can only become reality with so much data we're only brushing on it actually. Saying True AI won't need as much data is failing to understand how much "data" it takes to make a human brain function, and while that is a whole other can of worm, one thing is certain it's a shit load accumulated over time.

I think you and I are saying the same thing. Namely, intelligence is an accumulation of years of experience, information & data (through the human sensory system). But once intelligence is established (say after 21 yrs of training in the case of humans), learning is rather easy (i.e. one-shot learning is sufficient to accomplish new, mundane tasks). You don't need to know someone's whole browsing history and emails to make personalized recommendations or to give intelligent answers to queries. The more intelligent the algorithm, presumably, the less personalized data it needs. So when it comes to true AI, you don't need to know as much about the user to help. This is one way to rationalize Apple's approach.


All the examples you've given are either still very academic or just marketing for 'clever algorithms'. The whole idea that one company or another owns 'AI' is mainly analyst and marketing talk; anything that moves over into practice and actual use just gets called 'technology'.


I wouldn't worry too much. People keep buying iPhones because they love them, and developers use their platform because of the money and reach. The article is negative, but the two targets for these changes (developers, users) will benefit largely in the time to come.


Something Apple highlighted a couple times, and I think is relevant, is that since CoreML runs on the device, it works offline. More importantly, that's also what enables real time video analysis.

It may not be decisive, but it's worth remembering that there are benefits (other than privacy) to their approach.


Me: Hey Siri, what cheese goes well with fruit?

Siri: Here's what I found on the web for "What cheese goes well with fruit..."

Me: Hey Google, what cheese goes well with fruit?

Google: "Edam"


"Is Obama planning a coup?"

Siri: Here's what I found on the web for "is obama planning a coo"

Google: According to details exposed in Western Center for Journalism’s exclusive video, not only could Obama be in bed with communist Chinese, but Obama may in fact be planning a communist coup d’etat at the end of his term in 2016!

(Definitive answering style is not always better...)

More:

https://www.washingtonpost.com/news/the-intersect/wp/2017/03...

http://www.businessinsider.com/google-home-claims-obama-plan...


At least Siri acknowledges that it doesn't have the information necessary. Google Assistant comes up with an answer, but it might as well have picked a cheese at random. It's not like there's a universal cheese that pairs with all fruit.

Would you have felt satisfied if you asked, "What wine goes well with dinner?" and heard "Zinfandel" in response?

An actually useful AI would ask you what kind of fruit(s) you want to pair it with. Not throw out an answer at random.


> At least Siri acknowledges that it doesn't have the information necessary.

I've had two failures in recent memory where Siri recognized the queries correctly, but completely misinterpreted them.

"What's <time> Eastern Time in Pacific Time?" => time in Pacific, MO.

"Increase volume by 10%" => lowered volume to almost 0.

The latter failure took me a while to figure out: Siri had interpreted my query as set volume to 10 (out of 255?). Indeed, "increase / decrease volume by X" had the same behavior.

Also, asking Siri to play a particular song whose title's in Japanese might as well be asking it to play a random song. Setting its language to Japanese helps a little, but then of course I have the inverse problem. I wish I could switch Siri's language settings verbally, or be able to say "In <language X>, <query in language X>."


I assume you're trying to say that Google's answer is better, right? Google's certainly being more authoritative, but the answer to your question is actually fairly subjective; there are more answers than just "Edam", so IMO surfacing search results is better than having Google pick one answer.

Google's also been known to produce completely incorrect "authoritative" answers.


He didn't ask, "What cheese goes best with fruit?" he said "goes well with fruit." So Google's answer is technically more correct. The best kind of correct.


I have an iPhone, sometimes use Siri, and I prefer Google's answer. Simply, if I asked a verbal question, I prefer a verbal response. If I ever ask Siri a question, and I get the "Here's what I found on the web" answer, I consider it a failed operation, and I take a different tack.


If you are speaking to your device for this kind of question, the authoritative answer is what you want. You expect me, after talking to my device, to go over and pick it up and search through web results? If I wanted to do that, I wouldn't have asked the question by voice in the first place!


An "authoritative answer" that may not be true, has no context, and may have been put there by SEOs or propagandists, is worse than no answer at all.


I believe you have failed to understand the original point you are replying to.

"Edam" might be authoritative, but it is not correct. The problem isn't simply that the answer is fundamentally subjective, the bigger issue is that there's not enough contextual information in the question to even arrive at an answer that one could consider subjectively correct.

I replied elsewhere, but it's like answering the question "What wine would pair well with dinner?" with "Zinfandel". Without having even a vague idea of what's on the menu, you're just picking a wine at random. Maybe it's correct, maybe it isn't. But being authoritative when there's not enough information to actually speak with authority is not the right answer here.


For the record, this exact search with Google Assistant did not provide an authoritative answer for me and instead gave a bit of a voice summary and a link for more information. This is still way better than just directing me to the search with no more voice feedback.


The agent should then ask for more information, not point me to a web search. In most cases, the authoritative answer is what I'm looking for, and the question is less likely to be this subjective.

The thing is Siri won't give me an answer either way.


Nobody's arguing that sending you off to search results isn't particularly useful. We're just pointing out that authoritatively answering a nuanced question without enough context isn't any better, it's just differently wrong (and in some cases, strictly worse).

Whether or not Siri or Google Assistant correctly and authoritatively answers more (or more relevant) questions is a completely different topic than what's being discussed here.


The authoritative, wrong answer is what I want? Or, what if I ask, say, which actor is the best in the world? Is there any authoritative-sounding answer that is of any value?


We've already seen the problem with Google confidently giving answers that it got from searching the Web.

"Hey Google, is this batshit conspiracy theory I heard about true?"

Google: "Yeah, sounds right"


You: Hey HN, what cheese goes well with fruit?

Me: Which fruit?


Siri: "Oh you have a question. Let me Google that for you."


IF ONLY. Instead it uses Bing...


Edam! Stilton surely.


Well, yes: they don't publish, they don't have a reputation (no teams like FAIR, DeepMind, MetaMind, MSFT AI, etc.), they don't attract the top AI talent, and anecdotally, they don't even know how to hire for AI (had a brilliant colleague working in AI turned away because he didn't know some facets of the Python language when they hit him with Leetcode whiteboard questions; IMO they aren't in a position with AI where they get to flex programmer egos).


> had a brilliant colleague working in AI turned away because he didn't know some facets about the Python language when they hit him with Leetcode whiteboard questions

I mean, to be fair, isn't that every company? Google pulls the same sort of nonsense.


My understanding is that this is not the case when you get up to the research level, where they use altogether different signaling (citation count, impact factor, etc) to disqualify applicants.


> they don't publish

Your info is outdated. At a conference last December, Apple's director of AI research announced that their AI researchers will be free to publish their findings.

https://9to5mac.com/2016/12/06/apple-ai-researchers-can-publ...


What have they published so far is the question then.


This is interesting because while Apple's (comparative) dedication to privacy is endearing, it's a long-term existential threat.

Google knows all about me and its assistant is, usually, great. Amazon has troves of data on what I buy, and I get to yell at Alexa to order more TP as soon as I see we're on the last roll.

Apple knows much less about me and, while I'm still an Apple fan and am tied to iPhones/Macs thanks to iMessage, Siri stinks as a result.

If voice assistants based on machine learning (specifically, personalized voice assistants) are the next big thing, Apple's privacy ethos will separate it from its major tech competitors – either in a great way, or a very negative way.


Totally agree. Lots of commentators here on HN love to get angry about companies like Google collecting so much of their data, which is perfectly justified. However, you have to be willing to accept the consequences of privacy policy like that, which is the kind that Apple somehow still carries out stringently. The consequences are that any service or product that relies on data, ML, AI etc. aren't going to work well coming from a privacy conscious company like Apple, at least not nearly as well as the products from the companies like Google which don't value privacy as much. You can't complain about Google taking your data but also complain about Siri being a load of garbage, you can't have it both ways (and I see lots of people here trying to have it both ways).

It will be interesting to see if Apple holds its ground on privacy with the increase of AI/ML driven features and products. Coupled with Apple's closed culture which discourages open research (although they have been improving this), their concern for privacy could put them well behind other companies in this space. Depending on how you look at privacy vs. product, this could be a good thing or bad thing.


Aside from recommendation engines and targeted advertising, what does this deep knowledge of everywhere I've been and everything I've done do to dramatically improve the experience of using the AI?

I think it makes Apple slower at making their assistant good at parsing what you're saying and returning an answer, but that's a problem that benefits from crunching reams of data in general, not so much knowing everything about you personally.

For one thing, a virtual assistant that works even when I don't have an active internet connection seems like a perk in itself no? The Siri approach is closer to being there than the Alexa/Google approach.


What can Google do in the cloud that Apple can't do on the device?


Apple can only look at you to learn - Google can look at you, and a million other people like you


I think many Apple users would be happy if only Siri could answer factual questions objectively and follow a conversation, being happy with a minimum of privacy collection like current location, name of spouse for messaging, etc.

While some may prefer answers informed by knowledge about you, or about people like you, as in "Please recommend me a great movie", Siri is currently not at a stage where she can handle stuff like "How do I mix a White Russian?" Google Assistant gives me 6 steps with a photo and a follow-up question, "What about a Screwdriver?" Siri gets utterly wrecked on knowledge-based questions.

I think the main problem is not privacy-related, but knowledge base-related. Google is building upon a fricking huge search engine via a knowledge graph and the sky is the limit for how well an AI can do. Apple is building on what, a shut down Ping social network, Apple Music listening habits, Wolfram Alpha hopes if all else fails, and a sparse Bing Search API if that failed too?


>While some may prefer a knowledge about you or you via people like you, as in, "Please recommend me a great movie". . .

Not to mention that Apple positioned itself as a brand for people who "Think Different." I don't think people who got taken in by that messaging would be attracted to the prospect of services that can more efficiently pigeonhole them.


I thought Apple figured out a way to create unique IDs and profiles that can't be traced back to the individual person? So they can still find people like you for analytical purposes to tune their models.


Probably can't get all the metrics and data needed to actually tune/make it work.

I periodically delete all my history from Google's servers, thanks to their privacy tool, and Google gets just as dumb for me as Siri.


Speech recognition and natural language understanding.

If you watched the HomePod reveal you'd know that it sends your speech to the cloud for understanding. Which seems like a pretty clear admission that this can't all be done on-device easily.


The simplest example would be speech recognition. With the exception of Keyword Spotting Systems like Hey Siri or OK Google, it's currently practically impossible to implement larger vocabulary speech recognition on device.


Google will happily permit you to download trained models of <100MB to your phone via Google Translate, which permit not only offline vocabulary recognition of a remarkably wide range, but will also then translate that input into another language (also offline).

I believe the restrictions imposed by keyword-spotting have more to do with the always-listening and/or power-efficient nature of the task, rather than the restrictions on the legibility of offline speech recognition.


I'm pretty sure larger vocabulary local speech recognition on devices with less computing power than modern smartphones has existed (products for continuous speech recognition on PC go back to at least Dragon Naturally Speaking in 1997.)


I've still not seen any AI that really gets me excited. Google Now does some neat things - but if the home automation products from Amazon and Google are representative of the state-of-the-art - we have an awful long way to go before AI is any kind of game changer.

More likely AI/ML is another trip down buzzword lane. It can hang out with IoT, VR, Big Data, and hell, even containers and microservices.

It feels like everyone is casting about for that next revolutionary technological innovation, but maybe we just need to be at a plateau for awhile.


I, too, don't get the hype about voice assistants. There are a few use cases where they may be genuinely useful, like if you are driving and don't want to take your hands off the wheel, but for nearly everything else, it is usually faster, more accurate, and more reliable to use a GUI with touch or a keyboard.


Google and Amazon may get there faster, but personally, I'm quite ok with it taking a bit longer to get a truly smart digital assistant if that means I get to keep my privacy.


I love Apple's commitment to privacy, but the notion that privacy precludes powerful AI strikes me as a false dichotomy, and a cop-out. There must be some middle-ground.


Anonymous, transparent data-gathering perhaps? If Apple stripped data of any kind of personally-identifying information and allowed users to view all of the data that was being sent, it could be a good middle ground. However, the problem with that solution is that it's already possible (not easy) to identify users with very little information. Stripping out identifying information doesn't necessarily mean that the information wouldn't be useful at all in the right hands (say, the government) to identify someone.


Apple talked last year about their approach here, which IIRC is basically to introduce random noise into the data they collect. This way, data at the individual level may be wildly inaccurate (and therefore not suitable for actually identifying specific people), but in the aggregate it still produces the same results.

Of course I may be completely misrepresenting this. I encourage you to go look up the relevant info from WWDC 2016 (I believe that's where they talked about it).
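The basic mechanism being described (local noise that ruins individual records but averages out in aggregate) is classic randomized response. This is a toy illustration of that idea only, not Apple's actual algorithm, and all names in it are made up:

```python
import random

def randomized_response(true_bit: bool) -> bool:
    """Report the truth half the time; otherwise report a coin flip.
    Any single report is plausibly deniable, but the bias is known."""
    if random.random() < 0.5:
        return true_bit
    return random.random() < 0.5

def estimate_rate(reports) -> float:
    """Invert the known bias: observed = 0.25 + 0.5 * true_rate."""
    observed = sum(reports) / len(reports)
    return 2 * observed - 0.5

# Simulate 100,000 users, 30% of whom truthfully have the property.
random.seed(42)
reports = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
print(f"estimated rate: {estimate_rate(reports):.3f}")  # lands near the true 0.3
```

Each individual answer is wrong a quarter of the time, so the collector learns almost nothing about any one user, yet the population-level rate is recovered accurately.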


Honest question: in the context of the realistic range of consumer AI applications in 2017, what are some meaningful shortcomings Apple products have? And do those shortcomings have any interest to the majority of consumers (and not say, to developers or analysts)?


As an owner of an iPhone 7 and a Pixel I'd say there are not many. Google Assistant is way ahead of Siri at pretty much every imaginable task, and Google Photos can do a lot more than iOS Photos.

That said, none of those features interest me much. And I'm not sure how many average consumers they do interest. It's more of a 'oh, that's cool I guess'. I prefer my iPhone 7 still.


Yeah, most people I know do use their 'assistant' to some degree, but it's not even close to crucial for any of them. That said, Siri is a running joke among many of us, so if/when assistants become more important, Apple might have a problem if they don't make some big improvements.

Anecdotally speaking, anyways.


One thing to remember is, it's easier to build AI and ML when you are willing to use lots of user data but it's not impossible if you put limits on your access to user data.

For example, when you meet someone, they don't need to know everything about your life to be useful. The same is the case for building internet AIs. Using more pre-trained models. Asking the right questions and listening to the responses.

Sure, it's easier if you have access and are willing to use user data to build those predictive models. But there is some real value in the world for companies who respect user privacy AND build predictive models.


> ... the company hired Russ Salakhutdinov, a Carnegie Mellon professor whose expertise is in an area of artificial intelligence known as “deep” or “unsupervised” learning, a complex branch of machine learning in which computers are trained to replicate the way the brain’s neurons fire when they recognize objects or speech.

The author of the article clearly made a mistake saying "deep learning" is the same as "unsupervised learning".


Regarding privacy, when human beings meet a new person, they do not need tons of data about that person to understand their spoken language. We learn English or any other language once and then rarely need to adapt to new people, unless they have a strong accent or speak a truly different dialect of the language. This demonstrates that the much touted speech recognition technologies from Google, Amazon, and Apple do not match human level performance. Are they really exceeding the marginal performance of the Hidden Markov Model based Dragon Naturally Speaking which took around two months of training to achieve its maximum accuracy ten years ago? Or are they just running similar models with huge numbers of adjustable parameters tuned to each user on the "cloud?"

If Apple invested in genuinely new, creative AI technologies that matched human level speech recognition and other tasks then they could preserve their purported emphasis on privacy. They would not need to collect huge amounts of personal data on most customers, unlike Google or Amazon.


I am not an expert in AI, ML, or anything like this.

But related to

> Regarding privacy, when human beings meet a new person, they do not need tons of data about that person to understand their spoken language.

I think that we understand context and symbols and syntax because we used it/learned it before.

So when I meet a new person I use previous cultural/language learnings to understand the new person. Not just dictionary.

And even if I met a person who speaks a new language, I might after a while understand the words, but I'd still be using logic and symbols to get that when that person points to ice and says something, that something means ice. And this, I think, is possible because of previous knowledge, discovered in earlier interactions, of how to interpret this kind of context.

I think it is the same with Assistants: there is an advantage in being able to access and process large amount of communication so that you can infer meaning or predictions.


I probably won't receive any medals for this, but it is my most humble opinion that most of these AI gimmicks (Siri, Alexa, etc...) are useless.

I will concede I get moderate gag value out of Google automatically creating animated gifs from a string of photographs.


AI is a funny field where you may still be in a great position by just having millions of your capable devices distributed to people. Software wise it's relatively easy to adopt state of the art - without necessarily doing research yourself. On the cloud side they have the capacity as well, that's not a blocker for them.


I haven't made up my mind on this yet: I really like Apple's privacy policies but I also like Google Assistant on my iPhone.

As usual, for a business trip I took this week, I went all-in with Google (location, my email forwarded to my gmail account, used the Google Travel app, etc.). Very convenient for keeping my schedule, travel arrangements, etc. in order.

That said, when I returned home last night, as usual after a trip, I stopped forwarding my email to gmail, uninstalled most Google apps from my iPhone, and turned off location.

When I am at home (most of the time) I like using duck-duck-go, and just Apple's software.

EDIT: I have been working in the field of AI and machine learning for about 30 years. I think that Apple will find a sweet spot between good privacy and making Siri into a useful digital assistant.


Collecting location data, visited URLs, and phone books is absolutely not required to do speech recognition. You don't have to train computer vision algorithms on people's private photos: you can use public photos instead. Even "context-based computing" does not seem to work better if data from all users is put into a single database. So why are "AI" and surveillance usually treated as related?

Most creepy forms of surveillance are used for ads (though I'm not sure it even works; I always see completely irrelevant ads), and Apple is not an advertising company.


Currently, the only documentation I've seen for extending Siri depend on an app running on a particular device. More than anything else, this device-centric focus is holding Siri back when compared to Alexa and Google Home.

Am I missing something? Has there been any announcement that third party developers could add skills or actions across all Siri instances via the cloud?


Apple is struggling to become an AI powerhouse, but it just deployed top-notch machine learning frameworks to millions of portable devices around the world in a way in which its hundreds of thousands of developers can use machine learning now. I want to be struggling at investments in the same way. :)


Why on earth does Apple want to become an "AI powerhouse"? Please, Apple, do what you're good at: design great consumer electronic products. You should be hiring the top talent in fields like HCI, AR/VR, and wearable computing, not AI.


Does anyone have details on the AIKit & Apple CNN for AMD graphics cards? This was previously the exclusive territory of Nvidia and this week, Apple promised expanding that for the other half of discrete graphics cards. Will there be Python APIs?


So the deep learning machinery is becoming part of OS. I am wondering if part of the model (the relatively stable layers) will also be released with OS.


The description of differential privacy in the article is completely wrong. I am really shocked it was described so poorly.


$221B in cash on hand. They can get any AI researcher superstar they want. No?


I would love to hear all of the reasons why top minds reject top companies. It's a fascinating subject.


The company's culture, perhaps.


Wall St. would punish them for raising salaries.


They've just entered the race no?


WashPo, owned by Bezos whose fortune is in Amazon, is criticizing a competitor.


I bet they won't be quoting this headline at the next WWDC.



Two reasons. One) Apple is currently a lifestyle company, not a technology company, so AI is well outside the wheelhouse of what they have spent the last five years doing.

Two) Apple is never really bleeding edge with its products; it waits to see what the industry is doing and then tries to take something and make it better, which is tough to do with AI/ML. You can't really just take someone's idea and build on it with $$; there needs to be a lot of groundwork done first to even get to that point, and it looks like Apple has barely done any (looking straight at Siri for an example).


Lifestyle company? What does that even mean? If Apple isn't a tech company I don't know what is. They design their own processors, programming languages, operating systems, web browser, batteries, etc. Sounds like a tech company to me.


The last few years of their 'tech' have been less than inspiring... They have focused on Apple becoming part of your life with a watch or a touchbar, and not on updating their hardware and software with anything that moves the needle (Apple TV languishing, Siri a joke compared to Amazon, Google, and even Microsoft, MacBook Pros barely updated after years of neglect, a Mac Pro potentially on the horizon for a reasonable update in 2018 after years of nothing, and now HomePod is basically a boombox that connects to the internet... not really a stellar new product, especially without AI). This is clearly evident with them in scramble mode over the last several press/developer conferences & intentional leaks (how often has Apple mentioned products that are nowhere close to shipping?); they are in triage mode. I still feel T. Cook will be out in ~2 years.


ok



