Apple’s announcement on artificial intelligence is a big shift for the company (washingtonpost.com)
119 points by monsieurpng on June 14, 2016 | 77 comments



It's not so much about AI as it is about putting all services behind an Apple interface. "Users will soon be able to use Slack, Uber, or Skype, by talking directly to Siri." That doesn't mean launching the vendor's app. It means bypassing it.

Apple is taking control of the user experience with third parties. It's the next generation of the "portal" concept. Expect to see Apple standards on what your API needs to look like.


iOS apps have always been behind an Apple interface. I fail to see how registering a voice command is different from registering, say, a mailto: handler. There'll be guidelines (hopefully) regarding preferred sentence structure, because it's a basic need of such an interface.

Users may often not see the app – that's kinda the purpose of voice commands. But the interface isn't replaced by an Apple interface; its surface is simply reduced.

You'll say 'Facebook message mom' or something like it, and it's going to message mom. You won't see the FB logo, because you won't see any logo. You may see more of your mom, though.

In fact, third-party apps will profit, because they are no longer second-class citizens compared to Apple-provided apps.

There are certain tasks that are predestined for Siri – such as starting a Nike+/Runkeeper/etc. run: easy one-shot commands that replace unlocking, navigating, opening menus, etc. I'd suspect that if this were a major conspiracy, Apple would have opened the API earlier.


I think you're right in that this won't have much immediate impact on the service providers.

But think of this as a step towards the commoditization of services. It's a small step from "Hey Siri, get me an Uber" to "Hey Siri, get me a ride", whereupon an Apple bidding algorithm gets you an Uber, a Lyft, a GrabCar, etc. based on e.g. price, wait time, and reviews. As a user, I wouldn't necessarily care which platform got used as long as it meets my need.
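
To make that concrete: purely a toy sketch of what such a selection could look like (the providers, weights, and scoring are all invented for illustration – nothing is known about any real Apple bidding design):

    // Swift – a hypothetical "pick me a ride" scorer; every name and
    // weight here is made up for illustration only.
    struct RideQuote {
        let provider: String   // e.g. "Uber", "Lyft", "GrabCar"
        let price: Double      // fare estimate, in dollars
        let waitMinutes: Double
        let rating: Double     // average review score, 0...5
    }

    // Lower price and wait are better; a higher rating is better.
    // The weights are arbitrary, chosen only to make the example concrete.
    func bestQuote(among quotes: [RideQuote]) -> RideQuote? {
        func cost(_ q: RideQuote) -> Double {
            return q.price + 0.5 * q.waitMinutes - 2.0 * q.rating
        }
        return quotes.min { cost($0) < cost($1) }
    }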


>mailto: handler

iOS hasn't previously let you set a default app for protocols or file types.


That makes 'mailto:' a bad example, but iOS does allow you to register URL protocol handlers (a better choice: 'x-fooapp:'). I believe it's undefined which app is chosen in the case of conflicts.
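
For the curious, the app-side part is tiny – a rough sketch, assuming a hypothetical 'x-fooapp' scheme declared under CFBundleURLTypes in the app's Info.plist:

    // Swift – minimal handling of a custom URL scheme ("x-fooapp" is
    // hypothetical; it must also be declared in Info.plist).
    import UIKit

    class AppDelegate: UIResponder, UIApplicationDelegate {
        var window: UIWindow?

        // iOS calls this when another app opens an x-fooapp:// URL.
        func application(_ app: UIApplication, open url: URL,
                         options: [UIApplicationOpenURLOptionsKey: Any] = [:]) -> Bool {
            guard url.scheme == "x-fooapp" else { return false }
            // Route url.host / url.path to the right screen here.
            return true
        }
    }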


HomeKit has worked like this for a while, hasn't it? Seems like it's pretty much Apple's MO.

As an Android dev familiar with Intent style interoperability, I simultaneously detest and envy Apple's hand built APIs.


What do you envy? I've been thinking about going into native mobile development, and while I'm pretty sure I'll focus on iOS, at least at first, I'm curious to learn more about the differences.


The strictness appears to be reassuring, at least from the outside. Most of the APIs put Apple in the middle of any interaction, so any ambiguity becomes the developer's fault.

Android is great because apps can talk to each other with intents. Your app sends the "I want to take a photo" Intent, and the default photo app will handle it and return you the photo data. Simple, right? Except you're talking to third-party code, and the request and response formats are so flexible it's very hard to check that you (or the other app) have implemented everything correctly. Did you specify the front camera or the back camera? What does no specification mean?

It's the burden of freedom.


"Burden of freedom"... I'd say it's just another consequence of the fact that APIs are user interfaces, and require the same level of attention from designers.


APIs are programming interfaces. They should be entirely concealed from the user...


The users in this example are developers; developers are users of APIs.


Voice is a new interface that replaces the GUI, simple as that. It's more similar to a CLI. Either way, it's an Apple interface, but so is the iPhone screen.

Voice (and messaging) are constrained interfaces, so you need to put more horsepower behind them to help them work and become interactive.


As I understand it, those Slack/Uber/Skype Siri widgets are opt-in by the developer, and it is the developer who needs to write them. The same goes today for sharing extensions and notification center widgets. So Apple is not bypassing the vendor's app per se; it just provides a way to not launch the whole app UI when it's not required, and it's the developer who is still in control.
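
For context, the iOS 10 mechanism is an Intents app extension: the developer ships a handler class and Siri calls into it. A minimal sketch for the messaging case (the actual send is left as a comment, since that part is vendor-specific):

    // Swift – skeleton of a SiriKit Intents extension handler (iOS 10).
    // Siri resolves something like "message mom on Foo", then calls this.
    import Intents

    class IntentHandler: INExtension, INSendMessageIntentHandling {

        // Invoked once Siri has resolved the recipients and the content.
        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            // Hand intent.recipients / intent.content to the app's own
            // messaging code here (omitted – that part is vendor-specific).
            let activity = NSUserActivity(
                activityType: NSStringFromClass(INSendMessageIntent.self))
            completion(INSendMessageIntentResponse(code: .success,
                                                   userActivity: activity))
        }
    }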


Reading your comments with the word "portal" gave me an eerie flashback to AOL and then Yahoo!


Nice one!!!


Yep. First, take control of distribution with the stores. Then take control with the payment system. And now take control of the data input.

The golden jail is getting a new door.


- The App Store has been operational for four years. I can still install anything I want.

- This is opening a previously private API to third parties. If anything, that's going to make them more competitive with Apple-provided apps.

- If the golden jail is getting a new door – doesn't that actually mean more freedom?


New room. New door would be fine, golden or not ;)


It's a one way trap-door.


> And now take control of the data input

How do 3rd party keyboards fit into your narrative?


Once again, RMS had it right the first time.

Paraphrasing: "Apple puts the user in a prison... but it's a very beautiful prison."


> Such technology can make the phone or other device appear smarter because it anticipates the types of activities people want to do.

Currently, all the attempts I've seen to anticipate what I want make applications far more annoying (do you want to send this email to your mother as well?). It's kind of like an uncanny valley. An application that could truly anticipate my needs would be good, but when it tries to anticipate and gets it wrong, it becomes worse than the application that doesn't try and silently waits for me to give it instructions.


Is this even a solvable problem?

Imagine a human PA who knows you really well. You'll still have to clarify what you want from him/her a good fraction of the time.

I suspect AI lives in a special kind of uncanny valley where we're less tolerant of mistakes and imperfections than we would be if we were telling humans to attempt the same tasks.

I'm not sure why this is, but it could be because in spite of all the bugs and failures, we still expect computers to be far more predictable and reliable than humans.

If AI doesn't match the expectation, it's perceived as more frustrating and less useful than perhaps it really is.


Google, at their last conference, put AI so front and center – as the next big thing – that I can't help thinking Apple was forced to follow that path. So they used the terminology of "AI" and "deep learning" in their presentation. Yet it didn't make me confident that they are really up to it – that they really have the expertise to make this into something that works (even on the phone itself! All the other deep-learning algorithms need huge frameworks with GPUs!).

Myself, I wasn't able to experience the promised virtues of AI from Google either. Still, I wouldn't go so far as to call it a bluff from Google. But from Apple, as of now, definitely.


I see it more as a back-and-forth. If anything Apple fired the first shot in anger in the platform AI wars by integrating Siri into iOS. Before then voice control was just simple vocal commands and dictation.

It seemed for a while though that Apple was falling behind. Google Now and Cortana overtook them in capability and now they're fighting for the lead again.

I wonder what happened. Perhaps the original Siri team didn't properly gel into the Apple corporate culture? I know the founders left after a few years. Anyway, it looks like Apple now have a solid internal Siri team able to push the platform forwards.


Note that the words "Artificial Intelligence" were not once uttered during the keynote.


"Deep learning" was clearly heard at least a few times.


These are taboo for a significant part of the general population, almost like formulas in slides.


AI has already gone through at least one prior boom and bust cycle. Expectations in this area have a tendency to get wildly inflated. Does AI mean general AI? Or just a cleverly applied machine learning algorithm? Seems wise to me to stay away from the term.


The general population doesn't watch Apple keynotes, though.


Craig (and Tim, I think) said "AI".


> For example, Apple will now scan your photos using facial recognition to cluster people together in your photo collection.

Apple's OS X photo apps have done facial recognition since 2009. This new thing is more advanced as well as a first for iOS, but I'd hardly call it a "big shift".


The first is doing it on a mobile device, locally, without sending data to the cloud. That's pretty big, and it's only become possible recently with the performance increases in mobile SoCs.
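
For reference, basic face detection has been possible locally on iOS for years via Core Image – a minimal sketch (this is the old per-photo detection API, not the new clustering pipeline):

    // Swift – on-device face detection with CIDetector (iOS 5+).
    // No network round-trip: everything runs on the phone.
    import UIKit
    import CoreImage

    func faceBounds(in image: UIImage) -> [CGRect] {
        guard let ciImage = CIImage(image: image),
              let detector = CIDetector(ofType: CIDetectorTypeFace,
                                        context: nil,
                                        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        else { return [] }
        // Each feature's bounds is a face rectangle in image coordinates.
        return detector.features(in: ciImage).map { $0.bounds }
    }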


They really emphasized that this is done on the device, too! Definitely cognizant of the growing hesitation (amongst the crowd in attendance) towards what data corporations have on users.


Previous versions already did that.


Previous versions did not do that on mobile devices, only Macs.


Good for Apple – they are finally responding to the market and developers by opening up their wall a bit.


Apple led with graphical user interfaces for the better part of 30 years. First with the original Mac, bringing Xerox technology to the masses. Then with the NeXTSTEP-revived iMac, the clever iPod, and the smartphone. Now they struggle in the post-GUI era of voice and AI.


According to the Google/Android fanboy circle jerk, yes.


Another company going to do what they don't have any clue how to do. Google+, Bing, ... It seems like we now have 4-5 companies copying each other in a silly way all the time. Apple/MS simply can't do cloud properly, whereas Google barely keeps up with Amazon; Amazon's AI is horrible compared to Google's; Apple is copying MS's Surface and design (!); Google+ can't do Facebook at all; Facebook can't do ads properly; MS can't do search... Looks wonderful for Apple's AI.


I mostly agree, except that Facebook is doing ads quite well. FB's ad revenue is growing by $5B per year – it's pretty impressive growth.


Facebook can't do ads properly? I don't have any ties to FB but man, Facebook does ads extremely well. I'm not sure what argument you're basing this on.


I thought MS's cloud is doing fine in comparison to Google's – MS is second while Google is third. However, I totally agree with you on the others, particularly Apple with its closed culture. How can Apple hire AI researchers while those researchers are in such high demand? They can go to other companies that are more open and let them publish their work.


Technically they are still well behind both Amazon and Google (which is shooting itself in the foot by dumbing down the infrastructure they offer to their clients). The same thing you stated for AI holds true for distributed systems – there are not that many top people around. MS is obviously leveraging their Windows platform to get subscribers even if they can't match Amazon.


Besides the fact that I disagree with some of your observations, it's not really clear what your point is. Are you saying these companies shouldn't try to compete if it isn't in an area where they've already been successful?


The point is not to do "me too" projects, but rather to bite the bullet and form an alliance with somebody that offers you better tech, and in turn entice them to use what you do best and where they can't compete with you. Instead we have Google destroying all its services to prop up Google+, MS killing their desktop OS and destroying Nokia to catch up in mobile, Amazon pushing tablets nobody wants, Apple ripping off the Surface's stand and stylus for the iPad Pro while neglecting its desktops in the process, and even Linux emulating the obfuscated Windows registry in systemd.


Gotcha – well, that's a great point, but it's lost in your original post. I agree with most of your examples, except about Apple in regard to both their new AI work and the iPad Pro.

As far as AI work, I personally really appreciate their emphasis on privacy, and that kind of work seems like it would be a lot harder (or nearly impossible) if they were collaborating with another company. I'd rather have a subpar experience and my privacy than a great experience and my data being sold, mined, subpoenaed, or otherwise being made accessible to systems/organizations that I'm not aware of and don't consent to.

As far as the iPad Pro, I don't own one but when I replace my aging iPad I plan on getting a Pro with the pencil and keyboard because 1) these are features I wanted and I don't mind if someone else implemented them first and 2) all the reviews I've seen about them say they nailed it with their own implementation. I don't think they could've done that in collaboration with MS.


IMO, it doesn't matter whether they can do it, because they must do it.

If they don't somehow learn to do it, they will be eaten by Google and Facebook; the money is not in the hardware anymore. A decade from now, it certainly won't be.


Your claim that Apple is copying MS's Surface puts the rest of this in perspective.


Except that nobody pays for Google services.


No it's not. Apple has had facial recognition on the iPhone for a decade, voice recognition in Siri (with a bit of AI to generate the answer), and handwriting recognition too.


Face recognition and picture processing is considered AI?


This is the real AI problem. Once an AI problem is solved, whatever solved it is no longer considered an example of AI.


This is starting to become a fallacy rather than a cute tongue-in-cheek saying by AI researchers.

Once a problem that is used to push AI research is solved better than humans can solve it, it is usually no longer cutting-edge AI research. Chess, for example: minimax search with tweaks for board evaluation. Done. Handwriting recognition: better than humans thanks to a committee of NNs, but still considered an AI research problem. It is a benchmark for algorithms. That doesn't mean that just because an algorithm can do it, we have general artificial intelligence.

A general AI will have to identify digits and play a decent game of chess and have a conversation.

That doesn't mean the problem domains that drive AI research are not AI after they are solved individually.

A lot of researchers think "this is a great problem, surely it will require human-level general intelligence". When an algorithm is identified that solves the problem without needing general intelligence, they look for the next problem domain... or keep on improving in that domain (image recognition generally, not dog breeds specifically – that's solved).

This intermediate focus gives direction to AI research while general intelligence gives purpose.

None of those researchers are saying solving X means solving AI. None are saying each of these domains is sufficient for AI, rather they might be necessary and are a good place to start to advance the field.

About the only behavior or skill, X, that researchers think may be sufficient for AI is an entity that can communicate well enough to pass an unbiased Turing test.

Artificial general intelligence is a long way off, no matter how many cats can be identified from a pile of photos.

Maybe it is a definitions issue. AI is whatever AI researchers are working on at the cutting edge to advance the field. AGI (general intelligence) is much broader and no algorithm has come close to demonstrating AGI.


Is anybody anywhere working on "human-level intelligence"?

People like to talk about it, but I doubt any serious researcher claims to be solving it. It's an unspecified problem; I can't even imagine what a paper on it would look like. How would you know you achieved it?

In practice AI is those concrete problems. And the fact that they often stop being AI after solved is a real PR problem for the researchers.


> Is anybody anywhere working in "human level intelligence"?

Yes. http://www.abi.auckland.ac.nz/en/about/our-research/animate-...


You must be a millennial. Those challenges have been pushing AI research for decades (and still are). In case you haven't noticed, "build an AI" consists of quite a few different problem domains – not just chatbots and fully formed humanoid robots.


I agree! Six months ago this was called machine learning, now it's all AI.

Sort of like BigData and Web 2.0, vague terms used for almost everything.


Machine Learning used to be considered a sub-discipline of AI (open an AI textbook and you'll see a couple chapters on it). It's grown so big that now it's a term used in and of itself outside of AI. It is starting to be seen as distinctive from "traditional" AI in some circles. So the correct understanding is either that Machine Learning is a subset of AI or that Machine Learning is a field that emerged from AI.


I don't think that's right, as the techniques are still called machine learning.

Though the distinction between the terms "machine learning" and "artificial intelligence" is somewhat unclear (depending on usage), I think a useful one is to see ML as a technique/strategy and AI as the goal.

It's also not as though AI is or means the same as "strong AI" or AGI [1].

[1]: https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


Yes


You're confusing AI and AGI (artificial general intelligence). AGI is a subset of AI focused on introspective systems that are resourceful and don't get stuck. Like humans. Face recognition and picture processing are not AGI.


Yes. And reusing COTS radar sensors by Bosch or Continental or Delphi in a Tesla equals "Silicon Valley breakthrough in autonomous driving".


All of those features sound awful to me. I don't want AI scanning my text messages and "anticipating" what I want to do. That's just way too creepy. The photo stuff is irrelevant to me: I realized long ago that I never look at my photos later, so I stopped taking them. But if I did, I don't think I'd want Apple slurping them all up to run recognition algorithms.


>The photo stuff, irrelevant. I realized long ago that I never look at my photos later so i stopped taking them.

I'm sure this makes photography irrelevant for everybody /s


I've been jealous of Android adding those features, but I never considered using them because of the privacy implications. What excites me about Apple doing it is that they seem to really focus on maintaining privacy. I'd love more 'AI' on my phone if they can pull that off!


During the keynote, they mentioned that the face recognition and other ML processing happens on the phone hardware to protect your privacy.


Oh good, well at least it's not being calculated on a device with a constantly scanning promiscuous radio.


If someone hacks your phone, they can get your photos regardless of whether your phone is doing facial recognition on them.


Yes, that's exactly my point. The article portrays the facial recognition as "safe" because it occurs on an iPhone. iOS is not a secure enclave, and pretending that it is can be very dangerous.


If it's closed source and you have little-to-no control over the software, how much of a difference does it make?

Interestingly, the results of this still end up on iCloud by the looks of it, and it backfills from iCloud (will that add a big battery drain on first install of iOS 10?)


All of the machine learning happens on-device and Apple never touches your data. Only your own phone touches your data.


Is that data never synced to the cloud?

What happens if you lose your phone. Do you lose all your photos?


At the moment, no. Each device you have has to process the same data locally, and the results don't get synced. You can disconnect the network and see it work in Photos.

> What happens if you lose your phone. Do you lose all your photos?

Not sure what you're asking as this has nothing to do with AI or anything being discussed here.

If you don't back up your data or use iCloud Photo Library to upload your photos automatically to your iCloud account, then yes, you lose all of your photos.


That's nothing. For years, iPhones have had this creepy technology which scans what you're typing and anticipates which word you're trying to type. It's Steve Jobs' greatest coup: getting millions of people to willingly let their phone AIs spy on every letter they type.

Viva la vida! Down with the robot revolution!


"He Who Controls the Keyboard, Controls the Universe" — Baron Harkonnen, probably


> I realized long ago that I never look at my photos later so i stopped taking them.

Just wait 'till you have old enough kids (and, later, grand kids).



