The author's particular gripe is with the Watson advertisements showing someone sitting down and talking to "Watson." They bother me as well (and did so when I was working at IBM in the Watson group) because they portray a capability that nothing in IBM can provide. Nobody can provide it (again, to the author's point) because dialog systems (those which interact with a user through conversational speech) don't exist outside of specific, tightly constrained decision trees (like voice mail or customer support prompts).
If SpaceX were to advertise like that, they would have famous people sitting in their living room, on Mars, talking about what they like about the Martian way of life. In that case I believe most people would understand that SpaceX wasn't already hosting people on Mars.
Unfortunately many, many people think that talking to your computer is actually already possible, they just haven't experienced it yet. I'm not sure how we fix that.
It all goes back to how the Watson that played Jeopardy! - what the vast majority think of when they hear the word "Watson" - was a really cool research experiment and amazing advertising.
A lot of the people that pay for "Watson" probably think they're paying for something really similar to the Watson that beat Ken Jennings at Jeopardy! and cracked jokes on TV. They're paying for something that might use some of the same algorithms and software, but they're not actually getting something that seems as sentient and "clever" as what was on TV.
To me, the whole "Watson" division does seem like false advertising.
> Unfortunately many, many people think that talking to your computer is actually already possible, they just haven't experienced it yet.
Given how often even another person can't correctly infer meaning when people talk, I doubt it will ever be the way people imagine it.
Initially, I imagine it will be a lot of people trying to talk normally and a lot of AI responses asking how your poorly worded request should be carried out: choice A, or choice B. You'll be annoyed, because why can't the system just do the thing that's right 90% of the time? The problem is that 10% of the time you're just not explaining yourself well at all, and 10% is really quite a lot of the time - probably anywhere from a few requests to dozens of requests a day. And while being asked what you meant is annoying, having the wrong thing done can be catastrophic.
As humans, we get around this to some degree by modeling the person we are conversing with and thinking "what would they want to do right now?", which is another class of problem for AIs, I assume. So I think we may not get human-level interaction until AIs can fairly closely model humans themselves.
I imagine we'll probably develop some AI pidgin language that provides more formality, so that as long as we follow it, AI voice interactions become less annoying (though you have to spend a bit of time up front formulating your request).
Then again, maybe I'm way off base. You sound like you would know much better than me. :)
In a previous job I looked into the viability of pen computing, specifically which companies had succeeded commercially with a pen-based interface.
A lot of folks are too young to know this, but there was a time when it was accepted wisdom that much of personal computing would eventually converge to a pen-based interface, with the primary enabling technology being handwriting recognition.
What I found is that computer handwriting recognition can never ever be good enough to satisfy people's expectations. Why? Because no one is good enough to satisfy most people's expectations for handwriting recognition, including other humans, and often even including themselves!
Frustration with reading handwriting is a given in human interaction. It will be a given in computer interaction too--only, people don't feel like they need to be polite to a computer. So they freely get mad about it.
The companies that had succeeded with pen computing were those that had figured out a way to work around interpreting natural handwriting.
One category was companies that specialized in pen computing as an art interface--Wacom being the primary example at the time. No attempt at handwriting recognition at all.
The other was companies who had tricked people into learning a new kind of handwriting. The primary example is the Palm Pilot's "Graffiti" system of characters. The neat thing about Graffiti is it reversed customer expectations. Because it was a new way of writing, when recognition failed, customers often blamed themselves for not doing it right!
We all know what happened instead of pens: touchscreen keyboards. The pen interface continues to be a niche UI, focused more on art than text.
It's interesting to see the parallels with spoken word interfaces. I don't know what the answer is. I doubt many people would have predicted in 2001 that the answer to pen computing was a touchscreen keyboard without tactile keys. Heck people laughed about it in 2007 when the iPhone came out.
But I won't be surprised if the eventual product that "solves" talking to a computer, wins because it hacks its way around customer expectations in a novel way--rather than delivering a "perfect" voice interface.
> The other was companies who had tricked people into learning a new kind of handwriting [...] It's interesting to see the parallels with spoken word interfaces. I don't know what the answer is.
Well, here's a possibility: we'll meet in the middle. Companies will train humans into learning a new kind of speech which is more easily machine-recognisable. These dialects will be very similar to English but more constrained in syntax and vocabulary; they will verge on being spoken programming languages. They'll bear a resemblance to, and be constructed in similar ways to, the jargon and phrasing used in aviation, optimised for clarity and unambiguity over low-quality connections. Throw in optional macros and optimisations which increase expressive power but make it sound like gibberish to the uninitiated. And then kids who grow up with these dialects will be able to voice-activate their devices with an unreal degree of fluency, almost like musical instruments.
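To make that concrete, here's a minimal sketch of what such a constrained dialect might look like once transcribed (the toy grammar, intent names, and phrasings are all made up for illustration): a fixed set of verbs, rigid slot order, and anything outside the dialect gets rejected rather than guessed at.

    import re

    # A toy "spoken dialect" grammar: VERB OBJECT [TO LIST] in a rigid order with a
    # fixed vocabulary. Hypothetical example only; real assistants use far richer
    # (and fuzzier) grammars, and a recognizer would normalize numbers to digits.
    COMMANDS = [
        (re.compile(r"^add (?P<item>[a-z ]+) to (?P<list>[a-z ]+) list$"), "add_to_list"),
        (re.compile(r"^set timer (?P<minutes>\d+) minutes$"), "set_timer"),
        (re.compile(r"^play (?P<artist>[a-z ]+)$"), "play_music"),
    ]

    def parse(utterance: str):
        """Return (intent, slots) if the utterance fits the dialect, else None."""
        text = utterance.strip().lower()
        for pattern, intent in COMMANDS:
            match = pattern.match(text)
            if match:
                return intent, match.groupdict()
        return None  # outside the dialect: ask the user to rephrase

    if __name__ == "__main__":
        print(parse("Add oat milk to shopping list"))    # ('add_to_list', {...})
        print(parse("Set timer 15 minutes"))             # ('set_timer', {'minutes': '15'})
        print(parse("Could you maybe play some jazz?"))  # None -- too free-form

The interesting bit is the failure mode: an out-of-dialect utterance simply doesn't parse instead of producing a wrong guess, which is the same "blame yourself, not the machine" dynamic Graffiti exploited.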
Of course, voice assistants are already doing this to us a bit. Once I learn the phrasing that works reliably, I stick with it, even though repeating "Add ___ to shopping list" for every item feels like the equivalent of GOTO statements.
The challenge with learning anything new from our voicebots is the same as it's always been: discoverability. I don't have the patience to listen to Siri's canned response when it's done what I've asked it to, and I'm probably not going to listen to it tell me "Next time, you can say xyz and I'll work better."
The easiest way to learn a new language is to be a kid and listen to native speakers converse. Unless Amazon produces a kids' show starring a family of Echos speaking their machine code, I don't see that happening.
I miss the times before voice assistants, when the user was expected to train the voice interface. That way I could, if needed, negotiate pronunciation with the machine.
Your comment reminded me of an interesting video I saw a while back. The video is a talk [0] by a programmer explaining and demonstrating how he developed and used a voice recognition system to program in emacs, after a significant RSI injury.
The commands and syntax he uses to interact with the system might be a decent approximation of the custom English syntax you suggest.
I'd just like to see a voice interface that makes any attempt at natural error correction.
We do this in a hundred ways in English - emphasis, meta-words, too many to fold into a system. But with a little user prodding, I think you could cover enough of this ground to still sound almost natural.
Asst.: Would you like me to read that back to you?
User: Yes
A: The click brown ox...
U: The _quick_ brown _fox_...
A: Ok. The the quick brown fox jumped over the lazy log. Is that correct?
U: No.
A: Which part [is incorrect]?
U: The the quick
A: The the quick. What should it be?
U: The quick.
A: The quick brown fox jumped over the lazy log. Is that correct?
U: No. Lazy dog.
A: Lazy log?
U: No. Lazy dog.
A: Spell "log." <Or "Spell" + recording of user's voice saying ambiguous word>
U: D-O-G.
A: Ah. The quick brown fox jumped over the lazy dog. Do I have it now?
U: Yes.
A: Say 'send' to send message.
Agreed that voice systems are an incredibly hard problem, but I suspect rule-based natural language correction systems would go a long way.
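Something like the exchange above could be driven by a handful of hand-written rules. Here's a minimal sketch (the class, the index-based correction, and the retry threshold are all hypothetical; a real system would also have to locate the offending word from the user's spoken emphasis, which the index stands in for here):

    class CorrectionSession:
        """Read back a transcript and apply user corrections with simple rules."""

        MAX_RETRIES = 2  # after this many failed fixes of one word, ask the user to spell it

        def __init__(self, transcript: str):
            self.words = transcript.split()
            self.retries = {}  # word index -> number of correction attempts so far

        def read_back(self) -> str:
            return " ".join(self.words)

        def correct(self, index: int, heard: str) -> str:
            """Replace the word at `index` with whatever the recognizer heard for the fix."""
            attempts = self.retries.get(index, 0)
            if attempts >= self.MAX_RETRIES:
                # repeated failures: fall back to an unambiguous channel
                return f'Spell "{self.words[index]}".'
            self.retries[index] = attempts + 1
            self.words[index] = heard
            return f"{self.read_back()} ... is that correct?"

        def correct_by_spelling(self, index: int, letters: str) -> str:
            self.words[index] = letters.replace("-", "").lower()
            return f"Ah. {self.read_back()} Do I have it now?"

    session = CorrectionSession("the click brown ox jumped over the lazy log")
    print(session.correct(1, "quick"))              # fixes "click"
    print(session.correct(3, "fox"))                # fixes "ox"
    print(session.correct(8, "log"))                # recognizer keeps hearing "log"...
    print(session.correct(8, "log"))                # ...twice
    print(session.correct(8, "log"))                # rule kicks in: 'Spell "log".'
    print(session.correct_by_spelling(8, "D-O-G"))  # "Ah. ... the lazy dog ..."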
I'm not so sure. Maybe I'm naive, but I could see imitation and reinforcement learning becoming the missing bit of magic that brings the area forward.
From what we know, those techniques play an important role in how humans themselves go from baby gibberish to actual language fluency - and from what I understand there is already a lot of ongoing research into how reinforcement learning algorithms can be used to learn walking cycles and similar things.
So I'd say, if you find a way to read the non-/semi-verbal cues humans use to communicate understanding or misunderstanding among themselves, and use those as penalties/rewards for computers, you might be on the way to learning some kind of language that humans and computers can use to communicate.
The same goes in the other direction. There is a sector of software where human users not only frequently master highly intricate interfaces, they even do it out of their own motivation and pride themselves on achieving fluency: computer games. The secret here is a blazing fast feedback loop - which gives back penalty and reward cues in a way similar to reinforcement learning - and a carefully tuned learning curve which starts with simple, easy-to-learn concepts and gradually becomes more complex.
I would argue that if you combine those two techniques - using penalties and rewards to train the computers, and communicating back to the user how well the computer understood them - you might be able to move to a middle-ground representation without it seeming like much effort on the human side.
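As a toy illustration of that idea, here's a sketch that treats each candidate interpretation of an utterance as a bandit arm and uses crude conversational cues as the reward signal (the cue-to-reward mapping and intent names are invented; a real system would need far richer state than this):

    import random

    # User reaction -> reward. "carried_on" = the user just proceeded (good sign),
    # "rephrased" = they repeated themselves, "cancelled" = they aborted the action.
    CUE_REWARD = {"carried_on": 1.0, "rephrased": -0.5, "cancelled": -1.0}

    class InterpretationBandit:
        def __init__(self, interpretations, epsilon=0.1):
            self.values = {i: 0.0 for i in interpretations}  # running value estimates
            self.counts = {i: 0 for i in interpretations}
            self.epsilon = epsilon

        def choose(self):
            if random.random() < self.epsilon:             # explore occasionally
                return random.choice(list(self.values))
            return max(self.values, key=self.values.get)   # otherwise exploit

        def update(self, interpretation, cue):
            reward = CUE_REWARD[cue]
            self.counts[interpretation] += 1
            n = self.counts[interpretation]
            # incremental mean update of the value estimate
            self.values[interpretation] += (reward - self.values[interpretation]) / n

    bandit = InterpretationBandit(["play_artist", "play_playlist", "resume_podcast"])
    for _ in range(100):
        chosen = bandit.choose()
        # pretend the user almost always wanted "play_artist"
        cue = "carried_on" if chosen == "play_artist" else "rephrased"
        bandit.update(chosen, cue)
    print(bandit.values)  # "play_artist" should dominate after a while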
The first generation Newton was insanely good at recognizing really terrible cursive handwriting and terrible at recognizing tidy or printed handwriting. The second generation Newton was very good at recognizing tidy handwriting (and you could switch back to the old system if you were one of the freaks whose handwriting the old system grokked). The Newton also had a really nice correction system, e.g. gestures for changing the case of a single character or word. In my opinion the later Newton generations recognized handwriting as well as you could expect; when it failed you forgave it and could easily correct the errors; and it was decently fast on the 192MHz ARM CPUs.
I got a two-in-one Windows notebook and I was really, really impressed with the handwriting recognition. It can even read my cursive with like 95% accuracy, which I never imagined. That said, it's still slower than typing.
Finger-on-nose! It seems to be a question of utility and efficiency rather than primarily an emotional response to effectiveness of a given UI.
I type emails because it's faster. I handwrite class notes because I often use freeform groupings and diagrams that would make using a word processor painful.
On the other hand, learning to type took some effort and I see many people who do not know how to properly type. The case for handwriting might have looked better if you assumed people would not all want to become typists.
Back in the early 2000s, the hybrid models (the ones with a keyboard) became the most popular Windows tablet products. A keyboard is a pretty convenient way to work around the shortcomings of handwriting recognition.
Most owners eventually used them like laptops unless illustrating, or operating in a sort of "personal presentation mode," where folding the screen flat was convenient. For most people, handwriting recognition was a cool trick that didn't get used much (as I suspect it was for you, eventually).
The utility of a handwriting recognition system is ultimately constrained by the method of correcting mistakes. Even a 99% accuracy means that on average you'll be correcting about one word per paragraph, which is enough to feel like a common task. If the only method of correction is via the pen, then the options are basically to select from a set of guesses the system gives you (if it does that), or rewrite the word until it gets recognized.
But if a product has a keyboard, then you can just use the keyboard for corrections. And if you're going to use the keyboard for corrections, why not just use it for writing in the first place...
There is software that allows you to draw the characters on a touch screen, but it's still slower than typing and most people either type the words phonetically (e.g. in Pinyin) or use something like Cangjie which is based on the shape of character components. Shape-based methods are harder to learn but can result in faster input.
For Japanese there are keyboard-based romaji and kana based input methods.
The correct Hanzi/Kanji is either inferred from context or, as a back-up, selected by the user.
Almost nobody uses kana input in Japanese. There are specialized systems for newspaper editors, stenographers, etc., that are just as fast as English-language equivalents, but learning them is too difficult for most people to bother.
Huh? The standard Japanese keyboard layout is Hiragana-based, and every touchscreen Japanese keyboard I've used defaults to 9key kana input. Do people really normally use the romanji IME instead?
Yes. Despite the fact that kana are printed on the keys, pretty much no Japanese person who is not very elderly uses kana input. They use the romaji (no N) input instead, and that's the default setting for keyboards on Japanese PCs.
You are right that the ten-key method is more common on cell phones (and older feature phones had kana assigned to number keys), although romaji input is also available.
Not sure about that. I guess it depends on how good the prediction is for your text, but in general the big problem is that the step where you select what character you actually meant takes time.
I've always thought it'd be nice to be able to code at the speed of thought - you can often envisage the code or classes that'll make up part of a solution, it's just so slow filtering that through your fingers and the keyboard; if the neural interface 'thing' could be solved that'd be a winner I think, both for coding and manipulating the 'UI'.
It seems within the bounds of possibility to me (from having no experience in the field ;), if patterns could be predictably matched in real time, or in less time than it takes to whack the keyboard or move the mouse (can EEG or fMRI produce predictable patterns like that?).
>I've always thought it'd be nice to be able to code at the speed of thought - you can often envisage the code or classes that'll make up part of a solution,
In my experience that 'solution' isn't a solution but rather a poorly guessed attempt at one single part of it. Whenever I've thought I have the architecture of a class all figured out there is one corner case that destroys all the lovely logic and you either start from scratch on yet another brittle monolith, or accept that you are essentially writing spaghetti code. At that point documentation that explains what does what and why is the only way to 'solve' the issue of coding.
Unfortunately commercial-grade non-invasive EEG headsets can only reliably respond to things like intense emotional changes and the difference between sleep and awake states, but not much more than that. A while ago I had some moderate success classifying the difference between a subject watching an emotional short film vs. stand-up comedy. Mostly they are effective at picking up background noise and muscular movement and not much else.
This is based on my experience with the Muse [1] headset, which has 4 electrodes plus a reference electrode. I know there are similar devices with around a dozen or so electrodes, but I can't imagine they're a significant improvement.
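For anyone curious what that kind of classification experiment roughly looks like in code, here's a sketch of a typical pipeline (not the actual one used; the sampling rate, bands, and synthetic data are stand-ins): per-channel band-power features fed to a plain linear classifier.

    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression

    FS = 256                      # assumed headset sampling rate
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(epoch):
        """epoch: (channels, samples) -> flat vector of band powers per channel."""
        feats = []
        for channel in epoch:
            freqs, psd = welch(channel, fs=FS, nperseg=FS)
            for lo, hi in BANDS.values():
                feats.append(psd[(freqs >= lo) & (freqs < hi)].mean())
        return feats

    # Synthetic stand-in data: 40 ten-second epochs, 4 channels, two labels.
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((40, 4, FS * 10))
    labels = np.array([0] * 20 + [1] * 20)   # 0 = emotional film, 1 = stand-up

    X = np.array([band_powers(e) for e in epochs])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))  # meaningless on random data

In practice most of the work is artifact rejection (blinks, jaw clenches), which is exactly the "muscular movement" problem mentioned above.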
So I think we're still a long way off from what you're describing :(
This comment is way too long to justify whatever point you are making. First of all, how long ago do you mean? Are you 100 years old and people in the 1940's thought computers would be operated by pens?
But also, when my college had a demo Microsoft tablet in their store in 2003, I could sign my name as I usually do (typical poor male handwriting) and it recognized every letter. That seems pretty good to me.
I also had a Palm Pilot then and Graffiti sucked, but that's just one implementation.
In addition to what novia said about your first sentence, your comment is so weirdly off-base. You have very strong opinions about pen computing, but where in reality are they coming from?
No, we are not talking about the 1940s, why would anyone be talking about the 1940s? How does that make any sense at all?
We aren't talking about a Microsoft demo that worked well for you once, either. We are talking about the fact that, since the 2000s when it was tried repeatedly, people do not like to hand-write on computers. It is a strange thing not to be aware of when you're speaking so emphatically on the topic.
I don't have strong opinions about pen computing, but the comment I replied to was just giving a bad impression of the tech in the early 2000s. It worked quite well actually.
Since he didn't specify the timeframe, I gave him the benefit of the doubt and thought he was talking about before my time.
I think you have a good point. Something that was fascinating for me was that one of the things the search engine did after being acquired by IBM was look at search queries for common questions people would ask a search engine: taking 5 years of search engine query strings (no PII[1]) and turning them into a 'question' that someone might ask.
What was interesting is that it was clear that search engines have trained a lot of people to speak 'search engine' rather than stick with English. Instead of "When is Pizza Hut open?" you would see the query "Pizza Hut hours". That works for a search engine because you have presented it with a series of non-stopword terms in priority order.
It reminded me of the constraint-based naming papers that came out of MIT in the late 80's / early 90's. It also suggested to me ways to take dictated speech and reformat it into a search engine query. There are probably a couple of good academic papers in that data somewhere.
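The dictation-to-query direction is almost embarrassingly simple to sketch (toy stopword list, purely illustrative; a real system would also keep phrase structure and weight terms rather than just dropping words):

    # Turn dictated speech into "search engine-ese" by dropping stopwords.
    STOPWORDS = {
        "when", "is", "the", "a", "an", "what", "are", "do", "does", "of",
        "to", "in", "for", "how", "i", "you", "it", "on", "at",
    }

    def to_query(dictated: str) -> str:
        tokens = [t.strip("?.,!").lower() for t in dictated.split()]
        return " ".join(t for t in tokens if t and t not in STOPWORDS)

    print(to_query("When is Pizza Hut open?"))           # -> "pizza hut open"
    print(to_query("What are the hours of Pizza Hut?"))  # -> "hours pizza hut"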
That experience suggests to me that what you are proposing (a pidgin type language for AI) is a very likely possibility.
One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question. The most recent example was changing the question
"Do you think you will see Avengers: Infinity War"
With things like "Do you think you will see Avengers something I swear." The sounds are similar enough, and people have already jumped to the question with enough context, that they always answer as if he had said "Infinity War." How conversational systems will deal with that, I have no idea.
> One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question. The most recent example was changing the question
Yeah, that's exactly what I'm talking about. I suspect this works because the interviewees have a model of what they think Jimmy Fallon is going to ask, and when hearing a question, if they can't make out words or they sound wrong, the simplest thing is to assume you heard them wrong and think what it's likely he would have asked. To that, the answer is fairly obvious.
We've all experienced those moments when we're caught off guard when this fails. For example, you're at work and someone you're working with says something that sounds like it was wildly inappropriate, or out of character, or just doesn't follow the conversation at all, and you stop and say "excuse me? What was that?" because you think it's more likely you heard them wrong than that your fuzzy match of their statement ("Hairy Ass?") was correct.
I've been using search engines since Lycos and before, so I reflexively use what you call "search engine" language. However, due to the evolution of Google, I've started experimenting with more english-like queries, because Google is getting increasingly reluctant to search for what I tell it in a mechanistic way, often just throwing away words, and from what I've read it's increasingly trying to derive information from what used to be stopwords.
One thing that is very frustrating now is if there are two or three things out there related to a query, and Google prefers one of them, it's more difficult than ever to make small tweaks to the query to get the one you want. Of course, sometimes I don't know exactly what I want, but I do know I don't want the topic that Google is finding. The more "intelligent" it becomes, the harder it is to find things that diverge from its prejudices.
Then I stop myself. Isn’t it possible that he expects Alexa to recognize a prompt that’s close enough? A person certainly would. Perhaps Dad isn’t being obstreperous. Maybe he doesn’t know how to interact with a machine pretending to be human—especially after he missed the evolution of personal computing because of his disability.
...
Another problem: While voice-activated devices do understand natural language pretty well, the way most of us speak has been shaped by the syntax of digital searches. Dad’s speech hasn’t. He talks in an old-fashioned manner—one now dotted with the staccato march of time. “Alexa, tell us the origin and, uh, well, the significance, I suppose, of Christmas,” for example.
>One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question.
There was a similar thing going around the Japanese Web a few years ago, where you'd try to sneak in soundalikes into conversation without anybody noticing, but the example I remember vividly was replacing "irasshaimase!" ("welcome!") with "ikakusaimase!" ("it stinks of squid!").
Humans are really good at handling 'fuzzy' input. Something that is supposed to have meaning, and should if everything were heard perfectly, but which has some of the context garbled or wrong.
At least, assuming they have the proper context. Humans mostly fill in the blanks based on what they /expect/ to hear.
Maybe someday we'll get to a point where a device we classify as a 'computer' hosts something we would classify as an 'intelligent life form'; irrespective of origin. Something that isn't an analog organic processor inside of a 'bag of mostly salty water'. At that point persistent awareness and context(s) might provide that life form with a human-like meaning to exchanges with humans.
People usually judge AI systems based on superhuman performance criteria with almost no human baseline.
For example, both Google Translate and Facebook's translation system could reasonably be considered superhuman in performance, because a single system can translate between dozens of languages instantly, more accurately and better than any single human could across that range. Unfortunately, people compare these to a collection of the best translators in the world.
So you're exactly on track, that humans are heavily prejudiced against even simple mistakes that computers make, yet let consistent continuous mistakes slide for humans.
>Unfortunately people compare these to a collection of the best translators in the world.
I don't think that's really true. They're comparing them to the performance of a person who is a native speaker of both of the languages in question. That seems like a pretty reasonable comparison point, since it's basically the ceiling for performance on a translation task (leaving aside literary aspects of translation). If you know what the sentence in the source language means, and if you know the target language, then you can successfully translate. Translation isn't some kind of super difficult task that only specially trained translators can do well. Any human who speaks two languages fluently (which is not rare, looking at the world as a whole) can translate between them well.
> Any human who speaks two languages fluently (which is not rare, looking at the world as a whole) can translate between them well.
No, sadly, that's not the case. A bilingual person may create a passable translation, in that they can keep the main gist and that all the important words in the original text will appear in the translated text in some form, but that does not automatically make it a "well-translated" text. They are frequently contorted, using syntaxes that hardly appear in any natural sentences, and riddled with useless or wrong pronouns.
A good translation requires more skill than just knowing the two languages.
Right, but I don't think many people expect machine translation to do better than that. And I don't think the ability to do good translations is quite as rare among bilinguals as you're making out. Even kids tend to be pretty good at it according to some research: https://web.stanford.edu/~hakuta/www/research/publications/(...
> They are frequently contorted, using syntaxes that hardly appear in any natural sentences, and riddled with useless or wrong pronouns.
I find it hard to believe that this is the case if the translator is a native speaker of the target language. I mean, I might do a bad job of translating a Spanish text to English (because my Spanish sucks), but my translation isn't going to be bad English, it's just going to be inaccurate.
Consider yourself lucky, then. You are speaking English, with its thousands (if not millions) of competent translators translating everything remotely interesting from other languages. The bar for "good translation" is kept reasonably high for English.
As a Korean speaker, if I walk into a bookstore and pick up any book translated from English, and read a page or two, chances are that I will find at least one sentence where I can see what the original English expression must have been, because the translator chose a wrong Korean word which sticks out like a sore thumb. Like, using a word for "(economic/technological) development" to describe "advanced cancer". Or translating "it may seem excessive [to the reader] ..." into "we can consider it as excessive ..."
And yes, these translators don't think twice about making sentences that no native speaker would be caught speaking. Some even defend the practice by saying they are faithful to the European syntax of the original text! Gah.
> Like, using a word for "(economic/technological) development" to describe "advanced cancer"
That sounds like a mistake caused by the translator having a (relatively) poor knowledge of English. A bilingual English/Korean speaker wouldn't make that mistake. I mean, I don't know your linguistic background, but you clearly know enough English and Korean to know that that's a bad translation, and you presumably wouldn't have made the same mistake if you'd been translating the book.
>Some even defend the practice by saying they are faithful to the European syntax of the original text!
I think there's always a tension between making the translation faithful to the original text and making it idiomatic. That's partly a matter of taste, especially in literature.
> A bilingual English/Korean speaker wouldn't make that mistake.
Well, "bilingual" is not black and white. I think you have a point here, but considering that people who are paid to translate can't get these stuff right, the argument veers into the territory of "no true bilingual person".
Anyway, my pet theory is that it is surprisingly hard to translate from language A to B, even when you are reasonably good at both A and B. Our brain is wired to spontaneously generate sentences: given a situation, it effortlessly generates a sentence that perfectly matches it. Unfortunately, it is not trained at all for "Given this sentence in language A, re-create the same situation in your mind and generate a sentence in language B that conveys the same meaning." In a sense, it is like acting. Everybody can laugh on their own: to convincingly portray someone else laughing is quite another matter.
Not entirely, but it is definitely possible for someone to be a native speaker of two languages, and they wouldn't make those kinds of mistakes if they were.
>They're comparing them to the performance of a person who is a native speaker of both of the languages in question.
Which is synonymous with the best translators in the world. Those people are relatively few and far between, honestly - I've traveled a lot and I'd argue that natively bilingual people are quite rare.
Depends on which part of the world you're in. Have you been to the USA? English/Spanish bilingualism is pretty common there. And there are lots of places where it's completely unremarkable for children to grow up speaking two languages.
This is well said, but one reason this double standard is rational is that current AI systems are far worse at recovery-from-error than humans are. A great example of this is Alexa: if a workflow requires more than one statement to complete, and Alexa fails to understand what you say, you are at the mercy of brittle conversation logic (not AI) to recover. More often than not you have to start over, or worse yet execute an incorrect action and then do something new to cancel it. In contrast, even humans who can barely understand each other can build understanding gradually because of the context and knowledge they share.
Our best AIs are superhuman only at tightly scoped tasks, and our prejudice encodes the utility we get from the flexibility and resilience of general intelligence.
> our prejudice encodes the utility we get from the flexibility and resilience of general intelligence
I don't think any particular human is so general in intelligence. We can do stuff related to day to day survival (walk, talk, eat, etc) and then we have one or a few skills to earn a living. Someone is good at programming, another at sales, and so on - nobody is best at all tasks.
We're lousy at general tasks. General intelligence includes tasks we can't even imagine yet.
For thousands of years the whole of humanity survived with a mythical / naive understanding of the world. We can't even understand the world in one lifetime. Similarly, we don't even understand our bodies well enough, even with today's technology. I think human intelligence remained the same; what evolved was the culture, which is a different beast. During the evolution of computers, CPUs remained basically the same (just faster; they are all Turing machines) - what evolved was the software, culminating in current-day AI/ML.
What you're talking about is better explained by prejudice against computers based on past experience, but we're bad at predicting the evolution of computing and our prejudices are lagging.
I might use this in future critical discussions of AI. “It’s not really intelligent.” Yeah, well, neither am I. On a more serious note, it seems obvious to me that technology is incremental, and we are where we are. Given 20 more years of peacetime, we’ll be further along. When VGA 320x200x256 arrived it was dubbed photorealistic. I wonder what HN would have had to say about that.
Being able to do many things at a level below what trained humans can isn't what any reasonable person would call superhuman performance. If machine translation could perform at the level of human translators for even one pair of languages (like English-Mandarin), that would be impressive. That would be the standard people apply. But they very clearly can't.
Generally people think superhuman = better than the best humans. I understand this and it's an obvious choice, but it assumes that humans are measured on an objective scale of quality for a task, which is rarely the case. Being on the front line of deploying ML systems, I think it's the wrong way to measure it.
I think superhuman should be considered relative to the competence level of an average person who has the average amount of training on the task. This is because, at the "business decision" level, if I am evaluating between hiring a human with a few months or a year of training and a tensorflow docker container that is reliably good/bad, then I am going to pick the container every time.
That's what is relevant today - and the container will get better.
Well, not explicitly or in any measurable terms [1]. The term 'superhuman' lacks technical depth in the sense of measurement. So for the purposes of measuring the systems we build vs. human capability, it's a pretty terrible measure.
> I imagine we'll probably develop some AI pidgin language that provides more formality, so that as long as we follow it, AI voice interactions become less annoying (though you have to spend a bit of time up front formulating your request).
There's been some tentative discussion of using Lojban as a basis for just that!
>> I imagine we'll probably develop some AI pidgin language that provides more formality, so that as long as we follow it, AI voice interactions become less annoying (though you have to spend a bit of time up front formulating your request).
That's what Roger Schank is advocating against in his article: artificial intelligence that makes humans think like computers, rather than computers that think like humans.
Personally, I think it's a terrible idea and the path to becoming like the Borg: a species of people who "enhanced" themselves until they lost everything that made them people.
Also: people may get stuff wrong a lot of the time (way more than 10%). But, you can always make a human understand what you mean with enough effort. Not so with a "computer" (i.e. an NLP system). Currently, if a computer can't process an utterance, trying to explain it will confuse it even further.
What's bad about it is that it leads to a "market for lemons" for anyone working in the A.I. field in either engineering or sales. People see so much bullshit that they come to the conclusion that it is all bullshit.
Sure, but Watson isn’t really a product and IBM isn’t really a merchant here. There’s never going to be a Watson review anywhere because (AFAICT) Watson is just a marketing name for a wide range of technologies, none of which do any of the stuff shown in the TV commercials. That’s the problem.
I don’t even know who those ads are targeting. Anyone who knows anything at all about this stuff will be dismayed at the sheer BS of it all. Everyone else seeing the ads isn’t in a position to steer customers towards IBM for all their AI needs. They really need to kill the whole ad campaign, surely it’s doing IBM more harm than good.
No, it's bad because it drives down the potential market value for good products that could become available in the market because the market expects that all AI is inferior to expectations.
Temporarily. Once past (and possibly during) the "Trough of Disillusionment"[1], products that really do solve problems that people are willing to pay to have solved will do very well.
The "market for lemons" theory proved that used car dealerships can't exist. And yet they do. People understand that ads are just a hint and a tease, not a precise value proposition.
That isn't the conclusion of the market for lemons. It predicted that the price of used cars would be substantially depressed due to unknown quality, especially when the seller had little reputation. Considering the huge depreciation a new car experiences as soon as it is driven off the lot and that prices are lower when buying from individual sellers rather than used car dealerships, there is plenty of evidence that the theory was correct.
What I find interesting is that free Carfax reports are becoming pretty common, which seems like a mechanism to deal with the problem of a market with only lemons. But I was browsing cars online and started looking for "good" ones, defined as no accidents and regular oil changes at the dealer, and interestingly they were almost non-existent. So in fact, the proposition that cars for sale will be lemons seems to be true even though the compensatory mechanisms exist. It would appear maybe buyers don't utilize information to their advantage, which reminds me of a recent article about how investors don't seem to digest all the information in SEC filings, even though orthodox market theory assumes prices incorporate all public data.
This is not just Watson and IBM. Many, many people in AI make grandiose claims and throw around big words like "Natural Language Understanding" "scene understanding" or "object recognition" etc.
And it is a very old problem, at least from the time of Drew McDermott and "Artificial Intelligence meets Natural Stupidity":
However, in AI, our programs to a great degree are problems rather than solutions. If a researcher tries to write an "understanding" program, it isn't because he has thought of a better way of implementing this well-understood task, but because he thinks he can come closer to writing the _first_ implementation. If he calls the main loop of his program UNDERSTAND, he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself, and enrage a lot of others.
This is Marketing 101 though. It's easier to sell someone a dream or some emotional state than it is to sell an actual product/thing that you have. I'd give people a little more credit. Adults know what advertisements are and that they're all phony.
I don't disagree, although I'd be a bit more nuanced than that. It is like herbal supplements, you can say how great they make you feel but you have to avoid making medical claims that you can't back up.
I don't think anyone would have flinched at setting the stage like "At IBM we're working on technology so that some day your interaction with a computer can be as simple as sitting down and having a conversation."
Setting expectations you cannot meet is something you try to avoid in Marketing. Because you get angry op eds like the one that kicked off this conversation.
>I don't think anyone would have flinched at setting the stage like "At IBM we're working on technology so that some day your interaction with a computer can be as simple as sitting down and having a conversation."
Like those old AT&T commercials (available on YouTube) where AT&T made all kinds of predictions about the future that seemed crazy in the 1980's, but are reality 35 years later. Video calls, instant electronic toll booth payment, etc...
A lot of non-technical decision makers don't think this is phony, and they are making life very difficult for engineers in their companies. We will all remember this though. For the future.
There was a time when I'd introduce people to 'Eliza'-type programs. Those programs stashed phrases typed in by users. When new users typed in a phrase, the programs would parrot back the stashed phrases ... based on really crude algorithms, or even random selection.
Nonetheless, I watched people get really worked up about what the computer was 'saying'. Partly because of the sarcastic stuff people would say, partly because of their expectations about 'cybernetic brains'.
Now the 'Cyc'le is back. And people actually working on this really hard problem are not helped by dumb marketing.
You guys couldn't have picked a more perfect example: Red Bull settled a class action lawsuit for false advertising, not because of "gives you wings", but because their advertising falsely suggested scientific support for their product being better than caffeine-only products.
How many is “many” though? I’m assuming slightly more than thought Star Wars was real, and slightly fewer than think Iron Man was real. That’s not ignorance, that’s mental illness.
It’s a pretty serious problem if public company executives who choose how to spend investors’ dollars actually believe SpaceX’s famous people are on Mars. Then they pay SpaceX to start carrying goods to Mars to sell to the Martians. SpaceX obliges with multi-million dollar contracts, knowing they don’t have a way to get to Mars but have some rockets that can put satellites into orbit (not the same).
We’d all see the problem with such a situation. People with a 6th grade knowledge on space travel understand that the commercials were BS marketing and we should fire the executives for not knowing the same or doing their due diligence. We’d be mad at SpaceX for taking advantage of companies and failing to reach Mars as promised.
In my limited understanding, isn't this just the state of A.I. in general? I don't know of anyone trying to solve the general intelligence problem. Everyone's just finding formulas for specific applications and using M.L. to curve-fit.
Creating an impression of a capability without legally promising it is the essence of marketing, especially in the corporate IT consulting world, which is 90% garbage and hugely expensive.
Google Assistant, Apple Siri and Amazon Echo are all quite fancy natural language parsers. After parsing, it's just a group of "dumb" (wo)man-made widgets.
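A caricature of that architecture, just to illustrate where the "smart" part stops (the intent names, slot handling, and handlers are all invented):

    # The "fancy" part ends at intent classification + slot filling; everything
    # after that is an ordinary, hand-written handler.
    def parse_utterance(text: str):
        """Stand-in for the NLU front end: returns (intent, slots)."""
        text = text.lower()
        if "weather" in text:
            city = text.split(" in ")[-1].strip(" ?.") if " in " in text else "here"
            return "get_weather", {"city": city}
        if "timer" in text:
            minutes = next((w for w in text.split() if w.isdigit()), "5")
            return "set_timer", {"minutes": minutes}
        return "fallback", {}

    # The "dumb widgets": plain functions wired to each intent.
    HANDLERS = {
        "get_weather": lambda slots: f"Looking up the weather in {slots['city']}...",
        "set_timer":   lambda slots: f"Timer set for {slots['minutes']} minutes.",
        "fallback":    lambda slots: "Sorry, I didn't get that.",
    }

    def assistant(text: str) -> str:
        intent, slots = parse_utterance(text)
        return HANDLERS[intent](slots)

    print(assistant("What's the weather in Lisbon"))
    print(assistant("Set a timer for 10 minutes"))

Everything past parse_utterance is ordinary plumbing; that's the "group of dumb widgets".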
That's not entirely true. Google Search will answer a question with an automatically extracted excerpt from a web page. Watson did something similar with its Jeopardy! system.
Is that not parsing and keyword searching? Once again there's no "cognitive computing" going on.
Which is a really weird and nebulous thing to define, by the way. I think we aren't going to have super convincing cognitive computing until we have something approaching AGI. Which of course is waaay different from the AI and machine learning that is popular today, despite most laypeople conflating any mention of AI with AGI. Of course, when IBM is making an ad, they are largely aiming at laypeople.
Search and command interfaces are doable (within reasonable expectations); dialogs just aren't, without eye-popping engineering challenges (like 100k branches hand-engineered onto trees, plus lots of deep learning).
Also, we (science) know shockingly little about real dialogs and their challenges, as far as I can find out - but I am not a linguist and am angling here for some killer references!
Kinda like how Tesla advertises Autopilot as a self-driving car that's safer than human drivers?
> Full Self-Driving Hardware on All Cars
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Am I extreme in feeling someone should go to jail for that? It was bad enough when they were originally advertising it, but now that they're defending it even after people died...ugh.
Musk's statements, and Tesla in general, have gone to cultivating an impression that Tesla almost has self-driving. An impression that, okay, it's not perfect, but it's good enough that you can act like it is for most of the time. This impression is far enough from the truth that it's fundamentally dishonest, particularly when the response from Tesla after incidents amounts to "the driver is completely at fault here, we don't need to change anything" (in other words, their marketing says it's basically self-driving, their legal defense says self-driving is only at your own risk).
In the moralistic sense, yes, Tesla needs to be reprimanded for its actions. However, lying in this manner is often well-protected by law, if you manage to include enough legalese disclaimers somewhere, and I suspect Tesla has enough lawyers to make sure that legalese is sufficient to disclaim all liability.
While I don't necessarily agree with how they've advertised it I think they are legally safe. All Model 3's DO have the HARDWARE to support full-self driving, even if the software is not there. And regarding the software, it has been shown to be about 40% safer than humans where it is used, which is what they've claimed.
Because humans can do driving with comparable (worse, actually) hardware. Superhuman reflexes mean that software with superhuman driving abilities can exist.
This is the part that can be emulated in software. All you have to prove is that what ever platform/language you're using for the programming is Turing Complete, which is the case for almost all of the most popular languages.
So can I run Crysis on my TI calculator? I'm sure whatever platform/language is running on it is Turing Complete. I think you missed the point that the brain is also hardware.
> All Model 3's DO have the HARDWARE to support full-self driving, even if the software is not there.
Please provide evidence.
Given that the only full-self driving system out there is Waymo's, which uses completely different hardware than Tesla's, it is impossible to back your claim, unless you develop a fully-self-driving system on top of Tesla's hardware.
So until that is done, your claim is false, no matter how many caps you use.
Sure, they have cameras to see and a computer to do rigorous processing. We can compare this to a human, who uses their vision to perceive the outside world while driving. (You could also argue humans use their hearing; well, the Teslas have mics if they really wanted to use that.)
> they have cameras to see and a computer to do rigorous processing. We can compare this to a human, who uses their vision to perceive the outside world while driving
My phone also has cameras and a processor. Are you implying that my phone is as equipped to drive a car as a human?
A monkey has eyes, a brain, arms and legs. Are you implying that a monkey is as equipped to drive a car as a human?
I'd say the monkey has most of the hardware but not the software, analogously. As for the phone, right, I was assuming the cars have enough processing power and memory to do the necessary processing. But granted, they haven't demonstrated full self-driving on their current hardware, so I will have to concede that we do not know how much processing and power are needed to be truly self-driving. For all we know, that last 10% of self-driving capability may require an exponential increase in the amount of processing required.
If we ask what the reasonable expectation is after an advertisement tells a customer that the capability "exceeds human safety", I would say that the average customer thinks of a fully automated system.
This couldn't be further from the truth as automated vehicles still suffer from edge cases (remember the lethal accident involving a concrete pillar) where humans can easily make better decisions.
A system that is advertised as superior to human judgement ought to strictly improve on human performance. Nobody expects that to mean that the car drives perfectly 95% of the time but accidentally kills you in a weird situation. This 'idiot savant' characteristic of ML algorithms is what makes them still dangerous in day-to-day situations.
Yes I totally agree, I think there should be some regulation regarding this area. At least in terms of being clear when advertising. I think it's ok to deploy such a system where in some/most cases the AI will help, but it needs to be made apparent that it can and will fail in some seemingly simple cases.
That’s probably a gambit to avoiding being easily sued but it’s a really bad faith attempt to mislead. Most people are going to read that and assume that a Tesla is self-driving, as evidenced by the multiple accidents caused by someone trusting the car too much. Until that’s real they shouldn’t be allowed to advertise something which hasn’t shipped.
I hand you a gun and say "you can point this gun at anyone, pull the trigger, and it wont harm them." You point the gun at a crowd, pull the trigger, it goes off and kills someone.
Who do you blame?
No one is saying guns, or cars, aren't dangerous. However this kind of false advertising leads people to use cars in dangerous ways while believing they are safe because they've been lied to.
(Side note, there is some fine print you never read that says the safe-gun technology only works when you point the gun at a single person and doesn't work on crowds.)
It means this will kill people, but fewer people than humans would, and they have actual data that backs this assertion up.
The benchmark is not an alert, cautious, and sober driver, because that's often not how actual people drive. So right now it's sometimes safer to drive yourself; other times it really is safer to use Autopilot. Net result: fewer people die.
Autopilot should not be accessible to drivers until their driving habits have been assessed and baselined. If autopilot is known to drive more safely than the individual human driver in environments X, Y, and Z then it should be made available in those environments, if not encouraged. That might not make an easy sell since a major value prop is inaccessible to many of the people who really want to use it, but it's the most reasonable, safest path.
I also imagine that cars will learn from both human and each others' driving patterns over time, which (under effective guidance) should enable an enormous improvement in a relatively short period of time.
I can grab the data, but the surprising thing about fatal crashes is the typical circumstance. It's an experienced driver driving during the day, in good weather, on a highway, in a sedan-style car, sober. There are two ways to interpret this data. The first is to assume that crashes are semi-randomly distributed, so since this is probably the most typical driving condition, it naturally follows that that's where we'd expect to see the most fatalities.
However, I take the other interpretation. I don't think crashes are randomly distributed. The fact that that scenario is absolutely perfect is really the problem, because of how humans behave. All a crash takes is a second of attention lapse at a really bad time. And in such perfect circumstances we get bored and take road safety for granted, as opposed to driving at night or navigating around some tricky curves. That's a perfect scenario to end up getting yourself killed in. And more importantly here, it's also the absolutely perfect scenario for self-driving vehicles, which will drive under such conditions far better than humans simply because it's extremely trivial, and they will never suffer from boredom or lack of attention.
Just about every expert agrees though that their hardware is NOT capable without LiDAR (which is probably why they churn through people running their Autopilot program), although proving that in court is a whole thing.
Or maybe there is no basis for making such a claim.
Or well, I just put four cameras on my car and inserted their USB wires into an orange. I declare that this is enough for FSD now. I don't have to prove anything for such an absurd statement?
And how do you know that the present hardware is enough to deliver full self-driving capability? I don't think anyone knows what precise hardware is required at this point as it wasn't achieved yet.
So, do you think it may be possible that people do understand the distinction and are STILL not convinced?
I, and the law, would blame the person. They should have no particular reason to believe your claim and absolutely no reason to perform a potentially lethal experiment on people based on your claim.
You may be breaking the law as well, depending on the gun laws in your area, but as I understand it (IANAL) the manslaughter charge falls entirely on the shooter.
I would blame the person, because the best case scenario of shooting an allegedly harmless gun into a crowd is equivalent in effect to the worst case scenario of doing nothing in the first place.
One person died from that accident. There are now at least 4 deaths where Tesla's autopilot was involved (most of the deaths in China don't get much publicity, and I wouldn't be surprised if there are more). And the statistics do not back up your claim that Tesla is safer (despite their attempts to spin it that way).
No, the NHTSA report says nothing about how Tesla's autopilot compares to human driving. Here are two comments I made last week about this:
In that study there are two buckets: one is total Tesla miles in TACC-enabled cars, and then, after the update, total Tesla miles in cars with TACC + Autosteer, and they calculated crash rates based on airbag deployments. Human-driven miles are going to dominate both of those buckets, and there's a reason the NHTSA report makes zero claims about Tesla's safety relative to human drivers: it's totally outside the scope of the study. Then add in that some researchers who are skeptical of the methodology and have been asking for the raw data from NHTSA/Tesla have yet to receive it.
Autosteer, however, is relatively unique to Tesla. That’s what makes singling out Autosteer as the source of a 40 percent drop so curious. Forward collision warning and automatic emergency braking were introduced just months before the introduction of Autosteer in October 2015. A previous IIHS study shows that both the collision warning and auto emergency braking can deliver a similar reduction in crashes.
I'm not sure what "safer" means. Not sure what posting your insistence in other threads has to do with this. 10x or 100x the number of deaths would be a small price to pay to get everyone in autonomous vehicles. It's air travel, all over again, and there's a price to pay.
You can define it however you want. By any common sense definition of safety Tesla has not proven that their autopilot is 'safer' than a human driver.
>10x or 100x the number of deaths would be a small price to pay to get everyone in autonomous vehicles.
This supposes a couple things. Mainly that autonomous vehicles will become safer than human drivers, that you know roughly how many humans will have to die to achieve that and that those humans have to die to achieve it. Those are all unknown at this point and even if you disagree about the first one (which I expect you might) you still have to grant me two and three.
Ignoring Tesla, self driving cars will almost definitely be safer. People die in cars constantly. People don't have to die if companies don't rush the tech to market like Tesla plans to. To be fair, I blame the drivers, but I still think the aggressive marketing pretty much guaranteed someone would be stupid like that, so Tesla should share some of the blame as well.
I don't think we are opposed to autonomous vehicles.
Can Autopilot not run passively and prevent human errors? Why do the two options presented seem to be only "a human behind the wheel, not assisted by even AEB" and "full Autopilot with no human in the car at all"?
> Do you know how many people die in normal, non-self-driving cars
False equivalence. You need to at least compare per mile, or better yet by driver demographic, since most Tesla drivers are in the high-income and maybe the safety-conscious bracket.
If those people die because the car fails catastrophically, and predictably, rather than the usual reasons it would be news. This is not about 100% or “the perfect is the enemy of the good” or any bullshit Utopianism. This is about a company marketing a flawed product deceptively for money, while letting useful idiots cover their asses with dreams of level 5 automation that aren’t even close to the horizon.
I think if anything you're overly conservative for thinking someone, rather than many people, needs jail time for it. I would look up the line of people who made and supported the decision for that kind of fraudulent marketing and drag them all into court.
Financial ruin for the company that was willing to put that BS below its letterhead? Yeah, sure, in proportion with the harm caused. That said, human life isn't sacred. It and everything else gets traded for dollars all the time. A death or three shouldn't cripple a big company like Tesla unless they were playing so fast and loose that there's significant punitive damages.
In a large company like Tesla it shouldn't be marketing's job to restrain itself. That's not safe at that scale. There should be someone, or a group, whose job it is to prevent marketing from getting ahead of reality, just like it's security's job to spend all day playing whack-a-mole with the stupid ideas devs (especially the web ones) dream up. Efficiently mediating conflicting interests like that is what the corporate control structure is for.
People using your product in accordance with your marketing should be considered almost the same as in accordance with your documentation. While "people dying while using the product in accordance with TFM" is not treated as strict liability it's pushing awfully close.
I see it as a simple case of the company owning up to its actions. It failed to manage itself properly; marketing got ahead of reality. If you screw up and someone gets hurt, then you pay up.
Are you saying that you see human life as something that can be bought and sold?
It's one thing to talk about punitive damages and liability; these are factual mechanisms of our legal system. But just because damages can be paid, and are on a regular basis, that does not imply there is some socially acceptable, let alone codified, price for a human life. And we should hope for our own sake there never is.
I agree that marketing should not be allowed to let their imagination run wild to the detriment of the company.
In the case of the liability bit, IANAL, but that's likely to differ between industries. Some sectors like aviation are highly regulated and require certification of the aircraft, and the airplane flight manual is tied to that serial number and is expected to be correct and free of gross errors for normal operation. So liability can vary. Are you suggesting from experience that there is no liability in the case of Tesla, taking into account their industry's context? I don't know enough about their industry to judge, just looking for clarification.
OP is probably referring to the fact that in wrongful death suits, society has put a very tangible financial number on the value of human life. This made it possible for corporations to make trade-offs between profit and liability, creating the potential that someone could make enough profit to justify risking others' lives.
Punitive damages go part way to help prevent this, but not far enough to guarantee that it never happens.
Had society truly believed life to be sacred, I suspect we'd have very different business practices, and penalties that are not limited to financial ruin.
Well, unfortunately we also believe that corporations are sacred, so when bad things happen we shake our fists and collect a couple dollars. But the guilty corporation is never put to death. (Well, rarely ever..)
It's not that clear-cut. Yes, killing and maiming is bad, but those things happen at large scale now (and will for the foreseeable future), and you have to be able to have a reasonable discussion about the trade-offs. "Well, we can't guarantee we won't kill anyone in an edge case, so let's all just go home" isn't an option.
You can build a highway overpass with a center support on the median for X, and there will be a small chance of someone crashing into it and getting killed. You could design one without a support on the median, but it will cost Y (Y is substantially more than X). Now scale that decision to all overpasses and you've got a substantial body count. At the end of the day it's a trade-off between lives/injury and dollars.
Agreed that there are trade-offs made, of course. But this society spends a ton of time trying to prevent all kinds of death. That's because life is sacred.
It seems odd to argue that. Yeah of course we can't stop doing things, but it doesn't mean we don't try really hard to avoid killing people.
Yes, and both Elon Musk as an individual and Tesla Motors as an organization agree. How they approach that idea is somewhat different from what we're used to though.
Their basic assertions are (in my words):
1. Vision- and radar-based technology, along with current-generation GPUs, related hardware, and sufficiently developed software, will be able to make cars 10x safer.
2. How quickly total deaths are reduced is tied directly to how quickly and widely such technology is rolled out and used.
3. Running in 'shadow' mode is a good source of data collection to inform improvements in the software.
4. Having the software/hardware actually control cars is an even better source of data collection to accelerate development.
5. There is additional, incremental risk created when the software/hardware is used in an early state.
6. This is key: the total risk over time is lessened with fast, aggressive rollouts of incomplete software and hardware, because it will allow a larger group of people to have access to more robust, safer software sooner than otherwise would be possible.
That last point is the balance: is the small additional risk Tesla is subjecting early participants to outweighed by how much more quickly the collected data will allow Tesla to produce a more complete safety solution?
We don't know for sure yet, but I think the odds are pretty good that pushing hard now will produce more total safety over time.
> life is sacred
This is my background as well, and it's an opinion I personally hold.
At the same time, larger decisions, made by society, by individuals, and by companies, must put some sort of value on life. And different values on different lives. Talking about how much a life is worth is a taboo topic, but it's something that is considered, consciously or otherwise, all day, every day, by many people, myself included.
Almost every big company, Tesla Motors included, makes decisions based on these calculations all the time. Being a 'different kind of company' in many ways, Tesla makes these calculations somewhat differently.
That's a pretty cynical calculation to make. And no, we don't typically accept untested additional risk in the name of saving untold numbers later. We test first. There's a reason why drugs are tested on animals first, then in trials, then made broadly available, but still with scrutiny and standards. This is a well-trodden philosophical argument, but we seem to have accepted that we don't kill a few to save others. We don't fly with untested jet engines. We don't even sell cars without crashing a few to test them. The other companies involved in self-driving technology have been in testing mode. They have not skipped a step and headed straight for broad availability.
Why then does Tesla have a pass? There's no evidence it's actually safer. And there's no evidence that the company is truthful. We don't accept when a pharmaceutical company says, "no, it's good. Trust us." That would be crazy. We should not accept Tesla's assurances with blind faith simply because they have better marketing and a questionable ethical standard.
Are you for real? Statistics prove that Tesla is right on this. I'm not saying human life has no value, and I do empathise with the people who lost someone due to a Tesla crash, but we can see what the percentage is. It IS lower compared to human error, and yes, that includes DUI and every other time it was human error.
Tesla is safer for the community based on the statistical data. Also it is safer than a human driving. I’m not saying it will never crash just that it will crash less than a human.
Maybe another way to compare it would be to an autopilot in airplanes. Sometimes it fails so we might as well be angry at the airline companies if they advertise it. But a pilot would make more mistakes than the autopilot with the current workload.
Side note: if you are going to downvote you can comment on why. Negative feedback is very useful and helps build better argumentative skills.
What is lower, and compared to what? I am tired of these vague, misleading statements by Elon Musk, Tesla and their supporters.
They claimed Autopilot + driver is 40% safer than a driver alone. And that includes Automatic emergency braking. Is it a surprise that AEB reduces accidents?
Why is that used as proof that Tesla's software is FSD? FSD implies that the car is capable of driving without a driver.
Is Autopilot alone safer than a driver? There is no proof of that.
But what you should compare is Autopilot alone with a driver assisted by Autopilot. So, is Autopilot better than driver + Autopilot? I can assure you it's not; the driver still adds value.
One of the big issues here is the concept of "human error", as though a statistical average is some sort of monolith that can be applied to all humans. The truth is that many people lack the basic skills and attributes needed to drive safely (concentration, attention to detail, the ability to multitask effectively). This is true even before factoring in the millions of elderly people who are unable to tie their own shoes or operate a self-checkout kiosk and who are out on the road making things more dangerous for everyone (including themselves). Performing virtually any task better than "the average human" is a very, very low bar to clear. The question is, what standard should we be using to determine who (or what, in the case of self-driving cars) should be allowed to pilot a multi-ton vehicle on public streets? The truth is that we favor convenience over safety (and I'm not even arguing that this is a bad thing - just that it is an often ignored fact) in many facets of society, and in order to have an intelligent debate about self-driving cars we ought to recognize and acknowledge this fact.
Claiming that the hardware is ready and we're just waiting on the software is at best a vacuous claim and worse, a fraudulent one.
I can hammer two web-cams onto a rotating plank and claim that's "full self-driving capability". Just look at the eyes in my head! Now I just need to iron out some kinks in the software...
It's not just them. A lot of people think that because NNs can do object identification that self driving cars are almost here. It's similar to thinking that word recognition and pattern matching are going to give us conversational AI. Things like that are prerequisites for AI, but they are hardly the whole thing. Companies are spending billions on this stuff and I hope they're just hedging in case it pans out rather than planning their future around it.
Many people seem to think the arrival of fully autonomous cars is right around the corner; when I say that it will likely take at least 10 years, but probably more like 20, until there will be significant numbers of those on the road they are very surprised.
I admit I recently upped that number from "5-10 years minimum" after seeing the media reaction to recent accidents related to driver-assist systems — it became clear to me that the old adage "the autonomous car does not need to be perfect, it only needs to be better than an {average, good, skilled} human driver" is simply wrong, because autonomous car brand X will be regarded - and judged - as if all X cars were one driver. Thus, these cars actually need to be much better than humans in essentially any traffic situation, because it simply won't be acceptable for "autonomous cars of brand X" to kill a few thousand people each year in every country. People would equate this to a single human driver mowing down thousands.
This effect is not necessarily bad. I don't have the inside information, but it's likely (or at least plausible) that manufacturers invested in this technology have observed the same and drawn a similar conclusion.
The question now is whether the car makers make autonomous cars that good, or invest in massive PR campaigns to change public perception towards "each autonomous car is like an individual driver and it doesn't need to do much better than that".
We've seen the latter before. Jaywalking and avenues are just two examples.
Gill Pratt, the head of auto-car R&D at Toyota, has been agreeing with you since he joined in 2015, essentially saying that auto-cars must be better than very good human drivers in all parts of driving, even those that occur rarely. But it is mastery of those rare events that will be hardest to acquire, since few useful examples are available from which to train. Thus those last few yards to the goal will add years to the development time in subtle ways that will be maddeningly invisible to the public -- until seemingly clueless crashes arise on public roads, perhaps like the one last month which suspended all of Toyota's auto-car testing on public roads until further notice.
Well, jaywalking is only illegal in 10 countries, so PR campaigns won't be enough for that. The benefits of self-driving cars are so massive that they will trump anything like that - but we are still a long way away from practical self-driving cars.
Self-driving cars are not just almost there; they are already a reality.
Waymo has been testing self-driving cars that carry hailed passengers without a safety driver since last October, and soon it will start a commercial service.
But it’s technically correct: the hardware is present. The software isn’t there yet, that’s true. So is this a “lie by omission”? Reading the whole page, there’s this quote: “It is not possible to know exactly when each element of the functionality described above will be available.” What else needs to be said?
At some point, critical thinking is required to navigate the world. The customer should ask “and when will the software catch up?” And I’ll concede that the overwhelming majority of the population doesn’t think like that. Should Tesla update the site with statements to confirm the negative cases? e.g. “Our software can’t yet do X, Y, Z, etc”
Since we have yet to build a system that is capable of "full self-driving ... at a safety level substantially greater than that of a human driver" I'm not even sure they can make the claim that their existing hardware supports this. They might have high hopes, but there's no way to know what combination of hardware and software will be necessary until the problem is actually solved.
Until Tesla can provide a real-world demonstration of an existing vehicle running at L5 automation after only a software upgrade, it's unproven whether the required hardware is present or not.
At some point, they will have to provide said demo or they will be facing a class action lawsuit that will bankrupt the company.
I don't even think a lot of people would even consider the software side when reading that sentence. To the layperson, "Full self-driving hardware on all cars" sounds exactly like "These cars are self-driving".
That's false too. How would one prove that the currently shipped hardware is sufficient for autopilot? Some hardware is present for sure, but is all that's needed present, as they claim? They cannot know, since they don't have an autopilot using such hardware to achieve the stated goal. They believe it is sufficient, which is not quite the same as the claim, and it is also being vocally challenged by experts who criticize the decision to omit full lidar coverage.
I would argue even if something is technically correct but deliberately misleading companies should get in trouble for it. Nobody has time or energy to learn about every single thing they may buy.
Also, how can we even know the hardware is sufficient unless said software exists and shows that what we think is possible is indeed possible?
> But it’s technically correct: the hardware is present. The software isn’t there yet, that’s true. So is this a “lie by omission”?
Some hardware is present, but that it is sufficient for full self-driving is simple speculation for which Tesla has no reasonable basis. The only way for them to know it is sufficient is to have self-driving implemented on it, which they obviously don't.
It's at best empty puffery and at worst outright fraud, but in either case it's a knowingly unjustified claim, even if not knowingly false.
It's not a lie by omission, it's a lie. If you don't have the software then you don't know whether or not you have the hardware. It's only proven when it works.
Hardware. That’s the keyword, and it might very well be true. Tesla doesn’t claim their software is capable of full self-driving yet and in fact the system will scream at you and stop the car if you take your hands off the steering wheel for too long.
It's deliberately deceptive, and I'm real tired of tech people tripping over each other to defend it. Technically correct? Sure. But deliberately phrased to obscure the fact that this is a hypothetical future capability that the product does not possess today. And please spare me your "well actually, the showroom attendants will clarify exactly what it means" -- they are framing this in a misleading way to generate public interest.
So how does anyone actually know what the hardware is capable of if the software is not ready? I mean, there is no way to verify that the hardware is ready for anything on its own.
Or are you suggesting that there are Tesla owners who are (a) sitting in their driveway wondering why their car isn't driving them to work or (b) returning their cars upon finding out many of the features on the Autopilot page require regulatory approval?
Tesla driver caught turning on autopilot and leaving driver’s seat on motorway
Culprit admits he knew what he had done was ‘silly’ but that the car was capable of something ‘amazing’
It's exceedingly clear to me that they're talking about hardware that can support future capabilities -- additional features via software upgrades is something they talk about a lot. And they're very explicit about what their Autopilot system does and doesn't do on their site. There's basically no situation in which someone is buying a $70,000+ car without knowing how the Autopilot works.
It's really not that deceptive. People would have to be a lot dumber than they actually are to be confused about it.
I'd say that if they're claiming to be able to deliver full self-driving autonomy to existing cars via purely a SOFTWARE upgrade, then that's pretty deceptive, since I am pretty sure not even Tesla knows exactly what hardware might be required for such capability. They'll know when they build it, but not before.
That ad is just deliberately deceptive. Not dishonest, it's misleading, and quite some work has been put in the wording, because it's nothing that anyone would ever say in a serious manner. Hardware alone has nothing to do with it, and anyone who's ever done some relevant bit of engineering understands that. If I shove four DSP boards and a laptop in my trunk, then hook them to the CAN bus and to four LIDAR sensors, my car would also have hardware that's fully capable of safe self-driving.
That's one level past dishonest. Honest advertising would just be "none of our cars have full self-driving capability at a safety level substantially greater than that of a human driver".
But this is on another level. Have they ever actually run software that does, indeed, have this capability? Or do they have a detailed enough specification of that software that they can verify it will, indeed, run on that hardware once it's written?
It's probably a safe bet that they haven't, and they don't, and in fact there is no way to verify that the hardware is in fact capable of running such software. This is likely a good engineering bet (i.e. the hardware is probably enough to support their current software approach, and they'll hit the software's limitations before they hit the hardware's), but advertising engineering bets as safety-related selling points is really irresponsible. Bet with your own money all you want, but not with people's lives.
What about "safety level substantially greater than that of a human driver" being the keyword?
Isn't Tesla's autopilot doing better than humans do in the same number of miles? We just all hear about it every time a Tesla is in a crash. If we heard about every human driver car accident, it would be overwhelming.
In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.
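(For what it's worth, the "3.7 times" figure is just the ratio of those two mileage numbers; a quick back-of-the-envelope check in Python, taking the quoted figures at face value:)

    # Ratio check of the "3.7 times" claim, using only the figures quoted above
    # (whose methodology the replies dispute).
    miles_per_fatality_us = 86e6      # all vehicles, all manufacturers
    miles_per_fatality_tesla = 320e6  # Teslas equipped with Autopilot hardware
    print(miles_per_fatality_tesla / miles_per_fatality_us)  # -> ~3.72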
The numbers are wrong, since they compare expensive new cars against everything else (old cars with no safety features, bikes, buses). If you compare with similar cars, you get numbers that are not in Tesla's favor.
So try not to spread these false numbers; also, the Tesla numbers are not open to examination by third parties.
Why is "applied physicist" in quotes? Elon has an Ivy League physics degree, was accepted at Stanford in applied physics, and has clearly applied a knowledge of physics at numerous companies.
This is again deceptive. They are comparing against total miles driven by Tesla vehicles with Autopilot hardware, not necessarily in Autopilot mode.
Since a Tesla is often owned by people with better economic means, their statistics are not representative of all vehicles. The average vehicular fatality rate for this demographic and vehicle type could be well below that of Teslas with Autopilot.
The most ingenious trick that the IBM marketing department pulled was to get non-technical people (and probably even technical people, judging by this thread) to think that Watson is some kind of singular thing. Like it's a single big neural network with different APIs on it, or something. I honestly think that's what most people think Watson refers to.
Watson is like Google Cloud Platform. It’s just a name for a platform with a bunch of technologies.
E.g. Watson Natural Language Understanding was previously AlchemyLanguage. It was just rebranded.
It’s very clever though, I’ll give them that. Use a human name so it has all the anthropomorphic connotations and let people think it’s some kind of AI learning things.
I'm not even convinced Watson is a platform. My impression is that it's just a consulting division of the company that deploys teams to build solutions that are in some way related to AI, with each solution or implementation potentially being completely unique from the ground up. Perhaps someone from IBM can correct me though.
I'm currently sitting in a meeting about implementing the Watson Enterprise Search product in my company and that is more or less the impression I've gotten. They sell it as a platform that is easy to customize and then once you're in they bill you tons of hours to help you because the system is indecipherable and poorly documented.
> They sell it as a platform that is easy to customize and then once you're in they bill you tons of hours to help you because the system is indecipherable and poorly documented.
So pretty much like any major enterprise system from the likes of IBM, SAP, Oracle, ...
Sounds like every IBM product: WebSphere, Tivoli, WSSR, RAD, fuck even AIX. Many of those can be replaced with open source tools at a fraction of the cost and at a huge increase in performance.
Watson is a brand name. Specifically, it's the machine learning brand name. Watson Developer Cloud is the product suite, and it's just a set of pre-trained classical machine learning and deep learning based APIs for a variety of tasks: NLU (UIMA) text identification, NLC (fuzzy string matching), Visual Recognition, Tone Analysis (VADER), Discovery (document database + NLU + knowledge graph), Speech (STT/TTS), Text Translation (literal, not semantic), Assistant (a conversational state engine with an embedded linguistic neural net). We're ahead in some aspects and a bit behind in others. There is also a generic Machine Learning service which allows you to train classical or deep learning algorithms and push them to a REST endpoint for production use. Ultimately the "Watson" from Jeopardy was sliced up and the pieces stuffed into various products. Anything with a smattering of AI/ML gets the Watson brand on it. I personally hate the Watson commercials, as people who don't know anything about the subject think Watson is this singular sentient entity. Those who do know about AI/ML know we have the same general tech as everyone else. One benefit we do have, though, is petabytes of training data and expertise in just about every line of business on the planet.
I only know specifically about the NLP stuff, e.g. Natural Language Understanding (AlchemyLanguage), Natural Language Classification (it's just a multi-label text classifier) and Watson Knowledge Studio (Basically allows you to create your own named entity recognition classifier (NERC), also supports relations and co-reference resolution. You manually hand-annotate examples through a Web UI).
So by platform I mean: let's say you train a NERC model using Watson Knowledge Studio. Obviously this model has to be "deployed" somewhere so you can call it using an API. They host it for you and they bill you per API call. Anyone can go create their own entity type system and manually annotate a training dataset. So it's definitely a re-usable platform; you don't need to pay for any IBM consultancy to use it. I found that the NLP offerings have many problems, and that the documentation alone is not enough to help resolve all of them. So eventually, IBM will just tell your employer you're stupid and that's why it's not working as it should and you should pay IBM to come in.
But make no mistake, these are all just standard machine learning tools that have been "packaged" so end-users can use them through a web front end. It is in no way, whatsoever, getting any input from any AI/Neural Network/Database/whatever you want to call it/ thing called "Watson".
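To make "packaged so end-users can use them through a web front end" concrete: a deployed model on a platform like this ultimately boils down to an authenticated REST endpoint that you pay per call to hit. Here's a minimal sketch in Python; the URL, key, and response fields are hypothetical placeholders, not IBM's actual API:

    import requests

    # Hypothetical endpoint and credentials, stand-ins for whatever gets
    # provisioned when you deploy a custom model on such a platform.
    ENDPOINT = "https://api.example.com/v1/ner/analyze"
    API_KEY = "replace-with-your-key"

    def extract_entities(text):
        """Send raw text to a hosted entity-recognition model and return its entities."""
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": "Bearer " + API_KEY},
            json={"text": text},   # each call like this is what you get billed for
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("entities", [])

    print(extract_entities("IBM demoed Watson on Jeopardy in 2011."))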
I personally think it's disingenuous because when people hear Watson they think Jeopardy and they think that somehow that technology is involved when they use any of the Watson.* products.
> I personally think it's disingenuous because when people hear Watson they think Jeopardy and they think that somehow that technology is involved when they use any of the Watson.* products.
The use of the Watson name is a deliberate attempt to take advantage of the Jeopardy game. It's a name that has cachet, and I've seen just enough of the marketing perspective to know that marketing will push very hard to reuse a successful name.
I used to work for IBM, on a backend service used by various Watson (and non Watson) branded projects.
I think federation would be a better term. There was a core set of APIs and hardware that might be called "Watson proper" but each market segment would be handled by a different organization. And then there was the proliferation of odd ball things out of research or little groups looking for growth/stability that get Watson branded.
Sometimes we'd be the point at which a team realized there was already something doing what they'd been building.
At my previous firm I worked with a pre-sales engineer who was formerly at IBM Watson before working at the firm. This is essentially it. Implementations of Watson were no different than doing an ERP project.
I knew people who had their division pay for Watson as a way to get their business AI and data science needs fulfilled without hiring developers outside their price range.
Eventually they scrapped the project because it not only took a ton of employee time to talk with IBM's team to get it set up and working, it also cost a significant chunk of money and wasn't as good as what the people who already worked at the company thought they could do themselves.
With all due respect to the people that work at IBM, I just can't imagine IBM's sales and consulting cultures working well with deploying AI. I don't know firsthand, but from what I've heard and what I would guess anyway, a lot of the people selling Watson and actually on the front lines working with it probably aren't that knowledgeable about AI/ML/whatever. I just don't see how you could determine a project's feasibility or effectiveness without having a sharp conceptual knowledge of the actual AI algorithms and what kind of data is potentially needed to make them shine.
Suppose a university admissions department offers paper surveys to prospective students at the end of on-campus tours. In an effort to improve admitted student yield (the percentage of students that actually attend the university after being accepted), the university wants to be able to scan these surveys' text digitally and then perform sentiment analysis to determine how excited the student is about attending the university, or more directly, how likely the student is to matriculate. The university doesn't have any people capable of doing this, so they get into contact with Watson.
How much will the salespeople at Watson pry into the questions of the survey, the demographics and culture of the school, or the sample size? Will they ask about statistics such as acceptance rate, yield, and which students are most likely to matriculate (based on quantifiable metrics)? Even the type or color of paper and the text field sizes on the surveys could affect the feasibility of the project regarding OCR, or bias the responses toward short answers. I would argue that a lot of knowledge about the project would be necessary before a sales quote or even the feasibility of the project itself could be considered, but would a salesperson know to ask these questions? Would they even be incentivized to ask? Would the consultants know that certain questions could make OCR hard or sentiment analysis a wash? Would a statistician be consulted to see if the same or better results could be obtained from simple analysis of GPA, ZIP, and test scores?
I'm sure everybody at Watson is pretty technically competent - and for most of the consulting and sales that IBM does, I wouldn't have to make the following qualification. But to be brutally honest, I think the type of people who are familiar enough with AI to be the person you want working at Watson in consulting and sales are probably using those skills as developers and data scientists. And even then, again with all due respect to IBM employees (and I know IBM puts out a lot of great research), those people might not be at IBM either.
At the beginning there was the one true Watson. Watson was a way to process, correlate and provide insight on a corpus of knowledge expressed in natural language. The technology was good but had one major weakness: the knowledge extraction part required a large bulk of manual labor to weed out the noise from the relevant parts, because to a processing engine each bit of information is equal to every other bit of information. So you needed domain experts to provide an initial tuning, and after that Watson was a good solution for the problem statement.
This process, however, required non-technical domain experts to work closely with the Watson analysts on the tuning for an unspecified and quite long amount of time, compounding the already astronomical costs of the solution itself.
Now, as you might imagine, like any other company IBM has a lot of divisions - cloud, services, intelligence, etc. The Watson division, due to the large research costs and the few clients that were able to afford and make use of the tech, was scoring too many quarters in the red.
IBM is also a financial company, so they did what they usually do when one division needs padding: they started moving everything remotely related to intelligence and analytics under the Watson moniker, to drive up quarterly reports. This had the side effect that the Watson marketing is a clusterfuck of overlapping and unrelated solutions that more often than not don't even work together natively, but are presented as a whole ecosystem.
Now, of course, anyone trying to make sense of the whole thing is going to be faced with all kinds of claims about all kinds of solutions, without any idea of what does what.
But originally the only omission from marketing was what Watson actually required, and how and what it could give back to a company. But the problem is... it's the kind of solution that you have to build to see where it goes. You cannot be sure of the results from the beginning.
Same thing Salesforce is doing with Einstein right now. Means that internally, when someone says a customer wants to talk about Einstein, people are all left wondering which one.
This is not ingenious in my opinion ... I've been at a company for a year that had bought into "Watson" — when I suggested alternatives to the specific APIs being used, I was told in no uncertain terms that they had been consulted and they were going with "Watson" ...
Now a year later I’m finally being asked to clarify “what is Watson” so that the decision makers can better understand the techniques being used rather than the fantasies about what was being used that they were encouraged to develop through misleading marketing and consultants ...
I remember when Microsoft .NET came out and I hated it because they named it something that had no relation to what it was. The product had nothing to do with the Internet, but the Internet was a big new fad back then and marketing wanted to latch onto that.
Today .NET is a great product ecosystem and a huge success, except for the horribly awkward name.
I don't know if they'll ever regret naming it Watson, but latching onto the AI craze isn't necessarily a losing strategy as long as the products are good and successful. Even if they have nothing to do with AI.
It seems to be a common theme with projects in big companies: somebody comes up with a good project idea and a catchy name. As it gets resources and management attention, other departments re-brand their long-time toy projects with being a substantial part of the 'catchy-name' project's vision. At some point nobody knows anymore what it was all about. 'SDN', 'Cloud', 'Watson', '.NET' are all examples of this.
I attended the IBM Connections conference in Vegas shortly after the Jeopardy! thing and just after IBM started using Watson as a brand under which it lumped a bunch of analytics products. From questions and comments made during some of the sessions I attended, it became clear that a large portion of the attendees (mostly the business people) wrongly assumed that the technology that won Jeopardy! was now being used inside everything labeled "Watson". People were very excited by this. I never heard anyone from IBM make any attempt to rectify this misconception; they just smiled, nodded and played along.
I disliked IBM and their corporate marketing BS even more after this.
Totally. Especially because it was on Jeopardy, further reinforcing the idea that it is a single box. Maybe it was then, but that's definitely an impression that's stuck with me since.
I briefly worked with a Watson team on a cool idea to map a person's 'knowledge space' (or probable knowledge space given their background) against Watson's knowledge space and guide them to relevant learning materials and journal articles and the like.
The idea was to save people time so they aren't rehashing stuff they know down pat or jumping ahead into material they cannot understand but, instead, find that next step into what they almost know. The idea from there would be to let them specify where they want to go and guide them, step by step, exposure by exposure, to that summit.
In a few days, it turned into Just Another News Article Recommendation Engine based on interest and similar profiles with other clients. Yawn.
In a similar vein, I find a machine beating a grand master fairly boring. Show me a machine that can teach someone to become a master (or even a very good amateur) and we can talk.
That is, don’t beat the game, beat the opponent, understand why, and then model an adaptation strategy for the opponent (teach).
> In a similar vein, I find a machine beating a grand master fairly boring.
I know a lot of people felt that way even before it happened. It feels like an accounting problem rather than intelligence. But, it's fun to remember when it was thought by some very smart people that chess required more accounting than was ever possible by a machine, so beating a grand master would have to be a demonstration of intelligence.
> don’t beat the game, beat the opponent, understand why, and then model an adaptation strategy for the opponent (teach).
Yeah that would be closer. My favorite Turing test, if you will, is whether the AI can tell you you're asking it the wrong question. If Watson got bored of beating grand masters at chess and started refusing to play, maybe then a case could be made it was reasoning.
A lot of AI "milestones" were actually caused by improvements in computing itself, IMO. That is, they weren't facilitated by novel algorithms or engineering solutions specific to the milestone itself. They kind of just happened, either because computers got faster (e.g. disk I/O, ram size, processor speeds and caching) or because someone finally decided to throw enough distributed computation at the problem.
People knew about the main algorithms that Deep Blue used (alpha-beta minimax with heuristic evaluation) for decades before Deep Blue existed. It was simply a matter of waiting until IBM found marketing value in dedicating a top-300 supercomputer to beating Garry Kasparov.
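For anyone who hasn't seen it, the core of alpha-beta minimax fits in a few lines. Here's a toy Python sketch over a hand-made game tree; a real chess engine adds a depth limit, move ordering, and a heuristic evaluation of positions, but the pruning idea is the same:

    # Toy game tree: internal nodes are lists of children, leaves are scores
    # from the maximizing player's point of view.
    def alphabeta(node, alpha, beta, maximizing):
        if not isinstance(node, list):    # leaf: heuristic evaluation of the position
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break                 # prune: the opponent will never allow this line
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:
                    break                 # prune
            return value

    tree = [[3, 5], [6, [9, 1]], [1, 2]]  # made-up positions and scores
    print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 6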
Well, the hope was that Watson, having explored and built a connected knowledge graph from various sources, could ask probing, adaptive questions to find out where a person landed.
So, say I'm an undergrad at a good university and I tell the system "I'm interested in Computer Science. I am particularly interested in Scientific Computing and would like to get to a graduate level of knowledge."
The system might ask "Sort the following operations by their worst-case run-time...". Then, if they do well there, maybe "Which of these two examples of auto-parallelization using Matlab's parfor would fail to parallelize the code..." or something like that. Over the course of so many questions, the system would start to paint a more and more reliable picture of where the contours of a person's knowledge lie.
This is time-consuming, of course, but over time it would get easier and faster to find contours by using the 'average' of people with similar backgrounds as a starting point.
Once a fairly good mapping is done of the person to Watson's cognitive model, Watson would need to trace back to the source(s) of nearby concepts and offer them to the user, which, ideally, are then rated by the user for relevance and perceived difficulty to further refine the person's model and rank the material offered for that particular profile.
Now imagine a Grad Student asking a similar question. Or a middle school student. What would those interactions look like? The mappings? The suggestions?
Don't get me wrong... mapping a person's knowledge space is a Very Hard Problem. Watson takes a kitchen-sink approach that just isn't possible for a human being. And maybe it wouldn't be possible to tease the resulting cognitive model apart into nodes tidy enough to map anything to. These were questions I'd hoped that IBM could help answer. Instead, it was on to the easy, well-understood problem and solution.
This already exists, it's called adaptive testing. I guess the new part would be modelling somebody's ability, not in one topic, but many related topics.
Sure. Existing adaptive tests were an inspiration for the idea, of course. But putting together a good adaptive test is, itself, very time-consuming. And the test itself doesn't do more than measure ability. The crux of the idea is to use the adaptive test to suggest learning material from a wide swathe of sources to the individual.
I'm assuming they break things down into traditional objective units within a standard curriculum and material that is relevant.
That's great but it is a very manual process.
Remember Yahoo curated keyword recommendations from the 90s? This Knewton approach is more like that. What I'd like to see is more like a Google Search in this analogy...something more flexible and comprehensive.
The basic idea is that you might start with a manual linkage between concepts, derived from linkages between content developed by subject matter experts, but over time it organically morphs into a knowledge graph that describes how certain concepts relate to and build on each other, and for which scenarios of learners.
That knowledge graph combined with a concrete goal (mastery level) and time to get there (deadline) can then be used to recommend to a specific learner what material to study or activities to do next.
It's theoretically possible to do this even more organically, but you need clearly tagged educational "stuff" (written content, lectures, activities, etc.) and, perhaps more importantly, clear ways to measure outcomes as a result of interacting with that "stuff" (typically quizzes which themselves have been clearly calibrated).
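A very rough sketch of that recommendation step, assuming a hand-curated prerequisite graph and per-concept mastery estimates; the concepts, scores, and threshold below are purely illustrative, not anything Watson or an adaptive-testing vendor actually ships:

    # Toy prerequisite graph: concept -> concepts it builds on.
    PREREQS = {
        "big-O basics": [],
        "sorting algorithms": ["big-O basics"],
        "parallel loops": ["sorting algorithms"],
        "scientific computing": ["parallel loops"],
    }

    def recommend_next(mastery, threshold=0.8):
        """Suggest concepts not yet mastered whose prerequisites are all mastered.

        `mastery` maps concept -> estimated mastery in [0, 1], e.g. from calibrated quizzes.
        """
        ready = []
        for concept, prereqs in PREREQS.items():
            if mastery.get(concept, 0.0) >= threshold:
                continue  # already mastered, nothing to recommend here
            if all(mastery.get(p, 0.0) >= threshold for p in prereqs):
                ready.append(concept)
        return ready

    # Example learner: solid on big-O, shaky on sorting, untested beyond that.
    print(recommend_next({"big-O basics": 0.9, "sorting algorithms": 0.5}))  # ['sorting algorithms']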
Watson is the IBM marketing department going mad about ways in which IBM can continue to remain relevant in a world that increasingly doesn't care about what hardware a particular computer program runs on.
If there is going to be a 'second AI winter' I fully expect Watson and other such efforts to be the cause.
IBM hasn't been about the hardware for a long time, instead it has been about the consulting services contract. And when we, as a startup, first engaged with the Watson folks it was clearly a sales funnel for their consulting services.
That said, IBM has a tremendous amount of research they have done in AI over the years. It is not that they don't have a lot of interesting technology they can throw at different business problems; it just seems like they are having a hard time getting invited to the party if they don't track the same hype buzz that the current ML/AI craze has embraced.
The Watson stuff is so oversold it is almost comical.
And yes, sure IBM hasn't been about hardware for a long time, they've been a services company for decades now. But as far as AI/ML is concerned Google and Facebook are attracting the top talent these days, Apple and Microsoft much further down the line.
What would be nice is if they would take the opposite tack, rather than marketing the hell out of it quietly solve lots of problems that are hard to solve in a traditional way. Every time I hear about Watson it is in the context of something where I ask myself "What's the point of being able to do that?". If all there is to hype is the hype itself then it is hollow.
Yes. IBM should be best positioned among AI-aware companies to augment and extend "deep" knowledge-bases into the enterprise. The information infrastructure of Jeopardy Watson was impressive and ought to open doors for IBM to partner productively with other info management vendors to modernize and advance that corporate infrastructure which is driven by deep information. But instead it appears the short-term ROI-think of their non-technical SVPs is what's led them astray.
IBM continues to make the mistake that the flash bang uber-sexiness of ML (esp deep learning) matters more to the enterprise than deep info management (something which IBM can proffer while most competitors can't). If IBM were smart, they'd leverage their deep experience in databases and IR and promote that side of AI -- smarter info management. IMHO, this could do much more for their bottom line than the stupid pretense that Watson really is HAL 9000.
Here I am doing a reasonably good impression of Watson by totally missing your point and just adding that we have already had (at least) two AI winters, so this would be (at least) the third one coming up, if it does.
One thing about Watson that I remember is this presentation by a very senior guy. He had just come back from the US and was presenting what he learned there about Watson Healthcare (IIRC, that's what it was called), which I assumed was a division of the Watson team that was focused on cancer and stuff like that.
I'm paraphrasing, but during the presentation he said something like: "The project was not originally called Watson Healthcare, it was called X (I can't remember exactly), but potential customers were like 'No, no, leave X, we want Watson', so we had to change the name to Watson Healthcare for the sake of our customers. Watson Healthcare actually doesn't have anything to do with Watson."
I couldn't believe, at the time, how much respect I lost for IBM in about 20 seconds. First of all, he thought we were idiots. You have to be brain dead to believe that he renamed X to Watson Healthcare in order to help customers. They just wanted to ride the hype train of the Watson brand and were lying to everybody about it.
When your customer is an executive who needs to sell their decision to a board or C-level then pitching "IBM Whatever Health Stuff" is a bit harder than "Watson Healthcare" because of marketing.
The way IBM talks about it is complete BS. However, this round of AI is definitely better than the last one. Specifically, what's different this time around is that previously, expert-based systems and many machine learning techniques required that you specifically hand-code things like:
1. Parsing the input dataset and converting it into 'features'
2. Hand-coding the logic and rules for many different cases (expert systems)
Now, it has become easier to train a model such as a neural net where you can provide much 'rawer' data; similarly you just provide it a 'goal' in the form of a loss function which it tries to optimize over the dataset.
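As a minimal illustration of "provide raw-ish data plus a loss function and let the optimizer do the rest", here's a tiny numpy sketch using a deliberately simple one-layer linear model; a real deep net adds layers, nonlinearities, and many more design choices, but the training loop has the same shape:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                     # "raw" inputs
    true_w = np.array([1.5, -2.0, 0.5])               # hidden relationship to recover
    y = X @ true_w + rng.normal(scale=0.1, size=200)  # observed targets

    w = np.zeros(3)                                   # model parameters, start at zero
    lr = 0.1
    for step in range(500):
        pred = X @ w
        grad = 2 * X.T @ (pred - y) / len(y)          # gradient of mean squared error (the "goal")
        w -= lr * grad                                # gradient descent step

    print(np.round(w, 2))                             # ends up close to [1.5, -2.0, 0.5]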
By 'true' AI, I think most people mean 'how a human learns' - which is actually a very biased thing, since we humans have goals of things like the need to survive, etc. I do believe it would be possible to encode these into goals, although doing that properly and more generically seems a little bit in the future.
One of the neat parts of expert systems was that you could get surprising and bizarre interactions between rules, leading to entertainingly nonsensical answers.
One of the neat parts of neural networks is that you have no idea what rules it's using, but it still manages to produce answers that are sometimes not entertainingly nonsensical.
The bigger problem is that they produce answers that make perfect sense most of the time, and then every now and then they spectacularly fail on input that is indistinguishable from inputs that gave correct answers.
I think this also happens in the human brain quite a bit. There’s a lot of times where you see something out of the corner of your eye that isn’t there, or you duck because you think something was coming towards you, or you wave because you think you recognize someone.
I bet at a lower level, various systems in the mind fail constantly but we have enough redundant error correction to filter it out.
Well, as referenced in my comment above - you could spend the time to figure out how the prediction was made: specifically, decoding the training process and which data points were used to influence the prediction - but it would be complicated and take a lot of your time. It might give you more clues as to why something like random noise was classified as a hot dog.
Ha - the thing to think about here is: pretend that you were asked, say, why you labeled a horse picture as a horse.
You might say something like "well, it has hoofs, and a mane, and a long snout, and 4 legs, etc". However, that sort of answer from a computer would probably be labeled as too generic since that could be any number of animals.
The issue is, that to you a horse is some complicated 'average' of all the examples of horses you've seen in your lifetime, that go through a complicated mechanism for you to classify it. Specifically, probabilities of different features that your eyes have extracted from seeing them.
Similarly, understanding why a neural network is doing something is very possible, however, it fails to mean much when the answer it could give is a list of, say, hundreds of vectors and weightings that contributed to its prediction.
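To give a feel for what that "list of vectors and weightings" looks like in practice, here's a tiny input-gradient attribution sketch for a two-layer toy network; the weights are random stand-ins for a trained model, and real interpretability tooling is far more involved:

    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(4, 6))      # stand-in "trained" weights, input -> hidden
    W2 = rng.normal(size=(6, 1))      # hidden -> single output score

    def forward(x):
        h = np.tanh(x @ W1)           # hidden activations
        return (h @ W2).item(), h

    def input_gradient(x):
        """Gradient of the output score w.r.t. each input feature: one 'weighting' per feature."""
        _, h = forward(x)
        dh = (1 - h ** 2) * W2[:, 0]  # backprop through the tanh and the output layer
        return W1 @ dh

    x = rng.normal(size=4)
    score, _ = forward(x)
    print("score:", round(score, 3))
    print("per-feature attribution:", np.round(input_gradient(x), 3))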
It's funny that we are willing to disqualify an AI as an I simply because it comes to some startlingly nonsensical result, yet history is full of humans doing the same thing (from placing a plastic cup on a hot stovetop through to gas chambers).
Actually, what makes us human is the lack of fixed goals. As McCarthy wrote circa 1958, one of the properties of human level intelligence is that “everything is improvable”. That means innate knowledge that nothing is ever good enough, and an uneasy tension between settling on an answer and the drive to keep going.
> Now, it has become easier to train a model such as a neural net where you can provide much 'rawer' data; similarly you just provide it a 'goal' in the form of a loss function which it tries to optimize over the dataset.
"just".
Where is the mention of all the decisions about network size, node topologies, regularization, optimization methods, etc.? And you still have to do your first step.
Deep learning is way more complex than just choosing a loss function.
Current AI doesn't have any ideas about existing. That's something we'd have to add in to future AI, which goes to the point of the article and the over-exaggerated claims about current AI.
Caring about one's existence (or even having the notion of a self) isn't something that comes about from crunching large data sets. ML isn't going to result in existential feelings or consciousness.
> Current AI doesn't have any ideas about existing. That's something we'd have to add in to future AI ... isn't something that comes about from crunching large data sets
(note: I'm a roboticist who spends a lot of time thinking about long-term autonomy (how can a robot go for months and years without human intervention) so that gives you a sense of where I'm coming from. Since my original comment was more of a question, I'll expound a little more here.)
I wasn't really asking in the context of current AI, but 'true' AI as the parent was talking about. They drew a distinction between how humans learn and how a "true" AI might learn based on the goals of each, and I question the notion that a "true" AI wouldn't have a notion of survival.
I think you agree with me to some extent, based on your other comment:
> A robot might someday be a self. It would certainly be advantageous for robots to avoid damaging themselves, and being safe around other robots and humans (treating them as selves). But how we go about making robots self-aware, and more problematically, how they have experiences is something nobody knows how to do.
I agree. I believe a robot not only would benefit from the idea of "self", but that it's fundamental to their design. We've built algorithms for a long time that operate in a very limited way. Robots break that mold in a number of ways, foremost being they can move in their world, and even physically change it. Yet we're still insisting on building them the same way we build webapps or trading software.
There needs to be a notion of "inside" and "outside" just to implement things like collision avoidance. I think there will also be notions of "group" and "individual" as well as "self" and "other" and "community" as well if we're going to call robots intelligent, as we see these notions in other species we regard as intelligent (whales, dolphins, chimps, etc).
The question is, how do they get these concepts and what do they look like? I don't know. But I think they're essential, and I think they will be a barrier to AI if we keep treating them optional instead of innate parts of an intelligent creature.
I think we're huge technological leaps and bounds--I mean light-years--away from AGI so advanced it requires some form of computer psychology. But I do think it might eventually happen. And I don't think people are going to do a good job of preparing for it proactively. It's going to require a huge fiasco to motivate people to take that idea seriously.
So I agree that one day we may have to be careful how we treat AGI and maintain a system of ethics for the use of artificial, sensate, sapient beings. But that day is hella not today. Yet when it does come, it will probably still sneak up and surprise us. Like most black swans are wont to do.
Right now IBM is using people's confusion of AI with AGI to make sensationalist ads. But hey, most advertising around trending tech is sensationalist.
> I think we're huge technological leaps and bounds--I mean light-years--away from AGI so advanced it requires some form of computer psychology.
I think you're right about this. I think the current wave of anxiety about the capabilities of AI is fueled by non-practitioners who have no idea how hard it's going to be.
I think it has to do with my conflation of intelligence and life. Life is another one of those things we have trouble defining, but I think you've hinted that one of the key characteristics of life is a desire to extend it.
Once we have a thing that seems to value its own self, then we can start talking about motivations for doing so, and I think that's one of the things that can elevate life to intelligence.
I think that consciousness is a collection of experiences and observations ('data'), nothing more. I am curious what led you here:
> Caring about one's existence (or even having the notion of a self) isn't something that comes about from crunching large data sets. ML isn't going to result in existential feelings or consciousness.
My opinion is that 'consciousness' by the human definition will be a spontaneous and emergent property, when the computers become complex enough.
> I think that consciousness is a collection of experiences and observations ('data'), nothing more. I am curious what led you here:
Consciousness is part of having an experience. Observations don't require consciousness, since they can be performed by measuring equipment, although one can get deep in the woods on whether observation needs a mind to make it meaningful, and thus an observation, not simply a physical interaction.
> I am curious what led you here:
Being a self isn't about crunching data, it's about having a body and needing to be able to distinguish your body from the environment for survival and reproductive purposes.
An algorithm has no body and thus no reason to be conscious or self-aware. That it's even an algorithm and not just electricity is an interpretive act on our part (related to the deep semantic debate over what counts as an observation).
A robot might someday be a self. It would certainly be advantageous for robots to avoid damaging themselves, and being safe around other robots and humans (treating them as selves). But how we go about making robots self-aware, and more problematically, how they have experiences is something nobody knows how to do. Saying it will emerge once the robot is sophisticated enough is the same as saying nobody knows how to make a machine conscious.
Having had some level of access inside IBM, I can say the whole cognitive initiative has just been this bizarre, self-feeding marketing and sales escalation, where the real engineering has to 'bring the cognitive' in the most Dilbert pointy-haired-boss sort of way.
In a graduate NLP class at Berkeley a few years back, Dan Klein shared an AAAI article on Watson from 2010 (when it actually was a distinct technology stack and not just marketing nonsense). At that time IBM was focused on question answering in Jeopardy. It was pretty clearly incremental rather than novel — Dan used the example to show that 1) ensemble techniques can be effective if done properly, 2) hyperparameters matter, a lot, and 3) there's human intelligence and then there's Ken Jennings intelligence: looking at precision and percent answered, he's in his own separate league. It made me think a lot about individual differences in terms of declarative knowledge.
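(If "ensemble techniques" sounds exotic: at its simplest it just means combining several imperfect scorers, for example by weighted voting over their candidate answers. A throwaway Python sketch with made-up candidates and confidences, nothing like Watson's actual pipeline:)

    from collections import defaultdict

    def weighted_vote(candidates, confidences):
        """Combine candidate answers from several scorers by weighted voting."""
        totals = defaultdict(float)
        for answer, confidence in zip(candidates, confidences):
            totals[answer] += confidence
        return max(totals, key=totals.get)

    # Three hypothetical answer-scoring components disagreeing on a clue.
    print(weighted_vote(["Chicago", "Toronto", "Chicago"], [0.4, 0.7, 0.5]))  # -> Chicago (0.9 > 0.7)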
It was also unclear to me, when they did the contest, whether Watson only had access to the analog audio and/or image of the questions asked. Did they have to parse the question the same way as Ken?
Also, it was clearly optimized for a specific use case. If the questions were reworded with more clues that were puns or needed inference, I think Ken would have done about the same but Watson would have fared much more poorly.
They had the text of the clue (the sentence) and had to parse that; the resulting question was then sent through a text-to-speech engine, obviously, but there was no speech-to-text.
"Machine learning" was a pretty good buzz word, but "Artificial Intelligence" is even better. And in a way, ML is part of AI so it isn't really lying.
IBM tries to sell into c-suites of companies that are less technically-adept than the average HN reader. Their marketing seems to be pretty effective, at least in getting proof of concept projects signed with big names.
Watson is simply IBM's ML product, but they call it AI and wrap it in marketing for all the reasons every AI startup does the same thing.
I disagree that companies implementing "AI" are less technically adept than the average HN reader. This sort of comment has come up in other similar discussions.
It is untrue. They are very technically adept, with teams of people who are also aware of their problem spaces, and technology.
I understand there are some cases of lack of technical teams making these sort of decisions, but it isn't the norm, and even smaller companies often have incredibly technical teams.
I don't understand where this idea comes from. I've done consulting and implementation of these types of projects for most of my life. My experience says it's false. Where does the feeling that this is true come from?
Is it from people who aren't part of the process theorizing that some unknown force must not be as intelligent as they are? Is it from looking at the decision-making in general (why did they buy an ERP?) and making correlations?
I'm honestly unsure how there's this widespread idea that there aren't brilliant people everywhere doing the same work they are. Yes, there are problems and challenges all over the place, but I find myself amazed, all across the country and the world, at the level of expertise inside companies.
> I disagree that companies implementing "AI" are less technically-adept than the average HN reader.
He didn't say that. He said that the c-suite of those companies was less adept than our fellow readers. This is almost certainly true. I'm not sure that it matters.
Yep, matches my experience. We invited IBM to our company to pitch Watson, there was very little that was impressive about it. "Watson" is mostly just a coding services integration team, who will assign a team to add some basic NLP to your web services. Someone with a free weekend and a book on TensorFlow or NLTK can replicate most of what the IBM sales engineers pitch for Watson.
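To give a flavor of what I mean by "basic NLP" - this is just an illustrative sketch I threw together, not anything IBM ships - the stock NLTK models already get you tokenization, part-of-speech tags, named entities, and sentiment in a dozen lines:

    # A rough sketch of "basic NLP in a weekend" with NLTK (not anything IBM ships).
    # Assumes the usual NLTK data packages; names vary slightly across NLTK versions.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    for pkg in ("punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words", "vader_lexicon"):
        nltk.download(pkg, quiet=True)

    text = "IBM pitched Watson to our team in Austin last spring."

    tokens = nltk.word_tokenize(text)            # tokenization
    tagged = nltk.pos_tag(tokens)                # part-of-speech tags
    entities = nltk.ne_chunk(tagged)             # named-entity chunks (IBM, Austin)
    sentiment = SentimentIntensityAnalyzer().polarity_scores(text)

    print(tagged)
    print(entities)
    print(sentiment)

Obviously a real integration is more work than this, but it's the same family of techniques being dressed up in the pitch.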
Oh wow, Roger Schank [1]! Haven't heard that name in a while - he was quite famous in the early days of AI. I wonder if he has figured out a good way to marry ML to his theory of Conceptual Dependency (CD) [2] - because that could be ground-breaking for hard NLP problems.
Interestingly, I started reading the article without paying much attention to who the author was. A few lines in I began to wonder if this was going to be an unproductive rant, and whether the author had heard of things like CD etc ... It became funny right about then, because that's also when I happened to glance at the URL and saw Schank's name.
In the 1980s I spent too much time trying to use Conceptual Dependency in a few small R&D projects. Looked promising, but I had little success with it.
I think one of the challenges with using CD in the real world is the "unclean" input that needs to be mapped to the various primitives and structures of CD, or stuff in the same vein that came after. Without a way to do that automatically, in a scalable fashion and with minimal human assistance, its utility is limited. Which is why I feel that if we had a way to leverage the current breed of ML techniques to automatically (or even semi-automatically) define this mapping, it would be a big step forward.
I'd like to make a plug for my company (http://www.cyc.com) whose "AI" is not machine-learning based, does actual cognition and generalized symbolic reasoning, and lived through the AI winter of the 80s. We've gotten some contracts as a direct result of companies being disenchanted with Watson's capabilities.
I would like to ask you a question: over the years I experimented quite a bit with OpenCyc, which you stopped distributing last year. Is ResearchCyc reasonable to experiment with on a small server or powerful laptop? Is an OWL version available?
OT from the headline, but I take issue with the author's claim that Bob Dylan's work doesn't relate to the theme "love fades". Dylan has had a vast career beyond his protest song days, and I'd argue that one of his best albums, "Blood on the Tracks" would be accurately summed up as "love fades".
Dylan didn't win the Nobel for his songs about faded love, and he won't be remembered for them especially.
Schank is 72 years old. He came to know Dylan when his protest lyrics gave voice to 20-somethings like Schank in the 60's. Like the rest of us, Schank likely remembers Dylan less for his retrospective years thereafter.
What you've called Dylan's "retrospective" years include an earth-shattering comeback with three top-charting albums in a row starting with 1997's "Time Out Of Mind" (which, incidentally, has quite a few 'love fades' lyrics). Dylan himself has singled out "Time Out Of Mind" as the only one of his own records that he himself goes back and listens to.
Thank you. I don't know much about AI, but I like to think I know a little about Bob Dylan. The bittersweet love song is a Bob Dylan staple that lasted well beyond his protest phase. I think Watson actually did a pretty good job of identifying it as a theme. From "Girl from the North Country" to "Don't Think Twice, It's All Right" to "It Ain't Me, Babe" to "Tangled Up in Blue" all the way into the 90's with "Make You Feel My Love", it's a theme that spans his entire body of work.
I also think that even with advanced cognitive computing it would be difficult for a computer to interpret those songs meaningfully without having lived through those times. I don't think a competent human could interpret those songs without huge reams of context. Which is probably the point the author is making about counting words vs. cognition.
This isn't a slight against Dylan, he's amazing, but perhaps it's an unfair example to judge an AI by how it figures out those lyrics. There's some very intentionally abstract content, some inspired by cut-up poetry of the beat movement which literally uses randomized words to induce pareidolia and free-association. Even with cognition, you can't correctly interpret those ideas--you form your own unique and subjective perception of them.
Would we even be able to tell if we invented a computer with proper appreciation for absurdity?
But on the subject of IBM: yes, they are making sensationalist ads.
Dear engineers, merit is useless when you're trying to sell something. Authority is king, and people remember emotion and hyperbole. Marketing and sales is almost always about representing authority regardless of merit. The only thing that matters after a Watson sale is if Watson can help solve the problems the customers have.
True, it's well known that engineers don't often have appreciation for the emotionality of marketing. But there's a limit here. We can't just have no-holds-barred hyperbole and outright lying to the point of unfairly deceiving customers.
Obviously it takes a legal professional to judge where that line falls, but for something as specialized as this it's hard for laypeople to appreciate the distinction between stretching the truth with enthusiastic self-promotion and full-on false advertising. It's interesting to think about.
Well, it might be argued that with the exception of this author (who apparently coined the terms they're using) neither the IBM marketers, nor any of the potential consumers understand what these words mean. It is pure abstraction to most people. If the marketers were pressed, I'd guess that they'd just define these terms differently than the people making claims of fraud.
I'm imagining the glossy eyes of a judge or jury trying to grasp the nuance and just throwing in the towel.
Well, there's also the opportunity cost of capex leaked away from areas that it could be productively focused on, also the opex of running the new implementation. And the missed opportunity from building skills in implementing opensource alts - like Tensorflow, for example.
Since we're thinking like CTOs we need to account for risk, not cost. IBM sells Watson to companies with money, and for those clients the factors driving decisions are more often risk and liability. A CTO with budget who chooses to build in-house is taking a big risk. If IBM fails, they can cover their ass with a contract. If they fail in-house there is no such safety net. Easier to save face with the board.
I heard someone say that A.I. is just what we call technology that doesn't work yet. Once it works, we give it a specific name, like "natural speech recognition".
However, if a robot from scifi were to walk out of the lab, like Data or Ava from Ex Machina, or we had access to HAL or Samantha from Her, we wouldn't just give it a specific technical name. We would consider those to be genuine AIs, in that they exhibit human-level cognitive abilities in the generalized sense.
It's true that in Her, Samantha was just an OS at the start, kind of like how the holographic doctor was just a hologram at the beginning of Voyager, but as both stories progress, it becomes clear they are more than that. By the end of Her, Samantha and the other OSes have clearly surpassed human intelligence.
Those are fictional examples, but they illustrate what we would consider to be genuine artificial intelligence and not just NLP or ML. The reason people always downplay current AI is because it's always limited and narrow, and not on the level of human general intelligence, like fictional AIs are.
I like that definition too (I know it from Seth Godin). It’s honest, in the sense that we just don’t know yet how to do that stuff, instead of labeling every single line of code as AI.
I think a reason for this is that in the early days of computing and AI research, strong AI / artificial general intelligence (AI possessing equivalent cognitive abilities to humans) was considered both to be within reach, and the most obvious solution to many problem domains. We now realise that things such as computer vision and natural language translation can be approximated with solutions falling far short of strong AI.
Personally I define AI as software that you "train" rather than "program", in the sense that neural nets and other ML tools function as black boxes rather than explicit logic.
By that definition, AI is a real thing—it's built on top of programming that uses compilers and languages and ones and zeros—but it's different and it's valuable.
To say it's all bullshit, I feel, is to cut yourself off from new skills. Kind of like "compilers are all bullshit—it's opcodes at the bottom anyway."
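To make the "train rather than program" distinction concrete, here's a toy contrast (my own made-up example, not any product's code): the same filtering task done once with explicit rules and once with a fitted model whose "logic" lives in learned parameters.

    # Toy contrast: "programmed" rules vs a "trained" model (scikit-learn assumed).
    # The data and labels are invented purely for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts  = ["win a free prize now", "meeting moved to 3pm",
              "free money click here", "lunch tomorrow?"]
    labels = ["spam", "ham", "spam", "ham"]

    # "Programmed": the logic is explicit and human-written.
    def rule_based(text):
        return "spam" if "free" in text or "prize" in text else "ham"

    # "Trained": the logic lives in fitted parameters, not in code anyone wrote.
    vec = CountVectorizer()
    model = MultinomialNB().fit(vec.fit_transform(texts), labels)

    print(rule_based("free lunch tomorrow"))                      # rule fires on "free"
    print(model.predict(vec.transform(["free lunch tomorrow"])))  # learned weights decide

Both are "just code" at the bottom, but where the decision boundary comes from is genuinely different, which is why the distinction feels valuable rather than bullshit.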
AI carries a set of connotations in popular imagination that a) don't comport with the actual capabilities of what we term 'AI' in the computer science world, and b) are being exploited by marketing teams at IBM and plenty of other companies to sell technologies that aren't particularly new or interesting. The kernel of truth in 'AI is bullshit' is really that the discourse around AI is bullshit, which I think is a pretty fair assessment, and this is coming from someone who's work gets labeled as AI on a regular basis.
>I define AI as software that you "train" rather than "program".
I like this definition. It covers things that are AI but not ML, like DSS / rules engines. I've built two fairly sophisticated DSS before but haven't messed with ML much. It seems interesting, but I haven't had the time.
Not sure what you mean by that. Is there some industry standard around the term "Artificial Intelligence"? I agree that its become a bit of a buzzword, but I'm not sure that its being misused.
I will say that when my company uses AI, they almost always just mean LSTM-based content generators - usually chatbots or "advisory"-style outputs - but the key idea is that it's generative and not just an evaluator.
I think that's probably the most helpful definition because your ML output has to go into some larger intelligence system (human or otherwise) to produce some decision / activity. So your choices are:
* Human
* Expert system with rules of interpretation that include ML output as input
* AI system which relies solely on inference and reinforcement / goal-seeking to produce output
As someone who is currently working on a successful commercial product with a strong expert system component, I agree with the sentiment. The funny thing about this project is that neither the product owners nor the marketers, and not even the coders, ever use the term "expert system". It just doesn't sell any licenses or garner any attention.
My view is that expert systems are a ubiquitous part of many products to the degree it's hard to even recognize them as such. They're not the main focus of anyone's marketing budget, because that makes about as much sense as promoting your "revolutionary axle technology" to sell a car.
A bit off topic, but I've been wondering about what expert systems are a lot lately, and I hope you don't mind me asking a few things about them.
I started studying machine learning long after statistical models were the absolute standard, and all I really know about expert systems are passing phrases in textbooks about how the world has moved away from them.
How does one go about building one? MNIST character recognition is often called the "Hello world of machine learning" ... what is the "Hello world of expert systems?"
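Not the person you asked, but the closest thing I know of to a "hello world" of expert systems is a toy forward-chaining rule engine: a handful of if-then rules applied to a working set of facts until nothing new can be derived. A minimal sketch (the rules here are invented for illustration):

    # Toy forward-chaining "expert system": if-then rules fire until no new facts appear.
    rules = [
        ({"has_fur", "says_meow"}, "is_cat"),
        ({"is_cat", "is_hungry"}, "will_beg_for_food"),
    ]

    def infer(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"has_fur", "says_meow", "is_hungry"}, rules))
    # {'has_fur', 'says_meow', 'is_hungry', 'is_cat', 'will_beg_for_food'}

Real systems add conflict resolution, certainty factors, and an interface for domain experts to maintain the rules, but the core loop is about this simple.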
Heh. Of course that’s a space with a pretty constrained and well defined set of rules. Until they aren’t of course. Which is why we still have accountants.
It's a pretty open secret in the community that what IBM pitches Watson can do, versus what it (or any state-of-the-art system) really does, is pretty much bunk. This author calls it fraud, but a more charitable interpretation would be extreme marketing. We've seen a lot of failures with Watson, particularly in the medical space - MD Anderson's cancer work, for example (https://www.forbes.com/sites/matthewherper/2017/02/19/md-and...), where MD Anderson paid around $40 million (on an original contract deal somewhere near $4 million) and eventually abandoned it.
I do think Watson may be a fake-it-until-you-make-it thing - in particular, they still have access to an incredible amount of data, and data determines destiny in a lot of AI.
But their message is aimed outside the community, so I don't think they deserve any charitable interpretation; they lost that privilege a long time ago. I have to say I'm biased, I wish I could see them disappear soon, but I think I have the right motivations.
Had IBM sales people present on Watson as a security solution recently. The stench of BS was so bad I nearly had to leave the room. It wouldn't bother me if they kept things generic, but they deliberately sprinkle the presentations with specific terms referencing hyped technology (deep learning, etc), with the clear objective of deceiving the audience into thinking they are using those technologies when they clearly aren't. It was unethical IMHO.
Chomsky calls it a sophisticated form of computational behaviorism. Just like the research program of behaviorism died out, this will eventually too. There are other respectable criticisms of AI, like Hubert Dreyfus' 'What computers can't do'.
Neither Chomsky nor Dreyfus claimed that machine learning and/or AI won't solve any problems, but rather that the kind of problems these solve are not relevant in terms of aspiring to be humans.
Chomsky's "Where AI went wrong" is often ignored by the mainstream AI community or dismissed. Peter Norvig's retort was poorly constructed, and showed that he didn't comprehend Chomsky's argument.
Machine Learning and AI are stuck in a rut and apparently these so called ML/AI experts know better. The Deep Learning (along with ML) craze is a hindrance to a true scientific theory of intelligence.
I'm no expert in the field of AI or Machine Learning, so I have a question for the experts here. Has there been any theoretical breakthrough in AI in recent years? I know we've had neural networks and different types of classifiers, etc. for more than a few decades now, so apart from better marketing, has there been a significant breakthrough that explains this sudden surge in interest in AI?
There have been a number of breakthroughs in specific fields recently. In fact it seems like there is a paper every week that pushes the state of the art forward in some branch of AI/ML. I think the big one that triggered the current excitement was the success of convolutional networks in image recognition tasks. You can read about that one (in 2012) here: https://www.technologyreview.com/s/530561/the-revolutionary-...
EDIT: This paper refers to the algorithm as SuperVision which was the team name, but it is more commonly called AlexNet. Here is another article discussing it:
Before 2005, experts estimated that AIs that could beat humans at Go were still many years away. In October 2015 AlphaGo came out, and in March 2016 it beat everyone. In 2017 AlphaGo Zero came out, which was trained only on self-play. It beats all previous versions.
But it is not so much these big breakthroughs that cause current optimism, as much as the large amounts of continuous improvements and refinements.
Recent breakthroughs in ML from a commercial perspective are achieving super-human performance in some computer vision tasks and end-to-end methods beating traditional ones in machine translation and speech recognition.
However, these breakthroughs don't have much to do with a general notion of intelligence, as they are very limited to specific tasks. In the media, the term AI is used in a very inflationary way these days.
> has there been a significant breakthrough that explains this sudden surge in the interest in AI?
Yes. Some tasks like language translation, face recognition etc. have reached human level performance. Why has this happened? Large data and increase in computing power (Thanks gamers and GPUs!).
Is this translation service available online? I tried different services recently while travelling, none of them were better than basic knowledge of the language and a dictionary. Full sentences sometimes work well but anything else is problematic (e.g. restaurant menus).
deepl.com is pretty good. This translation of your question into German is flawless:
"Ist dieser Übersetzungsservice online verfügbar? Ich habe in letzter Zeit auf Reisen verschiedene Dienste ausprobiert, keiner von ihnen war besser als Grundkenntnisse der Sprache und ein Wörterbuch. Ganze Sätze funktionieren manchmal gut, aber alles andere ist problematisch (z.B. Restaurantmenüs)."
However
- MCMC for scalable bayesian reasoning.
- Answer Sets for reasoning in FOL.
- Big data sets for training lots of classifiers. Including deep networks. A big breakthrough has been crowdsourcing the labelling.
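On the MCMC point, the core idea fits in a few lines. A minimal Metropolis-Hastings sketch (the textbook version, with a made-up target density), just to show what the "scalable Bayesian reasoning" machinery is built on:

    # Minimal Metropolis-Hastings sampler over an unnormalized 1D density.
    import math, random

    def target(x):
        # Stand-in unnormalized density: a standard normal.
        return math.exp(-0.5 * x * x)

    def metropolis_hastings(n_samples, step=1.0, x0=0.0):
        samples, x = [], x0
        for _ in range(n_samples):
            proposal = x + random.gauss(0, step)              # symmetric proposal
            if random.random() < min(1.0, target(proposal) / target(x)):
                x = proposal                                  # accept
            samples.append(x)
        return samples

    draws = metropolis_hastings(10000)
    print(sum(draws) / len(draws))  # should land near 0 for this target

The recent progress is mostly about making this kind of sampling (and its variational cousins) scale to large models and datasets, not about changing the underlying idea.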
It’s an unusually beautifully written article, well worth the read just for the prose. As for the main sentiment that we have a new AI winter, I’m not so sure. My lay-person view is that we see quite a lot of commercial success with these systems, so the current wave will be well funded for at least a decade.
What are the big successes? Speech recognition? Which still seems rather bad to me.
Language translation? Which, for the languages I’m interested in (Japanese), is still almost totally unusable.
Self driving cars? Which are not yet in production (and where the social issues are probably far harder than the technical ones, and likely have been since the 90s).
Is there some big application of ML that I’m missing that is a clear win?
A lot of ML behind the scenes. Search, knowledge systems, map routing, etc. You know, the boring stuff that isn’t AI yet. I don’t really disagree with your broader point. I expect a lot of things many think are just around the corner are going to be many years away.
Amazon's system for finding related products is eerily accurate. The more you buy, the better it gets at anticipating what you want to buy.
... but the larger wins they've had are behind the scenes, in the predictive modeling that helps them get product into warehouses in front of demand spikes (and into geographically-relevant warehouses to satisfy the spikes).
> Amazon's system for finding related products is eerily accurate.
Amazon's system for finding related products fucking sucks. After I buy an X, I see ads for more X for months, which is completely useless. Sometimes I'll see ads for the exact thing I bought... what?!
Yep, I see this all the time too. Buy a printer? Amazon thinks you want to buy a new printer every day for months.
Google too. I bought a Pixel 2 online, from Google, signed in to Google. Now I see ads on Google ad networks all day long to buy a Pixel 2. It has been 5 months of daily ads from Google to buy the phone I just bought. Ridiculous.
Ad networks are a clear financial win for AI - but they also show the ridiculousness and are clear windows into the failures so far.
This. For all the talk about how Facebook is mining data to hyper-target customers, it still seems like they're throwing stuff at the wall to see if it sticks.
That is amazing to me. When I did inside sales for a VAR in college in the 90s, we wrote a little app that would add a note to the customer's account a few months after a printer installation, reminding us to ask about toner.
It had a high conversion rate and better still got us referrals to the head executive assistant (who was usually the boss's assistant).
I find that particular, oft-repeated bug fascinating. It's a known issue, but one that seems to spread across many vendors and ad networks. I have to assume there's something very stubborn about the system design that makes it harder to fix than it feels like it should be.
(One hypothesis: Perhaps for privacy reasons or Amazon-not-wanting-to-give-away-the-whole-farm-on-sale-conversion reasons, ad networks only have visibility onto what you've seen, not whether you actually closed the purchase).
What I normally do is search Google on a Monday morning for some "cool" thing, like "drone" or "camera", and then enjoy those "targeted" ads all week, which spares me quite a few ads for "random" things I am not interested in, like toasters, diapers, and the like.
Not that I am actually interested in drones or cameras.
I'm not sure that things like Amazon's "Customers who bought this also bought that" is particularly amazing or clever; that ain't rocket surgery, you know! Sure, every once in a while I'll go "Yep, I probably need me some of that, too", but mostly it's "Nope - I've already got plenty of that, thank you. Moving on ..." And like everybody else, I too get sick of being pitched even more of this, or different variations of this, after I've already made my purchase. I'm sure that all of it does lead to at least an incremental increase in sales, though, so it serves its purpose.
A while back I saw an article on the success of an "A.I." that was being used for marketing purposes. And as I read through all of its supposed "breakthrough suggestions" I thought to myself "You know, you could have saved yourself an awful lot of time and trouble and money if you'd just read pretty much any sales and marketing book from, say, the last 100 years or so." Because there was really nothing there that wasn't already pretty much common knowledge within that realm.
I work on Amelia at IPSoft, things seem to be going well.
Customer Service is a huge, overlooked industry mostly based around classifying requests and then performing a rote action, which is a great use case for AI.
I used to work at IPsoft as well. I think the big difference is between "ML as a tool to solve 80%+ of relatively standard customer and employee service requests" vs "we've created self-reasoning AI that will solve all your problems".
In my opinion, sales at IPsoft faced some of the same challenges initially (selling the idea of AI vs selling a well working solution for a specific problem), but has become more focused now that they have proven use cases in the field.
As you say, service desks are a great opportunity for ML. They're seen as cost centers, typically plagued by high attrition of employees, provide inconsistent service levels, and spend the majority of their time on requests that are relatively standard. Still, it is not an automatic effort: integrating with back-end systems, creating a dataset to start classifying incoming requests, and defining the processes to handle them are (mostly) still manual work.
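For anyone curious what "classifying incoming requests" boils down to in practice, here is a deliberately minimal sketch (the tickets, labels, and categories are invented; a real deployment needs far more data plus the integration work mentioned above):

    # Toy service-desk router: TF-IDF features + a linear classifier (scikit-learn assumed).
    # Tickets, labels, and categories are invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    tickets = [
        "I forgot my password and can't log in",
        "please reset my account password",
        "my laptop screen is flickering",
        "the monitor won't turn on",
        "need access to the shared finance folder",
        "requesting permissions for the HR drive",
    ]
    labels = ["password_reset", "password_reset",
              "hardware", "hardware",
              "access_request", "access_request"]

    router = make_pipeline(TfidfVectorizer(), LogisticRegression())
    router.fit(tickets, labels)

    print(router.predict(["cannot sign in, password expired"]))  # ideally: password_reset

Once a request is routed to a category, the "action" side is mostly ordinary automation against back-end systems, which is where the real project effort goes.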
Not sure what you are using, but speech recognition is amazing on my android device and has been for years. I rarely ever type anything into my phone anymore. It's a usability gamechanger.
Anyway, that tangent aside: considering how many of the individual elements in a phone tend to be sourced from Asian countries, I guess by not using my phone I'm actively subverting and appropriating Asian culture(s, because yes, I know there's more than one, and I was getting too into character)—
Anyway, I know that totally isn't your implication; I was just taking the expression I perceived, and the meaning I inferred from it, to its logical extreme to demonstrate my distaste for this whole white-male-native-speaker archetype. Sure, I would certainly count myself as one, but those are just a bunch of adjectives serving to do nothing more than elucidate what it means about you to think these things. I can certainly tell you that I, and the majority of white male native-speaking individuals, are nothing like the stereotype you're trying to evoke. We aren't all scientists, we aren't all engineers, we aren't all beauticians, and no single one of us lives in a world that is solely their own. Every part of society and the corpus of human knowledge has really just been a super long, drawn-out game of telephone where the telephone line never terminates and just loops back around. That, and there are several telephones attached to it in such a way that physically proximate ones will simultaneously reproduce the vibration traveling across the line. Perhaps a bit abstruse,
but I think it brings home an important point: namely, I don't think a white person had their hands on this device between it being sealed and being opened at the store.
ML is a "big" application at the tools level. Categorization in particular.
You're already seeing it in product. Consumer level security video equipment with human detection is pretty accessible now. I can ask my iPhone for pictures of my kids in snow in 2014 and get a pretty good output. Enterprise level categorization of photo and video is a thing.
"Smarter" machines are just like smart people, by themselves not very exciting. But give them a purpose or application, and things get exciting.
For safety reasons, self-driving cars on the road don't rely on ML as much as you may think. It's much better to run algorithms with provable guarantees rather than black-box neural nets on a multi-ton vehicle. When the judge asks your company if you did everything in your power to prevent the death at hand, it helps to point to decades-old, well-proven techniques rather than brand new, unproven, untested trends.
> Self driving cars? Which are not yet in production
Yes they are. What do you mean, not in production? Maybe there aren't millions of fully autonomous cars on the road today, but there will be soon. They sure are in production. Companies like Tesla are making production hardware self-driving cars right now - full autonomy coming soon, of course.
> Speech recognition? Which still seems rather bad to me.
Speech recognition is 100% amazing. The phones never ever hear me wrong any more, not ever, not once. The main problem I have with it is interpretation, which can be shocking, shocking bad. (Ask Google for a joke, it'll tell you one and then prompt to you ask "one more". As soon as you do, it'll say "one more what? I don't understand", which is pretty funny.)
> Is there some big application if ML that I’m missing that is a clear win?
Well yeah. I would say millions. ML has been embedded into the world now and makes nearly every daily interaction with technology better than it was before.
> Yes they are. What do you mean, not in production? Maybe there aren't millions of fully autonomous cars on the road today, but there will be soon. They sure are in production.
Huh? They are in production but not yet in production, but they sure are in production? Can I go to the store and buy one? No. So they are still building the things!
> Speech recognition is 100% amazing. The phones never ever hear me wrong any more, not ever, not once. The main problem I have with it is interpretation, which can be shocking, shocking bad.
I simply do not believe your first point, based on my own experience. 100%? Really? Moreover, your second point, that the interpretation is off, is most of the problem. That's where 99% of the work for the foreseeable future will be spent on this problem.
Beautifully written, perhaps, but unusually so? It seems pretty baseline to me. It certainly doesn't seem to meet the bar of being worth reading regardless of its argument. I mean, if you only read Twitter all day, then OK.
> People learn from conversation and Google can’t have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.
At first blush he sounds like my technologically semi-literate grandma who would definitely conflate Siri and Google as being part of the same grand internet program. I had to read this twice to understand that in saying "it can pretend to have one using Siri", he meant that asking Siri a question sometimes redirects to a Google search, but wrote it in a way that personified Google as the actor with intent in that transaction. What an odd and paradigm-breaking way to look at that.
The problem is that even internally, there is this notion of "sprinkling a little Watson to do the hard jobs" and then 'poof': problem solved. Marketing reflects internally, too.
Is it really? I don't see that at all. Rather I see AI as finally being entrenched in normal, everyday products and services. AI is here to stay and the hype hasn't even started yet, at least not compared to what's coming.
Billions of people own devices that they can talk to, that can talk back, that can translate between more languages than humans can, that know facts about you that you didn't even know yourself, etc.
Basic AI has come to be an expectation of many consumer products now. And real AI is coming faster than ever.
Fake audio and fake video, generated from computers/AI is here. Self-driving cars may be just a domain-specific expertise, but it is still AI by any traditional definition.
AI is not reaching any kind of trough of disillusionment that I can tell. We're still obviously just getting started with what can be done.
While there are domains where AI is having successes, the current general expectation is far ahead of where the technology is. People think (and in part due to the marketing like IBM is putting out) that ML/AI can do things that aren't possible yet.
There may be a reset in expectations soon, which will lead people to become more pessimistic about the claims being made and the marketing. Once we are through that, it will be easier to get people to understand which use cases align with the available technology, and implementing ML/AI will become more productive. Aka the trough of disillusionment followed by the plateau of productivity.
Yeah, I guess you’re right, too :-) It’s definitely true for ML. However, I was mainly thinking about the overhyped “AI can do anything” sort of nonsense that is mostly spread by people who don’t have any clue about CS
Oh man, you can say that again. Look ma, I can perform linear regression on a bunch of data, I'm a data scientist!!! Even better, I just heard a new term, 'IA': a local tech meetup has a presenter with the headline
"discussing an Intelligence Augmentation (IA) approach to human-assisted artificial intelligence."
I disagree. I read this article a few years ago. It is true about IBM, but other companies have significantly more interesting and useful applications for ML right now.
I would only refer to Episode 4-5 of Silicon Valley in Season 5.
This will be the mistake we make when we introduce a technology we think is smart enough to make decisions for us, and it turns out all it does is read words faster than we do and interpret them literally.
Even the closing statement, "AI winter is coming soon", is something I can see Watson having a problem understanding, even with context.
Something feels wrong about the assessment of the second block of text being written by a human.
I feel some conflict in my head that someone would talk about Bob Dylan as "overstepping" a claim about his prominence and then concede that he does belong in a "Top 10 Bob Dylan Protest Songs" list. Of course he belongs in a Bob Dylan list; he is Bob Dylan.
Watson's marketing is obnoxious, but it's not just Watson. There is plenty of bullshit, ignorance, and pseudo-intellectualism to go around. Mind you, many of the technical fruits of AI itself, properly understood, are not bullshit (the name "AI" is misleading IMO; I wouldn't be able to tell you what distinguishes AI from non-AI because it seems largely a matter of convention rather than a substantive difference). The field offers plenty of useful techniques for mechanizing things people have had to do previously. However, the very idea of a "thinking computer" is unjustifiable and superstitious. There's too much sloppy, superficial thinking.
The author of the article mentions concepts and indicates a distinction between them and word counting. Certainly, there is a difference between word counting and conceptualization, and it is patently obvious computers don't do the latter. But it's worse than that. Technically, computers aren't even counting words. They aren't even counting, nor do they have any concept of a word (we count words by first knowing what it is that we should be counting, i.e., words). What we call word counting when a computer does it is a process which produces, only incidentally, a final machine configuration that, if read by a human being, corresponds to a number. The algorithm is a proxy for actual counting. It is a process produced by thinking humans to produce an effect that can consistently be interpreted as the number of words (tokens) in a string. That's not thinking. There is zero semantic content and zero comprehension in that process, and no number of tortured metaphors or twisted definitions can change that. AI, as it becomes more sophisticated, is at best a composition of processes of the same essential nature. No degree of composition -- no matter how sophisticated or complex -- magically produces thought any more than taking sums of ever more composed and expansive sequences of integers ever gets you the square root of two. It's not a mystery.
IBM's customers probably understand that Watson (the jeopardy-playing bot) isn't really relevant, and that what they're buying isn't a pre-written software package so much as software consulting services. But there's still a serious problem, which is that a customer of Watson would reasonably believe that they're getting the team of engineers that solved Jeopardy. In reality there is no overlap whatsoever in personnel between Watson-the-PR-project and Watson-the-thing-you-hire.
Watson Services are very much a product you buy/subscribe to. Just like the Machine Learning services you get from Microsoft or Google. You can use them pretty easily with absolutely no consulting at all.
I always feel that this is one step away from that famous quote, "The greatest trick the Devil ever pulled was convincing the world he didn’t exist," attributed to various (https://quoteinvestigator.com/2018/03/20/devil/). In this case, the greatest trick is convincing that it does exist... and maybe is the harder one.
I don't have gripes with this marketing approach and I read it more as "expect to be surprised by what is possible" rather than harp on what isn't .. at least not yet. For a comparison, it is like apple branding their display tech as "retina display" to communicate the intention (you can't tell pixels apart), possibilities (you can now use any font) and quality rather than any claims about mimicking the eye.
Watson is for helping decisions at large corporate customers: The CEO has heard of Watson. He saw it on a screen in the VIP tent at the golf tournament, and thinks it's neat. The CIO feels safe with "Watson" in an RFP response from IBM because the CEO thinks it's neat. IBM is happy with this pettifoggery because it keeps the SOW vague and open to maximizing revenue from the project.
I think the biggest concern is that the general public outside of this industry think that AI and machine learning is Hey Google not understanding them and their bank working out you’ve run out of money – which they already know. When AI _actually_ arrives, they’ll be bored, ignore it, and then, well, I guess it’ll know and take over the world. We’re all doomed.
>A point of view helps too. What is Watson’s view on ISIS for example?
>Dumb question? Actual thinking entities have a point of view about ISIS. Dogs don’t but Watson isn't as smart as a dog either. (The dog knows how to get my attention for example.)
Ouch. But at the same time, for something that didn't grab his attention he sure had a lot of words to say about it.
>...counting words, which is what data analytics and machine learning are really all about
The piece is welcome anti-hype, but, what? How can a true expert in the field say something like this? Or, maybe I should tell my colleague who is working on ML for diagnostic radiology to think of voxels as, uh, words?
> I started a company called Cognitive Systems in 1981.
Ahh. So just from the article, his gripe is of the “begs the question” sort — he’s not pleased with the evolution of idiom. Since he was doing “real AI” back then, who are these frauds to claim they are doing AI?
His point may or may not be valid, but his specific argument is quite weak. He notes that even a person, an actual intelligence, wouldn’t know what Dylan was singing about without context. He goes on to presume that Watson doesn’t have context, but who’s to say? Watson could certainly read all the articles about Dylan that he so helpfully cites, and come to “understand” the songs. And maybe Watson has.
If you follow the links to his academy, you divine a bit more of the motivation.
His software development course provides “a unique automated mentor, employing natural-language processing technology derived from our decades of artificial intelligence research”. He is desperate to stay away from calling it actual AI, yet can’t resist implying that it is. This is probably most irksome to him.
He specifically addresses that he _wasn’t_ doing “real AI” then, and given they’re doing fundamentally the same or similar things, IBM aren’t now either.
Yes, that is my point (sorry that I mis-stated it a bit). He is offended by the claims of these charlatans when he knows the technology hasn't advanced to the point of being AI. How dare they claim otherwise.
Also, he knows that they know it's a lie. Unforgivable.
But he's fixated on the literal (once-true) meaning of "AI". It doesn't mean that anymore, not to the lay person and not to the average technologist either.
Like "begs the question", one has to just get over it.
I remember seeing "Watson" mentioned in the news like five years ago, but besides a couple HN threads, I've not seen it mentioned since then. Am I missing something (besides TV, which I don't watch)?
Isn't it more or less common knowledge now that Watson is all marketing buzz? The Watson that IBM is selling CIOs on is a very different thing from what was seen on Jeopardy.
I am not even convinced that understanding happens at a biological level. I think there is still a lot that can be done to model aspects of human reasoning in software to produce useful results, especially if it is married with machine learning, but I don't think we will get there by looking at biology.
Parent links to Yann LeCun's "Text Understanding from Scratch" paper, from 2015, where the authors use a conv-net originally built for image recognition to do text categorisation.
The NN technique falls squarely into the "counting words" bracket, although this one is actually counting characters.
It is a great paper, with great results, but none of those models therein have an opinion on ISIS, an ability to converse or anything the author of TFA calls cognition.
I do agree that Watson seems oversold, but the evidence in this article (Watson's shallow opinion of Bob Dylan, compared to the author's opinion of Bob Dylan) seems kind of weak. I was hoping for some insider information on the implementation of Watson, but unfortunately there is none.
that's just auto generated... I saved the bookmark in 2016... could be even earlier. I don't see any dates. Wayback machine has its first saved copy in May 2016 which matches my bookmark.
Normally my comments are very sceptical, but this article is just spot on.
The problem is not only context or subtext, it is even worse from an entirely predictable standpoint:
Consider a large corpus of text (books, articles, ...).
1) Concepts that are WELL UNDERSTOOD BY HUMANS will not be explained by humans when they reference them: the word or concept "pet" (as a verb) will show up in many sentences referring to the petting of cats, dogs, horses, ... and ML will correctly predict the conjunction of "pet" with any of these words in a sentence. One can even train ML to confabulate realistic sentences, in the sense of echolalia. Consider then the sentence:
"The lady pets the cat"
The computer will recognize that the presence of "cat" is not surprising (bingo!). The computer will have no idea that this probably involves one or more cycles of the lady's hand gently pressing down on the fur belonging to the cat, then, while still pressing down, moving the hand in the 'natural direction of the hairs' (NOT the other way around), probably from closer to the head towards the tail, and probably lifting the hand before repeating the cycle, so as to massage the cat or perhaps to remind the cat of its time as a kitten being licked by the mother cat.
No book or conversation in the corpus will give this detailed description exactly because humans expect each other to understand this.
2) Concepts that are POORLY UNDERSTOOD BY HUMANS will be vigorously (but often erroneously) explained by humans communicating to each other what they think is going on: endless texts about religion, sexuality, perpetuum mobiles, economy, ...
How do we even expect the computer to produce a sane result, even if it correctly guesses the context?
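To make the "counting" point concrete, here is a toy co-occurrence counter (my own illustration): it is roughly why "cat" is unsurprising next to "pets", and it obviously encodes nothing about hands, fur, or strokes.

    # Toy co-occurrence counting: "cat" is unsurprising next to "pets",
    # yet nothing here encodes what petting physically involves.
    from collections import Counter
    from itertools import combinations

    corpus = [
        "the lady pets the cat",
        "he pets the dog gently",
        "she pets the cat every morning",
        "the cat sleeps on the mat",
    ]

    pair_counts = Counter()
    for sentence in corpus:
        words = set(sentence.split())
        for a, b in combinations(sorted(words), 2):
            pair_counts[(a, b)] += 1

    print(pair_counts[("cat", "pets")])  # 2: co-occurs often, so "cat" is predictable
    print(pair_counts[("fur", "pets")])  # 0: the physical detail is never stated

Word embeddings and language models are far more sophisticated than this, but they are trained on the same observable surface: what appears near what.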
That said, I do believe relatively helpful natural language processors to be possible, but they will have to be vigorously trained by multiple human curators individually analyzing a sentence and trying to find (probably true) statements about what a sentence implies:
Starting again with:
"The lady pets the cat"
One curator might mention hands touching fur while moving.
Another curator might add that one can also conclude the hand probably presses down on the fur.
Yet another notes the sentence also implies the lady is still alive, for else she would not be able to pet.
The first curator now adds that the cat as well is probably alive, for else the lady would probably not want to pet the cat, since massaging is useless to a dead cat.
The third curator now mentions that the sentence implies one or more cycles of an individual pet stroke.
Etc...
As you can see this quickly becomes an expensive operation.
Now one might train an adversarial neural network to look at a sentence (or sentence with context) and a list of probable conclusions, to predict if the list of valid conclusions is complete or incomplete. And then only send the incomplete ones to humans?
What is the point of changing the headline of an article like this? The headline was an accurate summary of the contents ("THE FRAUDULENT CLAIMS MADE BY IBM ABOUT WATSON AND AI"). Maybe you agree, maybe you don't. But the current headline is just wrong.
From the article:
> I will say it clearly: Watson is a fraud. I am not saying that it can’t crunch words, and there may well be value in that to some people. But the ads are fraudulent.
That's what this is about. Not "Claims made by IBM about Watson and AI."
I changed it in a bit of a rush earlier. Note the word "unless" in the site guideline: "Please use the original title, unless it is misleading or linkbait." "Fraudulent" in the title was guaranteed to provoke objections—you might not consider it misleading/linkbait but other readers would.
Normally we look for more accurate and neutral language in the article text to use as a replacement title. Now that I'm not in a rush, it's clear that the subtitle contains such language, so we'll use that instead.
I wish clickbait were better defined. In this case, I feel the original title summarizes the entire claim made in the article. Using the title "Claims made by..." sounds like it'll be an article summarizing the claims made by AI and Watson, rather than a push-back on those claims.
I'll go ahead and define clickbait using a simplified and possibly anthropomorphized version of information theory.
First:
- The more improbable a message is, the more information it contains, assuming the message is true
- If the message is untrue it contains no information
- If a message is already known by everyone it contains no information
Not Clickbait:
If a message is surprising (seemingly unlikely or previously unknown), and true, then it contains high information, and will be very likely to be clicked. This is not just a good thing, it's the most optimal thing!
Clickbait:
If a message is surprising and untrue, it will also very likely be clicked if the user cannot easily determine that the message is untrue. The user may then be disappointed when they discover the message actually contained no information because it was false. Incidentally, false messages will always have a high probability of appearing to be high information messages, because they will often appear to have the least likelihood.
Incidentally, this is (in my opinion) the theoretical problem of fake news. It will always appear to be high information to those unable to determine if it's true or false. In other words, it will appear to be of the highest value, when really it has no value (or even negative value if you look at the system level rather than just information level).
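In the standard information-theoretic framing (which is roughly what I'm gesturing at), the "surprise" of a message is just its negative log-probability, and a message you can't trust delivers none of it. A quick illustrative calculation (the probabilities are invented):

    # Surprisal of a message in bits: -log2(p). Rarer claims carry more bits,
    # but only if they turn out to be true; a false one delivers nothing.
    import math

    def surprisal_bits(p):
        return -math.log2(p)

    print(surprisal_bits(0.5))   # 1.0 bit: an unsurprising headline
    print(surprisal_bits(0.01))  # ~6.6 bits: a very surprising headline
    # If the surprising headline is false, the reader gains none of those bits;
    # that gap is the clickbait (and fake-news) effect described above.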
I disagree, and do not believe the title is linkbaity/sensational at all, and find it problematic that the title was changed. It is an accurate description of the author’s position, which is that the claims made by IBM are fraudulent. Holding a position isn’t linkbaity or sensationalism.
A linkbaity or sensational title would have been “you won’t believe what IBM claims”.
Exactly this. I've been uncomfortable with this trend for a while now, where everything gets labeled "clickbait" and there's a thread of resentment if an article does anything other than lay out dry facts as efficiently as possible.
The pendulum has swung too far. This wasn't clickbait, it was a fair and accurate summary of the full piece. Flag the entire post if you think it's inherently inflammatory, downvote, ban it if you're an admin, but this title had no business being edited.
Perhaps a more NPOV [neutral point-of-view] way to reframe the original would be "Claims made by IBM about Watson and AI considered fraudulent" - thus it is no longer advertising an opinion as implicitly true in the headline, but simply representing the article for what it is.
I'm sure you're right about the why, but I don't believe this was a judgement call, I believe it was wrong. The current headline is misleading (if the purpose of a headline is to provide information about the contents of the full post).
Questions about whether the entire article is misleading should be addressed via discussion in the comments or, if it's egregiously wrong, flagged/downvoted/removed.
You should hear the CEOs of big companies bragging with big buzzwords like "AI" and "cognitive".
The truth is very few people understand the current state of AI and its limits. Most of the fancy papers on arXiv are based on NNs, which are just essentially vector algebraic math.
Yes, we made huge leaps in image labeling and speech to text but current AI is a far cry from the full abilities of human brain reasoning and comprehension.
The reality is people will say whatever to make a sale / get VC funding.
The current economy is running on greed. When the bubble bursts, people will get hurt. Then the economy runs on fear. Fear that AI is hype.
IBM has always been about hype. They are riding on the jeopardy wave and exploiting the greed.
You can also argue that even the leaps we've managed to make in machine learning (a phrase I prefer over 'AI' not least because it includes the reminder-word 'machine') are mostly quantitative rather than qualitative leaps. Almost anything you can think of that's 'better' lately seems to be mostly thanks to 'more' -- more data collected in one place available for training/processing (thanks to large centralized surveillance companies), and the ability to process more data faster (thanks to faster hardware, and more of it, owned by those selfsame large centralized surveillance companies). We've seen a lot of optimizations and interesting research within the existing paradigm but we haven't seen a serious qualitative breakthrough the likes of which would justify this kind of hype.
"just essentially vector algebraic math" -- True, but only in the sense that anything done by a computer is just iterated Boolean logic. Yet, Wikipedia and Grand Theft Auto and Mathematica have emergent properties beyond the obvious things that Boolean logic can do. So the fact that neural networks are based on vector algebra doesn't prove that they aren't capable of impressive things.
I mean to say that there are tons of problems NNs aren't good for. If we are on the journey to discover a GAI super-algorithm that is smart enough to learn very quickly without a shit ton of data, I'd say we still have a long way to go.
Anyone claiming they solved cognitive reasoning with a bunch of NNs stacked together in a black box is probably lying to themselves.
Everything IBM says about Watson should be taken with a few tons of salt.
I don't know how much I'm allowed to say, but they couldn't even get it to work acceptably internally for some of the basic datamining and natural language processing that are among the things so highly touted in some of their TV advertising. This is with a gargantuan dataset compiled from years of relevant interactions to train on in the particular area of interest.
Re: Everything IBM says about Watson should be taken with a few tons of salt.
How is this different from nearly all other "AI" companies? IBM is not the only one guilty of hype. Neural nets can do specific things well, such as mirror a specific training set well, but still have major gaps in terms of what many call "common sense". It does smell like AI Bubble 2.0 brewing out there.
Question: how can the author know what Watson really is doing unless the author worked on Watson?
If you're going to try and explain the above to me, don't do it by explaining what Watson "really" is doing (unless you worked on Watson yourself). Explain exactly how you can tell the difference from the outside.
As far as I can tell, all the author did was point out that Watson made a mistake (saying that Dylan's songs are about love fading), and this is not enough, since humans make mistakes too.
I am not sure if the author did this (he certainly doesn't mention it), but a lot of what constituted Watson at the time of Jeopardy was published as a set of 17 papers called "This is Watson" in the IBM Journal Of Research and Development.
Other than that too, I think you can make a reasonable assessment of any NLP system at this level of abstraction by looking at what is cutting edge in NLP today (by following conference proceedings etc).
"Recently they ran an ad featuring Bob Dylan which made laugh, or would have, if had made not me so angry." Wait, is Roger Schank an AI trying to convince humans that AIs are impotent and harmless? Pretty sneaky, Schank-bot.
I worked on data systems that fed weather data into Watson.
IBM's technology, and IBM's marketing are very different beasts. The marketing is somewhat trivial, but the reality of ML and the Watson data systems is incredible. They are pumping a huge amount of data into it and they have data scientists doing incredible things with the data already. It is the largest growing segment of the company (and maybe the only growing segment of the company.)
The marketing of AI, and the realities of ML are always going to be disconnected. Sure, you will likely just get an email that says routine maintenance has been ordered on your elevator, not a box in the corner that tells you in a sexy, clear, non-robotic voice. As for Bob Dylan's songs... Christ. AI winter isn't coming unless the term AI gets squashed and people start calling it ML. This article is pretty FUD.
The problem is that all of this AI and ML is being used to target Ads better. You don't need AI and ML to do this, it is a solution looking for a problem.
Your example about routine elevator maintenance doesn't need AI at all. Whoever logs the elevator maintenance can just press a button to send a notification message. Hell, it can be wired up such that when you submit a notification notice, it automatically sends an email. They probably have condo management software that does this already.
The idea of the elevator maintenance from the IBM commercial is that it can use ML to detect that there will be downtime that could be solved by preemptively calling for some maintenance. I guess "routine" was a bad word choice.
Speaking for OP - I think the idea was that ML would determine that your elevator was in need of maintenance and place the order without human intervention.
Exactly. We already know when the elevator needs to be maintained. If it isn't being maintained, it's not because people don't know, it's because they are intentionally NOT doing the maintenance, maybe because it costs $120/hr minimum 6 hours.