‘The discourse is unhinged’: how the media gets AI wrong (theguardian.com)
126 points by jonbaer on July 28, 2018 | 72 comments



This sort of sensationalism, in general, is so completely ingrained into reporting that it's hard to separate from journalism itself. It's part of the journalistic style, an inevitability of the medium.

We see an article, click a link, buy a paper, but we are never committing to more than a second or two of attention at a time. The headline makes you read the byline. The byline gets you reading the second sentence. At every step, a journalist loses most of her readers. The winning strategy is to increase "story," play to existing opinions, be sensational, use bait... There are flavours and degrees, but by and large writing like this is an inevitability of the medium, of journalism.

If you are reporting on programs that "developed a type of machine-English patois to communicate between themselves..." you are, almost certainly, reporting on whether or not the Skynet singularity is coming.

If we want something different, I think we'll need a medium change.

Personally, I was hoping e-readers would catalyze a new medium subtype: 30-100 page mini books. A lot of "news cycle" journalism, the drip, drip, sensation of the day, just isn't a good way of understanding anything.

What happened between North Korea and the US this year? What's been happening with the Syrian war? Brexit as of July 2018...? I would be very happy to exchange the daily/weekly news bulletins for monthly/quarterly mini books.


Have you tried reading the 2-3 "meat" articles in the middle of the New Yorker each week? I find that these 8-15 page articles are hugely informative in comparison to the Associated Press funnel format and clickbait drivel that dominates the web these days; you'll notice a lot of them make the front page here.

Of course the New Yorker has restaurant reviews, events, and other sections that are New York-specific, but these are remarkably easy to skip regardless of medium.

https://www.newyorker.com/magazine/2017/10/23/welcoming-our-... (title is in jest)

https://www.newyorker.com/magazine/2018/06/25/the-reputation... (astroturfing and politics)

https://www.newyorker.com/magazine/2018/07/23/how-e-commerce...


I started reading The Economist last year for this reason. Their Special Reports are similar to what you're looking for. The New Yorker also puts out minibook-length articles.

Nautilus is another mag that I've been happy with for more science-oriented stuff. They put out monthly issues with a cohesive theme.


But most people don't want that, sadly.

Instant mass media has shown us that people want to consume media about other real people -- that they admire in some fashion -- going through life. Not reporting, explanation, facts, or science.

So reporters have responded with tweets and instant feedback on anything that's happening. Many times even when nothing is happening, they know they have to put out some kind of personal emotional response to keep the audience.

Newspapers and in-depth reporting worked for the most part because nobody really cared about the author. They cared about the material. That's flipped upside down now. The material doesn't matter as much as the author. Even the really good occasional long-form material you might find is all geared to get you into a personal, minute-by-minute "relationship" with the author. To become part of the tribe.


I have a streak of journalism about me, and when I found malware on a cheap, mass-market tablet a year ago I used my soapbox to great effect, forcing a promise from a CTO to rectify the problem. Search on "Barnes Noble malware" if you're curious.

Anyway, that was my only writing that really went viral, and it was not about a person. It was about a corporation, a device, and a set of security policies.

Anyway, I've written incendiary things, informative things, and I guess some bad things.

I now understand the basic math behind "stochastic gradient descent." Maybe after some practice, I can help.
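
For what it's worth, the basic math fits in a few lines. A minimal sketch, with toy data invented for illustration: least-squares regression fitted by stochastic gradient descent, stepping along the gradient of the loss on one randomly chosen example at a time.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy linear data

    w = np.zeros(3)
    lr = 0.05                                     # learning rate
    for step in range(2000):
        i = rng.integers(0, len(X))               # "stochastic": one random example
        grad = 2 * (X[i] @ w - y[i]) * X[i]       # gradient of (x.w - y)^2 w.r.t. w
        w -= lr * grad                            # "descent": step against the gradient
    print(w)                                      # ends up near [2.0, -1.0, 0.5]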


Oh I don't know... there are wants and there are wants.

I wouldn't put it down to some moral failing of humanity. The problem is more of a systemic one than a fundamental one, I think.

The way we use our phones/computers, our time and such... it adds up to a system where decisions are instant and reflexive. It adds up to a medium, with all its trappings.


How true is this really? So many times we hear "people are stupid now, they don't want to do science or read facts." I'm sure that sentiment has been expressed throughout history.

Yet a great many of my millennial colleagues are the most progressively scientific: rejecting religion, embracing global warming legislation, very strongly against anti-vax movements. How many people really read the newspaper back in the day? Every period piece shows the adult male reading the paper. No one else. Do you think 26 year olds back then just read the news all the time, and now it's different and no one does?

> So reporters have responded with tweets and instant feedback on anything that's happening.

It has more to do with managers and companies pushing reporters and journalists to feed page views. That's it. It's not reporters responding to the end user. It's reporters responding to the fact they don't have good jobs anymore and have to generate ad revenue.


For global political analysis, read Foreign Affairs. Long-form analysis as far removed from click-bait journalism as I think is possible.

https://www.foreignaffairs.com


Then read the Economist.


‘The Gell-Mann amnesia effect is a theoretical psychological phenomenon, the term itself being coined by author, film producer and academic Michael Crichton after discussions with Nobel Prize-winning physicist Murray Gell-Mann.

Originally described in Crichton's "Why Speculate?" speech, the Gell-Mann amnesia effect labels a commonly observed problem in modern media, where one will believe everything they read from a journalist even after they come across an article about something they know well that is completely incorrect.

The conclusions found and perspectives portrayed by the author are entirely erroneous, often times flipping the cause and the effect. Crichton notes these as "wet streets cause rain" stories.

In short, most eloquently put by Thomas L. McDonald, the Gell-Mann amnesia effect defines the idea that "I believe everything the media tells me except for anything for which I have direct personal knowledge, which they always get wrong."’

https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect


I think this generalizes as well. People believe what they're told, even if it's coming from people with a poor history of speaking the truth, whether it be misunderstanding or lies. Further, when they realize they've been lied to, they're often bad at going back and invalidating their cache and fixing assumptions based on that bad data.

It's really hard: you are told something or assume it, you internalize it, build on top of it, and then when you realize that you've built on a falsehood, it's easy to remove that piece without understanding what load it was bearing in your tower of thought. I've found myself doing this all the time when doing designs. I've had to train myself to go back frequently and retrace the logic I followed, being careful not to fall into the same pitfalls and re-introduce the same flaws, checking that the tower still holds up to inspection as if those components had never been introduced.


I'm not sure it's true though. Look at any polls about trust in the media. It's been falling for a long time. People are becoming aware that journalists are systematically unreliable and it's having an impact.


And unfortunately the alternative people have found is random hearsay on the web. The dieticians lost our trust and now we're just eating junk food instead.


They've been finding random hearsay for a while now. Tabloids had a huge audience, and they're the kind of newspapers many were reading for years. Is the Daily Mail or Sun or Mirror or Express more accurate than some clickbait farm? Probably not.

Most people weren't exactly reading [insert name of reputable publication] for their daily news; it was low quality tabloids with a helping of TV and radio pundits.


> Tabloids had a huge audience, and they're the kind of newspapers many were reading for years.

Outside of NYC, the US has basically no daily tabloids.

> Most people weren't exactly reading [insert name of reputable publication] for their daily news

Most people in the US that were reading any daily news were probably reading the dominant local newspaper or one of the larger reputable metropolitan or national papers, because that covers pretty much all the daily written news outlets they would have had access to.


Hmm, you may have raised an interesting point there, namely that the quality of the media is very location specific and some countries had better media outlets than others as a rule.

Unfortunately, it seems coverage of journalism and its decline and whatnot is very US-centric, which likely confuses the hell out of people from other countries whose media outlets got worse much earlier on or didn't have a golden age in general. Kind of like how US media outlets and YouTubers assume the Great Video Game Crash was some worldwide thing, whereas it really only affected the US market and left the European and Japanese ones almost untouched.

It may also explain some of the comments on local journalism, since that stuff over here in the UK isn't exactly fantastic itself. Maybe there's more ambition in local US newspapers than in UK ones, in that they actually did some journalism at some point rather than merely covering what local businesses were up to or whatnot.

Huh, guess there's a cultural divide in how the news is presented too.


I'm not sure how often the National Enquirer and its supermarket checkout line cousins publish, but they are certainly available and fairly popular (very popular historically) outside of NYC.

Our disagreement could be a dialect thing though: when I say tabloid I mean a printed paper that consists entirely of stories about UFOs and lurid celebrity gossip. I believe in other parts of the world tabloid has something to do with the format of the paper, and the content is considered somewhat but not totally disreputable.

I'm not sure which meaning the other posters are using.


> I'm not sure how often the National Enquirer and its supermarket checkout line cousins publish

Both common local tabloids (which often have basic journalistic standards, though they frequently have...interesting...editorial viewpoints) and the supermarket tabloids like the Enquirer are weeklies.


Yes but newspapers are basically random hearsay on paper. The web is not necessarily less reliable and can often be more reliable.

Dieticians lost trust because they kept changing their advice about what to eat and clearly aren't reliable either. That's unrelated to newspapers.


FWIW, I think a new alternative has been brewing for a while: long form interviews on YouTube and podcasts. Whether you’re a fan or not, I think Jordan Peterson has some pretty insightful things to say about this on his last interview with Joe Rogan.


I think the internet may have shattered the Gell-Mann amnesia effect. People can now hear how inaccurate the media are about every topic, since experts in various fields are coming forward and calling them out online. The big thing that kept the effect going was that people didn't have an easy way to find out the truth about various specialist fields outside of the media.


People tend to accept what other people say as true on some level. Journalists, con men, and religions benefit from that same bias. Consider that the odds of any one random thing (say, a tiger in a top hat sitting on the other side of a random door) being true are very, very low.

However, even if I think someone is lying 99.99% of the time, when they say a tiger in a top hat is behind the door, that still raises the odds immensely.
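
A quick illustration with Bayes' rule (all numbers invented for the example): what matters is the likelihood ratio, and a 99.99% liar almost never picks this specific lie, so the claim still multiplies the odds enormously.

    # Hypothetical numbers, purely illustrative.
    prior_odds = 1e-12                  # odds of a tiger in a top hat behind this door
    p_claim_if_true = 1e-4              # speaker tells the truth 0.01% of the time
    p_claim_if_false = 0.9999 * 1e-9    # a lie, AND this exact lie out of ~a billion

    bayes_factor = p_claim_if_true / p_claim_if_false
    posterior_odds = prior_odds * bayes_factor
    print(f"Bayes factor:   {bayes_factor:.3g}")    # ~1e5: the odds rise immensely
    print(f"Posterior odds: {posterior_odds:.3g}")  # ~1e-7: still unlikely, but far less so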


Or they're being told that all news is fake news in order to convince them to switch to the official state apparatus for information disbursement.


The media made its bed and now it can lie in it.


NPR?


“A lie told often enough becomes the truth.” Lenin


Ironically that is a misattribution, see https://skeptics.stackexchange.com/a/32944


No wonder influencers get paid so well ...


Success in developing a working AI application is not that big a step of progress for AI in general.

It would be more interesting if we could understand what happens in those black boxes, and synthesize it. All we see is showcase projects and services, but never something you can run on a client computer.

It really looks like machine learning is just brute-forcing problems until you have a partial solution to a problem, without understanding how it works internally. Granted, it's progress, but why isn't it possible to use the data of a trained ML network for further analysis? I see many models of learning, but not a lot of analysis or simplification of the resulting data.
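
In simple cases, at least, a trained model's parameters are just data you can inspect. A minimal sketch (toy data invented for illustration): fit a linear model, then rank the weights to see which inputs it actually relies on.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))
    w_true = np.array([3.0, 0.0, -2.0, 0.0, 0.1])   # only three inputs matter
    y = X @ w_true + 0.1 * rng.normal(size=500)

    w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)   # "train" by least squares
    for i in np.argsort(-np.abs(w_fit)):            # rank inputs by weight magnitude
        print(f"feature {i}: weight {w_fit[i]:+.3f}")
    # Near-zero weights identify inputs the model effectively ignores;
    # those are candidates for pruning or simplification.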

I would have thought AI would help science understand what intelligence is, but obviously it's always money first, science later. You often see a lot of tools and models, but not a lot of good insights.


AI doesn't do lateral thinking that well yet. It's a meta problem: thinking about thinking requires self-awareness.

But saying it's just brute force is missing the larger point; much of evolution is brute force.

Of course it's money or power first; that's how you pay for these things to begin with. With utility we will get more and more conscious AI. It's obviously going to take time, but a blip compared to the time evolution has used.


There's this continual refrain from AI people: "well evolution is just brute force too", or "the brain is just doing complex statistics too" as if that explains anything at all. There are different ways of implementing "brute force" and "complex statistics" - what matters is the implementation, not the terminology. Clearly AI is not even remotely close to "thinking" or "awareness" in the human sense. We don't even understand the implementation of those things in humans. It is ignorant conceit to think we will just magically stumble upon the answer with zeitgeist machine learning techniques.


> We don't even understand the implementation of those things in humans.

And we don't have to, unless you're insisting on something that might be referred to more aptly as "Artificial Human Intelligence". Generally speaking though, in the AI field, there's no particular belief that AI must work the same way human intelligence works. If AI research leads to discoveries that help us better understand human intelligence, that's a nice perk, but nobody treats that as the ultimate goal.

That said, of course it makes sense to try and model after human intelligence to the extent we can, since we are currently the best example of intelligence we have available to use as a template.

> It is ignorant conceit to think we will just magically stumble upon the answer with zeitgeist machine learning techniques.

Who is out there suggesting that we will "just magically stumble upon the answer with zeitgeist machine learning techniques"? From where I sit, it seems that most contemporary researchers who are focused on Deep Learning / Deep Reinforcement Learning / etc. are not talking much at all about "Artificial Intelligence" in the general sense. And the people out there talking specifically about "Artificial General Intelligence" (Ben Goertzel, Marcus Hutter, Pei Wang, etc.) certainly aren't claiming that all we need is the currently faddish ML techniques. See, for reference:

http://agi-conf.org/2017/?page_id=20

http://agi-conf.org/2016/schedule/

http://agi-conf.org/2015/schedule/

etc.


> but nobody treats that as the ultimate goal.

I remember watching Andrew Ng's course on ML, and he was often talking about the "AI dream".

I think that the goal of AI is to build machines that are progressively more intelligent. To build those machines you have to build an artificial form of intelligence, to further analyze and research what intelligence really means.

I don't think humans are really able to visualize a form of intelligence other than human or mammal/earthly life forms. Our intelligence comes from an evolutionary need to figure out things in order to survive, but it's an earthly version of how we evolved. With that said, I think we can already say that our definition of intelligence will always be biased, because we put our human intelligence on a pedestal, and worse than that, we won't be able to detect other forms of intelligence for those same reasons.


This quote is nice: “We’ve told stories about inanimate things coming to life for thousands of years, and these narratives influence how we interpret what is going on now,” Bell says. “Experts can be really quick to dismiss how their research makes people feel, but these utopian hopes and dystopian fears have to be part of the conversations. Hype is ultimately a cultural expression that has its own important place in the discourse.”

Hype is not necessarily wrong. I also think that in the end you won't get AI winters anymore. If it is advanced enough, AI will be used, and you won't get the genie back in the bottle.


> The result is dangerous

The complaints in the article seem to apply to crappy sensationalist journalism in general, and there is much more danger from that applying to immigrants, politics, war, and the like. If the Sun prints some nonsense about Facebook's bots, does it really matter? Nonsense about the enemy's WMDs, on the other hand, can cost thousands of lives and billions of dollars.


It does matter, yes. Journalists have inordinate influence on politicians; if an outlet (much more likely the Guardian in this case) is making nonsense claims about bots, it can easily lead to legislation, especially if it lets politicians feel they're doing something.


Printing nonsense is dangerous in general. By lying to people, you're screwing with their minds, and this has unpredictable consequences.


Media does not get AI wrong... media wants to create sensationalism that drives clicks, so it publishes stories to that effect. It happens for every topic people fear, which these days includes terror and AI...


Also, for the most part, the media is merely parroting the sensationalism from within the AI industry. It's not the media that is actually allocating millions of dollars to the possibility that AI might become self-aware and start attacking people.


The AI hype also seems to lead to sub-hypes in certain fields. E.g. in the legal profession there is a new buzzword called "Legal Tech". The hope is that AI-driven programs will eventually transform the field. While I think this is certainly possible and will eventually happen, it is astonishing how few programs there are amid the huge hype and countless workshops and conferences about the topic. And while it is possible to compile a list of programs and companies on the market, from my experience most of them aren't actually used in business.


I am a lawyer and a coder, and I have to disappoint you: that hype is very real indeed.

The problem is twofold: a) lawyers completely underestimate the decision-making abilities of software, even without AI; b) people who are not lawyers completely overestimate the complexity of legal resolution.

(a) happens because the software we lawyers use is basically... well... crap, and most lawyers are clueless when it comes to technology, even ones that specialize in it. (b) happens because TV and movies have created this fantasy legal world where lawyers, especially expensive ones, can prove that they are elephants by virtue of their ability to win arguments.

Surprisingly (and this was also a surprise to me when I started to study law), law and coding are much closer than people think, because there is a lot more logic than TV drama in a real courtroom.

Not only should AI have no problem resolving legal issues, judging from its current achievements, but it can help with the most valuable skill for a lawyer, which is pattern recognition. Our profession bombards us daily with tons of data that is extremely hard to organize and keep track of. This applies more to civil than criminal law, but most of law is civil law anyway (economic issues), and this data is documents used as evidence, case law (court cases that have created a precedent), and of course legislation.

Also, there is not much of an option really; AI is pretty much unavoidable, because the ever-evolving, immense complexity of modern society has made legal resolution so complex that court cases take up to decades to be fully resolved, which of course is not viable.

An example is IT law, which has been a huge struggle for courts and legislators to keep up with given its rapid evolution, in a profession where court cases and legislation take decades to move forward. In IT, decades are, in legal terms, centuries.

AI will replace lawyers, of that I have no doubt, because law is a dying profession anyway for the reason I explained above. Obviously lawyers will still be around for a long, long time, but yes, AI will fundamentally change the profession. The profession is in desperate need of modernization, as it has barely evolved these last few thousand years.

The problem was never what AI can do; it can do amazing things. The problem is supply and demand. AI is a field of huge demand and minimal supply, but then this is a problem that has rampaged through the coding profession, which is why freelance coders make more money than lawyers.


> Also, there is not much of an option really; AI is pretty much unavoidable, because the ever-evolving, immense complexity of modern society has made legal resolution so complex that court cases take up to decades to be fully resolved, which of course is not viable.

Instead of turning law into a computer game where the company with the most TPUs wins, why not simplify or reform the system so that human beings can understand it? Isn't the point of the legal system to resolve conflicts between people, not computers?

I'm worried that we're driving off a cliff of incomprehensibility, where things happen but nobody can understand why. Or even if they do, they don't have the authority to override the system which is making the decision. Reforming the system outright is always too risky -- it's been working OK so far, right? But what happens when it stops working? How can you fix a system you don't understand?


Simplification is an illusion. Simplification works great for understanding and learning, I completely agree, but it is terrible for problem resolution. Mainly because problems don't get simpler just because you want them to; secondly because the nature of knowledge, and of the world we live in, is one of immense complexity.

You are absolutely correct, though, that we are indeed driving off a cliff of incomprehensibility. I cannot count the times I have caught lawyers and coders (myself included) not understanding even basic concepts like OOP or legal responsibility under the influence of drugs and alcohol. It's not that the concepts are hard to understand, but they are so numerous that it becomes easy to lose track of where you are, where you were, and most importantly where you are going.

When I started coding back at the end of the 80s, coding in Assembly was not that hard. After 30 years of coding I decided to go back to Assembly and was just blown away by how much more complex it had become (though obviously not surprised), and of course I discovered that even Assembly coders mostly use C libraries because, well, otherwise it gets insane really fast.

My solution to this may sound insane, but in life I have learned that when I have a crazy idea, I usually end up being correct. I do believe that AI won't replace us but rather augment us. I am not talking about cyborgs, the singularity, and that nonsense; I am talking about software that helps you navigate through the chaos of information. And when I say AI I mean it in the most vague way possible; obviously the technology will change in the future in so many ways.

The only viable solution for the human being is to either find new ways to take advantage of the potential of his own intelligence or augment himself in some way.

After all, it's not a secret that AI is already used to construct AI, and this opens the door to a ton of potential. After all, isn't coding all about automating decision making? It's not as if we have not been trusting automated machines for thousands of years. But nonetheless humans are terrified of technology. The marvel of the human condition.


I guess the issue is the definition of problem. If the law reaches a point where you need AI augmentation to understand it, that's a good sign it's being applied to problems it can't actually address or which may not even exist at all.

Look at GDPR. It's impossible to know what it really means. Huge efforts are put into action with no idea of whether it will be considered good enough or not. That's not a problem you can fix with ai. You need better law (in this case, no law would be better)


I'm OK with a computer-assisted legal code so long as our policy makers recognize it for the public good that it is. If there is a standardized "legal robot" then everyone who is eligible to vote should have free access to it.


Humans can't understand the Law because of the sheer volume of it. The legal corpus is so large that even lawyers need to specialize in smaller subsets of expertise. Software could at least make the Law more readily available for citizens.


It's interesting, because as a coder and lawyer I have the exact opposite view. Legal technology is a lot like coding technology -- there has been stuff invented since the mid 1990s, but it's all of debatable utility (see Paul Graham's articles on Lisp). What are these tools the capacity of which lawyers are underestimating? To me, legal tech seems a lot like the talk about how visual programming tools were going to make coders unnecessary.

The law (at least, litigation) involves marshalling precedent and facts to achieve persuasion. Circa 1990s tools like WestLaw are still the gold standard for legal research. And technological tools for collecting, organizing, and synthesizing facts are basically non-existent.

There is, in fact, market validation of the idea that legal technology is not particularly useful. You might argue that defense lawyers have disincentives to adopt technology that would reduce billable hours, but what about the other side of the v.? Plaintiffs' lawyers working on a contingency basis have enormous incentives to minimize effort invested in each case. Yet they don't.


> You might argue that defense lawyers have disincentives to adopt technology that would reduce billable hours, but what about the other side of the v.?

The US has a vast oversupply of lawyers (sure, star litigators make lots of money because they are in short supply, but they aren't the people whose labor legal technology aims to replace), which makes the value of reducing legal gruntwork low. Plus, to replace legal grunts using WestLaw or similar tools, you've got to either build on top of one of the handful of such tools or duplicate its corpus of annotated data and the massive infrastructure dedicated to keeping it current before you even get to the novel part of your tool, or you won't have anything usable.

So, it's a super high barrier to entry for a market where you are competing with cheap, abundant human labor.


> law and coding are much closer than people think, because there is a lot more logic than TV drama in a real courtroom.

I've a friend who abandoned her engineering career at 30 to become an IP lawyer (she is incredibly senior in her firm now) who says the law is just a program that you run on a judge.


I have 10 years of experience in law, and my father is a lawyer; even though her remark obviously has a comedic basis, she is very much correct. Objectivity is a huge deal in the legal profession: the ability to separate emotion from critical thinking. Of course it's easier said than done, which is what creates the laughing part :)


Isn't it more like a ROP attack? You present data meant to cause the judge to follow a desired execution path through the relevant laws.


Can you name a legal tech product that is currently used and really improves productivity?

Generally I agree with many of your points, especially that tech *will* transform law. However, I'm not so sure how soon that will be.

I also studied law and I am a coder, and I see similarities between both. However, I think it will be a long time until a computer truly replaces a lawyer. There is one thing that I think is especially hard: not all rules are clear cut (like "if the company has a turnover >5 million, x applies"). Many rules have an element of judgement (in Germany we call them "unbestimmte Rechtsbegriffe", which roughly translates as vague legal concepts) that requires context knowledge, which is something that has not yet been achieved by machines (or at least I wouldn't know about it). While logic is undoubtedly an important part of law, judgement (especially judgement that requires broad context knowledge) and interpretation are another, and from what I have experienced, the latter can unfortunately often not be solved with the rules of logic, and thus it will be hard to bring a computer to do it.


My fuzzy mind went from AI and legal matters to googling for Stuart Sierra (just to see how Clojure is evolving nowadays); the link "is clojure dying" was irresistible, then appcanary on the transition from Clojure to Ruby and happiness for programmers, then Stuart's "Do not" about combining lazy evaluation with side effects, and the comments there seem to suggest that Clojure's "simplicity over easiness" is a problem. When the AI hype expands from deep learning to other techniques, I think we'll see a better scenario for applying AI to the real world. Yesterday's "How is Haskell in 2017" (70-100 full-time Haskell programmers in the US) gives us an idea of how the tech world is evolving. The velocity of the expanding radius of technology and AI is decreasing.


It's rather disingenuous of AI researchers to complain of overhype when they are the ones claiming that their tech should be used to drive cars and hence, as we've seen, kill people.

AI winter will be caused, once again, by the failure of the technology to do what the researchers and practitioners claim it can do. This time, tragically, with fatalities.


It seems reasonable enough to argue both that 1. AI should take over certain human roles like driving, which causes millions of fatalities due to human error, and 2. it's silly to frame every new step in AI as part of a grand road to SkyNet. The first is proposing AI for a discrete task, the second is extending this way way out to consciousness or something.


Yes, but it's my argument that claim 1 is incorrect and overhyped. AI cannot drive better than humans, and that was a hubristic claim.


What probabilistic generative model of language generates "Balls have zero to me to me to me to me to me to me to me to"? Is this a bigram model taking the highest-probability continuation? Certainly nothing modern, right?
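
For intuition, a minimal sketch of that bigram hypothesis (toy corpus invented for the example): under greedy argmax decoding, once "to" and "me" are each other's most likely successors, generation locks into exactly this kind of loop.

    from collections import Counter, defaultdict

    corpus = "balls have zero to me to me to me you i everything else".split()

    bigrams = defaultdict(Counter)                 # count word-to-word transitions
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def greedy_generate(start, steps=12):
        out = [start]
        for _ in range(steps):
            successors = bigrams.get(out[-1])
            if not successors:
                break
            out.append(successors.most_common(1)[0][0])   # argmax continuation
        return " ".join(out)

    print(greedy_generate("balls"))   # "balls have zero to me to me to me ..."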


Neural networks as usual. What I make of it is that the programs noticed that some words had more effect than others, and just started spamming those for maximum value. Source: https://code.fb.com/ml-applications/deal-or-no-deal-training...


It is fascinating how the whole thing is following the classic hype cycle. This and blockchain will tank and then come back at a later stage, hopefully matured.

I don't think that is necessarily a bad thing. I did my master's thesis in 2004 using evolutionary algorithms. No ordinary person would talk to me about that; even worse, I was easily labeled an outsider and a nerd. Today, people with AI skills are the cool kids on the block. That's the good side of the hype: it's bringing all these nerd topics into the mainstream.


> This and blockchain will tank and then come back at a later stage, hopefully matured

I think it's a mistake to conflate the two; they are unrelated and orthogonal to each other. AI has a whole raft of problems waiting to be solved as soon as it develops enough, but blockchain is very much a solution in search of a problem that only it can solve. Every proposed application of it eventually requires a "trusted third party", and then the whole house of cards comes tumbling down. In a former role I sat in on many fintech pitches where painfully earnest people from outside the industry proposed solutions to problems that they only imagined existed... I mean at the level of claiming that only blockchain can do something that the Medicis were actually doing in the 15th century...


I don't think it's right to say they will tank and then come back. In terms of opinion they went 'This is an idea' -> 'This will solve everything' -> 'Hang on that's rubbish, they won't solve anything!' -> 'Oh, yeah that's just another thing that exists in the world'.

Meanwhile, in reality, they went from 'This is an idea' -> 'Hey, we can solve a few interesting problems with this' and then split. The people who know how to develop technology built some interesting limited applications, and then scaled into more powerful stuff. The people who don't really know anything about technology will have bought into overhyped projects that will never go anywhere and get canned.

The second one is a pretty reasonable way for technology to evolve, the problem is that people aren't looking at what's happening, they're looking at 'influencers' and 'commentators' who are more interested in selling their opinions than talking about the technology.


The Musk AI hype is quoted extensively - but these articles never get into Musk's line largely being reheated Yudkowsky. I had one of these discussions just a couple of days ago, where I had to point out that Musk was an accomplished businessman and engineering manager, but not actually a working engineer at any point ...


That may well be partly true, but when setting up SpaceX he immersed himself in rocket physics and basically became a self-taught rocket scientist. He isn't just a manager.


Instead of writing documentation for the press on how to cover scientific topics, just don't patronize these rags. What exactly is the point of correcting the press?


Good old capitalism cannot stop mixing reality and sales-speak; if you start taking the "magic sauce" pitch for technical fact, it's no wonder you end up scaring yourself. On that same note, I kind of understand that AI researchers have this need to sell their craft as magic sauce. That's how selling things works, after all. And here we go, into another cycle of debunking the same magic sauce as common snake oil.


Real AI researchers shouldn't need to be "selling" their craft. You're confusing "data scientists" at places like Facebook and Google with genuine scientists.


"Real AI researchers" also need to, if they want to get funded for their research.


There's a certain amount of puffery that goes into a grant proposal, but it's nothing like marketing-speak. For one thing, "magic secret sauce" will never get you funded.


Optimization and statistical algorithms known as "AI" are far, far away from surpassing humans. Philosophically, it will never happen; we can create intelligence at best equal to a human being's, but creating that kind of "AI" would be equivalent to the creation of life, which is beyond the scope of technology.

AI is as dangerous as we decide it can be; for example, we could create a gun with a camera that shoots people based on their looks. Law and common sense should not allow that; that's criminal activity.

I think AI should grow and take over boring and repetitive tasks. This will free up many people from dull jobs, and new jobs will be created on top of that, i.e. jobs tuning and organizing AI units of computation and talking to other people about the results.


The mistake is viewing AI as somehow on the same curve as human intelligence. As if AI gets better and better and edges closer on the curve to human intelligence.

It’s more a different kind of intelligence which happens to be able to do some of the same tasks. It already far exceeds human ability, at the tasks that particular flavor of intelligence is good at. We don’t need to be looking at things people do and asking how deep neural nets can do those as well, we need to be looking at deep neural nets and asking which tasks they’re uniquely capable of doing.


> We don’t need to be looking at things people do and asking how deep neural nets can do those as well, we need to be looking at deep neural nets and asking which tasks they’re uniquely capable of doing.

I remember seeing a post here a while ago, talking about essentially this process. The poster was disappointed that AI researchers tend to get to the point where a new approach starts bearing fruit, and then get sidetracked finding applications for the new approach and forget about the search for 'real' AI. IIRC this was also suggested as an explanation for the "once we can do it it doesn't count as AI" phenomenon.


There is a comment up this thread saying that blockchain technology is a solution looking for a problem. Your comment here sounds very similar.



