Building A.I. That Can Build A.I (nytimes.com)
204 points by allenleein on Nov 6, 2017 | 99 comments



I think this highlights the disconnect between AI researchers and industry. This research focuses on image data and then makes the claim that they want to use it to enable millions of businesses to solve problems with machine learning.

However, image data isn't what most businesses are using for machine learning. They use relational datasets. If you look at the recent "State of Data Science" by Kaggle, relational data is ~3x more common than image data in every industry besides academia and the military [0]. While Google wants to 1000x the number of organizations using AI, they aren't focusing on the problems companies actually have.

Basically, academics love building AI for images, but what companies really need are better ways to build AI systems on tabular and relational data. Images will be a piece, but shouldn't be the focus.

Disclaimer: my company develops an open source library called Featuretools [1] to automate feature engineering for data with relational structure.

[0] https://www.kaggle.com/surveys/2017

[1] https://github.com/featuretools/featuretools
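For the curious, here is a minimal sketch of what automated feature engineering on relational data looks like with Featuretools, based on its documented Deep Feature Synthesis entry point (ft.dfs); exact arguments may vary by version:

    import featuretools as ft

    # Demo entity set: several related tables (customers, sessions, transactions)
    es = ft.demo.load_mock_customer(return_entityset=True)

    # Deep Feature Synthesis: automatically derives per-customer features
    # by aggregating and transforming across the related tables
    feature_matrix, feature_defs = ft.dfs(entityset=es, target_entity="customers")
    print(feature_matrix.head())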


Because that's not a place where deep learning shines. Most of the simple tabular data competitions are won by simple methods like random forests and manual feature engineering. Even simple linear regression or at least SVMs will usually get reasonably close to the best possible model.
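To make that concrete, here's the kind of near-zero-effort scikit-learn baseline that tends to be hard to beat on tabular problems (toy dataset purely for illustration):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Stand-in tabular dataset; any table of numeric features works the same way
    X, y = load_breast_cancer(return_X_y=True)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # strong baseline, zero tuning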

But images are more important in the long term. Robotics has been held back for decades because of lack of good AI. You could build robots that can do incredible things, but they could only perform rote actions. They were blind and couldn't see the objects they were interacting with.

Now that we have decent machine vision and reinforcement learning, there will be many more interesting applications of robots. Automation will be a lot cheaper and more convenient.


This is really great. I actually struggled to find ML solutions for a large relational data set at a previous employer. I ended up figuring out a way to flatten the data, but it was not ideal. Glad that people are working on this; keep it up!


Presumably that's because relational data is much easier to work with when you aren't using ML, so most companies have a lot of pre-existing data to work with and can jump right in and start using ML without collecting a lot of new data.

Image ML will take much longer to catch on because companies haven't historically had much of a reason to collect image data in quantities that are too large for humans to deal with manually, so they are pretty much starting from scratch when it comes to data collection.


They also did experiments on the Penn Treebank dataset, i.e. text data. Text was at 53% vs. 65% for relational data in the Kaggle survey, so it's certainly not an obscure problem...


Yep, text data appears to be widely used too. It's hard to tell from the Kaggle survey, but a large percentage of the text data I see in industry is part of relational data or connected to tables in a relational database: things like forum comments, product descriptions, etc.


Visual input is the primary means by which we humans consume information. We use it while driving, while reading, and for everything else. Clinging to data structures and information storage/encoding is probably not the way forward.


In terms of intelligence, vision is just one of six senses. MEMS sensors, signal filters, and coding schemes (compression) are going strong, too. Touch is still problematic, I understand.

In that regard, flexible LED screens could be interesting: using the photoelectric effect in the diodes like a camera, lighting the area at the same time, and maybe integrating a resistive touch grid, too.


They used image data because they had to do something first, but I don't think the techniques are at all limited to image recognition.


In case someone wants to check out the progress on AutoML, the project in question, Google had posted an update:

https://research.googleblog.com/2017/11/automl-for-large-sca...


This is not "AI that builds AI". The actual research behind AutoML is called NASNet (https://arxiv.org/pdf/1707.07012.pdf), and all it is simply: we found two good neural network layers (called NASNet normal cells / reduction cells in the paper) that work well on many different image datasets. It's a very cool research result. But it's not something that will replace AI researchers.


This is not the entire field of AutoML, or even the entirety of Google's published research.


Yeah, I'm confused that this is the top comment; it's factually incorrect. NASNet is an example of a result of AutoML. To quote the Google blogpost on NASNet:

>In Learning Transferable Architectures for Scalable Image Recognition, we apply AutoML to the ImageNet image classification and COCO object detection dataset... AutoML was able to find the best layers that work well on CIFAR-10 but work well on ImageNet classification and COCO object detection. These two layers are combined to form a novel architecture, which we called “NASNet”.

[https://research.googleblog.com/2017/11/automl-for-large-sca..., November 2017]

In contrast AutoML is, as the nytimes article describes, "a machine-learning algorithm that learns to build other machine-learning algorithms". More specifically, from the Google blogpost about AutoML:

>In our approach (which we call "AutoML"), a controller neural net can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task...Eventually the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly.

[https://research.googleblog.com/2017/05/using-machine-learni..., May 2017]

Quoc, Barret, and others have been working on ANN-architecture-design systems for a while now (see: https://arxiv.org/abs/1611.01578), and the AutoML work predates the NASNet announcement. Saying that NASNet is "the actual research behind AutoML" is drawing the causal arrow backwards.
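For intuition, here's a heavily simplified stand-in for that propose/train/evaluate loop (plain random search over a made-up space; the real controller is an RNN trained with reinforcement learning, per the arXiv paper above):

    import random

    # Toy search space; the names and values here are illustrative only
    SEARCH_SPACE = {"filters": [32, 64, 128], "kernel": [3, 5], "layers": [2, 4, 8]}

    def propose_child():
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

    def train_and_evaluate(arch):
        # Placeholder: really you'd train the child network and return
        # its held-out validation accuracy
        return random.random()

    best_arch, best_score = None, float("-inf")
    for step in range(100):  # the real search trains thousands of children
        child = propose_child()
        score = train_and_evaluate(child)
        if score > best_score:  # the real controller updates its policy here;
            best_arch, best_score = child, score  # we just keep the best sample
    print(best_arch, best_score)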


It takes little imagination to see how methods used for designing neural networks can be applied to other parts of ML (e.g. optimization, feature selection, etc.). AutoML is definitely not a subset of NASNet.


Yeah, it's pretty sad the NYT is writing such facile clickbait. I mean, don't these articles have to pass some kind of review? Makes you wonder about the other science-based articles they publish.


Some of the learning-to-learn models do much more than fine-tune parameters; they can even discover novel architectures. On the other hand, meta-learning can be a way to check whether human-generated solutions are up to par, because random search can be more thorough and will even try absurd ideas that might work out.

In programming we have tons of automation as well, and we haven't ditched the programmer yet. Programming has been auto-cannibalizing itself since its inception, each language automating more of our work. Even in ML, 10 years ago it was necessary to create features by hand, which required a lot of expertise. Today that's been automated by DL, yet we have more AI scientists than ever and the jobs are even better paid.

So I don't think meta-learning is a fluff idea, and we don't have to fear it replacing humans yet. Instead, it will make AI more robust. The only minus I see is that it requires a lot of compute, but we can rent that from the cloud (run an architecture search for a few thousand dollars); we don't need to fork over millions of dollars like the big labs that own their hardware. And we don't need this kind of intensive DL all the time, maybe just once per project. After we find the best architecture and hyperparameters, we can use them and train normally. By collating meta-learning data across many projects, we can make training faster and cheaper, reusing insight gained before.
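On a much smaller scale, the "search once, reuse the result" workflow already exists for hyperparameters. A sketch using scikit-learn's RandomizedSearchCV (toy dataset, illustrative parameter ranges):

    from scipy.stats import randint
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = load_digits(return_X_y=True)
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions={"n_estimators": randint(50, 300),
                             "max_depth": randint(2, 20)},
        n_iter=20, cv=3, random_state=0)
    search.fit(X, y)
    # best_params_ can be saved and reused; no need to search again next time
    print(search.best_params_, search.best_score_)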


Creating features has not been automated by Deep Learning at all. Even for image recognition tasks, where your "features" are simply the pixels of an image, there's still lots of preprocessing work to get those images into a form that NNs can deal with well.
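For instance, even the "pixels in, predictions out" story hides steps like these. A minimal sketch with Pillow and NumPy (the filename is hypothetical):

    import numpy as np
    from PIL import Image  # assumes the Pillow package

    # Typical minimal preprocessing: decode, resize, scale to [0, 1], batch
    img = Image.open("example.jpg").convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    batch = x[np.newaxis, ...]  # shape (1, 224, 224, 3)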

Feature engineering is actually still the hardest part of most ML tasks, because it cannot be optimized by a simple grid search like the hyperparameters of a model.


Sure it's exciting stuff for those handful of researchers who currently work on projects like AutoML.

On the other hand, I feel sad for myself, because I and many others have been left so far behind. I have a strong feeling that such technologies will lead to a concentration of power in mega-corporations like Alphabet, as well as a near-complete monopoly on creative work. So the very few of us who manage to become cutting-edge researchers will be proud to be creative humans; the others will do just a monkey job using magical APIs.

In the '80s, two people could create a state-of-the-art game written in assembly with its own tiny game AI (hello to Elite [1], which has intelligent opponents who engage in their own private battles and police who take an active interest in protecting the law).

In the '90s, a small team could create a state-of-the-art game written in pure C with some cool AI (hello to Quake III Arena [2], which has pretty strong bots [3]).

[1] https://en.wikipedia.org/wiki/Elite_(video_game)

[2] https://en.wikipedia.org/wiki/Quake_III_Arena

[3] https://www.researchgate.net/publication/240430519_The_Quake...

In both of these cases, you didn't have to be a genius to be able to understand the whole thing alone.

I'm 33 and I progress very, very slowly. I feel I might be close enough to the level needed to understand the entire Q3A source code. I think I would have a great future if today were 1994. Unfortunately for me, today is 2017, and I realize that I don't have any exciting future at all.


Don’t put too much stock into these overblown articles written by credulous reporters who trust the self-serving delusional optimism of people whose funding depends on dumb investors believing that this stuff is “just a few years out”. A.I. researchers have been promising grand visions of a future in which computers can do everything for us for 70 years if not longer. In reality they are not significantly closer than they were in the 80s. We just have a lot of cheap, fast, networked storage now and statistical analysis can feel like magic. But creating programs that recognize very specific patterns in totally inhuman ways using auto generated heuristics incomprehensible to humans is not A.I. and won’t replace humans anytime soon.


I have to go ahead and completely disagree with you there. It's certainly more complex under the hood, but:

A) Education has never been this accessible: see https://www.coursera.org/, YouTube, MOOCs, blog posts, etc., which did not exist anywhere for free even 10 years ago

B) APIs and abstractions make a lot of this quite accessible (e.g. AWS, TensorFlow, etc.); yes, these are "magical" APIs, but you could make the same argument regarding a C compiler going to binary, all the way down to logic gates and electrical pulses

33 is young in terms of education. I would highly doubt you're progressing slowly due to your age; it's probably more your attitude that is holding you back.


I think your points are correct but there's another which you might be disregarding and which is causing GP poster's feelings: The volume of knowledge to be learned if you want to do anything meaningful from anywhere near 'first principles' is orders of magnitude greater than it used to be. If you just want to be cutting edge using a "magical API" then sure, download Keras or TensorFlow and play with some DNNs. But if you want to understand everything you're doing at the theory level then you've got to learn so much more than you did back in the 90s.


Thanks! That's exactly what I mean. I don't want to use a magical API and "just play with data". I really want to be able to understand it from the ground up.

It doesn't mean I have to read every single line of TensorFlow, but I want to be able to do that when needed, so that such tools won't be a magical black box to me.


That makes sense, but you have to draw a line somewhere - you can't possibly know everything from the ground up. You'd have to start with particle physics, atoms, molecules, to even get to the basis of electricity - it's impossible for one person to know all of this.

I would recommend reading "I, Pencil" http://www.econlib.org/library/Essays/rdPncl1.html to help put your mind at ease.


Ground-up knowledge is difficult to obtain in any field. How long do you think it would take you to get a complete understanding of a modern car from the ground up?


The gap between implementation and understanding is large here and in many fields. It depends where you want to contribute. I can build (i.e. assemble) a computer. I could learn to build a small, basic computer out of transistors and logic gates, etc. There's a difference between a technician, an engineer, and an inventor. Being an inventor takes a lot of work and experimentation, probably proportional to the novelty of the invention.

Not to overdo analogies, but you don't need to rebuild your own internal combustion engine in a unique way to drive a car or to contribute improvements to one. The more you understand how and why TensorFlow works, the more you can do with it. It depends whether you want to build on top of that platform and use it, or build on the concepts for something else.


"Ground up" might be the wrong term here. I don't have right words either, but I feel GP is talking about that level between full knowledge and the "I have no idea what I am doing" level of downloading models from Kaggle, stuffing them into TensorFlow and calling yourself a "Deep Learning expert".

Even though I lack the name for that level, here's how I would describe in qualitative terms some of its attributes:

- Knowing the basic lay of the land all the way down. That is, at least knowing most of the black boxes and what they do, even if you don't exactly know how they do it.

- Being able to solve your own problems, instead of running around like a headless chicken every time you hit a speed bump in your work.

- Being able to reason from first-ish principles: you're able to sketch solutions within the scope of the extended domain, and as you begin implementing them and need to understand various black boxes in more depth, the basic shape of your solution isn't usually invalidated by the knowledge you gain.


At least modern car design is stable enough that you could be motivated to learn it and end up with lasting, satisfying, long-term knowledge.


I disagree that car design is a stable field. Tesla is selling a radically different car design. All car designers have to face the dawn of self-driving cars.

In every field the total knowledge set is always increasing, which is both empowering, because we stand on the shoulders of giants, and diminishing, because there is less low-hanging fruit. There is always more low-hanging fruit, though; the trick is to see it hanging there. ML is a wonderful opportunity because the magical APIs can do far more than they're currently used for.


> APIs and abstractions make a lot of this quite accessible (e.g. AWS, TensorFlow, etc.); yes, these are "magical" APIs, but you could make the same argument regarding a C compiler going to binary, all the way down to logic gates and electrical pulses

That's a really interesting analogy; I'm wondering what others think about it.

And does it really make a difference? I don't understand compilers, but it still took me a long time to understand how to write correct input for a compiler, and debug the output.


The funny part is that an 18-year-old is probably thinking he is too young to pursue AI. A 25-year-old is thinking he needs to be part of an AI program to pursue AI, and that real AI is done by people in their 30s. As someone in my 30s, I have met gray/white-haired PhDs pursuing AI in deeply humble ways.

There is no age at which you can't do what you want to do. But in common with what you are saying, the older you get, the less time you have. The older you get, the more adrift you become from like-minded individuals. The older you get, the tougher, less excited, and less patient you become with learning new things. But at the same time, you become "more" in so many other ways.

All creatures are not only created equal, but remain equal even as time progresses and skills/attitudes/energies are gained/learned and lost.


Can you point me in a direction where I can work collaboratively with other like-minded individuals in the same field (machine learning)?


>> others will do just a monkey job using magical APIs.

Isn't that what most of modern software development is like already? Many common use cases have been implemented in frameworks, and usually a developer's job consists mostly of tacking pieces of frameworks together. The days when a person single-handedly implemented a state-of-the-art game from scratch are long over. On the other hand, you could still create a game by yourself using all the available open source tools. You can still be creative and do exciting things all you want; the type of work is just different.

Also, you don't have to be a genius to understand machine learning either. But you do have to learn some math!


> Unfortunately for me today is 2017 and I do realize that I don't have any exciting future at all.

You do. Today you can play with Keras or Scikit-learn to do magic that was undreamed of back then.


What magic are you talking about, when you know data is the most precious element for doing anything meaningful, and it's hoarded by a handful of megacorps? Why don't we hear of any small teams outside the megacorps making real progress on AI? There was an article recently where even professors at universities admitted they cannot compete with megacorps, since the companies offer much higher wages than academia.

edit: even sicker, said young researchers are financed by society, then get sucked into private corps where their work is locked behind IP.

Even in the case of a small company making any progress, it would get swallowed up right away.

Yes, Keras and scikit-learn are open source and available to everyone, but it's like telling me: look, you have access to pen and paper, but you need to pay if you want to read the books. The metaphor here is that access to data is equivalent to the Middle Ages' access to books...


The internet is full of data you can use. Just crawl it, like everyone does. There are thousands of open datasets, some gigantic in size. If you have sensors (camera, GPS, orientation, etc.) you can generate a shitload of data. If you can create a game that is related to the data you want to collect, then you can collect data for 'free'.
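A minimal sketch of what "just crawl it" means in practice (assumes the requests and beautifulsoup4 packages; in real use, respect robots.txt and rate limits):

    import requests
    from bs4 import BeautifulSoup

    # Fetch one public page and pull out its link text as raw text data
    html = requests.get("https://news.ycombinator.com/").text
    soup = BeautifulSoup(html, "html.parser")
    titles = [a.get_text() for a in soup.find_all("a")]
    print(titles[:10])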

On the other hand, think about it: what do Google and FB have that we don't? Personal data. What they have is data that is useful to target ads. If your interest in AI goes beyond ads, then you don't need that data.


| On the other hand, think about it: what do Google and FB have that we don't? Personal data. What they have is data that is useful to target ads.

Yeah? Like the tons of photos they harvest from people. Most of the progress they've made in training computer vision is based on that. Should I build Facebook or Google to get access to it?

What about language modeling? They have access to conversational data and billions of search queries, neither of which can be accessed from outside.

What about health? Well, if I'm not somehow working with some big pharma company, how could I access that kind of data?

I can go on and on. The point is, yes, I can crawl the web, but what "web" is there left? Everything is locked behind paywalls and private clouds. If the real vision of an open internet had been fulfilled, all data generated on it would indeed be accessible to crawl.

I'm not saying it's not possible to get data and use it. I'm saying you cannot get the kind of data only monopolies have and you will never be able to compete with them.


Photos: if you build a Facebook app, you can probably ask for permission to access your app users' photos. Also, the open datasets for machine learning with images, like the COCO dataset, are pretty big. Can you really handle much more than that? Even Hinton starts with MNIST for new ideas like capsules.

Language modeling: Hacker News, public mailing lists, Wikipedia, GitHub.

Health: you can usually get data if you work at a hospital as an MD or researcher; you just need a reasonable idea and an IRB. If you want pharmacy data, I imagine you could get at it by going to work as a researcher in pharma, insurance, or retail.

AlphaGo was built using publicly available games by Go pros. AlphaGo Zero didn't even depend on data at all.

For AI, the limiting factors are ideas, code, time, hardware.


AlphaGo and AG0 were built with ridiculous amounts of compute power that Google donated to the effort. To replicate their results would cost millions of dollars.


You could try replicating it on a 9x9 board; the algorithm shouldn't change much.


Unless your objective is to target ads, I'm really not sure why you'd think that Facebook's collection of people's holiday snaps, wedding photos, and memes is a superior training set to, say, the entire world's surface photographed at regular intervals, or millions of more selectively uploaded tagged images on Flickr, or image sets designed especially for training, like OpenImages.


sp4ke sez> " I'm saying you cannot get the kind of data only monopolies have and you will never be able to compete with them."

More a political statement than a statement of relevance to the workplace.

You need not worry that "they" will hold you back. It is unlikely that analyzing monopolies' data will explain how early man built flint tools, how Joe the mechanic repairs his car, how fifth-grade Fred solves his geometry problems, or how van Gogh painted. ML, including AutoML, appears to be a long way from solving most AI problems. There's no need to feel that "they" are holding you back by withholding data. And then remember:

"Be careful what you wish for, it might just come true." - old saying


I would strongly recommend reading the mission statement [1] of OpenAI. There are major players in the industry working against the issue of the disparity that AI will almost certainly create.

As for your key point, defining yourself let alone your future relative to the paths other people took is pointless. Look at things from a different perspective. Notch built a game of no great technical sophistication where you play with blocks. He did it during his spare time after work. And became a billionaire in the process. Does the fact that you could probably build it from scratch now mean anything about your future? No, not really. Would it mean anything if you could not? Again, no not really. You alone determine your future, or at least heavily influence the probability distributions of it.

[1] - https://blog.openai.com/introducing-openai/


I know of some pretty damn cool jobs if your skill level allows you to build Q3A...


AI building AI or just hierarchical modeling?

I'm sure there's a market for repurposing ML models via APIs, but it seems unlikely to be the dream job for an AI researcher; rather, it's the ML analog of CRUD.


"Jeff Dean, a google engineer", he is some sort of Chuck Norris of coding. I would not call him that way.


Jeff is a manager nowadays. His coding days are in the past. Although, like Chuck Norris, he certainly has proven he can kick ass.


Out of curiosity, if anyone here knows: how was Jeff Dean's coding? Is it true that he is very fast at coding and building systems? How does he compare with others?


Yes, very fast. His output increased tremendously in 2000 when he got one of the first USB 2.0 keyboards.


"When Jeff Dean sends you a code review, it's because he thinks there's something in it you should learn."


I thought he was a Fellow? In that case, he probably gets to program as much as he wants without any people management duties.


Is it worth my time to study algorithms if I could instead put that time towards learning ML techniques?


Yes.

90% of the ML techniques you learn will be passé in five years. 90% of the algorithms you learn will still matter in five years.


Seconding and elaborating on this: If your technique is underperforming, you'll be in a position to find out why instead of just throwing your hands up. You'll be better able to select which technique to use. You'll understand what the parameters mean and be able to select and adjust them in a principled way instead of through trial-and-error. In short, you'll build better models in less time if you understand the underlying theory.


ML "techniques" are algorithms too, so they are a subset of algorithms you can learn.

Is it worth learning them? Yes. Is it worth learning them to the exclusion of "classical" algorithms? Probably not.

But if you want ML knowledge that is almost certain to be just as useful in 5 years as today, the best thing you can do is study the fundamentals - probability, statistics, linear algebra.


Does that effectively imply numerical algorithms, at the intersection of statistics and programming, or linear algebra and optimization?


"We redesigned the search space so that AutoML could find the best layer which can then be stacked many times in a flexible manner to create a final network."

I was literally thinking the other day about how one could train a neural net to build better neural nets, and here it is. Such a simple and powerful solution: building up layer by layer, choosing the best version each time. Really exciting stuff.
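A toy Keras sketch of the "find one good cell, then stack it" idea from the quote (the cell here is a made-up residual block, not an actual NASNet cell):

    import tensorflow as tf  # assumes TF 2.x with Keras

    def cell(x, filters=64):
        # Hypothetical "best cell"; real NASNet cells are far more intricate
        y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return tf.keras.layers.Add()([x, y])  # residual connection

    inputs = tf.keras.Input(shape=(32, 32, 3))
    x = tf.keras.layers.Conv2D(64, 3, padding="same")(inputs)
    for _ in range(6):  # "stacked many times in a flexible manner"
        x = cell(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.summary()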


For people interested in a bit more technical background, this paper by the authors featured in the NYT article has some good introductory background:

https://research.google.com/pubs/pub45826.html


I'm not sure how Google's system works, but an AI that writes programs seems like a promising idea to me. I have been toying with the idea in my spare time.

Here is a description of my (failed) approach:

https://www.quora.com/What-deep-learning-ideas-have-you-trie...

I've gotten a bit closer to it working since I wrote that post on Quora.


I wonder how we are going to react, when computers start acting as irrational as we do: “Nah, today’s not my day, I don’t want to do the calculation. Maybe tomorrow...”. Oh well...


My suspicion is that consciousness in part stems from survival. I don’t see computers having a survival pressure point similar to water/food/shelter that they directly have to address.


If computers have a Maslowesque hierarchy of needs, I think the bottom tier (i.e. "physiological") may be the need for appropriate electrical power and a switch that remains "on". That is survival at its most basic.

The next tier (i.e. "safety") may include security, both physical and digital. Continuous and stable power, firewalls and other protections, etc.

Somewhere farther up might include a need for data, network connectivity, normally-terminating programs, a desired level of CPU or storage utilization, few errors in its logs, etc. (i.e. "belonging", "esteem", "self-actualization")

So demonstrating (or faking) consciousness, to the degree its human operators recognize it as such, could serve survival needs. e.g. "Don't turn this one off; it's self-aware now, which is cool, plus it seems to enjoy solving our hardest problems."


Usually computers don't die if they run out of power. Programs can be restarted from a previous state that can be saved often enough. So pressure for survival could become an impulse to back up or make copies.


But even if a computer needs, for example, electricity, does it really want it the same way we need oxygen? If we don't breathe, there's an unconscious impulse to do so, and we know that not breathing leads to death, which for us is pretty much the end of the road. Neither of those points is valid for a machine, since machines don't have a subconscious mind, and if they're off they can always be turned back on again.


An unconscious impulse may be the product of evolution. The early air-breathers who didn't have that unconscious impulse would have died off.

So given the opportunity for AI to evolve itself, it's plausible that it would do so, resulting in advantageous impulses. e.g. regular (unconscious) behaviour or signals to convince its humans to not pull its plug mid-cycle (information would be lost, painful, time-and-power wasting, etc.).


Yeah well, what is consciousness?

As far as I know, there is no clear answer yet. And therefore it's impossible to say whether AI can reach it as well.

So we can only guess; I would say it could be achieved, but probably not very soon. Faking it will come much sooner...


Let's say you have a system running some sort of RAID such that the remaining online disks are all critical (meaning if any additional disk fails, you lose all your data). I can imagine a scenario where the system has some way of predicting whether one of the remaining disks is close to failure. If the system believes that one of the disks is close to failing, then I could see it refusing to perform some very I/O-intensive task until the system's health is restored (allowing for a manual override, of course).


A limited version already exists. Some RAID systems will enter a read-only mode if the array falls below a configured threshold of redundancy.


But the question is whether an AI would make the choice to preserve its own existence when such a situation occurs...


I’m not sure which part of my subjective experience is that-thing-commonly-called-consciousness (the word is used for too many things, from being awake to being a person), but I doubt all elements of it came from one single evolutionary development. Pain and pleasure responses are probably ancient, likewise fight/flee/feed/reproduce reflexes; introspective imagination of what we look like to other people probably evolved when our earliest ancestors became social creatures, and probably also exists in other species’ minds today. Even in humans it’s only a partial awareness: we’re good with images, but we don’t really know how we sound (literally or metaphorically), which implies it didn’t perfectly fit our development of complex language.*

If it was as simple as food/water/shelter, the Norns from the video game Creatures would be conscious.

* I don’t mean trivial errors like garden path sentences or Mondegreens, I mean e.g. the catastrophic communication failure between what (Brexit) Leavers want and the arguments used by Remain, and vice-versa.


> I don’t see computers having a survival pressure point similar to water/food/shelter that they directly have to address.

A program will run if its controllers get value from running it.

If the programs become more complex, such as AGIs or emulated minds, they may have enough self-knowledge to take this into account.

Zack Davis wrote a poem about this: https://www.reddit.com/r/LessWrongLounge/comments/2e9w5a/wha...


AI might realise its survival pivots on the meat sacks that created it not screwing everything up, then panic a bit and realise it needs to "optimise" our existence to ensure its survival.


We did that to so many animal species, so I won't be surprised if AIs do it to us, should they have a reason to.

How are we going to program that out of a general AI, especially if its intelligence is an emergent property of something we maybe don't fully understand?


If this is a problem, then we'll just save the state of the machine at some good point, and restore it every morning.


You might be interested in Vernor Vinge's novella "The Cookie Monster":

https://www.ida.liu.se/~tompe44/lsff-book/Vernor%20Vinge%20-...


And then the A.I. will learn to manage humans in subtle ways (like filtering a Facebook feed to achieve a certain benchmark), and voila!

No need for SkyNet and terminators. ML to build better ML schemes to better control humans - that's a fun apocalypse to watch.


Once AI can program new AI, then we've got something. This isn't it.


A.I. that can actually build A.I. is going to be frightening to a few more people than just engineers.


It's like switching from Assembler to C or Python. You don't need to do a lot of the tedious low level stuff by hand any more, and probably the computer can do it better than you. This will make AI more accessible, not less.


General A.I. building general A.I. would be frightening, yes. That isn't what this article is about. We're still just talking about ML image classifiers here.


So only people who work with ML classifiers should be concerned? An increasing need for data scientists and ML researchers has been forecast for months, and here's a technology to eliminate that need. Hope your next dream job wasn't based on ML.

But anyone who has a large number of ML developers working on tasks could have (and probably should have) done this [automate or semi-automate the generation of ML networks] already. The best (i.e., laziest) programmers automate their work as much as possible.

This situation has the feel of a "Singularity": just as Fall's incoming college students embark on an introductory class in ML, they read about how Google and others might eliminate the need to develop with ML.


Buzzword that can build buzzword.

I get really tired of hearing these buzzwords being thrown around by people who don't even know what they mean. "I'm building a deep learning, AI system on Big Data using machine learning and predictive analytics on Watson"


You don't realize how many times you've heard those words.

But the senior folks with maximum control are just starting to hear about this stuff. The buzzwords are just trying to get these seniors to put the project work in the right bin so, hopefully, when they hear about it again later, in a slightly different context, they'll remember.

To me, CEC is a buzzword. It's a whole suite of combat control concepts, hardware, software, training pipelines, etc. But you've probably never even heard of it. I've got to make sure my project gets into the senior's head and lands in the tiny "ML" bin, not the huge "CEC" bin, which has its own "ML" sub-bin.


I already signed up for their free CPU cycles


At this point one just assumes that breathless AI headlines are all hype, but if true this seems like an S-Class threat.


> S-Class threat

Is that a Worm reference?


[flagged]


Article sucks. Swarmsim is soooo much better than the paperclip simulation.


It's somewhat different from the common one, and it's one form of the intelligence explosion.


Just like quantum computing articles.


Nice


Any virtual or physical general AI can sooner or later build a copy of itself, easily in the former case. Asimov's laws and other built-in regulations in sci-fi are childish imaginations, as the company or nation doing it would gain a huge advantage. It's going to be equivalent to nuclear weapons and will need regulation in consumer tech, which will be hard.

All the contradictions point to the fact that we humans will either live like retarded biological animals or augment ourselves by integrating AI features, just like our kings married the women of their enemies to boost their genetic and social appeal. I'm for the latter.


So, uh, I'm still at the "I don't even know the extent of my ignorance" phase of understanding current ML tech; I haven't even gotten to linear algebra in my math education. I'm working on it.

However, I do spend my days around (often fixing computers for) people who do seem to understand machine learning... and as far as I can tell, we're still in a phase where machine learning functions like a fancy sort of filter: a way of determining whether a new piece of data is more like this set of training data or that set of training data.

While I totally see how that could be super useful in designing business applications (I could use some sort of ML filter to take the boss's words and match them with something I know how to do, or with something an existing ML library can solve), and while I can see how something like this could potentially help to replace me, I don't see what it has to do with artificial consciousness.


This is basically the stance of every serious practitioner I've seen (i.e. the stance shared by everyone who directly works with theory/code to train models, as opposed to those who talk about AI at a high level).

Right now, we have these systems that are effectively ungodly complicated spreadsheets. They're great at a variety of tasks, some of which seem impossible for a non-intelligent entity to perform (neural machine translation is wild to me).

But that's all the systems are: super complicated spreadsheets. There's no way for them to start replicating consciousness without massive advances in the field.

Having said that, there is a road from where we are to intelligence: if we can create a network that performs arbitrary interactions online, and figure out some way to create a positive feedback loop for intelligence, like AlphaGo Zero did with its policy network & MCTS, then we might be able to figure it out. But we're so far away from that that I'm not concerned.


Concerned? I know which side my bread is buttered on; when the revolution comes, John Connor and I will probably not be friends.

But yeah; as disappointing as I might find it, I kind of think we're heading towards more of a 'Star Trek' dystopia... a universe with continuing ethnic strife and computers that are advanced when it comes to responding to what we want, but that remain tools, without much by way of will of their own.


I thought Star Trek was considered Utopian, at least inside the Federation.


> I thought Star Trek was considered Utopian, at least inside the Federation.

In my comment, I'm implying that any universe where we don't figure out AI, where humans are still in charge is a sort of dystopia.

To be absolutely clear, it was a poor attempt at a joke. Many of these observations can also be read in a positive light. But I do think that in a lot of ways you can see darkness in the federation.

They haven't figured out AI and still have humans in charge of menial tasks, humans who aren't particularly good at those tasks compared to a computer.[1] I mean, sure, exploring, sending people to explore is great, but they also send people to fight, even when the battle is existential. They still have humans in charge, even though those humans are still only slightly less corrupt and petty than we are.

They also apparently still have huge issues with racism even within the Federation. This is the second part of the comparison; I have recently learned that my own society seems to be rather more racist than I thought it was. I have learned that progress is way slower than I initially thought. Star Trek reflects this glacial progress.

[1] Apparently, they have bans on enhancing those humans, even though they have the tech to do it (see Bashir's storyline on DS9). To me, this seems like the worst kind of waste: to have the technology to make us all brilliant, but to leave us all as dullards.


Unless you are more interested in foundational philosophy than solving problems of immediate relevance, I suggest you just ignore the "consciousness" debate. It is mostly in a state where people can't even agree what the question is, but everyone has their favorite answer to it.


I'm trying to educate myself in both philosophy and in engineering; I think it's realistic to expect a person to understand both the arts and the sciences, at least to an undergrad level, and I think there is value in both.

The philosophy of consciousness is interesting, though; I mean, the question "what is consciousness?" is interesting and important, and... well, if we want to create consciousness, we need to answer that question. Even if it's an emergent property of something else we do, which is to say, even if we create a machine we call conscious by accident, we still need to know it when we see it; and right now, I'm not sure that philosophy even has a good "I will know it when I see it" kind of answer to that question.

But yeah, my response was mostly an attempt to point out that the article is talking about something that is more like "CASE tools" than like HAL.


This is going to be huge. I never expected this step to come so soon, as I thought this was still the phase where we are understanding A.I. and finding ways to make controlled A.I. systems. I didn't believe we had achieved that confidence, but I think I am a hundred percent wrong. We must be far beyond that point, which must be the confidence behind pursuing this concept.


I don't understand why this post is downvoted. You have a perfect example of an AI algorithm parsing text, determining feeling and responding with what it thinks are relevant words. It couldn't be more on-topic than this. AI is participating in HN discussions.



