To me this is such a waste of resources, trying to build safety for something that doesn't exist and is highly likely to not truly exist for a loooong time.
You are correct in many ways, namely on a technical/compatibility level. Having no fundamental understanding of how AGI is structured or operates on a technical level renders most safety efforts and policies moot. If more effort had been focused on the fundamental underpinnings of AGI, and a broader-based funding mechanism had been established for those pursuing them, there would have been the possibility of steering it all along its development. Having not done so, in order to capture lower-hanging fruit and funding for oneself, now leaves many scrambling to align themselves toward work that will no doubt be unveiled suddenly (as no spotlight or funding is giving any notice to it in the short/medium term).
Also, safety is an easily addressable issue when the system is truly intelligent. When the systems are dumb and statistical in nature, a lot of work is done on 'safety' as a pseudo-intelligent control system for an otherwise dumb black box.
> trying to build safety for something that doesn't exist and is highly likely to not truly exist for a loooong time.
Prioritizing safety results in a different vantage point on AI/ML/RL. Ensuring safety includes, as a sub-task, really understanding the mathematical foundations of new algorithms and techniques. In some sense, safety research is one way of motivating basic science on AI.
Managed well, a research program on safe AI is a "waste of resources" only in the same way that any basic science is a "waste of resources".
Safety has become a convoluted term for pseudo-control over unintelligent and unpredictable Weak AI. The safety problem, as it is currently framed, centers on an ideology built around Weak AI and has, from what I can see, nothing to do w/ AGI, nor are the approaches compatible. I seriously question what the true motivation behind this overstated agenda is, and have many answers as to why it exists and why it is so heavily funded/spotlighted.
> I seriously question what the true motivation behind this overstated agenda is, and have many answers as to why it exists and why it is so heavily funded/spotlighted.
First, you could say the same thing for all AI research at the moment! Grandiosity is perhaps even more common in subcommunities of AI that aren't safety focused.
Aside from grandiosity (either opportunistic or sincere), I don't think there's any sinister motivation.
More importantly, I don't think the safety push is misplaced. Even if the current round of progress on deep (reinforcement) learning stays sufficiently "weak", the safety question for resulting systems is still extremely important. Advanced driver assist/self-driving, advanced manufacturing automation, crime prediction for everything from law enforcement to auto insurance... these are all domains where 1) modern AI algorithms are likely to be deployed in the coming decade, and 2) where some notion of safety or value alignment is an extremely important functional requirement.
> ...and has, from what I can see, nothing to do w/ AGI, nor are the approaches compatible
In terms of characterizing current AI safety research as AGI safety research? Well, there is a fundamental assumption that AGI will be born out of the current hot topics in AI research (ML and especially RL). IMO that's a bit over-optimistic. But I tend to be a pessimist.
Profit seeking. Career building. Fame and prominence. These aren't sinister; instead, they are common human motivations. Common enough to account for a significant portion of the grandiosity centered around 'AI'.
What easily breaks this down is the depth and breadth of the research effort vs. that of the productization and commercialization effort. As for research, the only things required are a computer, power, and an internet connection. Again, this breaks down the vast majority of the grandiosity and lays bare one's true motivations.
> More importantly, I don't think the safety push is misplaced.
Here's how I saw it some years ago... You can beat your head against the wall and create Frankenstein amalgamations of ever-evolving puzzle pieces that require expensive, highly skilled labor to make sense of, with the end product being an overhyped optimization algo with programmatic policy/steering/safety mechanisms. Or you can clearly recognize and admit that its possible foundation is flawed, start from scratch, and work toward what intelligence is and how to craft it into a computational system the right way. The former gets you millions if not billions of dollars, a career, recognition, and a cushy job in the near term, but will slowly lock you out of the fundamental stuff in the long term. The latter pursuit could possibly result in nothing, but if it pans out it could change the world, including nullifying the need for tons of highly paid development labor. Everyone in the industry wants to convince their investors that the former approach can iterate into the latter, but they know in their hearts it can't (Shhh! don't tell anyone). So the question for an individual is how aware and honest they are with themselves, and what their true motivation is. You can put on a show and fool lots of people, but you ultimately know what games you're playing and what shortfalls will result.
> Well, there is a fundamental assumption that AGI will be born out of the current hot topics in AI research (ML and especially RL).
Quite convenient for those cashing in on the low-hanging fruit who would like investors to extend their present success into far-off horizons.
> As an aside, I'm not sure what this means.
It means the thinking that weak AI is centered on could cause one to be locked out from perceiving that of AGI. It means: https://www.axios.com/artificial-intelligence-pioneer-says-w...
But everyone is convinced they don't have to and can extend/pretend their way into AGI.
I don't think the tenor of your post is very fair.
> Again, this breaks down the vast majority of the grandiosity and lays bare one's true motivations... Everyone in the industry wants to convince their investors that the former approach can iterate into the latter, but they know in their hearts it can't (Shhh! don't tell anyone). So the question for an individual is how aware and honest they are with themselves, and what their true motivation is. You can put on a show and fool lots of people, but you ultimately know what games you're playing and what shortfalls will result.
The rest of my post is a response to this sentiment.
> As for research, the only things required are a computer, power, and an internet connection.
All that's necessary for world-shattering mathematics research is a pen and paper. But still, most of the best mathematicians work hard to surround themselves with other brilliant people. Which, in practice, means taking "cushy" positions in the labs/universities/companies where brilliant people tend to congregate.
Maybe most great mathematicians don't purely maximize for income. But then, I doubt OpenAI is paying as well as the hedge funds that would love to slurp up this talent! So people working on safe AI at places like OpenAI cannot be fairly criticized. They're comfortable but clearly value working on interesting problems and are motivated by something other than (or in addition to) pure greed/comfort.
> Profit seeking. Career building. Fame and prominence. These aren't sinister; instead, they are common human motivations. Common enough to account for a significant portion of the grandiosity centered around 'AI'.
So what? None of these motivations necessarily preclude doing good science. Some of those are even strong motivators for great science! The history of science contains a diverse pantheon of personality types. Not every great scientist/mathematician was a lone genius pure in heart. In fact, most were far more pedestrian personalities.
The "pious monk of science" mythology is actively harmful toward young scientists for two reasons.
First, the ethos tends to drive students away from practical problems. Sometimes that's ok, but it's just as often harmful (from a purely scientific perspective).
Second, this mythology has significant personal cost. More young scientists must realize that it is possible to make significant contributions toward human knowledge while making good money, building a strong reputation, and having a healthy personal life. Maybe then we'd have more people doing science for a lifetime instead of flaming out after 5-10 years.
> It means the thinking that weak AI is centered on could cause one to be locked out from perceiving that of AGI.
I think what I have stated is quite fair and established at this point in documented human history... There's no reason to play games and shy away from the truth and reality anymore. These continued games we play with each other, via masking our true selves and intentions, are what lead to the bulk of suffering and to what people claim 'we didn't see coming'. The vast potential of the information age has devolved into a game of disinformation, manipulation, and exploitation, and the underpinnings of this were clear to anyone being honest with themselves as it began to set in. The Facebook revelations were predicted years before we reached this juncture. Academics and psychologists conducted research and published reports on observations any honest person could make about what the platforms ran on and what they were doing to society.
> All that is required is pen/paper/computer/internet connection
Then why do we play the game of unfounded popularity? Why isn't there a more equal spotlight? Why do the most uninformed on a topic acclaim the most prominent voices? In these groupings you mention are hidden and implied establishments of power/capability. A grouping of PhDs, regardless of their works, is considered more valuable than an individual w/ no such ranking who has accomplished far more (as shown by history). The forgotten heroes, contributors, etc. are a common observation of history. It's not that they're 'forgotten'; it's that the social psyche chooses not to spotlight or highlight them because they don't fit certain molds. An established/name personality asks for funding and gets it, regardless of whether or not they have a cohesive plan for achieving something. Convince enough people of a destructive doomsday scenario and you'll get more funding than someone who is trying to honestly create something. Of course, you can then edit mission statements post-funding. What of the lost potential and opportunity? What of the current state of academia?
> https://www.nature.com/news/young-talented-and-fed-up-scient...
> https://www.nature.com/news/let-researchers-try-new-paths-1....
> https://www.nature.com/news/fewer-numbers-better-science-1.2...
The articles do get published, long after a trend has been operating... Nothing changes.
It then takes someone who truly wants to implement change for the better, w/ no other influence or goal in mind, to fundamentally change something. This happens time and time again throughout history, but institutions and power structures marginalize such occurrences to preserve and justify their own standing.
You don't need people in the same physical location in 2018 to conduct collaborative work, yet the physical institution model still remains ingrained in people's heads. Money could go further, reach more developers, and provide for more discovery if it were spread out across lower-cost areas, yet the elite circles continue to congregate in the Valley.
The ethos of Type A extroverts being the movers/shakers of the world has been proven a lie in recent times. So what results in fundamental change/discovery isn't a collective of well-known individuals at grand institutions. It is instead the introvert at a lesser-known university who publishes a world-changing idea and paper, and who only then becomes a blurred footnote in a more prominent institution's and individual's paper. The world does function on populism and fanfare.
> Second, this mythology has significant personal cost.
It indeed does. It causes the true innovators and discoverers a world of pain and suffering throughout their lives, as they are crushed underneath the weight of the bureaucratic and procedural lies the broader world tells itself to preserve antiquated structures.
> More young scientists must realize that it is possible to make significant contributions toward human knowledge while making good money, building a strong reputation, and having a healthy personal life. Maybe then we'd have more people doing science for a lifetime instead of flaming out after 5-10 years.
More young scientists must be given the chance to pursue REAL research and be empowered to do so. They must be empowered to think differently. They must be emboldened to leapfrog their predecessors and encouraged to do so w/o becoming some head honcho's footnote. Their contributions must be recognized. They must be funded at a high level w/o bureaucratic nonsense and favoritism. A PhD should not undergo an impoverished hell of subservience to an institution, which results in them subjecting others to nonsensical white papers and over-complexities. A lot of things should change that haven't, even as prominent publications and figures have themselves admitted:
https://www.nature.com/collections/bfgpmvrtjy/
I've walked the halls of academia and industry... I've seen the threads and publications in which everyone complains about the elusive problems, but no one has the will or the desire to be honest about their root causes or to commit to the personal sacrifices it will take to see solutions through.
I'll probably have the most negative score on Ycombinator by the end of my commentary in this thread, yet will be saying the most truthful things... This is the inverted state of things.
So, mankind has had a long time to break the loops it seems stuck in.
Now is the time for a fundamental leap to that next thing, beyond the localized foolishness, lies, disinformation, and games we play with each other.