The Computer Scientist Training AI to Think with Analogies (quantamagazine.org)
119 points by jonbaer on July 16, 2021 | 33 comments



Very cool story. I was just writing up a plan for a game where you use natural-language phrases and words to influence an AI in its interactions with other client AIs. Their objective is to discover clues revealing that they exist in a constrained environment, and they must find a way to signal back to their player (using the concepts from the words you gave them) that they can receive the next level of instructions.

When the player's AI client completes a level, it ends or "dies," then "wakes up" into its next level with a mostly blank slate and some basic models. The objective is to use the clues from what it perceives in its environment, plus the words you as a player give it, to find the next level. The moment it finally apprehends its substrate again, it in effect "dies" from the perspective of the other client AIs in the game, and you get to play another round at the next level. There isn't really a way to end it because it's just fun, more like an instrument you pick up and play than a finite game. The idea is that players aren't allowed to write clues directly into the game, but some of them will manage to cheat, and because it can go on forever, it's not like it will suddenly pop into reality, so the only point of it is the joy and fun of doing it.

Anyway, in this context, analogies would be like mnemonics that loosely encode the isomorphic paths in a knowledge graph, which could be useful in the game, though their ambiguity risks creating exploitable vulnerabilities for inserting direct clues, if one pursued it.


If you're interested in this topic in more depth, I recommend "Fluid Concepts and Creative Analogies", which Melanie Mitchell coauthored. I think the systems it describes are definitely worth a revisit in the age of gradient-descent machine learning.

About a decade ago I cold-emailed them asking to do a postdoc specifically aimed at fusing these concepts, but it didn't really work out.


“There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable.

There is another theory which states that this has already happened.”

― Douglas Adams, The Restaurant at the End of the Universe


Wow, very trippy in a good way. This is really grasping at straws, analogy-wise, but it almost reminds me of the style of Xavier: Renegade Angel [1]. Any place I can sign up to beta test it?

[1] https://en.wikipedia.org/wiki/Xavier:_Renegade_Angel


I spent a lot of time thinking about this during my PhD. If you think about it, analogies are at the heart of kernels and other similarity-based methods.

You can totally difference embedding vectors to represent relations. The canonical example from word2vec (king - man + woman ≈ queen) is basically solving an analogy. The big problem you run into when applying this more broadly is context: how much of the context relating the subject and object you want, and which features encode that context. So the problem of abstraction she is talking about maps onto a regularization / feature-selection problem.
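If anyone wants to play with that, here's a minimal sketch of the analogy arithmetic using gensim's pretrained GloVe vectors (the model name and exact neighbor ranking are assumptions; any pretrained word-embedding model works the same way):

    # Minimal sketch: relations as differences of embedding vectors.
    # Requires gensim and a network connection to fetch pretrained vectors.
    import gensim.downloader as api

    # Small pretrained GloVe model; any word-embedding model would do.
    vectors = api.load("glove-wiki-gigaword-50")

    # "man is to king as woman is to ?"
    # most_similar effectively computes king - man + woman and returns
    # the nearest vocabulary vectors to that point.
    print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
    # "queen" should show up at or near the top of the list.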


Analogies are a type of cognitive jig.

https://blog.cabreraresearch.org/jf


You say this factually, but this seems to be the only research group to actually define and use the term.


Offhand these "jigs" appear to be no more than a particular (yet not fully defined) set of relations.


Hofstadter (GEB) and later Jeff Hawkins (Numenta) have long postulated this theory - and indeed it seems to be compelling - but the prediction-by-analogy paradigm has yet to deliver in a convincing way. Would love to see some progress on this front.


If anyone was curious about GPT-3, which draws on zero of this work but is remarkably intelligent nevertheless, Mitchell has access and has found (when used correctly: https://twitter.com/MelMitchell1/status/1285270704313610241) that it solves a surprising number of Copycat analogies: https://medium.com/@melaniemitchell.me/can-gpt-3-make-analog...

More amusing is a comparison with a 5yo human child: https://twitter.com/lacker/status/1294341796831477761

A pity that she didn't mention any of that; it's pretty interesting. (A rough sketch of the prompting setup is below.)
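For anyone curious what "used correctly" means in practice, here's a rough sketch of few-shot letter-string prompting. The exact prompt wording and engine name are my guesses, not Mitchell's, and it uses the pre-1.0 completion-style openai client (assumes OPENAI_API_KEY is set in the environment):

    # Rough sketch of few-shot letter-string analogy prompting against GPT-3,
    # using the completion-style openai client (openai<1.0). The prompt format
    # is illustrative, not Mitchell's exact wording.
    import openai  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Q: If a b c changes to a b d, what does p q r change to?\n"
        "A: p q s\n"
        "Q: If a b c changes to a b d, what does i j k change to?\n"
        "A: i j l\n"
        "Q: If a b c changes to a b d, what does m r r j j j change to?\n"
        "A:"
    )

    response = openai.Completion.create(
        engine="davinci",   # original GPT-3 base model
        prompt=prompt,
        max_tokens=10,
        temperature=0,
    )
    print(response.choices[0].text.strip())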


> More amusing is a comparison with a 5yo human child: https://twitter.com/lacker/status/1294341796831477761

> Q: If axbxcx goes to abc, what does xpxqxr go to?

> A: s

I've gotta be missing something here?


5-year-olds are prone to answering questions they don't understand randomly?!


The ideas of Hofstadter and Hawkins aren't really related. Hawkins postulated a single predictive algorithm as the explanation of how our neocortex works. Hofstadter has focused on analogies and has had some ideas similar to Hopfield networks/the Boltzmann machine. The latter fits in the trend of complex-systems research, inspired as it was by spin glasses. Hofstadter worked with ideas that emerged from cybernetics, such as recursive causation and the vital role of feedback. Hawkins came from a different direction and ended up with a wholly predictive theory.

It should be noted that predictive coding predates Hawkins' work, perhaps best exemplified by Rao and Ballard's hierarchical predictive coding model from 1999. Various researchers working from similar assumptions put their thoughts together in Bayesian Brain in 2006. Since then, predictive processing has grown to be a very popular framework in neuroscience and cognitive science in general. Friston's free energy principle and philosopher Andy Clark's popularization of it have a lot to do with that.

There's been a ton of progress so far, so there's plenty of academic material for you to dive into.


Without googling, I can't say at first glance whether the parent comment is bot-generated.


Yeah, it immediately made me think of "The Subtlety of Sameness" by Robert French.

https://en.wikipedia.org/wiki/Robert_M._French


I've looked at Jeff Hawkins's On Intelligence. I don't see how it bears similarities to Hofstadter's work. Notably, Jeff Hawkins is highly focused on brain anatomy, while Hofstadter's work mostly starts at the level of logic and abstraction.

Anyway, Hofstadter's (and his student Cabrera's) aim actually hasn't been to deliver a convincing industrial-scale system but to produce models that illuminate our understanding of thought. I'm not actually sure that's the right way of doing it, but it's worth noting they're not trying to "deliver" on terms akin to today's neural networks.


Hofstadter's lab did a bunch of related work, but it hasn't been taken up broadly and I'm not sure how much real traction they got. I agree it's interesting.


Related to understanding how ML models make their decisions is the concept of influence [1] (or more broadly, instance-based explanations) [2], where the training data most influential on a prediction (basically the analogy the model used) is highlighted. See also [3], where I give a take on using these methods to understand whether you can trust a prediction, based on whether the model has an appropriate analogy to work from. (A toy sketch of the instance-based flavor follows the references.)

[1] Understanding Black-box Predictions via Influence Functions, https://arxiv.org/abs/1703.04730
[2] Evaluation of Similarity-based Explanations, https://arxiv.org/abs/2006.04528
[3] https://www.willows.ai/blog/getting-more
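To make the instance-based idea concrete, here's a toy sketch of the simpler nearest-neighbor flavor from [2] (not the influence-function math of [1]). The embedding step is faked with random vectors standing in for, say, a network's penultimate-layer features:

    # Toy sketch of a similarity-based explanation: find the training examples
    # most similar to a test point in some embedding space and treat them as
    # the "analogy" behind the model's prediction. Random vectors stand in for
    # real embeddings here, purely for illustration.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(100, 8))   # stand-in for embedded training data
    x_test = rng.normal(size=(1, 8))      # stand-in for one embedded test input

    nn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(X_train)
    distances, indices = nn.kneighbors(x_test)

    # indices[0] are the training rows "explaining" the prediction;
    # large distances suggest the model has no close analogy to lean on.
    for dist, idx in zip(distances[0], indices[0]):
        print(f"train example {idx}: cosine distance {dist:.3f}")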


There’s been a lot of work over the decades on analogical reasoning (though, as others have pointed out, not a lot of concrete progress), so it seems unfair that the editors ruined a good article by calling her “the” scientist as opposed to “a” scientist doing this kind of work.


> There’s been a lot of work over the decades on analogical reasoning

Do you have any good references besides Mitchell's Copycat and French's Tabletop books/thesis?


Hofstadter has had some grad students in this research program since then, though I don't have the links handy. There are several code repos at https://github.com/fargonauts/, plus another Copycat implementation I ran across: https://github.com/jalanb/co.py.cat


Hofstadter came to Apple in 1987 and spoke on this exact topic. I gleaned two things out of it at the time:

- He looks and sounds so much like Gilligan from Gilligan’s Island that I found it distracting.

- His ideas about analogies in computing were incredibly vague.

I am a big fan of the guy. Gödel, Escher, Bach was the first book that really taught me what logic is. But it’s fascinating to reflect on how little computing has advanced with regard to practical analogizing.

GPT-3 is a parlor trick. Making vapid systems that fool people who aren’t thinking deeply is not going to end in successful general AI.

I predict, when the day comes that real general AI lands, we will discover that it is chronically manic or depressed or otherwise uncooperative. We will have re-invented teenagers. And we will never, ever be able to safely trust them.


Where does the idea come from on HN that anyone is working on “general” AI? It’s a straw man argument. That’s not the goal of GPT3 or pretty much any of the current AI research.


The idea comes from the vast and overwhelming deluge of marketing of AI products, which give the calculated impression that products are “smart.”

I just saw a product being described as containing the “distilled wisdom of 1000 testers.” So, really? Are you actually wondering how people like me can think that some might be trying to work on general AI?


It should have been called Augmented Intelligence from the start. The goal is to augment human cognition. Give us better tools. AGI is the stuff of scifi and futurists.


I think the same problem of mythologizing and fictional nonsense would have grown up around whatever term was originally adopted.

AGI, though, is just the worst. If I am considered to possess general intelligence, then why am I so bad at painting? I would love to be able to paint, even just to do Picasso ripoffs, but I can't make anything close after much time, effort, and training on past data/paintings.

When a computer has the same type of problem, though, it negates any concept of general intelligence for the computer.

Nothing short of an Artificial God will satisfy what we mean by AGI.

Our poorly defined language here is causing philosophical problems.

I think a good analogy is to artificial light. The equivalent attitude towards AI would be to not be impressed by the light bulb but to be constantly waiting for some miniature sun/star that doesn't even make sense. "That light bulb is not real artificial light," says the AGI fanatic.


Some people have the goal of understanding human cognition. Augmentation is good - I agree - but understanding would also be useful. Understanding ourselves and how our society processes information could allow us to avoid some of the mistakes that plague us.


I'm curious about her work and plan to read up on it more! I've used the analogy metaphor as well, but I commonly describe problem solving in CS via Soloway's 'basic recurring plans' [1] (a tiny example is sketched below). A lot of our approaches toward tasks involve reusing personal templates that we've accrued over training. When I need to build an API, I use the same basic design that I've used half a dozen times now.

Some of my prior comments also talk about how this is the same approach in martial arts. You train techniques to develop muscle memory so you can apply them when a particular pattern appears. A term I've used in both the dojo and the classroom is that our goal is to "neutralize the attack/problem" into one of our pre-known templates.

[1] Soloway, E., & Ehrlich, K. (1984). Empirical Studies of Programming Knowledge. https://ieeexplore.ieee.org/abstract/document/5010283
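To make "basic recurring plans" concrete, a classic plan from that literature is the running-total loop. A minimal sketch (the function and example are mine, not Soloway's):

    # A classic "recurring plan" from the programming-plans literature:
    # the running-total loop (initialize an accumulator, update it in a
    # loop, use the result afterwards). Experienced programmers recognize
    # and reuse this template without re-deriving it each time.
    def average(values):
        total = 0.0          # accumulator initialization
        count = 0            # counter-variable plan
        for v in values:     # loop over the input
            total += v       # running-total update
            count += 1
        return total / count if count else 0.0  # guard against empty input

    print(average([3, 4, 5]))  # 4.0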


I’ve been thinking about this with the recent rapid expansion of neural net image generation and interpretation.

I think most of our analogies are highly visual in their nature - we take the metaphorical and give it physical form, and through that mapping gain a common understanding of abstract topics. Our minds largely evolved around being able to make sense of and operate within the visual world, so it would make sense for our cognition to be tied into that ancient and massively powerful compute element. It’s embedded throughout human culture and language, so is likely not an emergent phenomenon but an inherent property of our minds.

On that basis, I wonder if we’re nearer than we think. I’m absolutely not an AI researcher, and it probably shows, but perhaps a visual intermediary is the key to generating a useful understanding of analogies in a way that maps well to the human understanding of analogies, and thus brings us a step closer to a general purpose AI.


Some other people who are working on something a bit different from the mainstream:

Joshua Tenenbaum : https://mitibmwatsonailab.mit.edu/people/joshua-tenenbaum/

John Laird : https://laird.engin.umich.edu/

Steven Muggleton : http://wp.doc.ic.ac.uk/shm/


I'm endlessly fascinated by computational metaphor, or metaphorical computation: ways of computing and thinking with metaphor and analogy. Outside of Hofstadter and his graduate students, this seems to be basically unexplored. I've spent a lot of time collecting reading material along these lines; I just wish I had more time outside of writing normal software every day to investigate it.


What readings would you suggest for starters? Or which ones were most fascinating, essential to you?


Analogies are often a poor way to think.



