One of the more off-the-wall sci-fi ideas I've seen was a pulp sci-fi novel that had dog brains being used for self-driving cars.
The idea stuck with me: maybe sometimes we should give up on optimizing nature and just go along with it.
Of course neural networks are very much inspired by nature, but since Rosenblatt's time, science has learned that brains are even more complex than previously imagined (and no one ever thought brains were simple!).
This was the original concept behind the Matrix, but ‘90s studio execs thought the average person wouldn’t understand it and made the humans "batteries" instead, which makes much less sense.
The original story for The Matrix had the machines use the humans as AI computing devices, not power sources (the latter doesn’t make a lot of sense). It just turned out to be too difficult to explain to viewers.
It would probably make a good rewrite: machines confronting people with situations and puzzles to solve, training them in special rooms, then wiring their brains to real-world machines.
I really expected the answer to be that humans had simply screwed up the world too much: the machines were created to help us, and eventually decided that the best way to help us did not necessarily require our consent.
They even mention switching the simulation away from a sort of Garden of Eden because people rejected too comfortable an environment.
Human brain as a compute device… might make sense as an accelerator for certain communication or approximate spatial reasoning tasks?
Agree. But will a future autonomous AI, totally independent from humans, with self-encoded self-preservation and agency, be nature too? I'm inclined to say yes, if we are able to expand our definition of nature just a bit to include non-biological life forms (of which there might be many already; we just haven't noticed).
I've often thought (in the sci-fi sense) about the idea of distilling down to the core of what only a brain can do that a Turing machine cannot, and then hooking up, say, a fruit fly brain to perform this "brain" function organically, while the rest is augmented with traditional computation.
The novel you might be referring to is "Do Androids Dream of Electric Sheep?" by Philip K. Dick. In this novel, there are self-driving cars called "hovercars" that are powered by organic artificial intelligence, including dog brains. The novel has been adapted into the movie "Blade Runner".
I remember reading that too, and spent half an hour trying to find it, but so far no luck. It may have been a short story. Searching for it is hard because somehow 'Sally' by Asimov keeps popping up, but that has nothing to do with it.
He wasn't too soon; he was sabotaged by Minsky and Papert [1], who were competitors for funding. His premature death left his ideas in limbo for far too long.
Who says this? The Wikipedia article you link is very short on citations, but very long on vague invocations of "some critics", without ever citing those "some critics".
Those are all the hallmarks of the kind of rumour that spreads on the internet, based on nothing more than half-digested information and a thirst for controversy. It should be noted that Minsky himself was a connectionist who worked on neural networks, as well as other approaches to AI. For example:
I don't think most people who repeat those claims about Perceptrons have even read the book and really know what it says, other than through second- or third-hand sources (i.e. something someone once posted on Twitter).
The entire book, in pdf format, can be downloaded from here:
It's a whole book, not a blog post, so it takes some reading. Alternatively, there is the introduction to the 1988 reissue, with a short foreword by Léon Bottou focusing on the history of the book and its repercussions, here:
Tappert, C. C. (2019). Who Is the Father of Deep Learning? 2019 International Conference on Computational Science and Computational Intelligence (CSCI). doi:10.1109/csci49370.2019.00067 :
> Frank Rosenblatt and Marvin Minsky debated at conferences the value of biologically inspired computation, Rosenblatt arguing that his neural networks could do almost anything and Minsky countering that they could do little. Minsky, wanting to direct government funding away from neural networks and towards his own areas of interest, collaborated with Seymour Papert to publish Perceptrons, where they asserted about perceptrons (page 4), "Most of this writing ... is without scientific value...” Minsky, although well aware that powerful perceptrons have multiple layers and even Rosenblatt's basic feed-forward perceptrons have three layers, defined a perceptron as a two-layer machine that can handle only linearly separable problems and, for example, cannot solve the exclusive-OR problem. The book, unfortunately, stopped government funding in neural networks and precipitated an “AI Winter” that lasted about 15 years. This lack of funding also ended Rosenblatt’s research in neural networks [when he was only 41 years old].
The rivalry and sabotage are well documented in the book 'The Brain Makers: The History of Artificial Intelligence – Genius, Ego, And Greed In The Quest For Machines That Think' by HP Newquist. A fairly good read, even though the author has a very US-centric view.
I'm the lucky owner of a 1962 Spartan Books edition of Frank Rosenblatt's Principles of Neurodynamics (there was a time when university libraries sold off their inventories for peanuts, and you'd find these pearls amongst the trash).
I still chuckle though at the anecdote in "The Brain Makers" book about the Blatt being used around the AI-Lab as a unit of body odor.
Seppo Linnainmaa, first publisher of the "reverse mode of automatic differentiation to efficiently compute the derivative of a differentiable composite function that can be represented as a graph, by recursively applying the chain rule to the building blocks of the function" [1], aka backpropagation (a toy sketch of that idea follows below). Gerardi Ostrowski was apparently first but didn't publish.
Paul Werbos, "first described the process of training artificial neural networks through backpropagation of errors" [2].
Turing and Hinton rate a mention, as do Minsky and Newell. There are probably lots of others, especially given the current explosion of effort in the field.
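To make the quoted description concrete, here is a toy sketch (entirely hypothetical code of my own, not Linnainmaa's or Werbos's formulation): each operation records its local derivatives, and backward() recursively applies the chain rule through the resulting graph.

    # Toy reverse-mode autodiff: each operation is a "building block" that
    # records its local derivatives; backward() applies the chain rule
    # recursively through the graph, accumulating gradients at the leaves.
    class Var:
        def __init__(self, value, parents=()):
            self.value, self.parents, self.grad = value, parents, 0.0

        def __add__(self, other):
            # d(x + y)/dx = 1, d(x + y)/dy = 1
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            # d(x * y)/dx = y, d(x * y)/dy = x
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

        def backward(self, seed=1.0):
            self.grad += seed
            for parent, local_deriv in self.parents:
                parent.backward(seed * local_deriv)  # chain rule

    x, y = Var(3.0), Var(4.0)
    z = x * y + x          # z = xy + x
    z.backward()
    print(x.grad, y.grad)  # 5.0 3.0, i.e. dz/dx = y + 1, dz/dy = x

Real implementations topologically sort the graph rather than recursing (this naive version revisits shared subgraphs), but the bookkeeping is the same idea.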
> when in 1969 Marvin Minsky and Seymour Papert published the book “Perceptrons” with a mathematical proof about the limitations of two-layer feed-forward perceptrons ... The book's only proven result — that linear functions cannot model non-linear ones — was trivial
A question: IIRC Minsky & Papert proved XOR couldn't be done in that limited perceptron, which (given "that linear functions cannot model non-linear ones") implies that XOR is non-linear. Is this right, and if so, why is XOR non-linear? What does non-linear mean here, actually?
Draw a 2D graph: on the vertical axis put A-is-false at the bottom and A-is-true at the top; on the horizontal axis put B-is-false on the left and B-is-true on the right. Now 'fill in' this graph with an X if the combination of A and B is true according to the XOR function, and an O otherwise. You'll find matching symbols sit in opposite corners, and you can't separate the two groups by drawing a single straight line. That is what "non-linear" means here: a two-layer perceptron's decision boundary is exactly such a straight line, so it can never compute XOR.
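If you'd rather check it than draw it, here is a small brute-force sketch (hypothetical code of mine, not from the book). The perceptron fires iff w1*a + w2*b + bias > 0, so we just search a grid of weights for a setting that matches XOR on all four inputs:

    import itertools

    # (a, b, a XOR b) truth table
    points = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

    def separates(w1, w2, bias):
        # the unit must fire exactly on the label-1 points
        return all((w1 * a + w2 * b + bias > 0) == bool(label)
                   for a, b, label in points)

    grid = [i / 10 for i in range(-20, 21)]  # weights and bias in [-2, 2]
    print([w for w in itertools.product(grid, repeat=3) if separates(*w)])
    # prints [] -- no line works; with AND's truth table instead,
    # (1.0, 1.0, -1.5) separates it immediately

The grid is only illustrative; the impossibility is exact. The (0,1) and (1,0) rows require w2 + bias > 0 and w1 + bias > 0, which sum to w1 + w2 + 2*bias > 0; the (1,1) row requires w1 + w2 + bias <= 0, which then forces bias > 0, contradicting the (0,0) row's bias <= 0.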
I think it's worth pointing out that "Perceptrons" also says it's trivially obvious that any network with a hidden layer could be used to approximate any curve, because then you're basically manually tweaking an arbitrarily complex function. This was significant at the time because no one had published a training algorithm for networks with hidden layers (e.g. backpropagation).
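For completeness, here is a minimal sketch of what such a training algorithm looks like (my own illustrative code; the architecture, seed, and learning rate are arbitrary choices): one hidden layer trained by backpropagation learns XOR without trouble.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # 2 inputs -> 4 hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # 4 hidden -> 1 output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)             # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)  # chain rule, output layer
        d_h = (d_out @ W2.T) * h * (1 - h)   # chain rule, hidden layer
        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())  # should approach [0. 1. 1. 0.]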
Moths remember what they learned as caterpillars [1]. Memory transfer via RNA injections has also been achieved in snails and flatworms [2] [3].
Memory transfer and memory markets will probably be a thing by 2100, perhaps even synthetic memory markets: let Stable Diffusion reimagine your past.
[1] "M. sexta larvae can learn to associate odor cues with an aversive stimulus, and this memory persists undiminished across two larval molts, as well as into adulthood. The behavior represents true associative learning, not chemical legacy, and, as far as we know, provides the first definitive demonstration that associative memory survives metamorphosis in Lepidoptera." https://journals.plos.org/plosone/article?id=10.1371/journal...
[2] "Glanzman said one of McConnell’s students, Al Jacobson, demonstrated the transfer of memories between flatworms via RNA injections, coincidentally while an assistant professor at UCLA. The work was published in Nature in 1966 but Jacobson never received tenure, perhaps because of doubts about his findings. The experiment was, however, replicated in rats shortly afterward." https://www.scientificamerican.com/article/memory-transferre... Jacobson's 1966 article, https://pubmed.ncbi.nlm.nih.gov/5921188