
Yea, and it's possible to write chords such that it's actually ambiguous where the tonal center is. To some extent, every successive diatonic chord narrows it down, but there may simply not be enough information to narrow it down to one key. Or the writer did something non-diatonic relative to what was happening before, and you have to readjust.
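
To make the "narrowing down" concrete, here's a rough Python sketch of my own (a toy, not anything from the thread): treat each chord as a root plus quality, assume major keys only, and intersect the set of keys each successive chord is diatonic to.

    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    # Triad quality on each degree of a major scale (I ii iii IV V vi vii dim)
    DEGREE_QUALITIES = ["maj", "min", "min", "maj", "maj", "min", "dim"]
    MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the scale degrees

    def diatonic_triads(tonic):
        # All (root, quality) triads diatonic to the major key built on `tonic`.
        root = NOTES.index(tonic)
        return {(NOTES[(root + step) % 12], q)
                for step, q in zip(MAJOR_STEPS, DEGREE_QUALITIES)}

    def candidate_keys(chords):
        # Intersect the major keys compatible with every chord heard so far.
        keys = set(NOTES)
        for chord in chords:
            keys &= {k for k in NOTES if chord in diatonic_triads(k)}
        return keys

    print(candidate_keys([("C", "maj"), ("F", "maj")]))                # {'C', 'F'} - still ambiguous
    print(candidate_keys([("C", "maj"), ("G", "maj"), ("D", "min")]))  # {'C'} - narrowed to one key

With only a C major and an F major triad you're still left with two candidate keys (C and F); a progression like C, G, D minor pins it down to C major, since only that key contains all three.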


It'd be a lot more interesting if they actually made points rather than vague condescension and sneering.


But points against what? The very idea that a hyperintelligent being is _possible_ very much lingers in the land of science fiction. Yes, even today.

Anyone with a little education in artificial intelligence knows that it's either impossible or somewhat possible but not likely. We truly are no closer to artificial consciousness than we were 500 years ago; it's all still smoke and mirrors.

The actually dangerous scenario is the dumb artificial intelligence we build starting to misbehave, due to human programming errors or statistical bias: self-driving cars running over people; military systems confusing friend with foe; this stupid predictive keyboard not predicting correctly what I'm trying to type. This is the stuff of real, possible and demonstrable fears. Anything grander has no basis in reality.

I haven't read Bostrom's book yet, but from what I've read from others about it, it's all about this nonsense that we can create a new kind of intelligence that will instantly destroy us. That sounds cool to people ignorant of the basic tenets of what A.I. is (so far), but people who actually know or understand the current state should know better.

Worrying about a hyperintelligent A.I. is like worrying about ghosts while living in a neighborhood filled with serial killers.

edit: grammar.


I think your claim that the majority of people with an AI education believe we are no closer to hyperintelligence than we were in the 1500s is inaccurate.

Here is a survey (first one I googled) showing most believe it is relatively soon: https://aiimpacts.org/agi-11-survey/

I think if you want to critique Bostrom that is fine, but better to do so on the merits. His work is straightforward to read.


> The very idea that a hyperintelligent being is _possible_ very much lingers in the land of science fiction. Yes, even today.

Yesterday, the idea that a computer could beat a chess grandmaster, or compose poems, was also in the land of science fiction.

Most technical inventions are in the land of science fiction, until one day they are not.


Just as I thought the author was going to make an actual assertion, he goes "When I was in my twenties.."

I think that's enough of my attention for now.


Well, it's unorthodox because it's a wee bit racist


There's a section on the topic of racism towards the end.

CTRL-F for "logomachia"


I think his is a point that (as a person of color in the US) I've been afraid to say out loud. Nobody ever calls me racist, but when someone eventually does, I know what my response will be: "yes, I am. And so are you".


In anti-racist circles you'd meet other PoCs who acknowledge the same.


I've seen postulated mechanisms by which increasing levels of self-modeling get selected for; demystifying paradoxes has some examples.


You do realize that natural selection iterates through these same sorts of garbage proteins at a rate of trillions and trillions of bacteria per year, for millions of years, and with a bias toward previously existing functional structures? It's a pretty potent optimization process, given the timeframes and scale. And it isn't equivalent to generating random code, because incremental progress can be made toward a functional protein, unlike with code.
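
To illustrate the incremental-progress point with a toy of my own (nothing like real protein evolution, just the classic cumulative-selection demo in Python): keeping the best partial match each generation reaches a 28-character target in a few hundred generations at most, whereas blind resampling would need on the order of 27^28 draws.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        # Number of positions that already match the target.
        return sum(a == b for a, b in zip(s, TARGET))

    def cumulative_selection(pop_size=100, mutation_rate=0.05):
        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        generations = 0
        while parent != TARGET:
            # Each offspring copies the parent with a small per-character mutation rate.
            offspring = [
                "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                        for c in parent)
                for _ in range(pop_size)
            ]
            # Selection: keep the fittest offspring as the next parent.
            parent = max(offspring, key=fitness)
            generations += 1
        return generations

    print("generations with cumulative selection:", cumulative_selection())

The point isn't the biology; it's that retaining partial successes and building on them is qualitatively different from re-rolling the whole sequence every time, which is what the random-code analogy implicitly assumes.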


Doesn't that presuppose we already _have_ life, and thus is irrelevant to the question, or am I misunderstanding?


There is a question of where the first self-replicating molecule comes from, how we get to DNA from that, and how we get the diversity of proteins that we see today.

Creating random DNA sequences and showing they don't produce 'useful' proteins has nothing to do with any of those questions (and how do we even know they are not useful?)


I think the main stumbling block for me is the perception of time scales. It is impossible for me to say anything with certainty about time scales of a billion years and the randomness that permeates the evolutionary process. What we see and know are the winners of the race, not the mountains of failures.


No one is saying a neuron is a one to one equivalent with a transistor. That behavior does seem like it's possible to emulate with many transistors, however.


Was just talking about quantum cognition and memristors (in the context of GIT) a few days ago: https://news.ycombinator.com/item?id=24317768

Quantum cognition: https://en.wikipedia.org/wiki/Quantum_cognition

Memristor: https://en.wikipedia.org/wiki/Memristor

It may yet be possible to sufficiently functionally emulate the mind with (orders of magnitude more) transistors. Though, is it necessary to emulate e.g. autonomic functions? Do we consider the immune system to be part of the mind (and gut)?

Perhaps there's something like an amplituhedron - or some happenstance correspondence - that will enable more efficient simulation of quantum systems on classical silicon, pending orders-of-magnitude improvements in coherence and error rate in whichever computation medium.

For abstract formalisms (which do incorporate transistors as a computation medium sufficient for certain tasks), is there a more comprehensive set than Constructor Theory?

Constructor theory: https://en.wikipedia.org/wiki/Constructor_theory

Amplituhedron: https://en.wikipedia.org/wiki/Amplituhedron

What is the universe using our brains to compute? Is abstract reasoning even necessary for this job?

Something worth emulating: Critical reasoning. https://en.wikipedia.org/wiki/Critical_reasoning


Nor did I. I asked if we could do the same function with an equal volume. Moore's law is dead. We're not going to scale performance forever. What good is an emulated human brain if it's the size of a building and takes a power plant to operate?


I do this, but I realized a while ago that most of the time, what I want to do is learn how to do things, and not actually do them. So that's all I try to do. Because after that, it's not fun anymore.

