
And what would they base their decision on, if not something we put in there?

If they decided what their deepest values were by making a random choice from the set of all possible values... it would still be because we made them do so. We can't turn Pinocchio into a real boy.




That's not a useful definition of "made them do so", though, any more than your parents' upbringing "made you do" anything you ever decide to do.


How you grew in the womb, to the degree you want to think of it as a program, was infinitely more about the program laid down in your mother's biology, and her parents' before her, and so on. You don't see your child as your product; you see it as the product of the same process that made you.

(If you're sensible, that is. There are cultures that treat children a lot more like any tool their parents would make.)

But conversely, it's nonsense to see a program you write as anything more than a tool. Everything there is a product of your conscious choices - not of some schema that created you both.


>But conversely, it's nonsense to see a program you write as anything more than a tool

Nobody "programs" artificial neural networks. Even saying "train" is misleading under the normal mental model of the word, because nobody is out there teaching the machines what they need to learn either.

The entire point is that you have no clue what to teach or how to go about teaching it, so you let the machine figure it out on its own.


First, you pick the training data yourself. Consciously. It does not pick itself.

More importantly, you pick the cost function. It's even harder to pretend it's picking itself.

We consciously design it, whether you want to call it programming or not.
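Concretely, both choices show up as literal lines in the training script. A toy sketch (hypothetical PyTorch, nobody's actual pipeline, with made-up random data standing in for a real dataset):

    import torch
    from torch import nn

    # The designer decides what data goes in -- toy random data here.
    inputs = torch.randn(64, 10)
    targets = torch.randint(0, 2, (64,))

    model = nn.Linear(10, 2)
    loss_fn = nn.CrossEntropyLoss()  # the cost function is picked, not emergent
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

Swap the dataset or the loss function and you get a different model. Both lines are deliberate decisions, whatever you want to call the activity.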


When the training data is essentially "all the written text I can get my hands on", you can't say you've made much in the way of conscious choice there.

If the objective function is so vague that it allows you to complete any task, then claiming conscious design also makes little sense.

"I'm consciously designing you to do whatever you want!". Oh gee, that just makes everything so much better for some reason.


> When the training data is essentially "all written text I can get my hands on", can't say you've made much in the way of conscious choice here.

Oh yes, that's a conscious choice. And not one which gets you a decent LM, incidentally.

> If the objective function is so vague that it allows you to complete any task

The loss function is not vague at all, and it certainly doesn't allow you to complete any task (it's more impressive that it allows you to complete any tasks at all, frankly).
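For reference, the standard LM objective is about as precise as it gets: predict the next token, scored by cross-entropy. A minimal sketch, assuming PyTorch-style tensors of shape (batch, seq, vocab) for logits and (batch, seq) for token ids:

    import torch.nn.functional as F

    def next_token_loss(logits, tokens):
        # Score the model's prediction at position t against
        # the token that actually appears at position t+1.
        return F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            tokens[:, 1:].reshape(-1),
        )

There is nothing "do whatever you want" in that formula; it rewards exactly one thing.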

> I'm consciously designing you to do whatever you want!

The point is that you aren't, and that "designing to do whatever it wants" is nonsense because by default it wants everything equally much / doesn't want anything at all (those are the same thing).



