Hacker News: splwjs's comments

social engineer: actually here's a personal anecdote loaded with goodguy words that inform you about how the thing I want you to think is correct.

engineer: interesting. what problems has it helped you solve

social engineer:


it's true. i just like knowing how things work.


Right now LLMs have a slight advantage over Stack Overflow etc. in that they'll react to your specific question and circumstances, but they also require you to double-check everything they spit out. I don't think that will ever change, and I think most of the hype comes from people whose salaries depend on it being right around the corner, or from people playing a speculation game (if I learn this tool I'll never have to work again; if I avoid it, I'll be doomed to poverty forever).


This take shows up a lot and it's a bad one.

"I can surround your child with dangerous, unhealthy things and do my best to corrupt and poison them, and there should be no limit to this behavior whatsoever, because if I succeed it's your fault for being bad parents! All you have to do is say no. It's not like it's my full-time job to make end-runs around you with the aid of behavioral science, psychology, and a budget; no, no, guiding your children morally is as simple as saying no once. Are you too stupid and lazy to do that?"


Your comment is one extreme.

"Parents should just say no" is another extreme.

I would put money on the best solution being somewhere between those two extremes.


… what’s the second extreme expressed here? I see the same one stated two ways.


gjsman-1000 says that all responsibility falls to the parent for failing to say no.

splwjs says that corporations have the responsibility because they spend billions on psychological manipulation campaigns.


We've had Markov chain generators for a while; having enough computing power to let them regurgitate Wikipedia, Reddit, and Stack Overflow content is not "a huge step towards AGI".


I disagree.

It's true that Markov chain generators have existed for years. But historically their output was usually just a cute thing that gave you a chuckle; they were seldom as generally useful as LLMs currently are. I think the increase you mention in compute power and data is itself a huge step forward.

But transformers have also been hugely important. Transformer-based LLMs are orders of magnitude more powerful, smarter, and trained on more data than previous types of models because of how they scale. The attention mechanism also lets them attend to far more of the input, not just the few preceding tokens.
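A toy sketch may make the contrast concrete. Below is a hypothetical, minimal order-1 word-level Markov generator (the corpus and names are invented for illustration): each next word is drawn only from what followed the single previous word in the training text, which is exactly the shortsightedness that attention removes.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word context to the words observed right after it."""
    tokens = text.split()
    chain = defaultdict(list)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        chain[context].append(tokens[i + order])
    return chain

def generate(chain, length=8, seed=0):
    """Walk the chain: the next word depends only on the last `order` words."""
    rng = random.Random(seed)
    context = rng.choice(sorted(chain.keys()))
    out = list(context)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(context):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus)
print(generate(chain))
```

After "the", this model can only ever emit "cat" or "mat", no matter what came earlier in the sentence; an attention-based model conditions on the whole preceding context instead.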


I think you missed OP's point.

If you want something useful, then we're getting closer.

AGI is something specific: as a prerequisite, it must understand what is being asked. What we have now is a puppet show that makes us humans think the machine is thinking, similar to Markov chains.

There is absolutely some utility in this, but it's about as close to AGI as the horse-cart is to commercial aircraft.

Some AI hype people are really uncomfortable with that fact; I'm sorry, but that reality will hit them sooner rather than later.

It does not mean what we have is perfect, cannot be improved in the short term, or that it has no practical applications already.

EDIT: downvoting me won't change this; go study the field of academic AI properly, please


AGI is something fairly specific, yes, but depending on what you mean by “understand”, I don’t think it necessarily needs to “understand”. To behave, for all practical purposes, as if it “understands” is good enough. (For some senses of “understand”, behaving that way may be the same thing as “understanding”, in which case, yes, it needs to “understand”.)

It seems clear to me that, if we could programmatically sample from a satisfactory conditional probability distribution, this would be sufficient for it to behave, for all practical purposes, as if it “understands”, and moreover for it to count as AGI. (For it to do so at a fast enough rate would make it both AGI and practically relevant.)

So, the question as I see it is whether the developments with ANNs, trained as they have been, are progress towards producing something that can sample from a conditional probability distribution in a way that would be satisfactory for AGI.

I don’t see much reason to conclude that they are not?

I suppose your claim is that the conditional probability distributions are not getting closer to being such that they are practically as if they exhibit understanding?

I guess this might be true…

It does seem like some things would be better served by having variables with a fixed identity but a changing value, rather than just producing more variables. That’s kind of like the “pure functional programming vs. not-that” distinction; and of course, as pure functional programming shows, one can still compute whatever one wants using only immutable values, but one usually still uses something that behaves as if a value is changing.

And of course, for transformer models, tasks that take more than O(N^2) time (or maybe O(N^3), since each of the N tokens is processed in ways depending on each pair of the results of processing previous ones?) can’t be done while producing a single output token, so that’s a limitation there.
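As an aside on that O(N^2): a minimal, untrained self-attention sketch (random matrices, NumPy assumed available; shapes are the point, not the numbers) shows where the quadratic term lives, namely the N x N score matrix pairing every token with every other token.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 4                       # 6 tokens, 4-dimensional embeddings
Q = rng.standard_normal((N, d))   # queries, one per token
K = rng.standard_normal((N, d))   # keys
V = rng.standard_normal((N, d))   # values

# Every token scores every token: this N x N matrix is the quadratic cost.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
out = weights @ V                 # each output mixes all N value vectors

print(scores.shape)               # (6, 6)
```

Doubling the sequence length quadruples the size of `scores`, which is why a fixed per-token budget of compute bounds what a single generated token can do.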

I suppose that the thing that is supposed to make transformers faster to train, by letting the predictions for each token in a sequence be made in parallel, only really makes sense if you have a ground-truth sequence of tokens… though there is RLHF (and similar), where the fine-tuning is based on an estimated score for the final output… and I suppose possibly neither is great at producing behavior sufficiently similar to reasoning?

(Note: when I say “satisfactory probability distribution” I don’t mean to imply that we have a nice specification of a conditional probability distribution which we merely need to produce a method that can sample from it. But there should exist (in the abstract (non-constructive) mathematical sense) probability distributions which would be satisfactory.)


I do not consider "understanding", which cannot be quantified, as a feature of AGI.

In order for something to qualify as AGI, answering in a seemingly intelligent way is not enough. An AGI must be able to do the following things, which a competent human would do: given the task of accomplishing something that nobody has done before, conceive a detailed plan for achieving it, step by step. Then, after doing the first steps and discovering that they were much more difficult or much easier than expected, adjust the plan based on the accumulated experience, in order to increase the probability of reaching the target successfully.

Alternatively, one may realize that the goal can be reformulated, replacing it with a related goal that preserves most of its usefulness but can be reached by a modified plan with much better chances of success. Or one may recognize that the initial goal is impossible to reach for now, but that there is another, simpler goal that is still desirable, even if it does not provide the full benefits of the initial one. Then, establish a new plan of action to reach the modified goal.

For now this kind of activity is completely outside the abilities of any AI. Despite the impressive progress demonstrated by LLMs, nothing they have done has brought a computer any closer to having intelligence in the sense described above.

It is true, however, that there are a lot of human managers who would be just as clueless as an LLM about how to perform such activities.


>The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.

What are you talking about? The main marketing strategy of so terribly many AI companies is to run around declaring that the end is near because their product is so powerful.

>The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture.

At this point I think you're joking. Tightly centralizing power always results in oligarchy, kleptocracy, and collapse. And why do you think this central world government wouldn't militarize via your unstoppable robots?


I think online matchmaking has absolutely destroyed people's ability to feel like they're good at any game.

Like you'll never be a big fish in a small pond. If you played as much as you currently do but could only play with people local to you, you'd be the best person you know at this thing. And that's a really good feeling. But you'll never get that feeling, because you should really be grinding past whatever plat 3 is in order to not suck.


Conversely, you'll never be a small fish in a big pond. If you could only play with people local to you and they all played much more than you, you'd be the worst person you know at this thing. And that's a really bad feeling. That gets people to quit games.

The big fish eat all the small fish until the big fish are the only ones left.


I remember being in school and thinking that "what if my (color) is your (other color)" was a cool question, and then later I think I reasoned out that color is measurable, so the actual color is objective, and the differences between people are just, like... rods and cones that somehow differ from person to person, aka partial colorblindness.

So I don't know what this is.


That makes sense; it's a predatory business model. It's like auto title loans: they want to corner you, offer you money when you're at your most helpless, then take your car away. Failing that, they want to lock you in a debt spiral you'll never get out of and just be a leech on your paycheck forever.

Everyone I've ever talked to has an antagonistic view of these companies and most other American institutions, and the more you hate them and recognize they hate you, the better off you are financially.


It’s predatory if you have a poor credit score, but the alternative is having no consumer credit available at all, which is the story in Europe.

You want to take out a loan to start a small business and aren’t well connected? Tough luck. Want a 30 year fixed rate loan to buy property on a median income salary? Take a hike. In a world without consumer credit the 99% become the permanent renting class.


Not to mention, RE: payday loans and credit cards: paying 30% APR might seem bad, but if you've got a broken car and a $1000 repair bill, taking the 30% APR loan, paying it off in a year, and incurring $169.85 of interest in the process is probably better than losing your job. Sure, it'd be better if everyone had an emergency fund so they never needed such high-APR loans in the first place, but banning such "predatory" loans isn't going to magically make that happen.
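The $169.85 figure checks out under a standard fixed-payment amortization schedule (an assumption on my part; the comment doesn't say how the loan is repaid):

```python
def amortized_interest(principal, apr, months):
    """Total interest paid on a fixed-payment loan with monthly compounding."""
    r = apr / 12                                    # periodic (monthly) rate
    payment = principal * r / (1 - (1 + r) ** -months)
    return payment * months - principal

total = amortized_interest(1000, 0.30, 12)
print(f"${total:.2f}")   # ~ $169.85, matching the figure above
```

A flat 30% of $1000 would be $300; the amortized total is lower because the balance shrinks with every payment.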


The predatory part is how they are marketed to people who have no ability to pay them back, not that they exist as a product at all. If they stuck to offering them to people with steady, reliable income who just need a hand out of a hole, no one would have much of a problem with them.


What are you talking about? These things are all available in Europe, without having to build up some credit score. If I want to get a loan for my business, I go to my bank and get one. If I want to buy property, I get a mortgage. If I want to buy a consumer good without having money on hand, I can use a credit card or use a BNPL provider. Credit here is granted based on not currently having debts, rather than being perpetually in "the right amount of debt".


"you can minimize the pain from overreach by thoroughly submitting to it so actually it's basically your fault if you don't like it"


What month was it? Because if it's summer in Austin and you're standing around downtown, then you probably are struggling in one way or another.

Also, downtown Austin is disgusting; maybe they were just stunned someone would be there on purpose before the sun goes down.

Hope you enjoyed the museum/trip, though

