
On the other hand, it may be that "Alignment likely generalizes further than capabilities." - https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes...


That may be true, but even if it is, that doesn't mean human-level capability is unachievable: only that alignment is easier.

If you could get expert-human-level capability with, say, 64xH100s for inference on a single model (for comparison, llama-3.1-405b can be run on 8xH100s with minimal quality degradation at FP8), even at a mere 5 tok/s you'd be able to spin up new research and engineering teams for <$2MM that can perform useful work 24/7, unlike human teams. You are limited only by your capital — and if you achieve AGI, raising capital will be easy. By the time anyone catches up to your AGI starting point, you're even further ahead because you've had a smarter, cheaper workforce that's been iteratively increasing its own intelligence the entire time: you win.
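Back-of-envelope for that <$2MM figure, in case anyone wants to check it (the per-GPU price is my assumption, roughly street pricing):

    # Rough sketch of the scenario above; the unit cost is an assumption (~$30k/H100).
    h100_unit_cost = 30_000        # USD per GPU, assumed
    gpus = 64                      # from the scenario above
    tok_per_sec = 5                # from the scenario above

    capex = gpus * h100_unit_cost               # 64 * 30_000 = 1_920_000
    tokens_per_day = tok_per_sec * 60 * 60 * 24

    print(f"hardware per 'team': ${capex:,}")            # $1,920,000, i.e. <$2MM
    print(f"output per day: {tokens_per_day:,} tokens")  # 432,000 tokens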

That being said, it might not be achievable! SSI only wins if:

1. It's achievable, and

2. They get there first.

(Well, and the theoretical cap on intelligence has to be significantly higher than human intelligence — if you can get a little past Einstein, but no further, the iterative self-improvement will quickly stop working, open-source will get there too, and it'll eat your profit margins. But I suspect the cap on intelligence is pretty high.)





I might be very biased:

- This report looks like it was put together in a week's time.

- As an academic, I am sure the quality of the presentation is lower than that of 90% of the academic papers out there.



It actually does seem to be "feature working as intended" if you read the initial Gemini paper.

Some relevant concerning stuff pulled from it here: https://twitter.com/psychosort/status/1760849091171352956

https://twitter.com/psychosort/status/1761044625307963445

Paper itself here: https://storage.googleapis.com/deepmind-media/gemini/gemini_...


pg talks about how Sam Altman is the most powerful person he's ever met. Seems we have a super powerful psychopath running perhaps the most important company in human history.

I do think he legitimately believes he's doing the right thing all throughout, though, which maybe makes it more scary.

Sorta like how Mark Zuckerberg seemed to truly believe in Facebook's mission and yet wound up creating all sorts of negative externalities for the world. Mark Zuckerberg just isn't quite as effective as Sam Altman, and it's easier to be suspicious of his motives.

Not to say that psychopaths are necessarily bad. Peter in Ender's Shadow turned out great!

But it does seem dangerous for 1 person to hold so much power over the future of humanity.

Sam Altman's reasoning for him having all the power, I think, is that “short timelines and slow takeoff is likely the safest quadrant of the short/long timelines and slow/fast takeoff matrix.”

If you believe that and believe that Sam Altman having complete control of OpenAI is the best way to accomplish that, everything seems fine.

I'd personally have preferred trying to optimize for long timelines and a slow takeoff too, which I think might have been doable if we'd devoted more resources to neglected approaches to AI alignment, like enhancing human capabilities with BCI and other stuff like that.


One old guy in a bubble says another young guy in the same bubble (whom he just happened to mentor) is "the most powerful person he's ever met."


That whole first part disgusted me. "Most powerful person he's met"? Good lord, does that come off as tone deaf, almost groveling.

And the most important company in human history? The hell is that guy smoking, because I've got good shit and that's some serious hyperbole.

Is the hype machine in the room with us right now?


The other PG hyperbolic comment about Sam that springs to mind is when he said that meeting Sam felt like what it must have been like meeting young Bill Gates. From a journalist that would be a throwaway comparison, but from a bloke who barely interviews anyone who isn't a self-confident workaholic nerd convinced he'll change the world and get rich doing so... it's a bit more of an extravagant comparison.

But then, considering the reputation of young Bill and the one Sam seems to be acquiring, maybe the "powerful" traits that apparently set him apart from other YC candidates weren't so positive after all...


I would guess 90% of tech today is hype; that's what you're reading: the hype machine in practice.


It certainly seems that way, my dude. I can't remember the last time I saw a new piece of tech or software and thought "fuck yeah, this is revolutionary".

Maybe Git...? I thought that was pretty cool back in 2006.


> I do think he legitimately believes he's doing the right thing all throughout, though, which maybe makes it more scary

I really think the opposite. I think he's after the biggest payday/most power he can get, and anything else is a secondary consideration.


I think you can fairly ascribe a lot of negative attributes to Sam, but an unnatural thirst for money isn't one of them. Nothing about anything he does makes me think he's motivated by increasing his personal net worth.


I don't claim to know what motivates him. I don't know him and have no view into his thinking. I'm just going by what his actions look like to me.

I can't distinguish between a thirst for money and a thirst for power because above a certain level, they're essentially the same thing.


> Nothing about anything he does makes me think he's motivated by increasing his personal net worth.

Not even the weird shitcoin with the eye scanner he's been pushing (WorldCoin)?

Based on the last 5 years of crypto hype and failures across the industry, the only motivating factor for getting involved in it seems to be 'increasing one's personal net worth'.


He has said in podcasts that he is motivated not by the money but by the power he has at OpenAI.


Do you have an actual quote? I've listened to him talk a lot, and this feels like a misquote or misinterpretation. (I'm not saying it's not true; I just don't see Sam saying he personally likes power)


Here are a couple I could find in notes I took while listening to podcasts, though there are more -

“I get like all of the power of running OpenAI”

“I don’t think it’s particularly altruistic. Like it would be if I didn’t already have a bunch of money. The money is gonna pile up faster than I can spend it anyway.”

Those I think are either from https://www.youtube.com/watch?v=3sWH2e5xpdo or https://www.youtube.com/watch?v=_hpuPi7YZX8


Just went to the first video and got the following from 1:36; here's a link which starts at that point: https://youtu.be/3sWH2e5xpdo?si=bmum-8B02FLoVkWj&t=96

"I mean I have like lots of selfish reasons for doing this and as you've said I get like all the power of running OpenAI, but I can't think of anything more fulfilling to work on and I don't think it's particularly altruistic, it would be if I didn't already have a bunch of money, yeah, the money is gonna pile up faster than I can spend it"

Some other fascinating and relevant stuff in that video too.


To me, that sounds like he acknowledged his power but disagreed with the person who said it. He's just repeating the question but shifting to it being fulfilling (and not about money). Without the question being included, I think it's hard to use this quote as proof.

He's also said something similar in another interview:

“One of the takeaways I’ve learned is that this concept of having enough money is not an idea that’s easy to get across to people. I have enough money. What I want more of is, like, an interesting life, impact; access to be in the conversation. So I still get a lot of selfish benefit from this. What else am I going to do with my time? This is really great. I cannot imagine a more interesting life than this one and a more interesting thing to work on.”


I think I can see how you interpret it that way.

I certainly didn't interpret it as him disagreeing with his statement "I mean I have like lots of selfish reasons for doing this".

It's the "as you've said" part of "as you've said I get like all the power of running OpenAI" that would make me inclined to read it the way you did.

But I do think there's a greater chance that he is saying that he does like the power.

There's also another quote, I think either in this video or the other one I shared, where he's asked why he's doing this, or what motivates him, or something like that, and he responds with something like "I'd be lying if I didn't say I really like the power".


Everything else aside - in what world is Sam Altman “more effective” than Zuck? How do you even define effective?


In this case I think I just mean more effective at seeming good to others.

I think they both believe they are good and doing good.

People tend to be more suspicious of Mark Zuckerberg's motives than Sam Altman's.

Sam Altman himself even said he can't be trusted, but that it was OK because of the company structure; then, when he needed to, he overpowered that very structure he'd claimed was necessary: https://x.com/tobyordoxford/status/1727624526450581571?s=20


>perhaps the most important company in human history.

holy shit, hype is unreal :D


There’s a lot to be said about Altman, but calling him a “psychopath” is just wrong. It’s a legitimate medical term and should not be used for hyperbole.


Look up Annie Altman.


Are you saying this because of the diddling accusations or for some other reason?



Look up Annie Altman. Be seated.


I think you're using the word "psychopath" when you're talking about something different, though I can't guess what.

Psychopathy is a personality disorder indicated by a pattern of lying, cunning, manipulation, glibness, exploitation, heedlessness, arrogance, delusions of grandeur, sexual promiscuity, low self-control, disregard for morality, lack of acceptance of responsibility, callousness, and lack of empathy and remorse.

(Which, now I read it, is disappointingly pattern matching the billionaire who invested in both OpenAI and also a BCI startup currently looking for human test subjects).

I can see arguments for either saying Altman has delusions of grandeur or lack of acceptance of responsibility depending on if you believe OpenAI is going too fast or if it's slowing things down unnecessarily, but they can't both be true at the same time.


You may be right here.

However, there seems to be a decent amount of evidence that Sam has done exactly what you're talking about.

He manipulated and was "not consistently candid" with the board; he got all the OpenAI employees to support him in his power struggles; he made them afraid to stand up to him (https://x.com/tobyordoxford/status/1727631406178672993?s=20); he exhibited delusions of grandeur (though I guess they were correct) with pg, making clear with a glint in his eye that he wanted to take over YC; and he did little things like making it seem he was cool with Eliezer Yudkowsky with a photo op while not really chatting with him, etc.

Again, I am not sure this perspective is necessarily right (and I may be convinced just because he's such an effective psychopath).

In any case, I think this is a pretty good explanation of this perspective: https://x.com/erikphoel/status/1731703696197599537?s=20


> (Which, now I read it, is disappointingly pattern matching the billionaire who invested in both OpenAI and also a BCI startup currently looking for human test subjects).

Elon Musk actually matches several of those poorly, and matches bipolar disorder much better (most of those are also bipolar or billionaire symptoms, while psychopathy is inconsistent with many Musk symptoms like catatonia): https://gwern.net/note/musk


Thanks; I certainly hope that's closer to the truth.

(Since my original comment, I've remembered that even professionals in this field don't remote-diagnose people like I kinda did).


If of interest to anyone here, my company has built an open source software library for transcranial focused ultrasound stimulation: https://github.com/agencyenterprise/neurotechdevkit

Its goal is to accelerate the development of agency-increasing neurotechnology and lower the barrier to entry so that developers can work on open problems in neurotech without needing their own hardware or human subjects. It starts with ultrasound, which we find quite promising, and we hope to expand to other areas in the future.
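To give a feel for the developer experience, a simulation looks something like this. The exact names here (ndk.make, the scenario ID, simulate_pulse, render_pressure_amplitudes) are from memory and may have changed, so treat them as approximate and check the README for the current API:

    # Approximate sketch; API names may be out of date, see the repo README:
    # https://github.com/agencyenterprise/neurotechdevkit
    import neurotechdevkit as ndk

    # Load a built-in simulation scenario (no hardware or human subjects needed)
    scenario = ndk.make('scenario-0-v0')

    # Run a pulsed focused-ultrasound simulation and visualize the pressure field
    result = scenario.simulate_pulse()
    result.render_pressure_amplitudes()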



I suspect treating screen use as "treats" may not be ideal.

I also have a 2-year-old son and am very curious about this.

It's tough, because I know that whatever I do won't be ideal, and we won't know what we really should be doing with regard to kids' screen use until later, when we learn more about how our brains work.

And, at the same time, things are accelerating fast, so there's a decent chance tech is totally different by the time our children are in high school, or even elementary school.

I think that in an ideal world, we'd all use technology all the time and it'd be a natural extension of ourselves, letting us be fuller versions of ourselves and more present with each other. It's a shame we're not in that world yet, and I'm hopeful that as we get richer, humanity starts designing tech to be more agency-increasing with this target in mind rather than losing agency to shorter-term incentives.

With our 2-year-old, I think the highest-impact thing we do is try to be strict about being present with him and not on our phones. I suspect this might be much more important than whatever rules are established for a child's own use of technology.

Reminds me of the pretty compelling poem "Children Learn What They Live": https://www.freepoemsonline.net/poems-htm/children1.htm


"treat" in the sense of it being a high sugar, high dopamine thing I want to avoid my children getting too used to. Not in the sense of giving something for good deeds.

Thinking about it, the concept of a treat is more fitting for the parents than for the kid.

