Hacker News | xeckr's comments

inb4 that one IBM quote

Man that pre-XP Windows menu really had soul. I miss it.

Economic stagnation for over a decade? Aligns with the vibes, IMO.

Gold is just one of many commodities these days, largely disconnected from monetary systems for many decades now. Treating it as the benchmark of value is really quite arbitrary, and I expect someone could compare the S&P to other random commodities and come up with completely different conclusions...

I'd definitely be curious to see the S&P valued in different commodities over time. With that said, gold certainly feels like a special indicator given its history as a universally recognized store of value.
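Out of curiosity, here is a minimal sketch of what that comparison could look like: divide the index level by the commodity price to get the index "valued in" that commodity. The year-end figures below are illustrative placeholders, not verified market data.

```python
# Hypothetical year-end values (illustrative only, not real market data)
sp500 = {1999: 1469.25, 2011: 1257.60, 2024: 5881.63}
gold_usd_per_oz = {1999: 290.25, 2011: 1531.00, 2024: 2624.50}

# "S&P valued in gold": how many ounces of gold one index unit buys
sp_in_gold = {y: sp500[y] / gold_usd_per_oz[y] for y in sp500}

for year, oz in sorted(sp_in_gold.items()):
    print(year, round(oz, 2))
```

Swapping in a price series for oil, copper, or wheat as the denominator would give the same chart in a different commodity, which is exactly why the choice of denominator drives the conclusion.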

> history as a universally recognized store of value.

History of what now? Gold is a volatile commodity. It has crashed many times, often catastrophically, and has had bear markets that dwarf anything you see in stocks. A quick search tells me that inflation-adjusted gold prices dropped roughly 80% between 1979 and 2000.

And given its value right now, it's probably due for another.


To add some context, gold was actually backed by the US government during the Bretton Woods era (40s-70s), when one ounce of gold was pegged to 35 dollars. This was only possible because the US accumulated so much wealth relative to the rest of the world after WWII that it controlled the majority of the gold supply. After the golden age of Keynesian America ended with stagflation in the 70s, the US government had to stop its gold from fleeing the country, so the guarantee had to end. This led to the Nixon shock, where the dollar (and all other currencies as well) became free-floating, and we entered a brave new world humanity hadn't lived in before (neo-liberalism).

Given all that, it's easy to see why the value of gold plummeted from the 70s through the 00s. That said, I can see two reasons why gold prices have been rising over the last decade:

- Gold is actually just part of the asset bubble (in the same group as housing, stocks, and crypto), and investment in it is aided by money printed by the US government flowing not towards productive ends but towards rampant asset speculation.

- The current era of neo-liberalism is going to end pretty soon, and some goldbugs are rooting for the revival of late 19th-century classical capitalism, where gold actually was the international standard. I think this is very unlikely, though, even if the US dollar loses its status with the end of the petrodollar system. My guess is we're going to deal with free-floating currencies for quite some time, especially as wars happen and governments have to print more money to sustain their war efforts. (I think the best monetary system would be neither gold nor crypto, but instead something like the Bancor (https://en.wikipedia.org/wiki/Bancor))


This is one of those bell curve memes. All your text sits in the middle. The Jedi and I are off on the ends screaming "Gold is just a bubble!"

Not really. Gold is not a random commodity. It is historically the primary compact store of value, only recently facing competition from Bitcoin.

I mean, yeah, but the parallels OP is drawing really kinda feel like the lingering whispers of the “gold standard” crowd rather than anything more substantial.

For the working classes, the peak was the dotcom bubble. Everything after that has been repeated speculative bubbles attempting to create explosive growth from nothing of substance, as much a deliberate decision by Capital to weaken the working classes while extracting wealth as it was a desperation gambit by an increasingly stable (but, circa the mid-2000s, not yet stagnant) western hemisphere and its governments. Gold alone isn't an indicator of this so much as all asset prices skyrocketing to the moon while worker wages stayed relatively flat and precarity increased. Metals, securities, housing, land: all of it has appreciated faster than working wages, reflecting a siphoning of that wealth into fewer hands.

Gold just makes the story "neater" to tell to folks lamenting the heyday of Bretton Woods.


>Might as well eliminate their position.

It's where we're headed.


>You cannot practice law without "passing the bar"

You are however entitled to represent yourself without passing the bar, and thus use the AI to help your case.

Even for the remaining lawyers, I imagine that their billable hours will crater due to competitive dynamics.


> Even for the remaining lawyers, I imagine that their billable hours will crater due to competitive dynamics.

Billable hours will absolutely crater for lawyers who cater to low-end clients (esp. for defense) and lawyers who are not good business people.

That said, the best lawyers will almost certainly still be in incredibly high demand, since higher-end lawyering (much like banking) is a personal business as much or more than it is a technical one. AI will simply allow these lawyers to do more and better work.


Perez Hilton tried this with some success: https://www.cjr.org/feature/perez-hilton-og-original-news-in...


interesting, so you can rep yourself, with assistance from an ai? or maybe someone you hired to use an ai, present as an amicus curiae?


He is, of course, incentivised to say that.


Researcher says it's time to fund research. News at 11


Exactly.


Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.


We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.

Meta just laid 600 of them off.

All this talk of AGI, ASI, super-intelligence, and recursive self-improvement etc is just undefined masturbatory pipe dreams.

For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.

The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and still won't be the AI intern that learns on the job.

Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.


Humans aren't really being put to work upgrading the underlying design of their own brains, though. And 5 years is a blink of an eye. My five-year-old will barely even be turning ten years old by then.


Assuming the recursing self-improvement doesn't run into physical hardware limits.

Like we can theoretically build a spaceship that can accelerate to 99.9999% C - just a constant 1G accel engine with "enough fuel".

Of course the problem is that "enough fuel" = more mass than is available in our solar system.
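As an aside, the relativistic rocket equation makes the "enough fuel" problem concrete. A rough back-of-envelope sketch (the exhaust velocity, ship mass, and target speed here are illustrative assumptions):

```python
import math

C = 299_792_458.0   # speed of light, m/s
BETA = 0.999999     # target velocity as a fraction of c
M_SUN = 1.989e30    # solar mass, kg (the Sun is ~99.9% of the solar system's mass)

# Relativistic rocket equation: fuel-to-payload ratio M0/M1 = exp(c * atanh(beta) / v_exhaust)
rapidity = math.atanh(BETA)

# Best case: a perfect photon rocket (exhaust velocity = c)
photon_ratio = math.exp(rapidity)

# Chemical rocket, ~4.5 km/s exhaust: work with the log to avoid overflow
ln_chem_ratio = rapidity * C / 4500.0

print(f"photon rocket fuel:payload ratio ~ {photon_ratio:.0f}:1")
print(f"ln(chemical fuel:payload ratio) ~ {ln_chem_ratio:.0f}")
print(f"ln(solar system mass / 1-tonne ship) ~ {math.log(M_SUN / 1000):.0f}")
```

Even a perfect photon drive needs a fuel-to-payload ratio in the thousands, and for any realistic exhaust velocity the log of the required ratio dwarfs the log of (solar system mass / ship mass), which is the sense in which "enough fuel" exceeds the mass available.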

ASI might have a similar problem.


The AI race is presumably won by whoever can automate AI R&D first, so everyone in an adjacent field will see the incremental benefits sooner than those further away. The further removed, the harder the takeoff once it happens.


This notion of a hard takeoff, or singularity, based on self-improving AI, is based on the implicit assumption that what's holding AI progress back is lack of AI researchers/developers, which is false.

Ideas are a dime a dozen - the bottleneck is the money/compute to test them at scale.

What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress?


It's not hard to believe that adding AI researchers to an AI company marginally increases the rate of progress, otherwise why would the companies be clamouring for talent with eye-watering salaries? In any case, I'm not just talking about AI researchers—AGI will not only help with algorithmic efficiency improvements, but will probably make spinning up chip fabs that much easier.


The eye-watering salary you probably have in mind is for a manager at Meta, the same company that just laid off 600 actual developers. Why just Meta and not other companies? Because they are blaming poor Llama performance on the manager, it seems.

Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. This isn't going to accelerate AI advance. It just makes ChatGPT more profitable.

Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?

All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?


Not the person you're responding to, but I think the salary paid to the researchers / research-engineers at all the major labs very much counts as eye-watering.

What happened at meta is ludicrous, but labs are clearly willing to pay top-dollar for actual research talent, presumably because they feel like it's still a bottleneck.


Having the experience to build a frontier model is still a scarce commodity, hence the salaries, but to advance AI you need new ideas and architectures, which isn't what you are buying there.

A human-level AI wouldn't help unless it also had the experience of these LLM whisperers, so how would it gain that knowledge (not in the training data)? Maybe a human would train it? Couldn't the human train another developer if that really was the bottleneck?

People like Sholto Douglas have said that the actual bottleneck for development speed is compute, not people.


There's no path from LLMs to AGI.

> spinning up chip fabs that much easier

AI already accounts for 92% of U.S. GDP growth. This is a path to disaster.


Agreed.

To me the hard take off won't happen until a humanoid robot can assemble another humanoid robot from parts, as well as slot in anywhere in the supply chain where a human would be required to make those parts.

Once you have that you functionally have a self-replicating machine which can then also build more data centers or semi fabs.


Humanoid robots are also a pipe dream until we have the brains to put into them. It's easy to build a slick-looking shell and teleoperate it to dance on stage or serve drinks. The 1X company is actually selling a teleoperated "robot" (Neo), saying the software will come later!!

As with AGI, if the bottleneck to doing anything is human level intelligence or physical prowess, then we already have plenty of humans.

If you gave Musk, or any other AI CEO, an army of humans today, do you think that would accelerate his data center expansion (help him raise money, get power, get GPU chips)? Why would a robot army help? Are you imagining them running around laying bricks at twice the speed of a human? Is that the bottleneck?


>GPT-5 was too robotic

It's almost as if... ;)


My spidey sense is telling me an LLM was used in drafting this comment


I think people who use AI too much are also unconsciously adopting turns of speech used by AIs.


I've not seen LLMs use the term "spidey sense" before.


Have we reached a point where a well-written comment is suspect and we demand low-effort replies for authenticity?


They just use flowery language. Doesn't sound like an llm at all


No smell at all for me.


1000%

