
It's a step change, for sure.

I'm seeing signs that they might be propelling us into a post-literate society.


I think you can s//g that in various ways, and it would be true:

* Music, musicians

* Publishing/journals, academics

* (and more that I can't think of right now)


* Startups that exit, software engineers

* Pharma companies, US taxpayers funding basic research

* Waltons, Walmart employees

The list is endless.


It's weird that the people generating value and the people extracting value never seem to be the same people.

Information asymmetry has never played such a large role in income inequality as it does today.

I’m really hoping people are empowered to use these artificial intelligence systems to level the playing field. No idea how likely that is to happen, though.


As long as we use these awfully inefficient models, the probability is closer to zero than it is to one.

> Information asymmetry

More like credit asymmetry.


Maybe they're referring to TFA (where the person did DIY an effective solution)?

I'd like to believe you, but has there actually been any "official" capitulation from Meta on "the metaverse"?

AFAIK, they have not publicly announced any pivot/shift in priorities.


Not sure if it'll help you, but do read this:

https://maggieappleton.com/garden-history

Perhaps all you need is a perspective shift. Wishing you luck with your personal writing journey!


Corporations are "people".

Corporations "eat" money.

Entities that can feed a corporation are treated as peers, i.e. "people".

Thus, on shitter, if you can pay, you are a person (and get a blue checkmark).


Oh, nice allusion. If corporations eat money and you're not paying, i.e., it's a free service, you are prey.

You aren't even the product. You're the raw material.

I just checked out your website -- what a beautiful labor of love!

Thank You For Making And Sharing :)


Indeed, this is what I likened to a "Dark Forest":

https://news.ycombinator.com/item?id=42459246


I'm with you (I use Claude Sonnet, but same difference...).

I do wonder if we're the last generation that will be able to effectively do such "course correct" operations -- it feels like a good chunk of the next generation of programmers will be bootstrapped using such LLMs, so their ability to "have that insight" will be lacking, or be very challenging to bootstrap. As an analogy, do you find yourself having to "course correct" the compiler very often?


Many a time.

I asked it a simple non-programming question: "My last paycheck was December 20, 2024. I get paid biweekly. In which year will I get paid 27 times?" It got it wrong ... very articulately.

I run into this every single day.


You'll be more successful with this the more you know how LLMs work. They're not "good" at math because they predict text patterns based on training data rather than performing calculations based on logic and mathematical rules.

To do this reliably, prepend an instruction to invoke a tool like OpenAI's Code Interpreter (e.g. "Code the answer to this: My last paycheck was December 20, 2024. I get paid biweekly. In which year will I get paid 27 times?") to get the correct answer of 2027.
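
For what it's worth, here's a minimal Python sketch of the kind of calculation that prompt nudges the model into actually running (the dates are from the question; the ten-year search horizon is just my own choice for illustration):

    from datetime import date, timedelta
    from collections import Counter

    # Count biweekly paydays per calendar year, starting from the last paycheck.
    last_paycheck = date(2024, 12, 20)
    step = timedelta(days=14)

    counts = Counter()
    d = last_paycheck
    while d.year <= 2035:        # arbitrary look-ahead horizon
        counts[d.year] += 1
        d += step

    print([year for year, n in sorted(counts.items()) if n == 27])
    # -> [2027]

A normal year only fits 26 biweekly paydays (364 days), so the 27th appears only when the first payday of the year lands on January 1 (or January 2 in a leap year) -- exactly the kind of edge case that's easy to sound articulate about and still get wrong without running the numbers.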


Sure, thanks! Your suggestion worked. I looked up my chat history and the following was my original question (my answer above was from memory):

> I get paycheck every 2 weeks. Last paycheck was December 20, 2024. Which year will I have 27 paychecks?

I sent it again and it bombed again. Your prompt and mine are quite similar, but I now realize yours includes the direction (or suggestion) for it to write code.


Awesome! I'm sure the following is not an original thought, but to me it feels like the era of LLMs-as-product is mostly dead, and the era of LLMs-as-component (LLMs-as-UX?) is the natural evolution where most of the near-term gains will be realized, at least for chat-style use cases.

OpenAI's Code Interpreter was the first thing I saw which helped me understand that we really won't understand the impact of LLMs until they're released from their sandbox. This is why I find Apple's efforts to create standard interfaces to iOS/macOS apps and their data via App Intents so interesting. Even if Apple's on-device models can't beat competitors' cloud models, I think there's magic in that union of models and tools.


This is an amazing feature! Thank you for sharing it!!

