Or someone misunderstood. Teen drug use did not decline; it shifted from substances to social media / “AI”.

Say what you will, this “AI” hype has top-notch entertainment value. I mean, getting people sold on the idea that they need “AI” to lessen the impact of “AI” on their lives is a level of absurdity that other marketing scams can only look on with envy. Interesting times.


When I expressed concern that AI-generated responses might make inaccurate claims about our products, the cloud rep told me to just put the answer through AI to make sure it was compliant…


Lol we're getting the same, except we do customer support software. An actual quote I've heard multiple times from PMs and even our CTO:

"If the AI returns an inconclusive response, we should send that back to the AI and tell it to think about it!"

And other variations of that. It feels like I'm surrounded by lunatics who have been brainwashed into squeezing AI into every nook and cranny, and using it for literally everything and anything, even if it doesn't make an iota of sense. That toothbrush that came with "AI Capabilities" springs to mind.


Ferengi rule of acquisition #239: Never be afraid to mislabel a product. Besides, selling AI with the promise to remove AI is self-perpetuating.


Exactly, and Goodhart's law drives the nails into the coffin.

https://en.wikipedia.org/wiki/Goodhart%27s_law


Indeed, ingesting generated bluster gives them cancer of the perceptron.


Isn't everybody always gushing about how LLMs are supposed to get better all the time? If that's true then detecting generated fluff will be a moving target and an incessant arms race, just like SEO. There is no escape.


Yep, that's what I've been thinking since people started talking about it. I hear that AI plagiarism detectors can never work, since LLM output can never be detected with any accuracy. Yet I also hear that LLMs-in-training easily sift out any generated content from their input data, so that recursion is a non-issue. It doesn't make much sense to have it both ways.


I wonder if the truth about sifting out synthetic training data is that it relies on signals separate from the content itself: the source of the data, the reported author, links to/from it, etc.

Those signals would be unavailable to a plagiarism/AI detector.


This post has nothing to do with Go ignoring computing history. The authors/maintainers of Go can do that just fine on their own.


The antipode of the underdog?


Looks like updating your subscription is managed by AI.


It will exhibit the Pro version of the Dunning-Kruger effect.

