Say what you will, this “AI” hype has top-notch entertainment value. I mean, getting people sold on the idea that they need “AI” to lessen the impact of “AI” on their lives is a level of absurdity that other marketing scams can only look at with envy. Interesting times.
When I expressed concern that AI generated responses might make inaccurate claims about our products, I was told by the cloud rep to just put the answer through AI to make sure it was compliant…
Lol we're getting the same, except we do customer support software. An actual quote I've heard multiple times from PMs and even our CTO:
"If the AI returns an inconclusive response, we should send that back to the AI and tell it to think about it!"
And other variations of that. It feels like I'm surrounded by lunatics who have been brainwashed into squeezing AI into every nook and cranny, and into using it for literally anything and everything even when it doesn't make an iota of sense. That toothbrush that shipped with "AI capabilities" springs to mind.
Isn't everybody always gushing about how LLMs are supposed to get better all the time? If that's true then detecting generated fluff will be a moving target and an incessant arms race, just like SEO. There is no escape.
Yep, that's what I've been thinking since people started talking about it. I hear that AI plagiarism detectors can never work, since LLM output can't be detected with any accuracy. Yet I also hear that LLMs-in-training easily sift generated content out of their input data, so recursion is a non-issue. You can't have it both ways.
I wonder if the claims about sifting out synthetic training data rely on signals separate from the content itself: the source of the data, the reported author, links to/from the page, and so on. Those signals would be unavailable to a plagiarism/AI detector that only sees the text.
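If so, the filter never has to classify the text at all. Here's a minimal Python sketch of what metadata-only filtering might look like; every field name, domain, cutoff date, and threshold below is an invented assumption for illustration, not any real pipeline's schema:

    # Hypothetical sketch: keep or drop crawled documents using only
    # provenance metadata, never the text itself. All names and values
    # here are made-up assumptions.
    from dataclasses import dataclass
    from datetime import date

    # Domains assumed (for this sketch) to host mostly generated text.
    SUSPECT_DOMAINS = {"ai-content-farm.example", "autoblog.example"}

    # Rough start of mass-market LLM chatbots; pages first crawled
    # before this date are unlikely to contain LLM output.
    LLM_ERA_START = date(2022, 11, 30)

    @dataclass
    class CrawledDoc:
        text: str
        source_domain: str
        author: str | None   # self-reported, may be missing
        first_seen: date     # when the crawler first saw this URL
        inbound_links: int   # pages elsewhere linking to this one

    def looks_organic(doc: CrawledDoc) -> bool:
        """Decide from provenance signals alone, ignoring doc.text."""
        if doc.source_domain in SUSPECT_DOMAINS:
            return False
        if doc.first_seen < LLM_ERA_START:
            return True  # predates mass LLM output, keep regardless
        # Post-LLM-era pages need corroboration: a named author and
        # some evidence that other sites link to the page.
        return doc.author is not None and doc.inbound_links > 0

    docs = [
        CrawledDoc("...", "blog.example", "Jane Doe", date(2019, 5, 1), 12),
        CrawledDoc("...", "autoblog.example", None, date(2024, 2, 10), 0),
    ]
    kept = [d for d in docs if looks_organic(d)]  # keeps only the first

None of this is available to a detector that's handed a bare block of text, which would explain how training-data curation and plagiarism detection can have such different success rates.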