
Every time I dig into this, it's always stories of stories, all walking back to maybe one single merchant, whose word is the only evidence: no police trail, no court record, nothing substantial, just news agencies working off "examples and reconstructions of what might have happened" and no actual data that could be verified or falsified.

is this something anyone has actually seen happen, or is it part of the AI hype cycle?





I’ve heard that this was happening with food apps in India. I am waiting for when people realize how to fake prescriptions.

> I am waiting for when people realize how to fake prescriptions

How would an LLM help with that? Paper prescriptions can be copied using Word and a pen.


Image gen, not LLM help.

Word and a pen is still effort, compared to just an image plus a prompt.


I mean a lot of US states use an electronic system where the doctor submits them directly. Are there still many printed prescriptions?

That's not in place in India (or, I suppose, outside of some US states, based on what you said?). I'm going to guess there are far more paper prescriptions than digital ones globally.

Scamming to get refunds has always been a thing.

I was consulting for an insurance company once. They even had examples of some of their own employees trying to get insurance money for broken things, using the company's internal example pictures…

And that says nothing about whether this is actually happening or not. What's your point?

It's funny how every few months there's a new malicious use case that AI proponents cast unreasonable amounts of doubt on, then the problem becomes widely recognized, and AI proponents just move on to the next bastion of "ya but is this obvious malicious use case of my favored technology REALLY happening?"

Gigantic bot farms taking over social media

Non-consensual sexual imagery generation (including of children)

LLM-induced psychosis and violence

Job and college application plagiarism/fraud (??)

News publications churning out slop

Scams of the elderly

So don't worry: in a few months we can come back to this thread, and return fraud will be recognized as having been supercharged by generative AI. But then we can have the same conversation about, say, insurance fraud or some other malicious use case for which there's obvious latent demand, and for which AI models offer new capability to satisfy that demand at far lower complexity and cost than ever before.

Then we can question whether the basic mechanics of supply and demand somehow don't apply to malicious use cases of a favored technology.


Well yes, that's how we should navigate societal change: based on actual threats, not what-ifs. What-ifs gave us some nice pieces of work in legislation before, like the DMCA, so yeah, I'm going to be overly cautious about anything that is emotionally charged instead of data-driven.

Who is talking about legislation?

Are you adjusting your perception of the problem based on fear of a possible solution?

Anyway, our society has fuck tons of protections against "what ifs" that are extremely good, actually. We haven't needed a real large scale anthrax attack to understand that we should regulate anthrax as if it's capable of producing a large scale attack, correct?

You'll need a better model than just asserting your prior conclusions by classifying problems into "actual threats" and "what ifs."


I mean, digital piracy was not a what-if when the DMCA was written; it and its problems existed long before then. You're conflating this with business-written legislation, which is a totally different problem.

Also, I guess you're perfectly fine with me developing self-replicating gray nanogoo. I mean, it hasn't actually been created and eaten the earth yet, so we can't make laws about self-replicating nanogoo, I guess.


Yes, please go ahead. We already have laws against endangerment, just as we have laws against fraud and already had laws around copyright infringement. No need to cover every what-if, as I mentioned, unless unwanted behaviour falls through the cracks of the existing frameworks.

That "whether this is actually happening or not" is not even a question worth asking.

No shit it's happening. Now, on what scale, and should we care?


Whether it is happening is literally the most important question. People are clamoring for regulations and voiding consumer protections over something for which nobody seems able to find an independently verifiable source.

Lmao no. "The estimated amount of refund fraud" plus "off-the-shelf AI can generate and edit photorealistic images" adds up to "refund fraud with AI-generated images" by default.

There are enough fraudsters out there that someone will try it, and they're dumb enough that someone will get caught doing it in a hilariously obvious way. It would take a literal divine intervention to prevent that.

Now, is there enough AI-generated fraud for anyone to give a flying fuck about it? That's a better question to ask.


Well then you'll have no trouble finding a verifiable source of it happening and proving your point: something beyond "this person said" or "here's a potential example to show it's possible".

No. The prior is so strong that it's up to you to prove that no AI fraud is happening.

Good luck.


The "some say" prior?

Well then, here's my refutation: some say this isn't happening at the scale this article claims some say it's happening.

That should convince you, by your own logic.

Besides, it's the article's responsibility to provide evidence for its claims. Circular links leading back to the same handful of stories are not "preponderant".


The "humans are stupid in all the usual ways" prior.

You might as well be asking for proof that humans use AI to generate porn.


Well, now that it's hit the news, it will happen more often!

Maybe, but this story has been circulating for a while now, even in mainstream media, and I still haven't seen shop names, order IDs, platform statements, nothing that can be independently verified. Just "people say". Surely, if this were such a big problem, we'd have some proof to go by by now.


