Exploiting a bug, for example via SQL injection, is unambiguously hacking.
But can it be considered a bug if it's inherent to and unsolvable in the system? Can AI ever truly be resistant to prompt injection, or will it always be a series of patches to cover up unwanted behaviour? (Genuine question - I do not know.)
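For the technically inclined, here is a minimal Python sketch of why the two cases feel different (uses sqlite3; the prompt string at the end is purely illustrative). SQL injection is a closable hole because the database API gives you a second channel that keeps untrusted data out of the code; an LLM prompt has no equivalent channel, which is why prompt injection looks inherent rather than like an ordinary bug.

    import sqlite3

    # SQL injection is a bug you can actually close: the DB API provides a
    # second channel (bound parameters) that keeps untrusted data out of the code.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    untrusted = "alice'; DROP TABLE users; --"

    # Vulnerable pattern: data concatenated into the query string becomes code.
    #   conn.execute(f"SELECT * FROM users WHERE name = '{untrusted}'")
    # Safe pattern: the value is bound as data and can never execute as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (untrusted,)).fetchall()

    # An LLM has no equivalent second channel: system instructions and untrusted
    # text end up in one token stream, so the model cannot structurally tell
    # "code" apart from "data".
    prompt = "Summarize the following article:\n" + untrusted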
If something is inherent in the design of the system, I'd argue that it is not a bug and not hacking, just as it's not "hacking" to use a hunter's rifle to shoot a human, despite that being "not the way it was intended to be used". It can still be illegal, of course, but that's a separate issue.
In context it's clearly not intended to mean that. The quote is: "paid someone to hack OpenAI's products". This is clearly "hack" in the "malicious actor" type of "hack" rather than "Hacker News" type of "hack".
That is not what people think when they see that word. In fact, we are so far from that definition that most people here wouldn't read the word that way either, and OpenAI knows this.
On the contrary, I think that's exactly what people think (except maybe people on this site). See the loads of GPT "prompt hacking" and "jailbreaking" guides...
You may be living in a bubble if you think the people who are talking about "prompt hacking" are representative of the average person who reads headlines like these.
I can promise you most ChatGPT users don't even know what a GPT is. That's like saying anyone who owns an Apple Silicon MacBook must know what the ARM architecture is.
Are those guides featured in the mainstream media that most of these people consume? Most ChatGPT users don’t know what an LLM is, let alone “prompt engineering”.
Yeah, I'd agree it's hacking by that definition of the word, but OpenAI is obviously using it in the boogeyman, scary-evil-guy definition. It's semantics, but to clarify the definitions: OpenAI, despite sama obviously knowing and using our definition of the word, is not using it in that sense.
Otherwise the court document would read "NYT paid someone to creatively devise a way to generate copyrighted text" and would actually be praising NYT for it; instead it reads like they are damning them.
In that sense, it was hacking.