og_kalu on Oct 1, 2023 | on: Bing ChatGPT image jailbreak
You seem to have this idea that LLM guardrails are anything more than telling the model not to do something or limiting what actions it can perform. This is not the case.
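For concreteness, a minimal sketch of the two kinds of guardrail the comment describes: a system-prompt instruction (which the model may or may not obey) and a hard allowlist on tool calls. Every name here (SYSTEM_PROMPT, ALLOWED_TOOLS, run_tool, build_messages) is illustrative, not taken from any real product:

    # Illustrative sketch, not any real product's implementation.

    # Soft guardrail: just an instruction prepended to every request;
    # nothing enforces it beyond the model's tendency to comply.
    SYSTEM_PROMPT = (
        "You are a helpful assistant. "
        "Do not generate images of real people or copyrighted characters."
    )

    # Hard guardrail: a fixed limit on what actions the system can take.
    ALLOWED_TOOLS = {"search", "summarize"}

    def build_messages(user_input: str) -> list[dict]:
        """Prompt-level guardrail: prepend the instruction to the request."""
        return [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ]

    def run_tool(tool_name: str, args: dict) -> str:
        """Action-level guardrail: refuse any tool outside the allowlist."""
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool '{tool_name}' is not permitted")
        # ... dispatch to the real tool implementation here ...
        return f"ran {tool_name} with {args}"

The first kind is exactly why prompt jailbreaks work: it is only text the model is asked to follow. Only the second kind constrains behavior regardless of what the model outputs.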