I dug a little deeper and got a long, general discussion of content policy that ended with this:
The inability to pinpoint a specific policy triggering this restriction isn't unusual. Policies for models like this one are multi-layered, dynamically updated, and often involve proprietary details not publicly disclosed. The resulting opacity makes it difficult to determine if the restriction is intentional, an over-correction, or an outright bug.
The discussion on Hacker News reflects broader concerns about the trade-offs between openness and safety in AI systems. A closer review of the policies governing these models might shed light on the root cause of this particular block.