It may be due to their chat interface rather than the model or their system prompt, as Kagi's R1 answers it with no problems. Or maybe it is because of adding the web results.
I can't find the exact post again, but on r/LocalLLaMA some people ended up debugging this: instead of prompting `<thinking>`, prompt `<thinking>\n`, and then they got the same kind of response as from the API.
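A minimal sketch of what that workaround looks like when assembling a raw completion prompt locally. The special tokens and helper below are illustrative (no specific server's exact chat template is assumed); the point is pre-seeding the assistant turn with `<thinking>\n`, trailing newline included, so the model opens its reasoning block instead of skipping it:

```python
# Sketch of the reported r/LocalLLaMA workaround: pre-seed the assistant
# turn with "<thinking>\n" (the trailing newline matters) so a local
# R1-style model produces the same kind of reasoning output as the API.
# The delimiter tokens here are hypothetical placeholders, not any
# particular model's exact chat template.

def build_prompt(user_message: str, force_thinking: bool = True) -> str:
    """Assemble a raw completion prompt for an R1-style chat model."""
    prompt = f"<|User|>{user_message}<|Assistant|>"
    if force_thinking:
        # "<thinking>" alone was reported not to work; appending the
        # newline is what made local output match the API's behaviour.
        prompt += "<thinking>\n"
    return prompt

prompt = build_prompt("Summarize the incident.")
```

The model then continues from inside the `<thinking>` block, writing its chain of thought before the final answer.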
https://kagi.com/assistant/98679e9e-f164-4552-84c4-ed984f570...
edit: it is due to adding the web results, or something about searching the internet vs. answering on its own, as without internet access it refuses to answer
https://kagi.com/assistant/3ef6d837-98d5-4fd0-b01f-397c83af3...
edit2: to be fair, if you do not call it a "massacre" (but e.g. an "incident"), it does answer even without internet access (not perfect, but it still talks of casualties etc.)
https://kagi.com/assistant/ad402554-e23d-46bb-bd3f-770dd22af...