They use an LLM to summarize the chats, which IMO makes the results as fundamentally unreliable as LLMs are. Maybe for an aggregate statistical analysis (for the purpose of...vibe-based product direction?) this is good enough, but if you were to use this to try to inform impactful policies, caveat emptor.
For example, it's fashionable in math education these days to ask students to generate problems as a different mode of probing their understanding of a topic. And from the article: "We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, ..." That last part smells fishy, and even if you saw a prompt like "design a practice question..." you wouldn't be able to tell whether the student was cheating, given the summarization caveat above.
> It’s stealing, but also, admittedly, really cool. Does the growth of AI have to bring with it the tacit or even explicit encouragement of intellectual theft?
You answered your own question by explicitly encouraging it.
This is the key point. But if US laws are being violated and AI is considered a matter of national security, the US government could use that in international negotiations, or as justification for sanctions, etc. It would be a good deterrent.
I found the centrality of AI-generated video quite distasteful. I had a visceral negative reaction to it, which probably speaks to how much AI slop is polluting my life elsewhere. I would have much preferred a game based on actual historical photographs.
I understand, it's not for everyone. The game will always be based on AI-generated renditions, though, because that removes the constraints on which events can be depicted.