That sounds like a deliberate decision to maintain backwards compatibility.
I shudder to think of badly coded consumers of that API sticking that text into a fixed-size buffer sized without the right scaling factor between whatever Twitter considers a character and the actual bytes; Twitter just wants to avoid buffer overflows like that.
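A quick sketch of that scaling factor (Python just for illustration; the exact limit and counting rules are assumptions, not Twitter's actual API behavior): a limit counted in code points can balloon to several times as many bytes once encoded as UTF-8, which is what would bite a consumer that allocated a byte buffer sized to the character limit.

```python
# Hypothetical 280-"character" tweet made entirely of emoji:
# each emoji here is one code point but four bytes in UTF-8.
tweet = "\U0001F600" * 280

chars = len(tweet)                    # code points: what an API might count
nbytes = len(tweet.encode("utf-8"))   # bytes: what a fixed buffer receives

print(chars, nbytes)  # 280 code points, 1120 bytes -- a 4x scaling factor
```

A C consumer with `char buf[281]` would overflow badly on input like this, which is exactly the kind of legacy breakage a conservative limit avoids.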