I'm just reaching the end of a pretty soul-destroying consulting project. The client side is C++, and uses a lot of strings. To my horror, there still doesn't seem to be a de-facto standard way of dealing with the various Unicode encodings in C++, even after my multi-year C++ hiatus. I ended up using the WideCharToMultiByte() and MultiByteToWideChar() Win32 functions, which are rather yucky. I'd fully expected Boost to have an answer to this problem by now, but it only offers UCS-4 <-> UTF-8 conversion.
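For reference, the dance with those two functions looks roughly like this. A minimal sketch, assuming UTF-8 on the narrow side; error handling (checking return values, GetLastError()) is elided:

    #include <windows.h>
    #include <string>

    // UTF-8 -> UTF-16: the first call asks for the required buffer
    // size in wchar_ts, the second call does the conversion.
    std::wstring Utf8ToWide(const std::string& utf8) {
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                      (int)utf8.size(), NULL, 0);
        std::wstring wide(len, L'\0');
        MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(),
                            &wide[0], len);
        return wide;
    }

    // UTF-16 -> UTF-8, same two-call pattern.
    std::string WideToUtf8(const std::wstring& wide) {
        int len = WideCharToMultiByte(CP_UTF8, 0, wide.data(),
                                      (int)wide.size(), NULL, 0,
                                      NULL, NULL);
        std::string utf8(len, '\0');
        WideCharToMultiByte(CP_UTF8, 0, wide.data(), (int)wide.size(),
                            &utf8[0], len, NULL, NULL);
        return utf8;
    }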
What libraries do C and C++ programmers use to hold unicode strings and convert between encodings these days?
Doesn't quite sound like the standard way you're looking for, but it's moving closer.
Edit:
"ICU today is de-facto standard Unicode/localization library" from a mailing discussion of the boost solution. And http://art-blog.no-ip.info/cppcms/blog/post/43 has an interesting comparison of a few libraries, but not too comprehensive.
Do I copy the code points, or the encoded bytes together with a record of which encoding was used? Similarly, when I paste, is it code points that get pasted, which are then immediately encoded using the application's encoding scheme?
In PHP, curl_exec() returns data in the raw encoding of the source. Fine, some people will want that. But I want to do things with the data, so I want it in UTF-8.
So I ended up writing my own curl_exec_utf8 function, which I'm sure is wrong for many edge cases. But it's 2010! Why is there still no decent way to deal with charsets?
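It's no better in C or C++ land; about the closest thing to a portable answer is iconv(3). A rough sketch of converting a response body to UTF-8 once you've scraped the charset out of the Content-Type header (to_utf8 is a made-up name, error handling is simplified, and some platforms need -liconv):

    #include <iconv.h>
    #include <cerrno>
    #include <stdexcept>
    #include <string>

    // Convert 'input' from 'charset' (e.g. "ISO-8859-1", as pulled
    // from the Content-Type header) to UTF-8 via POSIX iconv.
    std::string to_utf8(const std::string& input, const char* charset) {
        iconv_t cd = iconv_open("UTF-8", charset);
        if (cd == (iconv_t)-1)
            throw std::runtime_error("unsupported charset");

        std::string out;
        char buf[4096];
        // POSIX iconv wants a mutable char**; a few platforms declare
        // it const char**, which is the usual portability wart here.
        char* in_ptr = const_cast<char*>(input.data());
        size_t in_left = input.size();

        while (in_left > 0) {
            char* out_ptr = buf;
            size_t out_left = sizeof(buf);
            size_t rc = iconv(cd, &in_ptr, &in_left, &out_ptr, &out_left);
            out.append(buf, sizeof(buf) - out_left);
            if (rc == (size_t)-1 && errno != E2BIG)
                break;  // EILSEQ/EINVAL: bad input; real code needs a policy here
        }
        iconv_close(cd);
        return out;
    }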
Actually, I just miss that he doesn't say anything about the downsides of UTF-8, like the fact that you need to walk the whole string to determine how many characters it contains, due to their (potentially) variable length.
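To be fair, that pass is trivial, because UTF-8 continuation bytes are self-identifying (they all match the bit pattern 10xxxxxx). A sketch:

    #include <cstddef>
    #include <string>

    // Count code points in a (valid) UTF-8 string: O(n), but only
    // one branch per byte.
    std::size_t utf8_length(const std::string& s) {
        std::size_t count = 0;
        for (unsigned char c : s)
            if ((c & 0xC0) != 0x80)  // skip continuation bytes
                ++count;
        return count;
    }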
I love how people then tend to bring up Win32 "wide strings", Java, and .NET as alternatives, all of which use UTF-16, which is also a variable-width encoding.
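For example, anything outside the Basic Multilingual Plane takes two UTF-16 code units, so the usual length() counts are off there too:

    #include <iostream>
    #include <string>

    int main() {
        // U+1D11E MUSICAL SYMBOL G CLEF is outside the BMP, so UTF-16
        // stores it as the surrogate pair 0xD834 0xDD1E.
        std::u16string clef = u"\U0001D11E";
        std::cout << clef.size() << "\n";  // prints 2, not 1
    }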
I'm still trying to find one valid use for the length of a string in Unicode characters. What one usually needs to know is the length of the string as rendered by some output device, which is not related to the count of Unicode characters in any useful way. Even with fixed-width fonts you can have glyphs composed from multiple Unicode characters, or characters whose glyphs occupy two consecutive positions.
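Combining marks are the classic case: "é" can be one code point (U+00E9) or two (U+0065 followed by U+0301), and both render as a single glyph:

    #include <iostream>
    #include <string>

    int main() {
        std::u32string composed   = U"\u00E9";   // é, precomposed
        std::u32string decomposed = U"e\u0301";  // e + COMBINING ACUTE ACCENT
        // One glyph on screen either way, but the "character counts" differ:
        std::cout << composed.size() << " vs "
                  << decomposed.size() << "\n";  // 1 vs 2
    }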
That's weird, I thought its limit was deliberately low enough to fit into an SMS message, which has a limit of 140 octets (160 characters in the 7-bit encoding GSM uses, or only 70 characters in UCS-2). Do they actually allow, say, 140 kanji?
Which you can fix by storing an unsigned long at the front of the string that holds the size.
If it would overflow, you just set the long to the max size, bite the bullet, and read the entire string. If you're using a dynamic language, just throw whatever InfinitelyLargeNumber class it has in the size column and you're good.
If you're worried about RAM that much, you can just use plain strings when you need to.
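For UTF-8 that amounts to something like this hypothetical wrapper (CountedUtf8 is a made-up name; it caches the code-point count rather than the byte size, since the count is what the O(n) complaint above was about):

    #include <cstdint>
    #include <string>

    // Pays the counting cost once at construction instead of on
    // every length() call.
    class CountedUtf8 {
    public:
        explicit CountedUtf8(std::string bytes)
            : bytes_(std::move(bytes)), count_(0) {
            for (unsigned char c : bytes_)
                if ((c & 0xC0) != 0x80)  // skip UTF-8 continuation bytes
                    ++count_;
        }
        std::uint32_t length() const { return count_; }  // O(1) afterwards
        const std::string& bytes() const { return bytes_; }
    private:
        std::string bytes_;
        std::uint32_t count_;  // the "unsigned long at the front of the string"
    };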