Now presumably there is some drawback from doing this, otherwise "utf8mb4" would be the default (right?) but I'll be damned if I can figure out what the drawback is.
There are a couple, and they are insidious. From http://geoff.greer.fm/2012/08/12/character-encoding-bugs-are...:
This is when I discovered that InnoDB limits index columns to 767 bytes. Why is this suddenly an issue? Because changing the charset also changes the number of bytes needed to store a given string. With MySQL’s utf8 charset, each character could use up to 3 bytes. With utf8mb4, that goes up to 4 bytes. If you have an index on a 255 character column, that would be 765 bytes with utf8. Just under the limit. Switching to utf8mb4 increases that index column to 1020 bytes (4 * 255).
And later:
You might say, “OK, now we’re finished, right?” Ehh… it’s not so simple. MySQL’s utf8 charset is case-insensitive. The utf8mb4 charset is case-sensitive. The implications are vast. This change in constraints forces you to sanitize the data currently in your database, then make sure you don’t insert anything with the wrong casing. At work, there was much weeping and gnashing of teeth.
Good luck. The real solution is to switch to something that isn't MySQL. The project in that post ended up switching to Cassandra. For new SQL stuff, I now use PostgreSQL.
Edit:
To those of you who point out that utf8mb4_unicode_ci is case-insensitive, we did try that. This whole fiasco happened two years ago, and I don't remember what the show-stopper was, but we decided not to use that collation. Whatever issues we had with it were bad enough to warrant switching to utf8mb4_bin and sanitizing everything. Really, the solution is to switch away from MySQL and forget about collations and encodings.
That's so true! Years ago I switched almost all projects to PostgreSQL. I never looked back. I use MySQL only for very legacy applications.
MySQL has so many bugs and inconsistencies; Unicode support is only one of many issues. (Unsafe GROUP BY, silently cutting off concatenated text fields when the result grows too big, silently converting invalid dates to '0000-00-00', and so on ...)
In the past, MySQL had some performance advantages over PostgreSQL, but only for MyISAM, i.e. without real transaction safety. Nowadays, if you want to trade ACID for speed, you wouldn't use MySQL but some "NoSQL" database instead. So even from a performance point of view, I see no point in using MySQL for anything but legacy stuff.
I switched to sqlite. I've done my time on SQL servers. In my view, they simply have too much complexity for many scenarios. Basically, one big database server equals one big downtime when issues occur. Sharded data architecture on the other hand can design for failure and maintain graceful degraded performance, not to mention optimized resource allocation and the option of adopting vastly different security and backup policies across different parts of the datastore (eg. encryption, snapshots, etc.). Yes, you lose database-internal consistency guarantees. No, that doesn't mean you have to lose consistency.
I did that in 2006. A big win was maintenance, since downloading/uploading a copy of the database was so simple. Eventually I had performance problems when doing read-write-read-write stuff, since writing locks the entire db file.
You could argue I shouldn't be doing that, and you may be right. But I just switched back to MySQL, and the problem went away.
SQLite fixed that issue in 2010 with write-ahead logging (version 3.7). Writes go to a separate file, so there is no impact on readers of the main file. It is a little more complex than that, as described at https://www.sqlite.org/wal.html
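For anyone who hasn't tried it, turning WAL on is a one-line pragma per database (a sketch; once set, the mode is persisted in the database file):

PRAGMA journal_mode=WAL;
-- readers keep reading the main database file while writes append to the -wal file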
One nifty feature of PostgreSQL is that if you have a GROUP BY a, b, c, d, you need to aggregate everything else in your SELECT that isn't a/b/c/d, or the statement won't prepare, whereas in MySQL a non-aggregated column just gives you an arbitrary value from one of the grouped rows, which is probably not what you want.
This is mostly true, but in recent Postgres versions (since 9.1 I believe), if you GROUP BY the primary key, you can still include other columns in the SELECT statement. For instance:
SELECT a.name, COUNT(b.id)
FROM a
LEFT OUTER JOIN b ON b.a_id = a.id
GROUP BY a.id
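For what it's worth, the arbitrary-value behaviour on the MySQL side can be switched off per session (a sketch, mirroring the sql_mode trick in the DBIx::Class snippet further down; newer MySQL versions enable this mode by default):

SET SESSION sql_mode = CONCAT('ONLY_FULL_GROUP_BY,', @@sql_mode);
-- a non-aggregated, non-grouped column in the SELECT list is now an error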
For anyone thinking of using utf8_general_ci, it's also seriously broken if you care about sameness of characters at all.
For example, utf8_general_ci treats u and Ü as the same character. They are very much not the same. Your sort orders for international texts will be all over the place if you use utf8_general_ci.
But a lot of databases have to deal with people's names. To paraphrase an old saying, "I don't care why I'm in the database, as long as they spell my name right."
For highly multilingual databases sort order is basically a lost cause anyway, because there is no globally correct, multilingual Unicode sort order. Even when using the same alphabet different languages have different sorting conventions. And some languages' correct sorting order is not even fully decidable solely from the UTF8 text. For example, Danish treats 'aa' as a sequence of two 'a' letters sometimes (mainly in words and names of non-Danish origin), and as a variant spelling of 'å' other times. So in a properly collated Danish encyclopedia, Aachen goes near the beginning of the encyclopedia, while Aalborg goes near the end. Good luck implementing that in your database!
The unicode people have addressed this with the Unicode Collation Algorithm - http://www.unicode.org/reports/tr10/ - which obviously can't be perfect, but it can be reasonable.
The ICU project - http://site.icu-project.org/ - has open source implementations of the collation algorithm, including the appropriate information for different locales, i.e. you show those Danish names to a Danish user in their expected sort order, while also showing them to an American in their expected sort order.
i18n and l10n are hard. But they are also largely solved fairly well, and there is no excuse to avoid them altogether or not use the ICU code.
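As a concrete illustration of per-locale sorting in the database itself (a sketch; it relies on the ICU collation support added in PostgreSQL 10, and the table/column names are invented):

CREATE COLLATION danish_icu (provider = icu, locale = 'da-DK');
CREATE COLLATION american_icu (provider = icu, locale = 'en-US');
SELECT city FROM places ORDER BY city COLLATE danish_icu;    -- Danish rules ('aa' sorts like 'å')
SELECT city FROM places ORDER BY city COLLATE american_icu;  -- plain English rules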
We're discussing how well each collation SORTS international strings, not whether or not you can store them. There are a lot of instances where you need to be able to store an international string without needing to be able to do a highly accurate ORDER BY on that column.
It's also worth mentioning that by design, UTF-8 doesn't use any more space than ASCII for ASCII text. There are exceptions when databases need to pre-allocate storage, but in general, you should just be using UTF-8 everywhere.
The case sensitive part is incorrect, the author made a mistake and corrects it in the comments, but not in the article (don't know why).
Case insensitive works fine with mb4.
Erm, what? You are the author! So why would you copy/paste a section you know is incorrect? And an even bigger question is why would you leave it on your website to confuse lots and lots of people!
Someone pointed out in the comments on that link that utf8mb4 isn't necessarily case sensitive. The author believed it was because he was using utf8mb4_bin as the collation, which is a binary (and therefore case-sensitive) collation. It works as expected when using a "true" case-insensitive collation like utf8mb4_general_ci.
The index problem is mildly annoying, but limiting index length isn't a big problem. For most applications, you can uniquely identify a row with far fewer than 191 characters (191 being the most a 767-byte index can hold at 4 bytes per character), and it might be a good optimization to limit index length anyway to keep the index size from growing out of control. If it's truly a problem, hacks like adding a sha1 or md5 hash column with a value derived from the original column and indexing that instead solve the problem (see the sketch below).
Edit: Oops. Didn't realize parent was the article author. :)
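A sketch of both workarounds (table/column names invented; the 191 comes from floor(767 / 4)):

-- option 1: index only a 191-character prefix of the utf8mb4 column
ALTER TABLE users ADD INDEX idx_email (email(191));

-- option 2: index a hash of the full value instead
ALTER TABLE users ADD COLUMN email_sha1 CHAR(40) NOT NULL;
UPDATE users SET email_sha1 = SHA1(email);
ALTER TABLE users ADD UNIQUE INDEX idx_email_sha1 (email_sha1);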
There is a case insensitive collation for utf8mb4 so half of the problems there are just a misconfiguration. The index size restrictions are a problem though.
There is a utf8mb4_unicode_ci collation, but we had big problems with it. I'm sorry, but this happened two years ago and I can't remember what the exact issues were. Whatever they were, they were bad enough that we ended up using utf8mb4_bin and sanitizing everything.
Edit: Apologies, I wrote utf8mb4_general_ci when I meant to say utf8mb4_bin. I think the issue we had was conflicting rows on unique constraints due to utf8mb4_unicode_ci having different comparison rules than utf8_unicode_ci.
Don't want to be confused by utf8 vs utf8mb4 charsets or deal with the subtle differences between utf8_general_ci, utf8_unicode_ci, utf8mb4_general_ci, utf8mb4_unicode_ci, and utf8mb4_bin? Switch to PostgreSQL. You will not regret it.
A vague problem like this is really not cool. At the minimum your readers deserve that you update the article.
Say that there is a case-insensitive version. If you want, add that you had some problem with it but that you don't know whether it had anything to do with that or not.
The problem you had was probably because of the difference between utf8mb4_general_ci and utf8mb4_unicode_ci.
Perhaps. But I think they are keeping MySQL support both for legacy reasons and compatibility reasons. You could also argue they should stop implementing it in PHP, if they are willing to sacrifice its ease of install.
But MediaWiki does support other DBs, such as Postgresql and Sqlite.
Anther nice "feature" of mysql: If you have a table configured to UTF-8, and you use utf-8 client encoding but then accidentally try to send something that's not UTF-8 (shit happens), then mysql will happily accept your data and truncate it on the first byte it doesn't understand.
It issues a warning, but most frameworks don't care to inform the caller if a warning has happened (some don't even provide a way to access them).
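The on-topic variant of this is even nastier: send a perfectly valid 4-byte UTF-8 character over a "utf8" connection and the same thing happens. A sketch (table name invented; assumes the session is not in strict mode, where this would at least be a hard error):

SET NAMES utf8;                              -- the 3-byte "utf8", not utf8mb4
CREATE TABLE demo (msg VARCHAR(100)) CHARACTER SET utf8;
INSERT INTO demo VALUES ('hello 😸 world');   -- 😸 needs 4 bytes
-- Warning 1366: Incorrect string value ... but the INSERT "succeeds"
SELECT msg FROM demo;                        -- 'hello ' -- everything after the emoji is gone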
Yes. You should not be sending invalid data to your database, but holy sh*t, your database shouldn't (mostly) silently alter the data you entrust it with.
If the input data is wrong and you can't deal with it, blow up in the face of the user. Don't try to "fix" it by corrupting it.
Besides causing annoyances, the truncation of data may also have security implications. Last year, I found that WordPress (which has the utf8 character set by default) was vulnerable to PHP object injection because their filter could be bypassed by (ab)using the truncation of strings in MySQL. For more details: http://vagosec.org/2013/09/wordpress-php-object-injection/
DBIx::Class (in perl) has a connection info option that calls a method that does its best to turn all of that horrible crap off:
sub connect_call_set_strict_mode {
    my $self = shift;

    # the @@sql_mode puts back what was previously set on the session handle
    $self->_do_query(q|SET SQL_MODE = CONCAT('ANSI,TRADITIONAL,ONLY_FULL_GROUP_BY,', @@sql_mode)|);
    $self->_do_query(q|SET SQL_AUTO_IS_NULL = 0|);
}
I recently moved an application off MySQL to PostgreSQL after hitting this error. After over a decade of MySQL doing things their way, and it always being a pain in the ass for everyone else, when is enough enough?
I no longer develop software to run on mysql because I like my sanity.
Well, funnily enough MySQL is now owned by Oracle, and I've wasted a couple of days trying to get some sane semblance of `LIMIT` working for me on Oracle... so don't hold your breath; Oracle have an even longer history of damning their customers...
Oracle does seem to make limit queries much harder than they need to be. I wrote an article about limit and paging queries in Oracle ages ago that may interest you, but you probably have it all figured out by now.
This actually made me think of Oracle's "UTF8" vs "AL32UTF8", which I consider to be a gotcha (a modern land mine, perhaps), even though it is understandable in terms of the evolution of the standard.
The lack of LIMIT sucks, but the thing that annoys me most regularly is that Oracle refuses to make the zero-length string / NULL behavior configurable to allow for ANSI compliance (MySQL's NULL handling is worse IMO, makes me insane...).
Oracle historically has poor support. Here's an old example with an ADO bug; they never fixed it even though the support ticket was open for almost 2 years!
and unfortunately, they are the only ones who can offer support or implement a patch for their software!
With Postgres and MySQL you can find many companies competing to offer great service, companies that can contribute their fixes and improvements directly should they so choose. The stronger the output of these communities and the organizations that support them, the more pressure on even companies like Oracle to improve (or perish :))
You should be able to do LIMIT in Oracle using http://www.oracle.com/technetwork/issue-archive/2006/06-sep/... . If you want a standards-compliant way to do it, you can use the ROW_NUMBER() window function. This might not be as fast, but will work on virtually every DB except MySQL...
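For reference, a paging sketch with ROW_NUMBER() (table/column names invented; Oracle 12c also added an OFFSET ... FETCH syntax):

SELECT *
  FROM (SELECT a.*, ROW_NUMBER() OVER (ORDER BY created_at DESC) AS rn
          FROM articles a)
 WHERE rn BETWEEN 21 AND 30;   -- "page 3" at 10 rows per page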
Yep, I switched to Postgresql after seeing a client lose data to corrupted databases - they had rebooted their server, and mysql refused to restart as a result and had to be manually recovered. Every time I hear another mysql horror story like this (truncation without warning was another favourite), it makes me thankful I no longer use it.
Sure, I came in after their disaster to clean up, and their issue was their backups weren't up to date and they had no replication etc (it was a small site, no sysadmin, they rebooted their server during a busy time and boom! All the other services came up ok, but not mysql). The fact that Mysql failed and ended up with corrupted dbs on a simple reboot (not power failure) doesn't fill me with confidence - that really shouldn't happen. That's just an anecdote of course, but I think I prefer working with psql anyway.
Certainly not! However, its WTFs per minute count is far lower than MySQL.
I started off my career as a PHP/MySQL developer. PostgreSQL is definitely years ahead of MySQL. It used to be that MySQL had far superior read performance for simple queries, but that hasn't been true since around 2006. Also Postgres' SQL optimiser is really good.
I guess this does not apply to replication, upserts, etc.
Well, I think I will just grab some popcorn and wait. It will be fun to watch how those who never bothered to learn MySQL (and by this I don't only mean its peculiarities) will get bitten for not bothering to learn PostgreSQL.
Why? Because you think everything is equal, everything has its own gotchas and not one is better than the other?
That is against all history, not only in software. Postgres is objectively better (safer, better engineered, easier to use, etc.) than MySQL, just as Python is objectively better than PHP, and an AK47 was better than most other assault rifles of its time.
The lack of upserts is a problem in postgres, as much as some people say they can get away without them. There are objective concurrency issues with the lack of it, that have been acknowledged by postgres developers when they started work on upserts.
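For comparison, the two syntaxes side by side (a sketch with an invented counters table that has a unique key on name; the PostgreSQL form is the ON CONFLICT support that eventually shipped in 9.5):

-- MySQL
INSERT INTO counters (name, hits) VALUES ('home', 1)
  ON DUPLICATE KEY UPDATE hits = hits + 1;

-- PostgreSQL 9.5+
INSERT INTO counters (name, hits) VALUES ('home', 1)
  ON CONFLICT (name) DO UPDATE SET hits = counters.hits + 1;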
I find the official documentation to be superior for postgres (YMMV), but I think there are certainly more examples / blogs / tutorials for MySQL.
I prefer to work with Postgres when I can, but I have also found instances where MySQL better fit my needs. Postgres only gives you a heap table, and even with the more recently added index-only scans (and given that reordering the table with CLUSTER may not be desirable), you may want an index-organized table (the Oracle term; MySQL would say clustered index).
I am curious why even in the context of freely available engines we focus on competition, which I think can lead to great things, but I don't understand the animosity the communities feel towards one another, isn't it just a matter of horses for courses? Are we really interested in competitive kills or improving both the tools we use and our understanding thereof ?
I don't mean to single your comment out gbog, and I know this is certainly not exclusive to databases either ( witness almost any discussion of various programming languages on HN )
The MySQL clustered index (and Oracle IOT) are, in my opinion, often overlooked powerful features. To pick an example I recently encountered ... I have a Rails app that implements an RSS reader. Each user has many feeds that get added over time. If you cluster the feeds on user_id then all the feeds for a given user will be in a single database block or two, so reading them out is super efficient. If they are in a standard heap table (or the way Rails creates the table by default, clustered on ID), then reading the list of N feeds could require N database block reads in the extreme case. Same goes for all articles on a given feed - cluster the related articles together and it's very efficient to find the most recent articles for a feed.
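A sketch of that layout in InnoDB (table/column names invented; InnoDB always clusters on the primary key, so the trick is simply choosing the key order):

CREATE TABLE feeds (
  user_id BIGINT NOT NULL,
  id      BIGINT NOT NULL AUTO_INCREMENT,
  url     VARCHAR(1000) NOT NULL,
  PRIMARY KEY (user_id, id),   -- all of a user's feeds end up physically adjacent
  KEY (id)                     -- InnoDB wants the AUTO_INCREMENT column to lead some index
) ENGINE=InnoDB;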
in at least some cases, animosity comes from people who've had to clean up someone else's exploded mess in the tech of their targeted derision.
I do think many (most?) people are interested in seeing all tools get 'better' over time, but what needs to be improved, or how it should be improved, often don't get agreed on.
PostgreSQL is no exception, but hey I do insane things with the db so I suppose it is fitting that I occasionally (but rarely) run into insane aspects of the system.
The difference is, you don't have to do crazy things to run into truly insane behavior on MySQL ;-)
> Why they call this encoding "UTF-8" is beyond me, it most definitely is not UTF-8.
This is a confusion of concepts. That an encoding can encode a particular value does not mean all applications using that encoding will support that value. You can store any size number you like into a JSON number, but if you send me a 256-bit value in a field for temperature in celsius, I'm going to give you back an error.
Encodings ease data exchange, they do not alter the requirements or limitations of the applications using them. MySQL's original unicode implementation supported only the Basic Multilingual Plane, so unsurprisingly, characters outside that plane are rejected. That would be the case regardless of the particular encoding used.
What you want is support for all Unicode characters. This is a reasonable request, but it is decoupled from the encoding used to communicate those characters to MySQL.
Say what? UTF-8 allows for any character that the Unicode standard defines. If they wanted to restrict a character set to the BMP, then they should call it UTF-8-BMP.
When the purpose of your software is simply to record data, saying you support a particular encoding when you only support _some_ of it is pretty irresponsible.
> That an encoding can encode a particular value does not mean all applications using that encoding will support that value.
And yet you have to change the encoding of a column to support storage of non-BMP characters, even though the UTF-8 encoding supports them just fine. Where's the sense in that?
No, you have to change the character set of a column. It's unfortunate they used a name normally used for an encoding to name a character set, but that doesn't make the character set an encoding.
It seems to me that you're the one confusing concepts. MySQL originally didn't support characters outside the BMP, fine. But since the encoding is completely different from that, a real UTF-8 would have automatically supported astral characters as soon as MySQL supported them.
The fact that MySQL itself gained support for all of unicode, but did not use this support in the 'utf8' encoding, shows that it is not actually UTF-8.
but I expect an app that says "you can store utf8 data here" to actually store that, not "I will silently drop everything after the first utf8 character that doesn't fit in 3 bytes".
I don't see how I should imagine that putting utf8 as a column encoding should be interpreted as "I will read and discard utf8".
This is different from declaring utf8 as the connection encoding.
Your expectation is common among westerners. It typically goes away within a few days of beginning to work with large amounts of CJK content.
When someone says they accept UTF-8, UCS-2, or any other encoding, my first question is always "Are characters outside the Basic Multilingual Plane supported?". There is a long history quite apart from MySQL of the answer to that question being "no", particularly if the project in question originated in the 90s (or even early 2000s).
IMO this is a combination of "doesn't know any better", "works for me", and "it's easy enough, don't bother me with complicated solutions".
Which is unfortunate, as the "complicated" solution is often just the right solution, but if the wrong solution doesn't blow up in their faces in 90% to 99% of the use cases, many people are satisfied enough with it.
I think these reasons also play a role in PHP's ongoing popularity.
Last I checked, the alternatives were either very expensive (Oracle) or lacked a good HA solution (Postgres)... MySQL/MariaDB has two good HA solutions - either master-master behind haproxy, or a Galera cluster with a VIP + keepalived... The issue of encodings is minor for most applications...
Also, if you set a column charset to "Latin1", guess what encoding MySQL actually gives you. If you guessed "like windows cp1252, but with 8 random characters swapped around", have a cookie.
(Though I have to admit, based on the title I was hoping for a deeper problem. MySQL has plenty of stupid gotchas like this, but it's (marginally) easier to deal with them than to cluster postgres)
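You can see the cp1252-flavoured mapping directly from a MySQL prompt (a quick sketch; 0x80 is '€' in cp1252 but an unprintable control character in real ISO-8859-1):

SELECT HEX(CONVERT(_latin1 0x80 USING utf8mb4));  -- E282AC, i.e. U+20AC EURO SIGN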
I can recommend reading the comment from Alex on there as well; it has some good info on UTF-8:
[quote]
😸 works just fine in PostgreSQL by default (Python too).
The UTF-8 issue is pretty complex, so it's not really a surprise that implementations (like MySQL's) are incomplete. The basic encoding from the code points into UTF-8 bytes could handle up to 30 bits of data (first byte of 0xFC indicating 5 following bytes with 6 bits of the code point each (6 bytes for 36 bit code points are only impossible because UTF-8 bans 0xFF and 0xFE as byte values other than for the Byte Order Mark). Yet various standards attempt to restrict the range, currently with RFC 3629 putting the ceiling at 0x10FFFF (4 bytes per code point). That RFC doesn't bother to justify the constraint, other than as a backwards compatibility band-aid with the more limited UTF-16. The point isn't to rag on UTF-16, which made sense once, but to express sympathy for the various attempts made to avoid coping with the full, 30-bit range the underlying encoding can actually handle.
Not only is there a wash of hodgepodgery on the range, but UTF-8 (and other similar encodings) can put small values in large byte encodings. There's nothing stopping someone from just using a 6-byte long, 30-bit codepoint block of RAM for each character, even just ASCII (obviously this violates a bunch of RFCs, but the coding system does provide for it). The result would confuse many UTF-8 parsers (and blocked by the more complete ones, a Pyrrhic victory), since ASCII characters are expected to be exactly one byte long in UTF-8, not seven. Even in RFC-compliant UTF-8, an ASCII character codepoint value can be encoded in four different ways. Example: an "a" (ASCII 97, hex value 0x61) can be encoded as any of (pardon any bit errors, I'm doing this off-the-cuff, late at night...): 0x61; 0xC1 0xA1; 0xE0 0x81 0xA1; or 0xF0 0x80 0x81 0xA1.
The latter three overlong encodings aren't considered canonical, and software devs have to fix them to make string matching efficient and so forth, to save space, and most worrisomely to prevent attackers from sliding special characters into strings to crack systems - say by using larger, noncanonical encodings to evade filters that would catch and block the canonical, shorter encodings.
Some developers would try to limit UTF-8 characters to four bytes because that's a power of two, and fits comfortably into a 32-bit long int.
UTF-16 also complicates UTF-8 with the banning of codepoints between 0xd000 and 0xdfff in UTF-8, used as "surrogate pairs" in UTF-16. See http://www.unicode.org/version... for an update that mentions this explicitly)
Anyway, the general idea is that I can sympathize with MySQL having incomplete UTF-8 support, most implementations are incomplete in some way, and one could argue the RFC's variant is pointlessly incomplete itself (no 24 and 30 bit code points, though apparently UNICODE's Annex D does allow the use of those two larger sizes for characters outside of the UNICODE range, perhaps widely interpreted as a constraint of UTF-8 itself instead of being about the UNICODE subrange of UTF-8). Implementations vary enormously, and with good reason (see http://www.unicode.org/L2/L200... for some of them).
Fortunately, MySQL's limitation doesn't trouble me, since I do almost all my work in PostgreSQL or some non-SQL database or another anyway. ;-P
[/quote]
I don't know; to me it reads like someone trying to show off, but I don't think it's helpful.
> The latter three overlong encodings aren't considered canonical, and software devs have to fix them to make string matching efficient and so forth, to save space, and most worrisomely to prevent attackers from sliding special characters into strings to crack systems - say by using larger, noncanonical encodings to evade filters that would catch and block the canonical, shorter encodings.
This is misleading. "Software devs" don't have to "fix them". Overlong encodings in UTF-8 are invalid since Unicode 3.1, precisely because of the security considerations, and conformant implementations are required "not to interpret any ill-formed code unit subsequences" (Unicode Standard section 3.9). RFC 3629 also states that "[i]mplementations of the decoding algorithm above MUST protect against decoding invalid sequences."
Fortunately, it's pretty straightforward to achieve this, and the Unicode standard contains a nine-row table listing all valid combinations of ranges of one to four bytes.
It'd be really interesting to hear from an engineer at one of the big MySQL users (I'm thinking Twitter, Facebook, Github) how they solve this problem.
This bit me pretty hard. I was trying to debug other Unicode issues an application had, and in my infinite and enduring wisdom decided to use one of those "turn text upside down" sites to generate sample data. Turns out they use a bunch of very high codepoints (Mormon text or something) and MySQL choked on them. Took me ages to figure out that was the issue.
Available here [1] on page 18, or just look for "1F44E". I found the twist really clever, and first thought the author had simply used a generic block.
I didn't spend much time reading the cited document, but it looks like some existing glyphs may be merged with newer ones under certain conditions. What a mess... I feel Unicode has tried to encompass way too many things (even italics!). Go figure...
You need a font which supports emoji. Android, iOS and Mac OS X should support it out of the box, on Linux I had to install a font. I'm not sure what the situation is on Windows.
I'm guessing it's implemented like this for performance reasons and calling it utf-8 was just a marketing ploy as everyone knows we should always use utf-8 for everything...
Sounds like the OP should read "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)" over at: http://www.joelonsoftware.com/articles/Unicode.html
In this article, Joel says he "decided to do everything internally in UCS-2 (two byte) Unicode". He fell into the same trap that MySQL did and the software he describes would also fail on Emoji.
It is still a very informative piece and Joel was way ahead of the curve by supporting and evangelizing Unicode at all in 2003. But it is not the best article to point the OP at, as it does not mention the BMP or discuss proper handling of characters beyond the BMP.
I never got that. He wrote a great article about the issue and then ended with using UCS-2. I always wondered if there was something I missed that made him choose UCS-2 over UTF-8, since UTF-8 can represent every Unicode code point.
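The difference is easy to see from inside MySQL itself, since ucs2, utf16 and utf8mb4 are all available character sets (a sketch; assumes SET NAMES utf8mb4 so the literal arrives intact):

SELECT HEX(CONVERT('😸' USING utf8mb4));  -- F09F98B8: four bytes of UTF-8
SELECT HEX(CONVERT('😸' USING utf16));    -- D83DDE38: a UTF-16 surrogate pair
SELECT HEX(CONVERT('😸' USING ucs2));     -- 003F: UCS-2 can't represent it, so you get '?'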
Why? UTF-8 covers more than just the BMP; it's just an encoding scheme. This has nothing to do with the fact that MySQL's utf8 is restricting characters to 0xFFFF!