Scaling to 100M: MySQL Is a Better NoSQL (wix.engineering)
394 points by andreyvit on May 24, 2016 | 175 comments



So much to disagree with here ...

> Locks limit access to the table, so on a high throughput use case it may limit our performance.

Then use a proper database that implements MVCC.

> Do not use transactions, which introduce locks. Instead, use applicative transactions.

Or just use a database that handles transactions more efficiently.

> `site_id` varchar(50) NOT NULL,

Why varchar(50)? UUIDs are 16 bytes. The best way to store them would be the binary bytes (which is how postgres stores them). If it's hex without dashes, it'll be varchar(32). If it's hex with dashes, it'll be varchar(36). Why did they pick 50? Future growth? Smart keys? Schema designer doesn't know what a UUID actually is?
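
For comparison, a compact layout in MySQL would look something like this (a sketch only; the table/column names echo the article's example, everything else is an assumption):

    CREATE TABLE routes (
      route   VARCHAR(255) NOT NULL,   -- lookup key
      site_id BINARY(16)   NOT NULL,   -- 16 raw bytes instead of up to 50 characters
      PRIMARY KEY (route)
    ) ENGINE=InnoDB;

With char(32)/char(36) the key is at least fixed-width; varchar(50) buys nothing except a longer, variable-width index entry.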

> Do not normalize.

Bullshit. Normalize as much as is practical and denormalize as necessary. It's much easier to denormalize and it greatly simplifies any transaction logic to deal with a normalized model.

> Fields only exist to be indexed. If a field is not needed for an index, store it in one blob/text field (such as JSON or XML).

This is terrible advice. Fields (in a table) exist to be read, filtered, and returned. If everything is in a BLOB then you have to deserialize that BLOB to do any of those. That doesn't mean you can't have JSON "meta" fields, but if your entire schema is (id uuid, data json) you're probably doing it wrong. It's next to impossible to enforce proper data constraints, and all your application logic becomes if/then/else/if/then/else... to deal with the N+1 possibilities of data. Oh, and when you finally add a new one, you have to update the code in M+1 places.
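
To make the middle ground concrete, here is a hybrid sketch (hypothetical column names): promote anything you filter, join, or sort on into real columns, and keep only the genuinely schemaless leftovers in the blob.

    CREATE TABLE sites (
      site_id      BINARY(16) NOT NULL,
      user_id      BINARY(16) NOT NULL,   -- promoted: looked up by index
      state        TINYINT    NOT NULL,   -- promoted: filtered on
      date_updated DATETIME   NOT NULL,   -- promoted: sorted on
      meta         MEDIUMTEXT NOT NULL,   -- everything else, serialized as JSON
      PRIMARY KEY (site_id),
      KEY idx_user (user_id),
      KEY idx_state_updated (state, date_updated)
    ) ENGINE=InnoDB;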


>> Locks limit access to the table, so on a high throughput use case it may limit our performance.

> Then use a proper database that implements MVCC.

InnoDB does implement MVCC. MVCC is not a silver bullet.

>> Do not use transactions, which introduce locks. Instead, use applicative transactions.

> Or just use a database that handles transactions more efficiently.

Easy to say, hard to implement at this scale. If you do a lot of writes and reads concurrently to a hot dataset, it's really quite hard to beat this architecture. This is why it's such a popular and battle-tested solution for many extremely high scale applications with workloads like this. Not to mention extremely well understood.

>> Do not normalize.

> Bullshit. Normalize as much as is practical and denormalize as necessary. It's much easier to denormalize and it greatly simplifies any transaction logic to deal with a normalized model.

But we are talking about performance... Having something in a single table that is denormalized is always going to be faster than having an elegant data model with "Everything In It's Right Place"

>> Fields only exist to be indexed. If a field is not needed for an index, store it in one blob/text field (such as JSON or XML).

> This is terrible advice.

So facebook/friendfeed, uber, dropbox, and many more are wrong then. Ok.

This is really all best practice for running something like this.

Of course it flies in the face of best practice for running a smaller system. Are there tradeoffs? Absolutely! Would it be smart to do this if the need for this scale is not obvious? Probably not.

You end up having more logic in your application and coordination layers, but this is all pretty good advice for people at this scale, and certainly not bad at all.


> Of course it flies in the face of best practice for running a smaller system. Are there tradeoffs? Absolutely! Would it be smart to do this if the need for this scale is not obvious? Probably not

From the article:

> The routes table is of the order of magnitude of 100,000,000 records, 10GB of storage.
> The sites table is of the order of magnitude of 100,000,000 records, 200GB of storage

That's tiny. Both of those easily fit in memory on modern hardware. This isn't *cough* web scale, this is peanuts.

The savings from having a simpler system that is actually transactional, plus the lack of disparate CASE/IF logic, would win over this monstrosity of a design.

For a counterpoint where this type of model makes more sense, check out Uber's data model[1]. Similar setup but a more applicable use case, and (without having any inside intel on it) I'd wager it is justified.

[1]: https://eng.uber.com/schemaless-part-one/


So your argument has shape-shifted from "This is terrible advice" to "this is terrible advice unless you're at Uber scale".

Sounds like we are in agreement then - at high enough scale - this is solid advice.


> So your argument has shape-shifted from "This is terrible advice" to "this is terrible advice unless you're at Uber scale".

No my argument is this particular design is both unjustified for the use case and poorly thought out/implemented. The uuid as varchar(50) is a dead giveaway of amateur status.


Are you saying a NoSQL solution is better for this use case? Because that's what this article is asking. Sure, they could do things better (can't we always?), but that's not the point they're trying to make.

Like you said, 10GB of data isn't very much, it really doesn't matter if you go with NoSQL or SQL. But SQL will probably give you more flexibility and will be easier to manage until you get really, really big.


Actually yes, a single key-value database could handle all of their load with better functionality. Make that 2 nodes and you have HA and redundancy.

Problem solved.


I think you lose flexibility with SQL. If the usage changes slightly and you need to index one or two more fields, you would have to do an alter table and read through 200 GB of data, deserializing all blobs to put the values into the new field. A good NoSQL database would just add a secondary index. You might also have a hard time doing map-reduce on SQL. It is often built into NoSQL systems.


Not sure how you manage to miss the point of the article. I'll break it down for you:

* A lot of traditional sql solutions have scaling issues.

* A lot of nosql solutions for these issues have become popular in recent years. Their main use case is web scale (simplification)

* A ton of actual use cases fall between those two chairs, i.e. they would have scaling issues with traditional sql systems but don't really have billions of transactions to process.

The article presents some valid real world approaches for this third case; some of it might be case dependent, but overall it's pretty good advice.


> they would have scaling issues with traditional sql systems but don't really have billions of transactions to process.

If you have scaling issues with traditional SQL systems, the first thing you should do is understand your problem, not cargo-cult to a different paradigm.

More specifically:

* You do not get to complain about SQL transaction speed if you use denormalized tables.

* You do not get to complain about SQL transaction speed if your primary keys are varchar(50).

* You do not get to complain about SQL transaction speed if you store all non-identifying data in a single json blob.


> Having something in a single table that is denormalized is always going to be faster than having an elegant data model with "Everything In It's Right Place"

This is one piece of folk wisdom that is untrue. There are significant speed disadvantages relating to large blobs of data the database doesn't understand. Serialisation time makes returning large JSON/XML objects expensive when you only need a small part. Overwriting a whole object to increment a counter is an unnecessary source of IO. Duplicating JSON keys in every record bloats the size of your working set, making it more difficult to fit into memory (or the fast part of your SAN).

99% of denormalisation out there is unnecessary and has inferior performance. The best route to performance with row store SQL databases (any database?) is twofold: 1) get an expert on your database to help you write your code and 2) get an expert on your hardware to help you choose wisely. Denormalisation is typically a way to take a performance problem and make it worse while introducing data corruption and race conditions.
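
To make the counter example concrete, the two write paths look roughly like this (hypothetical tables; the point is how much the database has to rewrite):

    -- normalized: an in-place update the database understands
    UPDATE page_stats SET view_count = view_count + 1 WHERE page_id = 42;

    -- blob model: read the object, deserialize it in the application,
    -- bump the counter, reserialize, and write the whole thing back
    SELECT data FROM pages WHERE page_id = 42;
    UPDATE pages SET data = '...entire re-serialized JSON...' WHERE page_id = 42;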


the reason for putting everything in json in one column is because alter table on a large database can take days. the only sql database i'm familiar with that doesn't have this problem is tokudb.

> Serialisation time makes returning large JSON/XML objects expensive when you only need a small part.

the expensive part of reads is finding the row on disk. once you've found the row the cost of reading part of the row vs the whole row is negligible. amount of data sent over the network doesn't matter either in 99% of cases. these days network bandwidth is orders of magnitude greater than random IO bandwidth on a spinning disk and still greater than random IO bandwidth on a SSD assuming you're using 10GbE.

> Overwriting a whole object to increment a counter is an unnecessary source of IO.

there is no way to write 4 bytes to a hard disk. disk io is performed in multiples of 4096 bytes, so it doesn't matter whether you just update the counter or update the whole blob. only incrementing the counter may allow you to write less data to the write ahead log, so you may save some IO there, but most databases put the whole row in the log anyway so it doesn't matter.

> Duplicating JSON keys in every record bloats the size of your working set, making it more difficult to fit into memory (or the fast part of your SAN).

this is definitely true, it's better to use an extensible but compact format like protobuf or thrift if you have a significant amount of data. or you could use a better database and not have to worry about the cost of adding columns.


> alter table on a large database can take days

There's a lot to migrations in these various implementations but in short: every variant of alter table is an "online" operation (meaning it doesn't hold an exclusive lock) in InnoDB as of MySQL 5.6 and it's possible with various well-supported third party tools before that. For Postgres: most migrations can be done online and those that do hold a lock are typically constant time.
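
For MySQL 5.6 specifically you can also ask for (and assert) the non-blocking behaviour explicitly; the server refuses with an error rather than silently taking a lock if it can't comply. A sketch against the article's routes table:

    ALTER TABLE routes
      ADD INDEX idx_site_id (site_id),
      ALGORITHM=INPLACE, LOCK=NONE;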

Admittedly migration has been a big problem in the past but that hasn't been true for years now.

> the expensive part of reads is finding the row on disk

Hopefully most SQL queries in an optimised setup are not finding a row on disk! The difference between reading the whole row from on-disk heap storage and reading the interesting part of it from an index in memory is in fact considerable: 1000x or more - and obviously far worse if you have to scan through any of the heap storage.
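
A covering index is the usual way to get that "interesting part from an index in memory" behaviour; a sketch with made-up column names:

    -- this query can be answered from the index alone (an index-only scan in
    -- Postgres, a covering index in MySQL) without touching the wide row
    -- that holds the big blob
    CREATE INDEX idx_route_site ON routes (route, site_id);

    SELECT site_id FROM routes WHERE route = 'example.com/about';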

> amount of data sent over the network doesn't matter either in 99% of cases

It actually matters hugely in the case of JSON blob storage because it all has to be doubly deserialised on the other end - first from the db's wire protocol and then from JSON. There are many apps out there for which JSON serialisation is a bottleneck (for the CPU, not the network) - that's why there are so many "fast JSON" libraries.

Good point - you could mitigate this by using something quicker. I haven't seen anywhere do that; the ability to read the database with other tools is normally useful.

> most databases put the whole row in the log anyway so it doesn't matter

re: this whole topic - I'm not proposing swapping a JSON blob for a table in which every JSON field is reified into a column. I'm comparing to a normalised design with narrow tables (most tables have fewer than 5 columns). The other stuff about serialisation applies.


> But we are talking about performance... Having something in a single table that is denormalized is always going to be faster than having an elegant data model with "Everything In It's Right Place"

Unless you specify the workload, that's anywhere between completely true and exactly incorrect. Do you have big values you're always interested in and a couple of tiny ids? That's probably going to be faster in one table.

Are you querying only the metadata most of the time and the big value is multiple KB, almost never accessed? You're just killing your readahead and multiple levels of caches for no reason. "always going to be faster" is always incorrect ;)


Querying a single table with an indexed key will be faster as a single lookup than doing a JOIN, let alone several. That said, it really depends on your load, and if you're not dealing with hundreds of thousands of simultaneous users, and haven't over-normalized your data, you can get by with a lot of options. And a good caching layer for mostly-read scenarios will likely get you further anyway.

That said, use a system that's a good match for your data model(s)... if your data can fit in a handful of collections, but may have varying object shapes/structure for semi-related data, a document store may work better. Need massive scale? C* may be your best bet. There are use cases that are great fits for just about every database server that's been made. Some similar options may be a better fit, but it all depends.

Personally, I'm hoping to see in-the-box replication options for PostgreSQL in the next few releases; then it will probably be my go-to option for most stuff. RethinkDB is pretty damned nice too, and you should be able to scale to many millions of users.

Once you hit certain scales, you usually have to implement a number of solutions... be they sharding, caching or queuing to deal with higher load. It depends on where your bottlenecks are.


> Querying a single table with an indexed key will be faster as a single lookup than doing a JOIN, let alone several.

Not always, particularly when you're returning multiple keys at once. Some comparisons with column stores would be instructive.


To be fair, this article is about using MySQL as a NoSQL storage and so all of this advice is geared towards that use-case. I'd kill for so much traffic that any of this would be necessary as opposed to any RDBMS best-practices.

I do agree that UUIDs should be stored differently -- the use of varchar rather than a fixed length type for a primary key will hurt performance.


We use varchar for UUIDs (on postgres), which surprisingly hasn't been that terrible performance-wise. And yes, we do use varchar(36), although on postgres it doesn't really matter because I think almost all varchars are text.

I would love to switch to native UUID someday though.


we did the same and then switched to the native UUID type. it eliminates the need for a unique index and we saw storage space drop by 1/2. it's totally worth converting to the native uuid type.

ALTER TABLE my_table ALTER COLUMN my_uuid TYPE uuid USING my_uuid::uuid;


And you want as much of the index as possible in RAM. When the index is 10 times larger than necessary and does not fit in your RAM, you pay a very expensive performance penalty!


Same, but we use raw binary columns (i.e. "BINARY(16)" for MySQL and "BYTEA(16)" for PostgreSQL).

Conversion between the database-native UUID type and the application-layer UUID type is often flaky (particularly with Java and JPA).
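
For what it's worth, the SQL-level round trip for a BINARY(16) column in MySQL is simple enough; the flakiness tends to live in the driver/ORM layer. A sketch with a hypothetical sites table:

    -- text UUID -> 16 raw bytes going in
    INSERT INTO sites (site_id, data)
    VALUES (UNHEX(REPLACE('f47ac10b-58cc-4372-a567-0e02b2c3d479', '-', '')), '{}');

    -- bytes -> hex text coming out (re-adding dashes is left to the application)
    SELECT LOWER(HEX(site_id)) AS site_id_hex FROM sites;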


All the `CHAR`s in Postgres are `TEXT`, from the docs:

Tip: There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs and slower sorting. In most situations text or character varying should be used instead.


Yeah, VARCHAR and TEXT are backed by the same datatype, but VARCHAR(N) gives you a limit check (that can be a pain to change if you ever need to.) There shouldn't be any noticable performance difference. For UUIDs, the native type is way better.


This article is pretty terrible but just in point of fact: it looks like they are using MySQL's InnoDB backend - which does support transactions and MVCC. If they're even talking about avoiding transactions for speed purposes (no matter how stupidly) they must be talking about Inno because in MyISAM BEGIN and COMMIT are no-ops.


The article says "Note that a transaction is using a DB-level lock that prevents concurrent writes—and sometimes reads—from the affected tables."

In InnoDB locks are row-level; MyISAM supports table-level locks, though that's not a transaction and shouldn't be confused with one. I don't know what a "database-level" lock is supposed to mean; are they really saying they're locking all tables to do a write? It doesn't sound like this author understands what a transaction is.


> If they're even talking about avoiding transactions for speed purposes (no matter how stupidly)

As many have pointed out, so much of the criticism of this article ignores that it is comparing to other key-value stores which are not transactional. Much of what this would compete with is AP, with Consistency not guaranteed.

It really sounds like they should be talking about MySQL cluster, which is protocol compatible but a completely separate implementation and essentially a key-value store with RDBMS attributes atop it. It supports many-master mode like mongo and other distributed systems, which is fairly mandatory for replacing them. It's hard to argue you can replace HDFS with anything that's not distributed, and if you didn't, why wouldn't you just use .. the actual FS? The author may not really understand that HDFS is optimized for storing large-ish files.


There is nothing terrible or stupid about avoiding joins/transactions for speed. The technique of using blobs of data in MySQL rows in this article is perfectly valid and widespread at this point. As long as you understand the trade offs.


In principle that is fine so long as you know what you're doing. In practice most people denormalising "for speed" aren't knowledgeable about the database implementations they are optimising for and that is very obviously the case here.


The point of this article is showing how MySQL could be used to get a lot of what a NoSQL solution provides. NoSQL certainly has a place, but a lot of people don't really understand what that is and simply use NoSQL because it's popular, which cuts them off from a lot of useful features a SQL solution could provide them. That said, you're right more care could have been put in the details of the article, but a lot of the points could be correct for their situation.

For example, 'Do not normalize.'

This was in the context of a read heavy table that competes with NoSQL. In that context, I think this is accurate. We noticed a big difference after denormalizing when we went from millions of rows to billions of rows.

The general advice of SQL solutions being as useful as NoSQL to a certain scale is good. I don't think the individual examples are horrible, but they aren't universal advice to achieve NoSQL performance.


>> `site_id` varchar(50) NOT NULL,

> Why varchar(50)? UUIDs are 16-bytes.

Why do you think it's a UUID?

> The best way to store them would be the binary bytes (which is how postgres stores them).

Is it actually better than a pair of BIGINTs?


>Why do you think it's a UUID?

Because of this:

>Also notice that we are not using serial keys; instead, we are using varchar(50), which stores client-generated GUID values


GUID and UUID refer to different things in general. Also, depending on the context, they can be longer than 128 bits. The Oracle Coherence API defines 256-bit UUIDs for example (which is clearly not RFC 4122 or a Microsoft GUID, but it still is an identifying number which can statistically be called unique, which a UUID is).

As long as it meets the statistical properties for collision, I don't see any problem with making a 50-byte GUID. The essential thing is the statistical property, not the number of bits.


Do you think it's more likely that they invented a unique id scheme that requires 50 bytes, or that they're using an overly wide datatype for ordinary UUIDs?


And I have to pick one of those two options?

Like I mentioned, there are different UUID/GUID schemes using more than 128 bits publicly available already. You don't have to invent one. They may simply be using one of those with additional bytes reserved for future algorithmic changes (using first few bytes as a header to specify the algorithm).


But is it likely that they are? I would imagine no.


> If it's hex without dashes, it'll be varchar(32)

Or just char(32); no need for a variable-length type if it's always the same length.


This is not terrible advice in general. It is also not good advice in general. To make things scale, you obviously have to "break the norms". They give an insight how their specific case works. It's "watch, learn and pick what fits you" material.


Aside from the MySQL vs. Other DB debate (which I refuse to take part in, although I'm willing to ascribe good points to all camps), this article is absolutely excellent with respect to acting as a guide for people who want to use MySQL as a Key/Value store. Absolutely stellar article! All of the points are dead-on. I applaud the author for putting together so much specific information about tuning MySQL for Key/Value in one place, and the ridiculous speed and scalability you can get if you do it correctly. (That being said, NoSQL Key/Value databases are good too.)


This is basically "We made it work, it was easy, all the rest are wrong". Wix is 10 years old; they probably started with MySQL and stuck with it. Is that wrong? Maybe, maybe not. If they were to start today, would they have used MySQL as well, or gone with another solution? Did they spend the last 10 years building tools to help them scale MySQL (at which point it's easy for them to operate) rather than use a tool that had multi-server or multi-DC support in mind?

Oh, and citing statistics without details is plain lying: how many servers, how much RAM, SSD-based or HDD....


Reddit does something very similar. https://kev.inburke.com/kevin/reddits-database-has-two-table...

I think the important thing to note here is that there are lots of different ways to use any given tool that can fit your use case without being an atrocity.


And they did it because of maintenance problems, not (just) performance (from the link you provided). But they also said "Postgres was picked over Cassandra for the key-value store because Cassandra didn’t exist at the time. Plus Postgres is very fast and now natively supports KV." Which isn't patronizing like the article OP linked.

http://highscalability.com/blog/2013/8/26/reddit-lessons-lea...


Thanks for that... I think it depends on one's needs... it would surprise me if Wix is using MySQL without sharding data by route and/or site.

Personally, I'd be inclined to use PostgreSQL for most things once they have an in-the-box solution for replication and failover that doesn't require a commercial contract for a sane setup. Every time I've used MySQL (admittedly not for about a decade), I wind up with a lot of WTF moments. If I'm paying for it, I may go with MS-SQL, or a hosted "cloud" provider as, tbh, I have no interest in being a DBA.

I really like RethinkDB's approach as a no-sql solution, and their admin interface is second to none in this space. If you really need more scale (more than around 20 servers or so), then you should probably be using Cassandra, and/or supplementing with the likes of ElasticSearch.

In any case, if you want real-time data for hundreds of millions of users, you're going to have data partitioning and redundant replication and caching solutions.

I worked at GoDaddy on Website Builder while they were transitioning their publishing target to C*, and I have to say it performs incredibly well as basically a distributed file system in the role described in the posted article.


> If they were to start today would they have used Mysql

Uber built something similar in the last couple of years.

https://eng.uber.com/schemaless-part-one/

So did Dropbox.

https://blogs.dropbox.com/tech/2016/05/inside-the-magic-pock...


Since Wix is using MySQL as a key-value store ... I wonder why they didn't look at using Postgres HStore [1].

HStore is a key value store built directly in the RDBMS of Postgres.

[1] http://www.postgresql.org/docs/9.6/static/hstore.html
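
For anyone who hasn't used it, the basic shape is something like this (illustrative table; note the ? key-existence operator is the one that collides with JDBC parameter placeholders, as mentioned elsewhere in the thread):

    CREATE EXTENSION IF NOT EXISTS hstore;

    CREATE TABLE sites (
      site_id uuid PRIMARY KEY,
      attrs   hstore NOT NULL DEFAULT ''::hstore
    );

    -- GIN index speeds up key-existence (?) and containment (@>) lookups
    CREATE INDEX idx_sites_attrs ON sites USING GIN (attrs);

    SELECT attrs -> 'theme' FROM sites WHERE attrs ? 'theme';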


We drank the NoSQL Kool-Aid, mostly as we preferred a schemaless approach to our database and couchdb looked like a cool thing to use. We tested it, deployed it in production and after a while we figured out that most of the promises about performance, stability, etc. were mostly bull.

HStore was released, we've migrated to PG and we can't be happier. Zero issues so far.


Couchdb isn't exactly the best thing to judge NoSQL by today, especially given its old performance issues, lack of automatic compaction, indexing on demand instead of proactive background indexing, etc.


At what point do I get to trot out the No True Scotsman paradox? Every NoSQL solution I've worked with, whether as its own DB or built into another one (like Postgres), I've either hated (Couch, Mongo, Cassandra, Neo4j) or relegated to _very_ specific use-cases (Riak, Redis, Postgres).

I agree with the Wix team. I've used Mongo in a very high volume site and it was a constant source of headaches. We even had an engineer we called "MongoDB" because he was constantly having to deal with scaling it. It didn't even handle more than 10% of the application data, it was strictly for timelines; the rest of it was in trusty old MySQL. If we'd just done blobs as JSON in MySQL in the first place we would have been completely fine.

Even though I prefer Postgres, knowing nothing else, I would prefer 100% SQL in MySQL than a Postgres backed application that splits the storage between tables and KV store.


Just curious, but why use HStore instead of JSONB?


Presumably because HStore predates JSONB


While this is the conventional wisdom, I just wanted to make sure there wasn't some obscure benefit of HStore over JSONB that I wasn't aware of yet.


> We tested it, deployed it in production and after a while we figured out that most of the promises about performance, stability, etc. were mostly bull.

Well, to be fair, you would typically expect a database that's the equivalent of a remote hash table to be pretty much as fast as you can get. Now, I don't have any experience with couch, but most of the other key-value stores I've used scream with performance. But if you're doing anything beyond basic manipulations then it's going to require a lot of tuning, depending on the solution you went with.

But an RDBMS can be very similar. Both are useful tools when used correctly and there is a huge amount of overlap in terms of capability.


It really depends on your use case... if you can rely on heavy caching, then working against a transactional sql database can work for the bulk of your application, while also "publishing" to a distributed k/v store...

Most applications are probably fine regardless of the underlying data store, and are better served by what works best for the model of the data, which can often be SQL. I'm more inclined to reach for mongo or rethinkdb first, as I tend to think more in terms of more complex records/objects in most of the applications I've worked on.


Hstore has a lot of issues in my experience. Very hard to query through most ORMs, expensive indexes, strings only, no nested data, lots of unoptimized parsers out there relative to json. Use jsonb if you want postgres KV storage.

Hstore is more k->k->v, which is the same but different, and also leads inexperienced developers to model entire relationships in a single column


Presumably they preferred to stay within the mysql world. I didn't read the article in detail, but I couldn't help but wonder why not use innodb more directly [1]? Imagine how much CPU is wasted using a full blown RDBMS SQL engine on top of InnoDB just to do key-value read/writes.

I have some experience with this having worked on an experimental storage engine for mysql that we connected to a transactional in-memory k-v store. The performance penalty for simple k-v workload through mysql was quite substantial, though our storage engine code was probably not sufficiently optimized. It would be interesting to explore this for innodb though

[1] https://dev.mysql.com/doc/refman/5.6/en/innodb-memcached.htm...


IIRC, if you are using JDBC (which is pretty much the only choice on Java), HStore is a pain to use since its query operators conflict with JDBC's parameter syntax ("?").


At the time we started, MySQL was more mature. Today, having a lot of MySQL installations at Wix, we have no need to try HStore as well. Having said that, if we were starting today we may have considered it, among other options.


I hate these stupid "my db is better than whatever db" articles. 1) What db to use depends on the situation AND, MORE IMPORTANTLY, 2) what experience your staff has.

I can say that 10 years ago, I would have chosen M$SQL over MySQL and it would have been the correct choice. At the time I had almost 10 years experience with M$SQL and almost none with MySQL. Now I have more than 10 years of MySQL under my belt, AND the MySQL experience is more current. Right now I could choose between the two based on specific features and performance characteristics. For me to pick PostgreSQL because of a specific feature would be insane because I don't have experience with it. No knock on PostgreSQL ... maybe I'll spend time with it and pick it for some down-the-road project.

I have implemented couchdb as a caching solution. I know how to manage, backup, and restore the database server. I have managed a 5 node cluster. If you ask me to implement NoSQL, it would be my choice for 2 reasons: 1. It can do the job. 2. I have experience making it do the job.

I'm sure there are 10 million people out there who would choose mongo in the same situation. They would not be wrong and they may come up with a superior solution. For me to implement Mongo today would be wrong - I would almost certainly come up with an inferior solution. For them, it would be stupid not to.

I'm not saying "don't learn anything new". I'm saying "don't gamble your business on technology with which you're not familiar".

It's a bit like backups ... the most important thing about a backup is not the technology you use, but whether you are capable of restoring and maintaining the backups.


First off, I can't take you seriously after using a dollar sign in Microsoft. That's just a conversation killer for anyone talking seriously in tech. This isn't an IRC channel for 13 year olds in 2004.

Secondly, the article isn't about what is "better" overall. It's about scaling SQL, and how noSQL isn't always necessary. The "in" thing to do right now is to have noSQL in your stack, blindly, without looking at your project. Or doing expensive migrations to noSQL solutions when you already have an expansive infrastructure built on SQL but need to scale. Wix is just giving insight into their techniques with MySQL and how in the end it made more sense for them than going with something like Mongo.

TL;DR: You didn't read the article.


Scalability is like an abstract painting. It's unique to one's infrastructure. The write-up, or sometimes the postmortem, makes good brain fertilizer, but not much more. Beyond that I wouldn't rush to implement the scalability du jour.

A setup that works for a certain service won't necessarily work for another unless yours is a very close replica. Based on my experience in this area, and I'm a performance seeking nut, each platform, and even each traffic pattern, needs its own thinking hat.

That's what makes it so fun!


Spot on! I bet there are hundreds of different stories like this with unconventional uber-hacks for performance.


That is an awesome way to describe scalability! Can you give a couple of examples?


This may be an unpopular perspective, but here goes. For many years I ran a business doing web development. I had many clients approach me who were using Wix, and who I could not help, because Wix had effectively taken their images hostage. Because of those years of bad experiences (telling clients that they are screwed unless they keep paying Wix), I do not trust Wix, and so I do not trust this post. Should those clients have trusted that Wix would make their data available in the future? No, totally not. But that is the cost of doing shady things. Everything with your name on it now gets taken with a grain of salt.


By "taken hostage their images," do you mean that literal graphic files uploaded to Wix servers were somehow made inaccessible to the user?


Yes, exactly. I don't know how they do it now, but Wix did the "one big Flash blob as a website" and did not make data available to clients to download once they had been uploaded. So images and other data that had been "compiled" into the Flash blob were erased or something. There was no warning about this, and it took many by surprise. This effectively forced people to renew their subscription to Wix who otherwise wanted to use something else. The worst was a friend of a friend whose elderly mother had used Wix to upload old family photos thinking that Wix was a safe place to store them, not knowing any better. I felt so bad for that woman. I have no love for Wix at all.


Weird. I wouldn't ever do anything like that with Neocities. Downloading a site is a button click on the bottom of everyone's dashboard (it spits out a zipball of the files). Flash is anachronistic as all hell anyways.

It's weird to me that they're hosting web sites that need inline MySQL at all, but I suppose they probably do a lot of fancy backend stuff. Much better to async queue any data sending unless it needs to be inline. Much better than that is to static cache.

We're straight up static + nginx for all site serving, via an anycast geo CDN. Logging is passively provided by the nginx logfiles, which are parsed hourly asynchronously. It's a pretty good system, and about as fast as site delivery can get. I guess I don't know what Wix offers for site features, but for simple web sites, why bother with a database for web site display at all?

That said, I agree with the article's premise. I'm partial to PostgreSQL, but it's dumb to say it doesn't work and that some wacky new TrendDB that doesn't handle fsync or atomicity correctly (and therefore is TEH SCALE!) is somehow better for this job.


I wonder if using the memcached plugin for InnoDB[1] would speed things up even more, at the expense of not having flexible queries (and thereby introducing multiple roundtrips) anymore. Presumably, they are using simple "SELECT * FROM table WHERE id = ?" in most places anyway, so that could be an okay tradeoff to make.

[1] https://dev.mysql.com/doc/refman/5.6/en/innodb-memcached.htm...


I wonder about this myself; does anyone have experience using this? MySQL 5.6 brought many long-awaited features, like schema updates without locking tables https://dev.mysql.com/doc/refman/5.6/en/innodb-online-ddl.ht... (bye bye Percona tools).


> Use client-generated unique keys. We use GUIDs.

Minor note, but wouldn't UUIDs be better since they're time based? Sure, it's really unlikely to hit an already used GUID, but a UUID makes it impossible.

In fact is there a use case where it's better to use GUIDs over UUIDs? I couldn't think of one but I could be omitting something from my thinking so I'm curious.

Edit: apparently GUID and UUID are the same thing and GUID is simply Microsoft's original implementation of UUID. All this time I had no idea...


They are the same thing. Both should include time as well as the server address, etc.


note there are like 5 versions of UUID, and only v1 and v2 include time and server address. It's also considered bad practice to use them, as it makes your UUIDs guessable.


Depends entirely on what you're using them for.


Ah you're right. GUID is Microsoft's implementation of UUID. I guess, much like how many refer to tissues nowadays as Kleenex, the terms kinda got mixed around. At least in my experience from seeing how they're used.


Called a "proprietary eponym", for those curious.

https://en.wiktionary.org/wiki/proprietary_eponym


UUIDs of all formats are universally unique, for all practical purposes.

Consider UUID4, the one with 122 random bits. The birthday paradox says that you would need about 2^61 UUIDs before you expect even one duplicate. If this concerns you, you might not recognize how big 2^61 is.

(edited because I was originally talking about 2^64, but there are 6 non-random bits in UUID4)


What does "expect even one" mean? This is a probability equation, so what p does "expect" correspond to?

Let's make this real concrete, with 122 random bits, you can issue a million UUIDv4s every second for the next 100 years and still have a less than one in a million chance that you issued a duplicate.

https://lazycackle.com/Probability_of_repeated_event_online_... n = 5316911983139663491615228241121378304 (2^122), p = 0.000001 => m = 3260955271619137; 3260955271619137 / (1000000 * 86400 * 365) ≈ 103 years
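
The back-of-the-envelope formula behind those numbers is the usual birthday-bound approximation (a sketch, using the same n and p as above):

    p \approx 1 - e^{-m^2/(2n)}
    \quad\Longrightarrow\quad
    m \approx \sqrt{2n\ln\tfrac{1}{1-p}}
    \approx \sqrt{2 \cdot 2^{122} \cdot 10^{-6}}
    \approx 3.3 \times 10^{15}

and 3.3e15 UUIDs at a million per second is roughly 103 years.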


GUID == UUID


> When someone clicks a link to a Wix site... That server has to resolve the requested site from the site address by performing a key/value lookup URL to a site.

So Wix uses MySQL to resolve site routes internally? Is this the best way to do it? Would it be possible to use internal domain names and rely on DNS to resolve everything?


Define "best", but it's a pretty reasonable approach. Some DNS servers (powerdns is an example, from memory) use mysql over more traditional back ends like dbm, which aren't so hot for high volumes of zone changes. I imagine a site like wix could be pretty tough on DNS.

Re: nosql, I'm coming at that with really positive experiences in Cassandra but I can't imagine what kind of DNS system it would be a good fit for. The ability to tune CAP to fit DNS may be useful but in general I think of Cassandra as the solution you think about when your 2 node vertically scaled monster can't keep up.


It sounds reasonable, but with DNS you'd get geographic distribution for free, right? Won't you have to do something like sharding to achieve a similar thing with MySQL?


Hmmm I don't think you'd get geo dist for free. For example, I don't think start of authority can be shared by multiple resolving servers. Or if it can, it's a feature of some specific dns implementation.

You get free caching which is tolerant to partitions, but you'd get that with either solution since it's largely the forwarding dns servers which make that happen, and the difference here is constrained to the authoritative resolving dns servers.


I'm pretty new to this and just learned some more about routing requests in a cloud service at scale. For a lot of services where the location of the resources might change (sometimes frequently), you want to handle the routing internally and not use DNS because of the lag times and complexity of TTL and caching.

For companies other than Wix I don't know what is used to handle it on the back end, but I imagine it's either some specialized piece of hardware that can handle an insane load or some commodity hardware / cloud service & in-house software like here.

At the end of the day, and I'm sure I'm missing some edge cases, I think it's basically a service that provides a mapping of domain.com/user-resource-or-website to the location of their resources with no lag time when changes are made.


IIRC, Github does this for Pages via an nginx module that queries MySQL.

http://githubengineering.com/rearchitecting-github-pages/


When I was at GoDaddy, we used a distributed Cassandra cluster to handle similar work... it worked very nice for a few key lookups (site, resource) ... most endpoints were static resources stored in C*, cached in local redis, and served via a load balanced application cluster.


On a site builder with custom URLs created by millions of users for their websites every day?


For those interested in NoSQL (particularly MongoDB) you may find this an interesting read:

https://www.linkedin.com/pulse/mongodb-32-now-powered-postgr...

But over time I find less and less reason to _not_ use PostgreSQL when contemplating a NoSQL document store.


And PostgreSQL is a better MySQL so... all is settled?


Is it? MySQL improved a lot since 5.1 days, you know.


That's true, and I don't doubt that many developers' opinion of MySQL is tainted by some of the issues in earlier versions. It's still difficult to see what newer versions offer over Postgres though - and Postgres has a lot of bonus features too (like the JSON storage types, which are sublime)


Based on my experience: MySQL is significantly easier to set up, and it's much easier to admin. PG still seems to do the thing where it assumes your system user is your database user unless you do a bunch of things, and the weird blurred lines between the operating system and the database can get really confusing at the start.

On top of that, MySQL is well-documented and lots of people use it for lots of different things, which makes the most common (and most of the uncommon) problems eminently google-able.

Anecdotally, I've avoided Postgres also in part because it had the same problem that Perl had when I got into Linux: the community was pretty toxic and unwelcoming. I asked about replication once, and I got a bunch of answers that fell into a few categories: 'Why on earth do you need replication?', 'You don't need replication, Postgres is fast enough', 'set up a script to rsync your data directories over and over again', 'mysql replication is unreliable so you shouldn't use it so it doesn't matter'.

That sort of attitude drove me (and, I'd wager, a lot of other people) away from Postgres and towards the vastly more welcoming MySQL community, and I'd wager that a lot of people just kind of stuck there because why switch?

Postgres has added a lot of pretty great features lately, and it seems good? But I'm left wondering why I should use Postgres for my RDBMS, my JSON data store, and my KV store, when history has shown that one piece of software that expands to do everything rarely does any of it well.

(Postgres has since gained replication abilities)


> On top of that, MySQL is well-documented

I seriously disagree with that. Documentation is one of MySQL's weaknesses, especially compared to PostgreSQL, which is known to have some of the best documentation in the open source world.


I think the comment you're replying to might be using "documentation" to mean Stack Overflow.

I really love the PostgreSQL manual.


By 'a bunch of things' you mean uncomment like 2 lines in the config, right?


Well, he's right about it, kinda. I don't have problems with Postgres, but I do wish it had some defaults that made a bit more sense.


I think the defaults make a fair bit of sense. Do you want it listening on a network interface by default?


I'm a big believer in PostgreSQL, but it does have one large and glaringly bad area which needs addressing: distribution. MySQL does Master-Master well. PostgreSQL has been a "choose your own adventure" for a long, long time. With the changes they're planning to make to allow better replication, that will change. Right now, MySQL has that aspect beat.


The thing is that Postgres is generally conservative and avoids features that would endanger data, unless they have a good solution for them.


If you knew what newer versions of MySQL offered, you would be aware that it also supports JSON.


Sorry for not having comprehensive knowledge. I see that MySQL added support for a JSON datatype in the past year – I just knew it wasn't in 5.7 last time I looked, and it seems like an unusual feature to add in a 'patch' version.


Has MySQL added support for Common Table Expressions (CTE) yet? CTE was added to the SQL standard in 1999.

I ask because I always miss this feature when querying MySQL.


Trolling along, what about working correlated subqueries that don't degrade into a cartesian join of all rows?


The latest Postgres releases have improved tons too. Seems like we're seeing 10% speedups with every point release. Some of the new parallelization stuff in 9.6 is really sexy.


MySQL is faster than pg; you should ask why Facebook is running the largest MySQL shop.


Maybe because they started with a tiny LAMP system in 2004 and got stuck there to the point they invested considerable resources to write their own PHP interpreter and optimize MySQL.

From https://www.percona.com/blog/2014/03/27/a-conversation-with-...

"we had the MySQL engineering talent we needed to work with the Oracle team to get 5.6 ready for production at our scale."

"We all worked hard to adapt 5.6 to our scale and ensure that it would be production-ready. We found some issues after production deployment, but in many cases we could fix the problem and deployed new MySQL binary within one or two days"

"Performance regression of the CPU intensive replication was a main blocker for some of our applications" followed by a description of how they addressed that.

So it's not vanilla MySQL vs vanilla PostgreSQL. They tailored MySQL to their needs and keep honing it. What they do has little resemblance with what the other 99.9999% of companies do, and I'm probably missing a few 9s. Another excerpt from that post highlights the differences:

"For example, typical MySQL DBA at small companies may not encounter master instance failure during employment, because recent mysqld and H/W are stable enough. At Facebook, master failure is a norm and something the system can accommodate."

My take: if they started with and stuck to PostgreSQL they'd have to work on it as they did on MySQL.


Facebook guys as I understand it did look carefully at PostgreSQL; at least some of the engineers are very familiar with its internals. They found MySQL satisfactory for a variety of reasons including ability to engineer better performance but also replication which was much better in MySQL for a long time. In fact, for an interesting perspective on the "Facebook stuck on MySQL" topic see [1].

Interestingly much Facebook database work has been outside the relational orbit on systems like Cassandra and HBase, as well as RocksDB. The choice for most web properties today is not MySQL vs. PostgreSQL but RDBMS vs. "other" systems.

[1] https://dom.as/2011/07/08/stonebraker-trapped/


Their MySQL version is pretty much the vanilla version, from your post it sounds like it's very different but it's not.

https://github.com/facebook/mysql-5.6


Also, if you read their engineering papers it turns out Facebook uses MySQL these days as a storage backend to a graph store and not as a relational database. They also use the replication logs to power their pub-sub system. In other words, Facebook's use case is so large scale and so throughly hacked (in a good way) that it's not necessarily a credit to the idea of using MySQL on a greenfield project.


Harrison answered this question here: https://www.quora.com/Is-Facebook-considering-ditching-MySQL...

They continue to use MySQL on new projects.


I was expecting a comparison but they only presented one side.


I have not heard about "Wix" before, but maybe they should have done some more research before picking this name. To a German this sounds like "wichsen" which means, well, "wank"[1].

[1] http://dict.leo.org/ende/index_de.html#/search=wichsen


As a neighbor (Dutch) I can totally understand it and I still laugh pretty much every time I hear flickr[1]. Still, if I finally found a cool name for my project after a long search (it often is), I'm not going to cancel it just because it "sounds like" penis in Arabic or something.

http://www.woorden.org/woord/flikker (tl;dr: homosexual)


they picked the similarity to the german word "wichsen" in their campaign in germany... https://www.youtube.com/watch?v=4AKDZmsy5yo it says "everyday million of people are wanking – wanking changed my live – when my girlfriend felt asleep, i'm going to wank – i love wanking – my wife convinced me to wank – i'm wanking after my training – to be honest, we wank together most of the times – wanking is the future" and the hardest/badest part is the last sentence "make it by yourself – be a wanker"


Not only is this completely off-topic, they are a publicly traded company NASDAQ: WIX and they are currently valued at ~$1B.


Then please forgive my ignorance. I was not aware that they are so big. Even more though I'm surprised about their name. Reading it really feels weird to me.


We did hear about it, if a bit late.

Our response - https://vimeo.com/138432267


I've been running a rather large website with MySQL for the past fifteen years. There was a period when I regretted that choice and used something else. Today I'm using MariaDB and the TokuDB storage engine, and I'm so thankful that I never migrated to Postgres.

Like many people I investigated the NoSQL movement for potential applicability, and almost swallowed the hype. As I investigated more, I realised:

1. There are some specific instances where a NoSQL engine makes good sense. They're a valid option and should be considered depending on the application. In my experience though, well formed RDBMS structures are the better option in the vast majority of applications.

2. Most of the hype and growth came from people who (a) were using the abomination known as ORMs which are the canonical example of a round peg in a square hole; and/or (b) didn't know how to build performant RDBMS schemas. For these people, the NoSQL engine was fast because it was the first engine they actually learned how to optimise correctly.


> were using the abomination known as ORMs which are the canonical example of a round peg in a square hole

Indeed, the "Vietnam of computer science."

https://blog.codinghorror.com/object-relational-mapping-is-t...


A little bit off topic, but I would like to hear more about using Solr [1] instead of any "real" NoSQL databases.

I don't have experience with MongoDb and such, but I've always asked myself why someone wouldn't use Solr as a distributed NoSQL database... Am I wrong or, with Solr, you get that key/value scalable storage AND you get advanced search features as an extra?

Why would I want to use MongoDb instead of Solr? What killer feature does Solr not have?

[1] http://lucene.apache.org/solr/


I haven't worked with solr, but I have with elasticsearch; both are based on lucene.

Some issues are:

async indexes, unable to modify/remove indexes, unable to grow/shrink the number of shards, etc. (basically, search for "why not use ES as a primary data store")


We evaluated solr for a project just a week ago. It does not have the authn & authz that mongo has, and that was a feature we needed. Other than that, if all you need is a kv store, solr is great.


I thought this article would be about the true MySQL NoSQL system: MySQL Cluster (or NDB). It scales to 200m transactional reads per second - per second! http://highscalability.com/blog/2015/5/18/how-mysql-is-able-... We got 16m reads/sec on our commodity rack with MySQL Cluster, so it's not a fantasy result.


Two things that are sorely missing in this comparison to NoSql are:

How are they performing horizontal scaling? I'm guessing they aren't. Without addressing the issue of sharding and scaling they can't really compare the solution to NoSql - it is the number 1 feature that NoSql has over an RDBMS.

If they are achieving 1ms response time, then they almost certainly have the entire table in memory cache. What happens when the data grows beyond the size of the memory and it's not financially feasible to get a larger memory instance?


1. They probably don't need sharding, since the dataset is small enough to just replicate it in mirrors.

2. 1ms is achievable with SSDs, but 200K q/minute seems slow, my gut feeling tells me.

This post is more like "ha we don't need NoSQL for this special use case" - Once you need scaling and some sort of atomics, you quickly have to use HBase for row-level atomicity and scaling.

Redis is probably better suited for the posted usecase anyway.


Why HBase? why not just shard your keys?


From 7 years ago, by Bret Taylor (who went on to become CTO at Facebook after acquisition):

How FriendFeed uses MySQL to store schema-less data https://backchannel.org/blog/friendfeed-schemaless-mysql

Edit to add the HN discussion at the time: https://news.ycombinator.com/item?id=496946


Here is my (albeit limited experience) advice:

1. Use PostgreSQL, or MySQL with InnoDB for row-level locking

2. Huge tables should be sharded with the shard key being a prefix of the primary key.

If you need to access the same data via different indexes then denormalize and duplicate the index data in one or more "index" tables.

3. Do not use global locks. Generate random strings for unique ids (attempt the INSERT and regenerate until it succeeds; see the sketch after this list) instead of autoincrement.

4. Avoid JOINs across shards. If you use these, you won't be able to shard your app layer anymore.

5. For reads, feel free to put caches in front of the database, with the keys same as the PK. Invalidate the caches for rows being written to.
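
A minimal sketch of the insert-and-retry pattern from point 3 (hypothetical table; the id would normally be generated client-side, and the application simply retries with a fresh id if the insert fails):

    CREATE TABLE sites (
      id   BINARY(16)  NOT NULL,
      data MEDIUMTEXT  NOT NULL,
      PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- on a duplicate-key error (MySQL error 1062), generate a new random id
    -- and run the INSERT again; no global lock or autoincrement needed
    INSERT INTO sites (id, data)
    VALUES (UNHEX(REPLACE(UUID(), '-', '')), '{"name": "example"}');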

It's actually pretty easy to model. You have the fields for the data. Then you think by which index will it be requested? Shard by that.

Note that this will still lead you to a huge centralized datacenter!! Because your authentication happens at the webserver level and then you just have all the servers trust each other. While it is a nice horizontal architecture, it leads to crazy power imbalances like we have today. Consider instead making it a distributed architecture, where the shards turn into domains, and each user on each domain has to auth with every other domain. But your network can then be distributed without a single point of failure. What's more, local area networks will be able to host your app and be quick without the signal bouncing halfway around the world.


It seems to me that one of the core differences between MySQL/Postgres and distributed stores like Cassandra / Hbase is that with the former your data and your write workload have to fit onto a single host. If either one cannot fit then you have to partition at the application level or use a real distributed data store. Partitioning at the app level is an operational burden and complexity that would be best avoided, but there are always exceptions.


So MySQL is great if you use none of its features, but then it's really hardly different from all the other databases. So it's not the implementation, but the very promises that databases make which can't be held; but if you know that, you are just fine. Great insight, and pretty much the definition of NoSQL...


Everyone should try PostgreSQL with hstore (and JSONB now, too!).

This is a key/value store inside an RDBMS that just works, and it works great!

I converted a crappy sloppy super messy 1000+ column main table in a ~800GB database to use hstore, it was, in real world benchmarks, between 7x and 10,000x (yes, really, ten thousand times) faster.

The CEO of the company, who had a technical say in everything and was very proud of his schema, "wasn't excited", and it never happened in any production instance.

I've left since then, and the company has made very little advancement, especially when it comes to their database.

Really, just use hstore. Try it out. The syntax is goofy, but... I mean, SQL itself is a little bit goofy, right?


> SQL itself is a little bit goofy, right?

The proper pronunciation of SQL is SQuirreL.


100M? Of course you'd scale an RDBMS for that, especially if you want searchability and analytics. It's way easier than a Hadoop -> Elasticsearch pipeline (or pick your flavor).

NoSQL databases are for BIG data. As in, billions of rows big.


I think it depends on the shape of your data... if your data is mostly collected in sets (as a single object base), and mostly key lookups, then a document store may be the best solution... For example, I used to work for a mid-sized classifieds site... most of the data was used as a single-listing query, and pulled in from a single base record. The SQL database was over-normalized and required a couple dozen joins if you wanted to see it flat... the system was crumbling...

Replicating the data to mongodb at the time, with a thin API for search and read queries... omg, it was a staggering difference. Beyond just caching, all the search queries too. Today, I'd be more inclined to use ElasticSearch (there was an issue with geo indexing at the time iirc)... just the same, it really depends on the shape of your data.

I feel that the storage shape should serve the needs of the application. SQL databases encourage normalization to such a degree that it's costly to construct a semi-complex object in memory, especially when your queries will run across many tables for many thousands of users. Joins kill performance at scale... If you can get around that, you're often better off.

Duplicating data to a system that is a better fit for mostly-read/query scenarios is a good idea. There's nothing that says you can't have your data in two places, and it's easy enough to set up services that copy on update.
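
A bare-bones sketch of that copy-on-update idea (the read_store client here is a stand-in for whatever document/search store you pick, not a real library):

    import json

    def save_listing(conn, read_store, listing_id, listing):
        # Write to the relational source of truth first.
        cur = conn.cursor()
        cur.execute(
            "REPLACE INTO listings (id, data) VALUES (%s, %s)",
            (listing_id, json.dumps(listing)),
        )
        conn.commit()

        # Then push a flat, denormalized copy to the read-optimized side.
        doc = {
            "id": listing_id,
            "title": listing.get("title"),
            "price": listing.get("price"),
            "location": listing.get("location"),
        }
        # Hypothetical client; in practice this would be e.g. an
        # Elasticsearch or MongoDB upsert keyed by listing_id.
        read_store.put(listing_id, doc)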


Totally agree with that. At 100M rows, NoSQL doesn't make any sense.


related: friendfeed used a similar approach https://backchannel.org/blog/friendfeed-schemaless-mysql


Very interesting post. I seem to remember that you were featured in a MongoDB 'success story' last year but they seem to have removed it now.

Does that mean you've stopped using Mongo altogether?


Just about as wrong as the day it was posted here in December


Awesome! Now let's do it for 1B rows, and then 10 billion, and then some. It is well known that for small datasets NoSQL is no better, if not worse, than an RDBMS.


System design 101: keep business logic in the layer above the database layer rather than relying on a specific db system to implement it. If you design a system this way, there shouldn't be any difference between using MySQL or NoSQL; their role is just that of a storage engine. So you don't need to follow relational database practices like foreign keys, constraints, and normalization anymore.
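
Concretely, the storage ends up behind a narrow interface that the business layer talks to (a rough sketch, all names made up):

    from typing import Optional, Protocol

    class Storage(Protocol):
        # MySQL, a NoSQL store, or an in-memory dict can all sit behind this.
        def get(self, key: str) -> Optional[dict]: ...
        def put(self, key: str, value: dict) -> None: ...

    def deactivate_site(store: Storage, site_id: str) -> None:
        # The business rule lives up here, not in db constraints or triggers.
        site = store.get(site_id)
        if site is None:
            raise KeyError(site_id)
        site["state"] = "inactive"
        store.put(site_id, site)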


Trying to avoid using foreign key constraints in a relational database is not "system design 101", it's an instant fail.

When I cast my eye over a table with foreign key constraints, I am 100% certain that every single row conforms to those constraints, and always will. (There's a quick sketch of what that buys you after the list below.)

By contrast, when the same table does not have constraints, but instead relies on some business logic layer to enforce them, then I have to consider whether there might be corrupt rows put in there by:

- bad business logic code

- bad import scripts

- some contractor who used to work for us 5 years ago and briefly used his php script to push up some data
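
To make that concrete, a tiny self-contained demo (sqlite3 only because it runs anywhere; the same DDL idea applies to MySQL/InnoDB or Postgres):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs FK enforcement switched on

    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
    conn.execute("""
        CREATE TABLE orders (
            id      INTEGER PRIMARY KEY,
            user_id INTEGER NOT NULL REFERENCES users(id)
        )
    """)

    conn.execute("INSERT INTO users (id) VALUES (1)")
    conn.execute("INSERT INTO orders (id, user_id) VALUES (10, 1)")  # fine

    try:
        # A buggy import script or a contractor's one-off simply cannot add this row.
        conn.execute("INSERT INTO orders (id, user_id) VALUES (11, 999)")
    except sqlite3.IntegrityError as e:
        print("rejected by the database itself:", e)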


Sadly, you remind me of the DBA-is-everything style of system design. In modern system designs, a system needs more than just a database system to store business state, and encapsulating business logic in a higher layer gives you not just flexibility but also scalability. Take some time to think about the following three scenarios:

Scenario 1: what if a system needs to migrate to a different database system? Then the whole business logic needs to be re-implemented in the destination system's DSL.

Scenario 2: what if a system needs more than one storage system to persist business state? For example, I use the db to store image metadata and s3 to store the image itself. I don't believe foreign key constraints will still work there.

Scenario 3: what if the system needs to process business state asynchronously, for example via a message queue?

Also think about how to do unit tests (this is also how we keep the business logic correct) and how to do CI/CD. System design is more than just ERD design.


I think you'll find that it's not just DBAs who like to have the database enforce integrity when it can.

> Scenario 1: what if a system needs to migrate to a different database system? Then the whole business logic needs to be re-implemented in the destination system's DSL.

If it takes you more than a few minutes to rewrite a foreign key constraint to work on a different target database then you're doing something wrong.

> Scenario 2: what if a system needs more than one storage system to persist business state? For example, I use the db to store image metadata and s3 to store the image itself. I don't believe foreign key constraints will still work there.

You're right, it doesn't - you can't use a foreign key constraint there. That doesn't mean you abandon them altogether, just use them where they work - in your database.

> Scenario 3: what if the system needs to process business state asynchronously, for example via a message queue?

Then use a message queue. Still no reason to give up on the declarative integrity checking of FK constraints inside your database.

No idea what you're talking about wrt unit tests - perhaps you're saying you want to allow bad data into your database so that you can run your tests? In which case here's a better idea - use FK constraints to never let it in in the first place, and don't bother testing, since the DB vendor already did it for you.


> An active-active-active setup across three data centers.

Any info how "active-active-active" (I assume 3 aws regions) is accomplished?


Some kind of master-master replication, probably: https://dev.mysql.com/doc/refman/5.7/en/mysql-cluster-replic...


In my company we are using a mysql cluster based on galera (percona xtradb), and it's a master-master solution that is rather easy to deploy and maintain. The only limit was that we had to use a single server for writes (which would seem to make the "cluster" thing kind of useless, but it's not: we run a load balancer on top of the cluster, and the load balancer decides which "master" the writes go to, so it's transparent to our application). Definitely worth a try.


MySQL looks great when used as a K-V store because it avoids the bad planner (when the primary key is the only lookup key, a planner is useless), and denormalizing avoids expensive JOIN ops.
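
The pattern in practice is basically this (pymysql with made-up connection details; every query is a single-row primary-key lookup, so the planner has nothing to decide):

    import json
    import pymysql

    conn = pymysql.connect(host="127.0.0.1", user="app", password="secret", database="kv")
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE IF NOT EXISTS kv_store (
            k VARCHAR(64) NOT NULL PRIMARY KEY,
            v BLOB NOT NULL
        ) ENGINE=InnoDB
    """)

    def put(key, value):
        cur.execute(
            "INSERT INTO kv_store (k, v) VALUES (%s, %s) "
            "ON DUPLICATE KEY UPDATE v = VALUES(v)",
            (key, json.dumps(value)),
        )
        conn.commit()

    def get(key):
        # Primary-key lookup: no joins, nothing for the planner to choose.
        cur.execute("SELECT v FROM kv_store WHERE k = %s", (key,))
        row = cur.fetchone()
        return json.loads(row[0]) if row else None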

But there is the awkward replication model, the lack of native data structures as column types, and the lack of sharding support.


I love mysql, it has saved my ass many times, but this article doesn't mean anything... it just says that you can use subqueries and joins to do "nosql"... we all know that... you could also use a text file. I'd like it if mysql copied what postgres has done with hstore.


Wouldn't Aerospike be a cheaper, lower maintenance, and more robust solution to this problem?


Yes, a single instance would more than handle all of their load. 2 for HA/redundancy and they're all set. Set up some more pairs elsewhere with active/active replication.

This is basically them failing to do enough research into existing solutions that would work far better.


The problem with SQL DBs is that they just weren't designed for distributed computation to begin with. SQL doesn't take the CAP theorem into account - so it lets you write queries which work on a single machine but which cannot scale to multiple machines.

On the other hand, many NoSQL databases like MongoDB and RethinkDB have a query language which was designed to run on both single machines and distributed infrastructure (in a homogeneous way); the same queries which work on a single machine will also work at scale on multiple machines - no need to rewrite your queries as your app grows.

You CAN scale with SQL, but you have to know which queries to avoid (e.g. table joins, nested queries...); with NoSQL, you don't have to avoid any queries; if it's in the docs, it's safe to use.

Finally, a major difference between SQL and NoSQL is typed vs untyped structure. Most SQL databases were designed in a time when statically typed languages were mainstream, so it made sense for SQL databases to enforce static typing on their data.

On the other hand, NoSQL was designed in a time when dynamically typed languages were popular and gaining more popularity (e.g. Ruby, Python, JavaScript); when using these languages, having to add SQL-specific types to data feels like an unnecessary step. With NoSQL you can still enforce a schema in the application code, but your schema logic doesn't have to abide by any type constraints from the DB layer - your schema is the ultimate authority on the typing of your DB. It gives you the flexibility to be lazy with type-checking in the areas which are low-importance (where errors are tolerable) and strict where data type consistency is paramount.
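
For what it's worth, "schema in the application code" usually ends up looking something like this hand-rolled sketch (in practice you'd more likely reach for JSON Schema or a validation library):

    def validate_order(doc):
        # Strict where consistency matters.
        if not isinstance(doc.get("order_id"), str):
            raise ValueError("order_id must be a string")
        if not isinstance(doc.get("amount_cents"), int) or doc["amount_cents"] < 0:
            raise ValueError("amount_cents must be a non-negative integer")
        # Lazy where errors are tolerable: free-form metadata goes in as-is.
        doc.setdefault("metadata", {})
        return doc

    validate_order({"order_id": "o-123", "amount_cents": 1999, "metadata": {"note": 42}})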

Generally, NoSQL DBs impose constraints on query expressiveness in order to free you from architectural constraints. SQL DBs impose few constraints on query expressiveness, but because of this they add architectural constraints to your system.


There's so much wrong with the above.

To pick a quick one: "query language which was designed to run on both single-machines and distributed infrastructure". Mongo has no fewer than THREE query syntaxes: standard, map-reduce[1], and the aggregate pipeline.

'homogeneous', lol.

[1] which even Mongo employees recommend people avoid like the plague https://www.linkedin.com/pulse/mongodb-frankenstein-monster-...


enforce a schema in the application code

HA HAHA HAHA


"Many developers look at NoSQL engines—such as MongoDB, Cassandra, Redis, or Hadoop" Noone uses Hadoop as a database. On the other hand, HBase which uses HDFS as underlying storage is a great NoSQL database that we use in production.


This seems specific to their use case. He shows an example with a subquery. I wonder why they don't break that into two queries. They should be fast enough if cached, and it would prevent the need for both tables to be unlocked during the query.


> Do not perform table alter commands. Table alter commands introduce locks and downtimes. Instead, use live migrations.

Care to elaborate more on this? What do you mean by live migrations?


Mongo comes with geospatial indexing baked right in. Never mind map / reduce. It comes down to the data structures of our times, which are increasingly not relational.


So if I got it straight the message is: "Don't fall for the sirens of hype but instead make sure that your choice of technologies suits your needs"?


The first thing that comes to mind is that they write about read throughput, when the write throughput is a big selling point of many NoSQLs.


Kinda begging the question, aren't we? I turn to nosql for things that aren't key-value, generally.


What do you mean? Isn't NoSQL inherently key-value?


All structured data can be represented as key-value; that includes SQL. They just differ in what constraints are used for the keys and values.

As for your question, NoSQL datastores can be grouped into multiple categories:

- column stores (like hadoop, cassandra, informix), which optimize for sharded and distributed storage of related data elements

- document stores (like elasticsearch), which focus on metadata organization for large opaque (binary) objects

- key-value stores (like redis, openldap), which are basically unstructured, associative arrays (hash maps). They allow the most storage freedom, and are hardest to optimize.

- graph databases (like neo4j, trinity), where more information is carried in annotated inter-object links than in the objects themselves.


Are graph databases key/value? Are document databases key/value?

Try this:

https://en.wikipedia.org/wiki/NoSQL#Types_and_examples_of_No...

https://www.youtube.com/watch?v=qI_g07C_Q5I


No. Triplestores and graph databases aren't key-value stores. There's more to NoSQL than key-value stores, although most examples seem to be.


Does anyone know what they mean by `Instead, use applicative transactions.`?


Stupid question here: what are serial keys, and how do they impose locks?


Ah, did he mean "SERIAL" as in "BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE"... and the reason this locks the table is that the database, on insert, needs to figure out a valid key?
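
If so, I guess the usual way around it is to generate the key on the client instead of asking the database to hand one out, something like:

    import uuid

    def new_site_row(payload):
        # Key is generated in the application, so the insert never waits
        # on the database's auto_increment counter.
        return {
            "site_id": str(uuid.uuid4()),  # e.g. '9f1c2c64-...'
            "doc": payload,
        }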


Anybody else think Postgres would have been a better comparison here?


I'm surprised there's no mention of HandlerSocket.


Does this mean PostgreSQL is an even better NoSQL? ;-)


Is 100 million 'scaling up' these days?


Upon reading this I have three questions for them: 1) Do you do backups? 2) Do you use a source control versioning system? And last but not least: 3) Why do you kill so many kittens?


Glad to see the nosql hype dying down to reasonable levels. Next up: imperative languages back in vogue.


I thought the nosql movement was about distributed systems, CAP and all that. What does this "active-active-active" even mean? No consistency and no availability guarantees, I presume?


I may be reading this wrong, but I think they are proposing a not-C, not-A, not-P solution... that's "ok" fast? They explain how to make a single mysql instance run as a key-value store, but I don't understand how it becomes master-master or spans DCs. I wonder whether they run Jepsen or do any partition tolerance tests, given their mentioning it.


AFAIK the only way to do 3 masters in vanilla mysql is ring replication. That means C is B's slave, B is A's, A is C's. If that's what they do, then yeah, it's a noCAP deployment. No consistency if you insert the same UUID at the same time into 2 masters. No availability unless you implement it yourself by retrying against another master. No partition tolerance, because if you break one replication link, half the writes are not replicated between the other servers, and you can't really both reconfigure the ring and replay the transactions.

(Yes, they say active-active-active, not master-master-master, but then they say across DCs... It could be just M-S-S with a config switch on failover, but to me the post suggests it's not that.)


I'm a purist and also need the absolute fastest lookups without SQL overhead, so I go straight for MDB (Sleepycat BerkeleyDB) - faster than LevelDB or any of the others.


Why can't people just use whatever they want and stop trying to convince everyone that their way is best?



