Well, if I make the same sacrifices I make to use Mongo I don't really have a reason to use PostgreSQL.
No, but I really like the recent enhancements in PostgreSQL. Failover is nearly as easy as in MongoDB; however, all of this doesn't play so nicely yet if you are using things like extensions (PostGIS) or your own functions, and it still isn't really an out-of-the-box experience.
I agree: if you use it with the same limits and the performance gain is worth it, you might as well use Postgres. However, a lot of this actually only changed in the recent releases. It's all still fairly new.
Also, there are still a number of things that are basically missing, like out-of-the-box upserts (we are using a function for this, but it's more of a hack), and if you are still somewhat in development, a lot of little changes get really hard in PostgreSQL. Converting your data structure, even with things like CTEs and the surrounding functionality, can become really challenging, especially when you think there must be an easier way.
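For context, the usual workaround is a retry-loop function like the one in the PostgreSQL documentation; a minimal sketch (table and function names here are just placeholders, not what we actually use):

```sql
-- Hypothetical key/value table for illustration.
CREATE TABLE kv (key text PRIMARY KEY, value text);

CREATE OR REPLACE FUNCTION upsert_kv(k text, v text) RETURNS void AS
$$
BEGIN
    LOOP
        -- Try to update an existing row first.
        UPDATE kv SET value = v WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        -- No row yet: try to insert. If another session inserted the
        -- same key concurrently, catch the unique violation and loop
        -- back to retry the update.
        BEGIN
            INSERT INTO kv (key, value) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- fall through and retry
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

The downside is exactly what's complained about above: you end up writing (or generating) one of these per table. PostgreSQL 9.5 later added a native `INSERT ... ON CONFLICT` for this.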
While it is easier to modify structures in MongoDB, aggregating the data there is sometimes actually harder. Using things like map-reduce (or even the lighter-weight aggregation framework) frequently feels like overkill.
I think it really depends on the kind of data you are dealing with, though. That's why we are using a hybrid system right now. Both systems are actually evolving really quickly, and if you have the joy of using their most recent versions, every new release is exciting.
About the upsert function, have you found a way to do this generically, or are you generating (at least) one function per table from a template? I've found the hackiness of this to be easily the worst part about using Postgres.
You didn't answer my question at all. You claimed PostgreSQL was an enormous challenge to scale horizontally. I asked how. There are still lots of reasons to use PostgreSQL; I assumed you knew this, since you expressed that you wanted it to be easier to scale horizontally.
If you want to scale out using a replica strategy, you really want the most recent version of PostgreSQL. It made things like log shipping pretty much automatic. There are still rough edges, but the latest version really makes a difference here; see the PostgreSQL release notes.
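To give a rough idea of what the built-in streaming replication involves, here is a minimal sketch of the relevant settings on a 9.x primary/standby pair (hostnames, user names, and addresses are placeholders):

```ini
# primary: postgresql.conf
wal_level = hot_standby     # ship enough WAL for a read-only standby
max_wal_senders = 3         # allow replication connections

# primary: pg_hba.conf -- allow the standby's replication user to connect
# host  replication  replicator  192.0.2.10/32  md5

# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
```

This is only the happy path; the rough edges mentioned above (taking the base backup, failover/promotion, re-seeding an old primary) are where the manual work still lives.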
You will, however, find plenty of information on the wiki. Sometimes it is not completely up to date, because there is so much going on on that front.
You can use third-party solutions too, but they usually come with a number of caveats and general problems. It depends on what you do with your database, though. If you don't write your own functions, use extensions, or have fancy transactions, they will work just fine. If you do, sticking to built-in Postgres functionality is a safer strategy. And if you are dealing with extensions, etc., you should really be ready to get dedicated support for these things.
However, it really depends on what you do. For everything standard it won't cause too much trouble once the initial setup is done.
The problem is he doesn't know he is making the sacrifices.
MongoDB's marketing is sloppy: it promises "webscale" while hand-waving partition tolerance away.
After the whole "let's disable durable writes to make our benchmarks faster" episode, I just can't see trusting them with data I actually want to read back from the database. It might be good for probabilistic storage or stats reporting. It sort of became an issue of trust more than anything.
I really wish PostgreSQL wasn't such an enormous challenge to scale horizontally.