
I think that the only reasons to choose MySQL (or Maria) over Postgres for a new project are operational. Postgres is probably the better database in almost all respects, but major version upgrades are much much more of a pain on Postgres than on almost any other system I have ever used. That being said, I would choose Postgres pretty much every time for a new project. The only reason I would use Maria or MySQL would be if I thought I later would want to have something like Vitess, for which I think there isn't really an equivalent for Postgres.


> but major version upgrades are much much more of a pain on Postgres than on almost any other system I have ever used.

This is a thread comparing MySQL and Postgres, and your claim is that Postgres major version upgrades are harder than on anything else you have ever used??

Context is important here: have you honestly ever upgraded a MySQL node? It’s a lesson in pain, and “major” changes ship under minor version numbers, like the query planner overhaul in 5.6 -> 5.7 that completely trashed performance.

Postgres has two forms of updates:

1) in place binary upgrade.

Fast, clean, simple, requires that you have the binaries for the old and the new database.

2) dump/restore.

Serialise the database into text files, load a new database and deserialise those files into it.

Slow, but works flawlessly & consistently with relatively low danger.
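A sketch of both paths, assuming a hypothetical move from 11 to 12 with Debian-style paths (the directories and hostnames are illustrative, not from this thread):

```shell
# Option 1: in-place binary upgrade with pg_upgrade.
# Requires the old and the new binaries installed side by side.
pg_upgrade \
  --old-datadir=/var/lib/postgresql/11/main \
  --new-datadir=/var/lib/postgresql/12/main \
  --old-bindir=/usr/lib/postgresql/11/bin \
  --new-bindir=/usr/lib/postgresql/12/bin \
  --link   # hard-link data files instead of copying them; much faster

# Option 2: dump and restore.
# Run the *new* version's pg_dumpall against the old server,
# then load the resulting SQL into the freshly initialised new cluster.
pg_dumpall --host=old-server > dump.sql
psql --host=new-server --file=dump.sql postgres
```

Without `--link`, pg_upgrade copies the data files, which is safer (the old cluster stays usable) but slower on large clusters.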

MySQL can only do option 2.

You can sort of fake an “upgrade” by abusing the fact that MySQL’s replication offers no guarantees: make a new server a replica, then roll over. But it is impossible to know what data was lost in that transition, and MySQL will happily continue without ever telling you.

I have experienced this behaviour in large e-commerce retailers. MySQL was very popular for a very long time and I am intimately aware of operational best practices and how they are merely patching over an insane system.


MySQL doesn't use SemVer. MySQL 5.6 vs 5.7 are different "release series", and switching between them is considered a "major" version change.

MySQL absolutely fully supports in-place binary upgrades, saying otherwise is pure FUD. And the upgrade process in MySQL doesn't even iterate over your table data in any way, so claiming it will cause "data loss" is also pure FUD.

At Facebook we automated rolling in-place updates of our entire fleet, with new point builds of fb-mysql going out several times a month, to the largest MySQL deployment in the world. Worked flawlessly and this was a full decade ago.

MySQL is widely considered easier to upgrade (relative to Postgres) because MySQL's built-in replication has always been logical replication. Replicating from an older-version primary to a newer-version replica is fully supported. When upgrading a replica set, the usual dance is "upgrade the replicas in-place one at a time, promote one of the replicas to be the new primary while temporarily booting out the old primary; upgrade the old primary and then rejoin the replica set".
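That dance, sketched for a hypothetical one-primary/two-replica set (hostnames and package names are illustrative; real deployments usually drive this with orchestration tooling, and the replication statements below assume MySQL 8.0.23+ syntax):

```shell
# 1. Upgrade each replica in place, one at a time.
for replica in db2 db3; do
  ssh "$replica" 'systemctl stop mysql \
    && apt-get install -y mysql-server \
    && systemctl start mysql'   # 8.0.16+ runs its upgrade steps at startup
done

# 2. Promote an upgraded replica (db2) to be the new primary,
#    booting out the old primary (db1) and pointing clients at db2.
mysql -h db2 -e "STOP REPLICA; RESET REPLICA ALL;"

# 3. Re-point the remaining replica at the new primary.
mysql -h db3 -e "CHANGE REPLICATION SOURCE TO SOURCE_HOST='db2'; START REPLICA;"

# 4. Upgrade db1 in place and rejoin it as a replica of db2.
```

The key property making this work is the one named above: an older-version primary replicating to newer-version replicas is a supported configuration, so the set stays consistent throughout.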


Facebook has, at minimum, three teams maintaining MySQL, including a team that genuinely modifies it into submission; so much so that they needed 1,700 patches to port their modified version to 8.0.

It is not relevant to the discussion how Facebook has managed to munge it into working reasonably well by pouring thousands of hours of engineer time into the effort; and MySQL's in-place upgrades absolutely do not work the way you describe consistently.

I know this because I have been in the code, and only after having experienced it. Maybe some of your lovely colleagues have helped your particular version to be marginally more sane.

It genuinely must be nice having a dozen people who can work around these issues though, I certainly wouldn’t consider it an operational win, most companies have no DB automation engineers, or DB performance engineers or MySQL infrastructure engineers.

> Replicating from an older-version primary to a newer-version replica is fully supported.

Here also be dragons, as alluded to. I know it works quite often; I have used it.

FWIW: I ran global AAA online-only game profile systems on a handful of Postgres machines at about 120k transactions/s in 2016. I would have needed 5x as many instances to do it in MySQL, and this was only a tiny part of our hosted infra, which included a global edge deployment of game servers, auth servers, matchmaking, voice bridges and so on.

And we only had two people responsible for the entire operation.


Please educate me on how my statement about MySQL upgrades is incorrect, I'd love to hear this. I've been using MySQL for 20 years, and while 2 of those years were at Facebook, 18 were not. I've performed MySQL upgrades in quite a wide range of environments, and what you're saying here about lack of in-place upgrades or eating data is simply not aligned with reality.

I haven't made any comments regarding performance comparisons, and have also run extremely large DB footprints with tiny teams, but I don't see how any of that is relevant to the specific topic of new-version upgrade procedure!


Because it depends so much on your storage engine and schema, I have never seen it recommended: there are circumstances where you end up with misrepresented data unless you are very careful or you don’t actually use the expressiveness of the DB.

I mean, I’ve also seen my share of “ERROR 1071 (42000) at line xxx: Specified key was too long; max key length is xxx bytes” at random, which basically means the machine needs manual recovery.

God help you if you don’t have innodb_file_per_table enabled to begin with too.

I know you want me to cite exact cases. Those will take me time to find, because I stopped caring about MySQL 7 years ago, but I will dig for you.


FWIW, while I use Postgres for my own development, I've had to administer a number of MySQL servers for other devs. Upgrades have always been: update the MySQL package, restart MySQL, run `mysql_upgrade`, and restart the server again. I'm pretty sure the mysql_upgrade step has even been missed a number of times and it's worked fine.
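That path, spelled out for a Debian/Ubuntu-style install (package and service names are illustrative):

```shell
apt-get install -y mysql-server   # pull in the new version's package
systemctl restart mysql
mysql_upgrade -u root -p          # check and fix system/user tables
systemctl restart mysql

# Note: since MySQL 8.0.16, mysqld performs these checks itself at
# startup and mysql_upgrade is deprecated, so the middle steps can
# often be skipped on modern versions -- consistent with it having
# "worked fine" when missed.
```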

I won't say it's impossible you ran into issues doing this, but it is the documented and supported upgrade path.

I love Postgres, but as someone who's maintained both for years, upgrades (at small scale) are the one area where I'd say MySQL has Postgres beat.


As long as you only move across minor versions, you will have the same experience with Postgres.

11.0->11.2 will work totally fine, with no command needed.


Right, but now go from 11->12, which is the equivalent of the upgrade path I was describing for MySQL. I either need to install both versions and use pg_upgrade to convert the binary files, then remove 11 (and extensions may break this flow) or do pg_dump/restore.

Minor versions on both Postgres and MySQL are painless, just install and restart the server. Major upgrades on MySQL are significantly less painful.
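For comparison, the 11 -> 12 flow described above, on a Debian/Ubuntu layout (paths and cluster names are illustrative; Debian also ships a `pg_upgradecluster` wrapper that automates most of this):

```shell
apt-get install -y postgresql-12   # the 11 packages stay installed alongside
systemctl stop postgresql@11-main

# Convert the binary files; needs both sets of binaries present.
# Any extensions must also be installed for 12 first, or this step fails.
pg_upgrade \
  -d /var/lib/postgresql/11/main -D /var/lib/postgresql/12/main \
  -b /usr/lib/postgresql/11/bin  -B /usr/lib/postgresql/12/bin

systemctl start postgresql@12-main
apt-get remove -y postgresql-11    # only once you're satisfied with 12
```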


> I’ve also seen my share of “ERROR 1071 (42000) at line xxx: Specified key was too long; max key length is xxx bytes” at random, which basically means the machine needs manual recovery.

What? This error has nothing to do with upgrades, nothing to do with manual recovery, and hasn't been a common problem for many many years.

In old versions of MySQL, it just meant you needed to configure a few things to increase the InnoDB index limit to 3072 bytes, instead of the older limit of 767 bytes:

    innodb_file_per_table=ON
    innodb_large_prefix=ON
    innodb_file_format=Barracuda

and then ensure the table's row_format is DYNAMIC or COMPRESSED.
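With those settings in place, converting an existing table is a one-liner (the database and table names here are hypothetical):

```shell
# Rebuild the table with a row format that supports the 3072-byte
# index prefix limit instead of the old 767-byte one.
mysql -e "ALTER TABLE mydb.users ROW_FORMAT=DYNAMIC;"
```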

But again, all of this happens by default in all modern versions of MySQL and MariaDB.

Should it have been the defaults much earlier? Absolutely yes, MySQL used to have bad defaults. It doesn't anymore.


The error I gave is a similar one to the one I used to get with “major” upgrades that happened when Ubuntu decided it was time to upgrade.

It happens, and I seriously never claimed it was an ultra-common problem, merely that upgrades in Postgres are more intentional and not painful, except for a little extra work between major versions. The standard upgrade path within major versions (9.x, 10.x, 11.x or 12.x) works just the same as MySQL's, except I have much more experience of MySQL completely fumbling its “automatic unattended” upgrade, or even the mysql_upgrade command.

Mostly because, in the real world outside of engineering cultures, databases are massively abused: MyISAM tables that are constantly updated, InnoDB ibdata1 files in the terabytes, poor configs, replicas that have skipped a few queries, column changes inside a transaction that failed but actually modified data. It happens. Usually I am called in to clean up the mess.

The major difference here is that Postgres doesn’t leave a mess, so I never have the kind of issues I am describing in this thread with it. And you don’t, I am guessing, because you’re there when they’re installed, someone with knowledge was actively maintaining them, or you have a lot of people to help with the shortcomings.

I get it though: you’ve got your sunk-cost knowledge of MySQL and you’ve been on large support teams for it. Maybe you’re afraid I’m suggesting that this knowledge goes out the window. And it has gotten better, but I wouldn’t give my kids watered-down lead-infused soft drinks just because I had suffered through lead poisoning. I remember coming to blows with you in other threads over the years because you think MySQL can be saved or is totally fine, but honestly, just, no.


I'm primarily a software engineer, not a member of "large support teams". I've also worked for many years as an independent consultant, brought in when things go wrong, certainly not when they were first "installed". I'm not "afraid" of anything concerning my knowledge going "out the window". If MySQL suddenly disappeared worldwide, I could happily pivot to some other area of software engineering, or I could simply retire. Please stop making assumptions about other people you know nothing about.

I'm responding to you because you're repeatedly posting factually incorrect items, for years. For example you and I have directly discussed the "MySQL doesn't use SemVer" thing before on HN, and yet here you are again in this thread, claiming 5.6 to 5.7 should be a "minor" upgrade.

Anyway, to the topic at hand, as others have also mentioned in this thread: historically the difficulty with Postgres upgrades has been the lack of cross-version replication, due to Postgres WAL replication being a low-level physical replication system. This made it difficult to perform an upgrade while keeping your site fully online. Perhaps the newer logical replication support makes this easier these days. I hope to learn more about it someday. If you can share your process for upgrading a Postgres cluster while keeping it online, that would be helpful and informative.


1. The log-replication method of upgrading can be performed using the built-in logical replication facilities as well as using external logical replication systems such as pglogical, Slony, Londiste, and Bucardo. Most of which have existed essentially forever.

2. Failovers of any database are not instant, but they are indeed quick! So let’s not claim that you can do an upgrade with zero downtime.

3. In-place upgrades are extremely fast and you can test the speed using a physical replica before hand, usually it’s a couple of seconds though the docs say minutes.

4. MySQL's major version being in the minor position is exactly the kind of “you should be sure you know what you’re doing, but we won’t make it obvious” territory that I really despise.
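For point 1, the built-in flavour (available since Postgres 10) looks roughly like this; the database, publication, and host names are hypothetical:

```shell
# On the old-version primary: publish all tables.
psql -h old-primary -d mydb \
  -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"

# On the new-version server: logical replication does not carry DDL,
# so copy the schema first, then subscribe to start the initial sync.
pg_dumpall -h old-primary --schema-only | psql -h new-primary
psql -h new-primary -d mydb \
  -c "CREATE SUBSCRIPTION upgrade_sub \
      CONNECTION 'host=old-primary dbname=mydb' \
      PUBLICATION upgrade_pub;"

# Once the subscription has caught up, cut clients over to new-primary.
```

Because this is logical (row-level) rather than physical (WAL-block) replication, the two sides can run different major versions, which is what makes the near-online upgrade possible.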


While you two have agreed on approximately nothing, this has been an informative discussion and I do thank you both.


I echo the sentiment and think yours is likely the most pertinent takeaway having made it this far absent reaching any consensus whatsoever haha.

It was nevertheless a pretty epic journey of dialectic discourse plunging _deep_ into the esoteric and nuanced realm of expert-level technical minutiae. A mostly intellectual journey, albeit distinctly punctuated by an undertone of emotional angst that steadily progressed in its growing intensity in a manner proportional to the magnitude of your collective disagreement… epic indeed.



