Considering it was about 100 developers, it was horrible.
The two major problems were:
1. The volume of data itself was not that big (I had a backup on my laptop for reproductions), but the load was too heavy for even the largest instances AWS offered. Downtimes were very frequent. This was mostly due to decisions made 10 years ago.
2. Teams were constantly busy putting out fires but still got only 1-2% salary increases, because no new features were shipping.
EDIT: Since people seem to like these war stories: the major cause of the performance issues was that a single request from an internal user would sometimes trigger hundreds of queries to the database. Or worse: some GET requests would also perform gigantic writes to the Double-Entry Accounting system. It was very risky and very slow.
This was mostly due to over-reliance on abstractions that were far too deep. Nobody knew which joins to make in the DB, or was too afraid to write them, so instead they would call 5 or 6 classes and join the results manually in application code, causing O(N^2) issues.
To give a sense of how stupid it was: one specific optimization I worked on cut the rendering time of a certain table from 25 seconds to 2 milliseconds. It was nothing magic.
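A minimal sketch of the anti-pattern described above: pulling whole tables through layers of classes and joining them by hand in application code, versus letting the database do the join. Table and column names here are illustrative, not from the actual system; SQLite stands in for the real RDBMS.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 9.99), (11, 2, 5.00), (12, 1, 3.50);
""")

# Anti-pattern: fetch both tables separately, then match rows in a nested
# loop. That is O(N*M) comparisons -- and in the system described above,
# each fetch went through several abstraction layers and its own queries.
orders = conn.execute("SELECT id, customer_id, total FROM orders").fetchall()
customers = conn.execute("SELECT id, name FROM customers").fetchall()
slow = [(o[0], c[1], o[2]) for o in orders for c in customers if o[1] == c[0]]

# What the database could have done all along: a single join, one round trip.
fast = conn.execute("""
    SELECT o.id, c.name, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall()

assert sorted(slow) == sorted(fast)  # same rows, very different cost profile
```

On toy data both paths return the same three rows; the difference only becomes a 25-seconds-vs-milliseconds story once the tables are large and each manual fetch is its own network round trip.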
That does sound like an engineering problem more than anything.
On a side note, migrating to NoSQL might not have much on-paper benefit, but it does force developers to design their tables and queries in a way that prevents this kind of query hell. That might be worth it on its own.
How does NoSQL (and which flavor are you referring to?) enforce that? An RDBMS enforces it in that if you don’t do it correctly, you get referential integrity violations and performance issues. You’d think that would be enough to motivate devs to learn it, but no, let’s use more JSON columns!
It's the human aspect of engineering: in NoSQL you can't join 15 different tables just by running a 200-line SQL query, and that manual burden forces a rethinking of what an acceptable design is.
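A hedged sketch of what that rethinking looks like in practice: in a document store you typically model the read path up front by embedding related data in one document, instead of joining at query time. The field names below are purely illustrative.

```python
# One order as a single self-contained document: the customer data is
# embedded (denormalized) rather than referenced via a foreign key.
order_doc = {
    "_id": 10,
    "customer": {"id": 1, "name": "Ada"},  # embedded copy, no join needed
    "lines": [
        {"sku": "A-1", "qty": 2, "price": 9.99},
    ],
    "total": 19.98,
}

# The whole order renders from one document fetch. The trade-off: you now
# own the burden of keeping the embedded customer copy consistent when
# the customer record changes elsewhere.
assert order_doc["customer"]["name"] == "Ada"
```

The design decision (what to embed, what to reference) has to be made explicitly and early, which is exactly the forcing function the comment above is pointing at.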
Relational DBs are great, but just like Java design patterns, they get abused because they can be. People are happy doing stuff like that because it's low resistance and low effort, with the consequences building up over the long term.
In my example the abuse was on the OOP part, not in the relational database part.
Database joins were fine; they just weren’t being made in the database itself, due to absurd amounts of abstraction.
I don’t disagree that rethinking the problem with NoSQL would solve it (or maybe even would have prevented it), but on the other hand I bet 5 layers of OOP could also mess up a perfect NoSQL design.
I'm glad I left.