The first one can't actually be true. It might make the rest of the team cheerful and happy when, two days before the release, you call them together and reveal it, but it doesn't actually help a bit.
From this first point it seems nobody really gave memory usage much thought ('write the solution first, optimize second'), and thus they were way over the limit. Let's say the goal was 120MB and they were at 160MB. Then they started compressing and optimizing everything, and after a while they got very close: 121.5MB. So the experienced programmer removes the 2MB allocation and saves the day.
If the experienced programmer hadn't done this, I don't think (seeing how little they cared about memory usage before) the memory usage would have been any higher. They might have been at 158.5MB before optimization as well, and gotten under the limit with the same optimizations they already had to do anyway.
So as far as I can see, the only value of doing this is psychological. Might still be worth something, though!
Sure it can; it happens all the time, in fact. Many console development kits have more memory available for development purposes than retail units do. This is hugely beneficial in development because it means you can actually run a build that has asserts enabled, or use special memory allocators that pad allocations for debugging, etc. I have seen, and been involved with, the mad scrambles to bring the memory footprint down as a project edges toward completion.
It's not as fatal to blow your memory budgets on PC as it is on a console, but if you're trying to hit a certain memory footprint so that the game is playable on the min-spec machines defined by your publisher, then it could very well be an issue.
Not only is it true, I have never seen a project that didn't do this in one way or another, and I've been making games for 25 years. You need to have some spare room for last-minute, unforeseen issues.
The doing it in secret and the cheering in this story are quite funny though (and yeah I have seen those as well).