Hardware, like cars and laptops, can continue to perform after it has been written off, or even after the warranty expires.
The grade of hardware used is critical in servers.
Hyperscaling might mean commodity-based servers. Hosting a large app does not mean using commodity-component servers.
When self hosting, hardware does not need to be replaced every 3-5 years because it does not fail every 3-5 years. Lifespan depends on load and a number of other factors.
Why?
We wouldn’t buy the cheap, disposable components a massive cloud or social media network might use to scale faster; they can afford to because they have a massive budget.
Besides, do providers really replace all their servers every 3-5 years? Hosting companies don’t seem to.
The cloud is many multiples more expensive than self hosting, especially at scale. Hosting and cloud tooling have brought down labour costs tremendously.
As for the hardware: with the extremely clean environments servers run in, plus much cleaner electricity, it runs much longer.
Purchase actual enterprise-grade servers (HP ProLiant, etc.) that a company would buy for itself for maximum reliability (compared to the commodity builds of the clouds); they have so much reliability built into them that they sometimes never die.
You can still buy used ProLiant servers many, many generations old and they hum along just fine. It seems bizarre, but it isn't.
Support is a few things: warranty on parts and on software. Extended support options (which amount to a hardware warranty) are always available for a fee, and achievable on your own.
If your software runs on a hypervisor, it can be mirrored.
If a server has an issue, its load moves to another machine.
The server has hot-swap components; it takes only a few moments to swap one out if needed.
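Purely as an illustration of that failover idea (not any particular hypervisor's API, and all host names and sizes are made up), the logic amounts to restarting a failed host's VMs on whichever surviving host has the most free capacity:

```python
# Hypothetical sketch of hypervisor-style failover, with made-up hosts,
# VM names, and memory sizes. When a host fails, each of its VMs is
# placed on the surviving host with the most free memory.

hosts = {
    "host-a": {"capacity_gb": 256, "vms": {"web": 64, "db": 96}},
    "host-b": {"capacity_gb": 256, "vms": {"cache": 32}},
}

def used_gb(host):
    return sum(host["vms"].values())

def fail_over(failed_name):
    failed = hosts.pop(failed_name)          # the broken machine drops out
    for vm, ram in failed["vms"].items():
        target = max(hosts.values(), key=lambda h: h["capacity_gb"] - used_gb(h))
        if target["capacity_gb"] - used_gb(target) >= ram:
            target["vms"][vm] = ram          # VM restarts elsewhere
        else:
            print(f"no spare capacity for {vm}")

fail_over("host-a")
print(hosts)   # host-b now carries web and db alongside cache
```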
If you are self hosting, you can buy a used server or two or three to serve as a backup, a mirror, and a source of spare parts. It's like buying a few NUCs.
Corporate hosting can be done not just by buying but by leasing too (meaning hardware swaps can happen). Add to this moving older equipment to less demanding tasks (if machines ever do stay at full load).
> Write-off is an accounting term, not operational.
That's the point. I've just decommissioned 10-year-old servers for a client. They were still working fine, but the system had finally been replaced.
If you're calculating break-even based on the rate at which you're writing off the accounting value of the servers, you'll end up with a far longer time to break-even than if you amortise the hardware cost over the projected actual lifetime of the hardware.
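A toy example of that difference, with purely hypothetical prices, just to show how the chosen amortisation period changes the monthly cost you end up comparing against a cloud bill:

```python
# Hypothetical figures only: the same server looks very different per month
# depending on whether you spread its cost over the accounting write-off
# period or over its realistic service life.

server_cost = 12_000        # one-off hardware purchase
running_monthly = 250       # colo space, power, bandwidth, etc.
cloud_monthly = 900         # the equivalent cloud bill

def effective_monthly(amortisation_years):
    return server_cost / (amortisation_years * 12) + running_monthly

for years in (3, 10):       # accounting write-off vs. projected real lifetime
    cost = effective_monthly(years)
    print(f"amortised over {years} years: {cost:.0f}/month vs. cloud at {cloud_monthly}/month")
```

Same server, same cloud bill; only the assumed lifetime changes, and with it how compelling self-hosting looks.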
Depending on the jurisdiction (this seems to be common), equipment can be written off on a faster depreciation schedule than it is actually kept or used for.
In that way, writing off is often one part maximizing the depreciation schedule (to "write it off" as a business asset as quickly as possible), and another part how long the equipment actually lasts.
Insert stereotype of bean-counting and propellers-spinning.
This means it's perfectly possible to use equipment after it's been written off, and be in a position to re-purchase it when it fails.
Spreadsheets can be a disease this way: written for the single scenario being evaluated, and not for enough scenarios or forecasts.
We should assume multi-billion-dollar clouds do not use a single spreadsheet to understand how they make 5-10x (or higher) off the same server resources by selling them as individual API calls.
The markup on cloud services can be astronomically high. Having been around data centre hosting of your own bare metal servers, then virtualized servers, then going to the cloud 1000%, I'm now realizing it has become much easier to self host, both personally and professionally (with experience).
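To make the "5-10x" concrete with entirely made-up numbers: carve one physical host into small instances, sell them by the month, and the multiple falls out directly:

```python
# Back-of-the-envelope, all numbers hypothetical: one physical host sold
# as small instances vs. what the host costs to own and run each month.

host_monthly_cost = 600     # amortised hardware + power + rack space
vcpus_per_host = 96
vcpus_per_instance = 2
price_per_instance = 70     # hypothetical monthly list price

instances = vcpus_per_host // vcpus_per_instance   # 48, with no oversubscription
revenue = instances * price_per_instance
print(f"{revenue}/month revenue vs. {host_monthly_cost}/month cost "
      f"= {revenue / host_monthly_cost:.1f}x")     # ~5.6x before oversubscription
```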
The assumptions about why one might use a cloud originate with the early uptake of the cloud and remain anchored there, regardless of the changes and evolutions since.
Sure, but again, the accounting write-off period is entirely orthogonal to how you choose to calculate your break-even point, so the accounting write-off period is really a distraction. But the "default" of 3 years is often used without much thought when customers of the cloud providers evaluate pricing, and so for the cloud providers it makes sense to make themselves look plausibly competitive when customers look at the numbers that way.
A lot of the other markup is obscured by splitting things into multiple categories, such as cost per request, separate pricing for bandwidth, etc. A lot of clients I talk to don't understand the pricing of the services they run, and the developers usually both don't know and don't care.
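As a sketch of how that fragmentation hides the total (every figure below is invented), the headline instance price ends up being only one line among several:

```python
# Invented line items, only to show how a bill fragments across categories.
line_items = {
    "compute (instances)": 400.00,
    "requests / invocations": 120.00,
    "bandwidth egress": 310.00,
    "block + object storage": 150.00,
    "managed database": 260.00,
}

total = sum(line_items.values())
for name, cost in line_items.items():
    print(f"{name:26s} {cost:8.2f}  ({cost / total:5.1%} of the bill)")
print(f"{'total':26s} {total:8.2f}")
```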