
Cloud has always been more expensive. I remember being quoted 250k/month for bandwidth when I was paying 15k with Rackspace 10+ years ago. You’re paying for convenience and speed. The math stops working when you grow to a certain point.

You can mitigate this to some extent by making some key architecture + vendor decisions upfront when first building… or just consider that some day you’ll need to do things like this. It’s not a novel problem.



It's horrifyingly hard to convince people of this, though, even when you can present them with actual numbers.

A lot of people have convinced themselves that cloud is cheap, to the point that they don't even do a cursory investigation.

A lot of those don't even do the bare minimum to reduce hosting costs within the cloud they choose, or choose one of the cheaper clouds (AWS is absolutely extortionate for anything that requires a significant amount of outbound bandwidth), or put caching/CDNs in front (you can trivially slash your AWS egress costs dramatically).
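
As a back-of-envelope illustration of the caching point (every rate and the cache-hit ratio below are assumptions in the vicinity of published list prices, not anyone's actual bill):

    # Illustrative egress arithmetic; all rates are assumptions.
    AWS_EGRESS_PER_GB = 0.09   # assumed: ~first-tier AWS internet egress
    CDN_EGRESS_PER_GB = 0.02   # assumed rate for a cheaper CDN in front
    CACHE_HIT_RATIO = 0.90     # assumed: 90% of bytes served from cache

    def monthly_egress_usd(tb_per_month: float, with_cdn: bool) -> float:
        gb = tb_per_month * 1024
        if not with_cdn:
            return gb * AWS_EGRESS_PER_GB
        origin = gb * (1 - CACHE_HIT_RATIO) * AWS_EGRESS_PER_GB
        edge = gb * CDN_EGRESS_PER_GB
        return origin + edge

    print(monthly_egress_usd(500, with_cdn=False))  # ~$46,000/month direct
    print(monthly_egress_usd(500, with_cdn=True))   # ~$14,800/month cached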

Most of my consultancy work is on driving cost efficiencies in the cloud, and I can usually safely guarantee the fee will pay for itself within months, because people haven't fixed even the most obvious low-hanging fruit.


Periodically management says we shouldn't have a DC, just put everything in the cloud.

OK, says HPC, here's the quote for replacing one of the (currently three) supercomputers with a cloud service. Oh dear, that's bigger than your entire IT budget, isn't it? So I guess we do need the DC for housing the supercomputers.

If we'd done that once I'd feel like, well, management weren't to know, but it recurs with about a 3-5 year periodicity. The perception seems to be "Cloud exists, therefore it must be cheaper, because if it wasn't cheaper why would it exist?", which reminds me of how people persuade themselves the $50 "genuine Apple" part must be better, because if it wasn't better than this $15 part why would Apple charge $50 for it? Because you are a sucker is why.


Apple may have markup, but the part is for sure more likely to be higher quality: https://www.cultofmac.com/news/apple-thunderbolt-4-cable-com...


Same goes for Apple power adapters:

https://news.ycombinator.com/item?id=28053398


It's just people conflating popularity with <every positive attribute>.

If <service> is popular, it must also be cheap, beautiful, well documented, have every feature that exists and make you popular with your friends.

I once had a Product Manager try to start an argument with me: "Explain to me how it is possible that the service we pay 25k a month for doesn't have <feature>. You don't know what you are saying." It just didn't do what he wanted, and getting angry with them over the phone didn't magically make the feature appear.


> If we'd done that once I'd feel like well management weren't to know, but it recurs with about a 3-5 year periodicity.

So basically every time management changes[1]?

[1]: https://maexecsearch.com/average-c-suite-tenure-and-other-im...


I work in HE, so there's obviously much more turnover in senior management than in the rest of the hierarchy, but that's individual turnover; there's no need for Jim†, who arrived last week, to task HPC with gathering a quote that six of his subordinates and colleagues already know from last time shows this is a waste of time.

† Name changed to protect individuals but also because frankly I don't care very much who is currently doing these roles, there'll be others.


What’s HE? High energy physics?


Higher Education. A University. So, senior management are much the same as anywhere (although maybe with at least enough sense to realise that the mission is now different) but many other people are there because of the mission. Researchers, teachers, even if what you actually do is marketing, there's a very different sense of purpose behind that than if you were selling garden furniture.


Yeah, I used to be asked to price out a move to AWS every year at one position. After several years Hetzner finally got cheaper than operating our own colos, but basically only because we were in London and London real estate is expensive, and so colo space is accordingly expensive, while Hetzner's DC space is dirt cheap.

AWS, however, remained 2x-3x as expensive, with the devops time factored in.

> The perception seems to be "Cloud exists, therefore it must be cheaper, because if it wasn't cheaper why would it exist?"

People are also blithely unaware that large customers get significant discounts, so I regularly had to explain that BigCo X being hosted in AWS means, at most, that it is cost-effective for them because their spend gets them a significant discount over even the highest-volume published pricing, while my clients usually are nowhere close to spending enough to qualify for those discounts.


I think management is just prone to wanting to believe the grass is greener on the other side. If you were already a cloud org with negotiated pricing and cost optimization, management would ask about building a data center instead, and you would show them how much you would need to expand your IT staff to acquire the skills to operate the new data center, never mind the upfront cost.


Regarding Apple parts, I recently replaced a broken screen on a MacBook Pro with a third-party "OEM" part. I can’t get the color to look right. Not to mention the one vertical row where pixels look off (not dead, but not normal either). The guy at the shop said I would not notice. I am now kicking myself for not going with the real thing.


> A lot of people have convinced themselves that cloud is cheap

I've noticed this too, freelancing/consulting around in companies. I'm not sure where this idea even comes from, because when cloud first started making the news, the reasoning went something like "We're OK paying more since it's flexible, so we can scale up/down quickly", and that made sense. But somehow today a bunch of people (even engineers) are under the belief that cloud somehow is cheaper than the alternatives. That never made sense to me, even when you take into account hiring people specifically for running the infrastructure, unless you're a one-person team or have to aggressively scale up/down during a normal day.


I can provide an example where cloud, despite its vastly higher unit costs, makes sense. Analytics in high finance (note: not HFT). Disclosure: my employer provides systems for that.

A fair number of our clients routinely spin up workloads that are CPU bound on hundreds-to-thousands of nodes. These workloads can be EXTREMELY spiky, with a baseload for routine background jobs needing maybe 3-4 worker nodes, but with peak uses generating demand for something like 2k nodes, saturating all cores.

These peak uses also tend to be relatively time sensitive, to the point where having to wait two extra minutes for a result has real business impact. So our systems spin up capacity as needed and, once the load subsides, terminate unused nodes. After all, new ones can be brought up at will. When the peak loads are high (and short) enough, and the baseload low enough, the elastic nature of cloud systems has merit.

I would note that these are the types of clients who will happily absorb the cross-zone networking costs to ensure they have highly available, cross-zone failover scenarios covered. (E.g., have you ever done the math on just how much a busy cross-zone Kafka cluster generates in zonal egress costs?) They will still crunch the numbers to ensure that their transient workload pools have sufficient minimum capacity to service small calculations without pre-warm delay, while only running at high(er) capacity when actually needed.
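
For a sense of scale, here's a hedged sketch of that math (the effective cross-AZ rate, replication factor, and consumer fan-out are all assumptions for illustration):

    # Back-of-envelope: zonal egress generated by a cross-zone Kafka cluster.
    # AWS bills cross-AZ traffic on both sides, so the effective rate is
    # taken as ~$0.02/GB (assumption based on list pricing).
    CROSS_AZ_PER_GB = 0.02
    REPLICATION_FACTOR = 3   # leader plus two followers in the other AZs
    CONSUMER_FANOUT = 2      # assumed: two consumer groups read everything

    def monthly_kafka_egress_usd(produce_mb_per_sec: float) -> float:
        gb_per_month = produce_mb_per_sec * 3600 * 24 * 30 / 1024
        # with the followers placed in other AZs, replication traffic
        # crosses zones in full
        replica_gb = gb_per_month * (REPLICATION_FACTOR - 1)
        # without follower fetching, ~2/3 of consumer reads hit a leader
        # in another AZ when clients are spread evenly across 3 AZs
        consumer_gb = gb_per_month * CONSUMER_FANOUT * (2 / 3)
        return (replica_gb + consumer_gb) * CROSS_AZ_PER_GB

    print(f"${monthly_kafka_egress_usd(100):,.0f}/month at 100 MB/s produce")
    # ~$17,000/month before a single byte leaves the region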

Optimising for availability of live CPU seconds can be a ... fascinating problem space.


There are absolutely plenty of spaces where this is true and cloud makes sense either because it's actually cost effective, or because the cost doesn't matter.

Most people aren't in those situations, though; I think a lot of them believe they're much closer to your scenario than to the much more boring situation they're actually in.


> paying more since it's flexible, so we can scale up/down quickly

I’ve heard this argument too and I think I’ve seen exactly one workload where it actually made sense and was tuned properly and worked reliably.


> I've noticed this too, freelancing/consulting around in companies. I'm not sure where this idea even comes from

Internal company accounting can be weird and lead to unintuitive local optima. At companies I've worked at, what was objectively true was that cloud was often much cheaper than what the IT department would internally bill our department/project for the equivalent service.


I think it's because people think their workloads are extremely spiky, and so assume they will spin up/down loads enough to save money, and that has translated into cloud being perceived as cheap.

But devs rarely pay attention to metrics. I've had clients with expensive Datadog setups where it was blatantly obvious that nobody had ever dug into the performance data, because if they did they'd have noticed that key metrics were simply not fed to it.

If they did pay attention, most of them would realise that their autoscaling rarely kicks in all that much, if at all. Often that's because it's poorly tuned, but also because most businesses see smaller daily cycles than they imagine.

Factor in that the cost difference between cloud instances and managed servers is quite significant, and you need significant spikes much shorter in duration than most businesses' day/night variation to save money.
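
A toy model of that trade-off (both prices are assumptions picked for illustration; substitute your own quotes):

    # Toy break-even: flat-priced managed servers vs. autoscaled cloud
    # instances of comparable capacity. Prices are assumptions, not quotes.
    MANAGED_MONTHLY = 200.0   # assumed flat rate for one beefy managed server
    CLOUD_HOURLY = 0.80       # assumed on-demand rate, comparable instance

    def cloud_monthly_usd(peak_hours_per_day: float, peak_instances: int,
                          base_instances: int = 1) -> float:
        base = base_instances * CLOUD_HOURLY * 24 * 30
        burst = ((peak_instances - base_instances)
                 * CLOUD_HOURLY * peak_hours_per_day * 30)
        return base + burst

    # A typical day/night cycle (~12h of "peak") loses to managed servers:
    print(cloud_monthly_usd(12, peak_instances=2))   # $864 vs $400 for 2 managed
    # Only short, tall spikes flip the result:
    print(cloud_monthly_usd(1, peak_instances=10))   # $792 vs $2,000 for 10 managed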

It can make sense to be able to spin up more capacity quickly, but then people need to consider that 1) a lot of managed hosting providers have hardware standing by and can automatically provision it for you rapidly too - unless you insist on only using your own purchased servers in a colo, you can get additional capacity quickly, 2) a lot of managed hosting providers also have cloud instances, so you can mix and match, and 3) worst case, you can spin up cloud instances elsewhere and tie them into your network via a VPN.

Some offer the full range from colo via managed servers to cloud instances in the same datacentres.

Once you prep for a hybrid setup, incidentally, cloud becomes even less competitive, because suddenly you can risk pushing the load factor on your own/managed servers much closer to the wire, knowing you can spin up cloud instances as a fallback. As a result, the cost per request for managed servers drops significantly.

I also blame a lot of this on businesses shielding engineering from seeing budgets and costs. I've been in quite senior positions in a number of companies where the CEO or CFO were flabbergasted when I asked for basic costings of staff and infra, because I saw those as essential in planning out architecture. Engineers who aren't used to seeing cost as part of their domain will never have a good picture of costs.


Yes. We saved ridiculous amounts of money (and made it a lot faster) by moving our analytics workloads from Snowflake to a few bare-metal nodes running Exasol. But it took months to convince management even though we had clear numbers showing the sheer magnitude of the cost reduction. They had drunk the cloud kool-aid, and were adamant that it would be cheaper, numbers be damned.


I think one business argument for cloud is capital expenses vs operational expenses. If you’re (over)paying for cloud resources vs an in-house option (or colo), those numbers are a straight expense. When you own hardware, it sits on your books until it depreciates off. For some businesses, that can make sense.

Now, a good accountant probably wouldn’t care one way or the other. Debits and credits balance either way. And spending more still means less profit in the long term, no matter how it looks on the books. But, in addition to the flexibility, that was what I always thought of as the main cloud benefit. It’s the same with leasing vs buying cars/computers/etc…


You don't have to buy the hardware. It's very common to rent it.


But that too is based on people not knowing the alternatives, as renting managed servers can be close to a wash vs. leasing hardware for a colo (often to the point that the relative cost of land near your preferred managed hosting provider vs. a workable colo with access to staff etc. might be what makes one or the other cheaper). Buying outright can be cheaper but isn't necessary.

None of the colo'd setups I've worked on bar one used purchased servers - it's all been leased. But the majority of non-cloud workloads I've worked on have not even been leased, but rented.


Cloud made sense for the startup I worked for previously. If you are a startup, then a $1M-per-year expense makes much more sense than a $5M up-front purchase with 5-10 years of life - in five years you might be billionaires or you might be bankrupt, and until then the cloud is better.


It feels like if you're spending 1M in cloud per year, the hardware and colo will probably cost 1M to buy.


If your monthly cloud bill is < $100K, you don’t need $5MM worth of hardware. That cloud spend equates to, at best, a couple of beefy servers, which (modulo AI cards) could almost certainly be had for <= $50K/ea. So for $200K, you could have a two-zone setup.

Where it does make sense in the short-term for this scenario is the experience and knowledge necessary to reliably run your own servers. If you don’t have that, you may not want to invest the time and effort to do so. But on pure cost, unless your bill is on the order of a few thousand per month, cloud will never win. It can’t; they have to make money.


Like the sibling comment says, it's not $5M upfront but on the order of one year's cloud spend for large enough accounts. There are also such things as leases and loans.

One justifiable excuse is that you simply don't know how much hardware you will need to buy if you're hitting hockey-stick growth. That is, until you realize you can also go hybrid...


Or you could have rented or leased-to-own. There's hardly ever any need to actually purchase outright to get prices far below equivalent capacity in clouds. In fact, in 25 years, only one of the colo'd server setups I've worked on had any hardware purchased up-front in it.


Cloud also makes sense with certain traffic patterns where the peak requirement is a huge outlier but critical to satisfy.


Or where locality is critical. Like if you have a game that hits peak traffic at different times throughout the day in different regions. So, a company may not want to own hardware in multiple regions, when they would only be at peak usage for a few hours.


The variation in peak loads tends to be far smaller for most people than they imagine, but indeed it can sometimes be cheaper. The window needs to be very short, though, to outweigh the large cost differential. And you don't need to buy - you can rent.


The classic case for this is Intuit, which presumably needs most of its compute only two months a year. Not that many companies are in the same boat, though.


I don’t think the window length is the key metric. It’s the ratio of peak traffic time to normal traffic time.


Yes, but in practice it's even rarer for people to have loads with multiple spikes a day, to the point that it's a rounding error you can mostly ignore outside of very unusual niches. Usually the day/night cycle entirely dominates traffic variation and is already long enough to make auto-scaling unviable in terms of cost.

You're right that if the longest window is short enough to make autoscaling financially beneficial over managed hosting, then you also need to make sure that you don't regularly have other spikes that can tip things back to being unprofitable.


Partly because AWS gives out a lot of free credits to startups, basically allowing them to grow without planning any infrastructure. VCs who are invested in Amazon also want to push the cloud narrative. And startups that don't want to deal with servers want massive scale for when they think their website, and later their app, will go viral.

That was in the late 00s and early 10s, when PHP, Python, Ruby and even Java were slow. Every single language and framework has had massive performance improvements in the past 15 to 20 years - anywhere from 2x for Java to 3-10x for Ruby.

A server used to max out at 6-8 Xeon cores; today it's 192 cores. Every core is at least 2-3x faster per clock, and with higher clock speeds we are talking about a 100x difference. And where I/O used to sit on HDDs, SSDs are easily 1000x faster, so what used to be time spent waiting on I/O is no longer an issue. The aggregate difference, when all things are added together including software, could be 300x to 500x.

What would have needed 500 2U servers in 2010, you could now do with one.
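
Multiplying out those rough factors (these are the comment's own debatable estimates, not measurements):

    # The comment's rough factors multiplied out; every value is an estimate.
    core_count = 192 / 8   # ~24x more cores per box than a ~2010 server
    per_core = 2.5         # assumed midpoint of the 2-3x per-clock claim
    clock = 1.5            # assumed modest clock-speed gain
    software = 3           # runtime speedups (e.g. Ruby's claimed 3-10x)

    print(core_count * per_core * clock)             # ~90x raw CPU throughput
    print(core_count * per_core * clock * software)  # ~270x with software gains
    # SSDs vs. HDDs add orders of magnitude for I/O-bound work on top of
    # this, which is how a 300-500x aggregate estimate becomes plausible.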

Modern web developers are so abstracted from the hardware that I don't think many realise the scale of these improvements. I remember someone posting that before 2016 Basecamp had dozens of racks before moving to cloud. Now they have grown a lot bigger with Hey, and they are doing it all with 8 racks and room to spare.

AWS, on the other hand, is trying to move more workloads to ARM Graviton, where they have a cost advantage. Given that Amazon's stock price is now dependent on AWS, I don't think they will lower their prices by much in the future. And we desperately need some competition in that area.


For smaller businesses it seems to be the safe option, because it's what everyone does.

I have even had it suggested that it might make selling a business or attracting investors harder if you used your own servers (not at the scale of having your own datacentre, just rented servers - smaller businesses still).

Another thing that comes up is that it might be more expensive, but it's a small fraction of operational expenses, so no one really cares.


For smaller businesses it's often "the only thing Joe knew when he was building it".


You have a great point about finding cost efficiencies - there was a time when cloud was cheaper.

Maybe it's an understanding that doesn't change because the decision makers are non-technical people (as when finance oversees IT despite not understanding it).

Virtualizing and then sharing a dedicated server as a VPS was a big step forward.

Only, hardware kept getting cheaper and faster, as did the internet.


> when finance oversees IT despite not understanding it

... and when IT often doesn't even get to see the spend, and/or isn't expected to.

I've had clients where only finance had permissions to get at the billing reports, and engineering only ever saw the billing data when finance were sufficiently shocked by a bill to ask them to dig into it - at which point they cared for long enough to get finance off their backs, and then stopped caring again.


Unsure how I missed this.

Great points.

Overseeing things they don't understand, and wanting to manage and direct them, feels like an unnatural thing for finance to do to other departments.

Maybe this was more common for businesses with stable business processes that aren’t evolving.

Covid and now AI will ensure change is constant, and where a practice is outdated, the organizations that keep it will become outdated too.


The reality is when you get to another certain point (larger than the point you describe) you start negotiating directly with those cloud providers and bypass their standard pricing models entirely.

It's the time in between that's the most awkward: when the potential savings are big enough that hiring an engineering team to internalize infrastructure would give a good return (were current pricing to stay), but you're not so big that just threatening to leave will cause the provider to offer you low-margin pricing.

All I'd say is don't assume you're getting the best price you can get. Engineers are often terrible negotiators, we'd rather spend months solving a problem than have an awkward conversation. Before you commit to leaving, take that leverage into a conversation with your cloud sales rep.


> Engineers are often terrible negotiators, we'd rather spend months solving a problem than have an awkward conversation.

My experience is the opposite: lots of software developers ("engineers") would love to do "brutal" negotiations to fight against the "choking" done by the cloud vendors.

The reason you commonly don't let software developers do these negotiations is thus the complete opposite: they apply (for the reasons mentioned) an ultra-hardball negotiation style, lacking all the diplomatic and business customs of politeness, that leaves scorched earth behind. Thus, many (company) customers of the cloud providers fear that this hardball negotiation style destroys any future business relationship with the respective cloud service provider (and, for reputation reasons, perhaps a lot of others).


Even with the discounts of volume pricing, cloud prices are still quite inflated unless you need to inherit specific controls like the P&E ones from FedRAMP High/GovCloud. The catch there is lock-in technologies that may require you to re-develop large swaths of your applications if you're heavily reliant on cloud-native tools.

Even going multi-region, hiring dedicated 24/7 data center staff, and purchasing your own hardware amortizes out pretty quickly and can give you a serious competitive advantage in pricing against others. This is especially true if you are a large consumer of bandwidth.


> The reality is when you get to another certain point (larger than the point you describe) you start negotiating directly with those cloud providers and bypass their standard pricing models entirely.

And even if you do, you still end up with pretty horrible pricing, still paying per GB of "premium" traffic for some outrageously stupid reason, instead of going the route of unmetered connections and actually planning your infrastructure.


> It's the time in between that's the most awkward.

That's an odd way to describe hemorrhaging money.


But the article states they negotiated.


This was more a response to the comment I replied to, that cloud is always more expensive. And saying it more for everyone, not OP.

It's almost always less expensive at the start, which is super important for the early stages of a company (your capital costs are basically zero when choosing say AWS).

Then after you're established, it's still cheaper when considering opportunity costs (minor improvements in margin aren't usually the thing that will 10x a company's value, and adding headcount has a real cost).

But then your uniqueness as a company will come into play and there will be some outsized expense that seems obscene for the value you get. For the article writer it was S3; for the OP, it's bandwidth; for me it's Lambdas (and, bizarrely, CloudWatch alarms). That's when you need to have a hard look and negotiate. Sometimes the standard pricing model really doesn't consider how you're using a certain service; after all, it's configured to optimize revenue in the general case. That doesn't mean the provider isn't going to be willing to take a much lower margin on that service if you explain why the pricing model is an issue for you.


Even starting out, with used/refurbed hardware you can put a lot of compute power into a colocation facility for very little money.


At what sort of scale can you do that? $1M, $10M, $100M, $1B?


So obviously this is an extreme, but I worked for a company that had long dismissed third party cloud providers as too expensive (customers would be routing all of their network traffic through our data centers, so obviously the bandwidth costs would just be too dang high). Then that company got purchased by a certain mega corporation who then negotiated an exclusive deal with GCP, and the math flipped. It was now far too expensive to run our own set of datacenters. Google was willing to take such a low margin on bandwidth that it made no sense not to.

So in this case, hundreds of billions. But the principle stands at lower company sizes, just with different numbers and amounts of leverage.


> hundreds of billions

That doesn't seem right. GCP's entire run rate is around $50B/yr.


Sorry I was giving the company's size, not their spend.


I don’t remember if our first enterprise agreement was at $1M or $2M, but it was low and in that neighborhood [but also 10 years ago, well before cloud was the default and had growth baked into it].

Cloud providers are looking for multi-year term, commitment to growth as much as/more than exact spend level now.


In my experience with GCP, go through a Google partner (one that aggregates multiple clients to get discounts) and you'll be able to get commitment discounts at $500K/year or even less. But don't save too much money during your commitment period: if you don't spend your commitment, you'll pay for it anyway, and you might even lose some discounts.

Also, one trick to inflate your commitment spend is asking your SaaS providers if it's possible to pay them through the AWS or GCP marketplaces: it often counts against your commitment's minimum spend, so not everything has to be instances and storage.


You can commit right there in the console - no need to work with a partner unless you want a "flex" commit, where the saving is less. Even with a 3-year commit it's still nowhere near cheap compared to buying servers and renting colo space, especially for bandwidth and storage.


It's not the same commitment. When doing a commitment through a partner, you're making an expense commitment (let's say, 600k in a year) across ALL your expenses - well, except for the Google Maps API, it seems :P. So it's not tied to a specific product or type of instance, like the typical commitment, but to your whole GCP billing.

From this, you get a wide range of discounts on a bunch of products, not just instances. And I think those discounts go on top of some of the other discounts you regularly have, but I'm not sure and I'd have to check our billing.


Sounds like the trap for the middle class.


Even without your own rack or colo, the math with AWS stops working as soon as you no longer fit in the free tier, since providers like Hetzner are 40% cheaper.


S3 is designed for 99.999999999% durability. Hetzner's Volume storage is just replication between 3 different physical servers.

In terms of durability that's a universe apart.
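
To put rough numbers on that gap, a crude independence model (every figure below is an assumption, and real-world correlated failures make naive replication worse):

    # Crude model: chance of losing all 3 replicas before a failed one is
    # re-replicated. Assumes independent failures, which flatters replication.
    afr = 0.02        # assumed ~2% annual failure rate per drive/server
    repair_days = 1   # assumed time to restore a lost replica
    p_repair_window = afr * repair_days / 365   # failure during the window

    # First failure (any of 3), then the remaining two both fail in-window:
    p_loss_per_year = 3 * afr * (2 * p_repair_window) * p_repair_window
    print(f"{p_loss_per_year:.1e}")   # ~3.6e-10/year under these assumptions
    # That's roughly nine nines; S3's eleven-nines design target is ~40x
    # beyond it, and correlated failures usually drag replication below this.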


S3 is beyond impressive, but how many workloads truly need that? I’ve never had a single instance of data loss on a NetApp or Pure array.


Truly need, I don't know. But customers will request (and pay for) the 9's.


I suppose it’s like how someone who’s already made up their mind to buy a Lamborghini never questions whether they really need an 800HP engine.


On the other hand, you have transient failures in the cloud (at least on Azure - this behavior is even documented), so do those count against the 99.99999%?


That sounds like availability, not durability.


It's not a novel problem, but it _is_ a relatively novel (bad) economic environment. We've been in "let the good times roll" mode for longer than ten years; 2009-2011 was different. Many ops professionals are younger than that and have gone their entire careers without doing anything on premises.

I remember trying to convince some very talented but newly minted ops professionals -- my colleagues -- to go on prem for cost. This was last year. They were scared. They didn't know how that would work or what safety guarantees there would be. They had a point, because the org I was at then didn't have any on prem presence, since they were such a young organization that they started in the cloud during "the good times". They always hired younger engineers for cost, so nearly no one in the org even knew how to do on prem infra. Switching then would have been a mistake for that org, even though cloud costs (even with large commit agreements) were north of six figures a month.


What do you mean "for cost" in your comment? For cost savings / frugal purposes? Or using something like a sweetheart deal with a PEO?


I find it intuitively absolutely bizarre that Cloud does not outright win at any scale. In my mind everything about it seems more optimizable with more scale. Obviously I am missing something, but all Cloud pricing looks so significantly more expensive than I feel it should in a healthy and mature market.


It’s more than mere “convenience.” You’re also paying to avoid hiring a bunch of employees to physically visit data centers around the globe.

And if you’re not doing that, you are hiring a bare-metal server provider that is still taking a portion of the money you’d be paying AWS.

Even if you don’t need to physically visit data centers thanks to your server management tools, the difference in the level of control you have between cloud and bare metal servers is large. You’re paying to enable workflows that have better automation and virtual networking capabilities.

I recently stood up an entire infrastructure in multiple global locations at once and the only reason I was able to do it in days instead of weeks or months was because of the APIs that Amazon provides that I can leverage with infrastructure automation tooling.
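
As a sketch of what that kind of API-driven rollout looks like (boto3 is AWS's Python SDK; the regions, AMI IDs, and instance type below are hypothetical placeholders, and error handling is omitted):

    # Minimal multi-region rollout sketch with boto3.
    import boto3

    REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"]
    AMI_BY_REGION = {r: "ami-xxxxxxxx" for r in REGIONS}  # placeholder AMIs

    def launch_everywhere(instance_type: str = "m5.large", count: int = 2):
        for region in REGIONS:
            ec2 = boto3.client("ec2", region_name=region)
            resp = ec2.run_instances(
                ImageId=AMI_BY_REGION[region],
                InstanceType=instance_type,
                MinCount=count,
                MaxCount=count,
            )
            print(region, [i["InstanceId"] for i in resp["Instances"]])

    launch_everywhere()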

Once you are buying AWS reservations and avoiding their most expensive specialized managed products the price difference isn’t really worth trying to recover for many types of businesses. It’s probably worth it for Hey since they are providing a basic email service to consumers who aren’t paying a whole lot. But they still need something that’s “set it and forget it” which is why they are buying a storage solution that already comes with an S3 compatible API. So then I have to ask why they don’t save even more money and just buy Supermicro servers and install their own software? We all know why: because Amazon’s APIs are where the value is.

There is a lot of profit margin in software and usually your business is best spending their effort working on their core product rather than keeping the lights on, even for large companies. Plus, large companies get the largest discounts from cloud providers which makes data centers even less appealing.

“Convenience” isn’t just convenience, it’s also the flexibility to tear it all down and instantly stop spend. If I launch a product and it fails I just turn it off and it’s gone. Not so if I have my own data center and now I’ve got excess capacity.


I agree, but I don't think you're in the majority. I don't think most cloud-customers are utilizing all of those additional things that a big cloud provider offers.

How many are actually multi-region? How many actually do massive up/down-scaling on short notice? How many actually use many of those dozens to hundreds of services? How many actually use those complex permissions?

My experience tells me there are some, but there are more who treat AWS/GCP/Azure like a VPS hoster that's 5-10x more expensive than other hosters. They are not multi-region, they don't do scaling, they go down entirely whenever the AZ has some issues, etc. The most they do is maybe use RDS instead of installing mysql/pgsql themselves.


I can’t speak too much for small companies. But there are a lot of large enterprises, smaller businesses, and government agencies that do use more AWS services than just compute + storage + web services, and that do need the elasticity, etc.

For instance, I was surprised how large the market was for Amazon Connect - Amazon’s hosted call centers. It’s one of the Amazon services I have some experience in and I still get recruiters contacting me for those jobs even though I don’t really emphasize that specialty.

My experience is from 7 years of working with AWS. First at a startup with a lot of complex ETL and used a lot of services. But the spend wasn’t that great.

My next 5 years were split between working at AWS (Professional Services) and two years at a third-party consulting company (full time), mostly as an implementation lead.

Even though my specialty is “cloud native application development” and I avoid migrations like the plague, most of the money in cloud consulting is in large companies deciding to move to the cloud because they decided that the redundancy, lower maintenance overhead, and other higher-level services were worth it.


A lot more than you’re giving them credit for.

This idea that their basic users go down entirely when the AZ has some issues is ridiculous; a standard autoscaling group and load balancer basically forces you to be multi-AZ. Very much unlike a VPS.
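
A minimal sketch of that nudge (the group name, launch template, and subnet IDs are hypothetical and assumed to already exist; listing subnets in several AZs is what makes the group multi-AZ by default):

    # Hedged sketch: an autoscaling group spread across three AZ subnets.
    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")
    asg.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",                   # hypothetical name
        MinSize=2,
        MaxSize=6,
        LaunchTemplate={"LaunchTemplateName": "web-lt"},  # assumed to exist
        # one subnet per AZ; instances are balanced across them by default
        VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",
    )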

Using RDS instead of self-installing SQL eliminates the need for an entire full-time DB admin role. So that’s kind of a big deal despite it being a “basic” use case.

A lot of services, like ECS and Elastic Beanstalk, can make it so that you can wait longer to hire operations people, and when you do, they can migrate to more scalable solutions without having to do a major migration to some other provider or build a custom self-hosted solution. If you outgrow a VPS, you have to do a major migration.

And if you take a look at the maturity and usefulness of the Terraform providers, SDKs, and other similar integrations of VPS and bare-metal providers, they are very basic compared to boto3 and the AWS Terraform provider.

I struggle to replicate the level of automation I can achieve with these cloud tools on my own homelab with Proxmox.


> Using RDS instead of self-installing SQL eliminates the need for an entire full time role for DB admin.

No it doesn't. The value in a skilled DB admin is not in keeping the DB up and running, because no special skills are required to do that; the DB admin is an expert in performance. They add considerable value in ensuring you get the most bang for your buck from your infrastructure.

A popular modern alternative to this of course is to throw more money at RDS until your performance problems go away.


Amen. How this lie continues to be perpetuated as gospel is beyond me.

I can look at any company’s RDBMS that doesn’t have a full-time DB[A,RE] on staff and find ten things wrong very quickly: duplicate indices, useless indices, suboptimal column types, bad or completely absent tuning, poor query performance…

It’s only when a company hits the top end of vertical scaling that they think, “maybe we should hire someone,” and the problem then is that some changes are extremely painful at that scale, and they don’t want to hear it.


Yes it does.

While you’re not wrong about DB admins being important for performance optimizations, RDS stops you from having an inexperienced administrator lose data in stupid ways.

I know because I used to be that stupid person. You don’t want to trust your company’s data to a generalist that you told to spin up a database they’ve never configured before (me) and hope they got good answers when they googled how to set up backups/snapshots/replication.


Multi AZ is as much planning as anything else.

IaaS (Proxmox) is a different layer than PaaS as we know.

The same orchestration tools (Terraform) can orchestrate Proxmox or other hypervisors just fine. Discounted licenses for VMware are readily available on eBay if that is preferred.

Proxmox has built-in node mirroring between multiple servers; it just works once it's connected.


> How many are actually multi-region?

The fact half the internet seems to fall over whenever us-east-1 has a hiccup is quite telling.


This might be a little incomplete.

It's trivial to get equipment into a datacenter where it is physically visited on your behalf if you wish.

You can place your own equipment in a datacenter and manage it yourself (colocation).

Or you can have varying amounts of the stack, from the hardware up to the software layer, managed for you as a managed server, where others on site will do certain tasks.

Both of these can still be cheaper than cloud (which provides convenience, at a large markup, by making often open-source tools easy to administer from a web browser) plus paying someone to manage the cloud.

Deploying to global locations at once can still be done under hybrid-cloud or cloud-agnostic setup requirements (not being tied to one cloud, for fallback and independence).


> The Math stops working when you grow to a certain point.

That point is different for every business. For most of them it depends on how big cloud is in your COGS (cost of goods sold) which affects gross margins, which in turn is one of the most meaningful measures of company financial health. Depending on the nature of your business and the amount of revenue you collect in sales, many companies will never reach the point where there's measurable payback from repatriating. Others may reach that point, but it's a lower priority than other things like opening up new markets.

Many commenters seem to hold very doctrinaire opinions on this topic, when it's mostly basic P&L math.


Around that certain point you can also talk to AWS or GCP and get very significant discounts. I'm surprised 37signals and AWS didn't find a number that worked for both.

I've seen a few of these deals with other vendors up close, the difference with public pricing is huge if you spend millions per year.


I worked for/consulted with several companies that had multimillion-per-year cloud commits, sometimes with different clouds, and those discounts are not competitive with on-prem at all.


If it takes talking to them to get discounts, you might as well look at all the options and get the real discount of not being on the cloud.


DHH has said previously that they already had a very good deal compared with list price. But AWS still couldn't come close to on-prem costs.



