
Not a justifiable expense when no one else is resilient against their AWS region going down either. Also cross-cloud orchestration is quite dead because every provider is still 100% proprietary bullshit and the control plane is... kubernetes. We settled for kubernetes.


Also if you can't even do cross region, cross cloud won't happen


Cross region isn't simple when you have terabytes of storage in buckets in a region. Building services in other regions without that data doesn't really do any good. Maintaining instances in various regions is easy, but it's that data that complicates everything. If you need to use the instances in a different region because your main region is down, you still can't do anything because those cross region instances can't access the necessary data.
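For what it's worth, AWS does offer S3 cross-region replication (CRR) for exactly this, though you pay for the duplicated storage plus transfer. A rough sketch of the config, assuming boto3 and made-up bucket/role names; both buckets need versioning enabled and the role needs replication permissions:

```python
# Sketch of an S3 cross-region replication (CRR) config. Bucket names
# and the IAM role ARN are hypothetical. Replicated objects are billed
# twice: once per region, plus inter-region transfer.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # hypothetical
    "Rules": [
        {
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate the whole bucket
            "Destination": {
                "Bucket": "arn:aws:s3:::my-bucket-us-west-2",  # hypothetical
                "StorageClass": "STANDARD_IA",  # cheaper tier for the copy
            },
            "DeleteMarkerReplication": {"Status": "Disabled"},
        }
    ],
}

# Applying it would look something like:
# import boto3
# boto3.client("s3").put_bucket_replication(
#     Bucket="my-bucket-us-east-1",
#     ReplicationConfiguration=replication_config,
# )
```

Whether the doubled storage bill is worth it is the actual argument here, of course.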


Entire terabytes?! My god, I can only barely fit that onto a single SD card the size of my pinky nail.

It is quite bizarre that such paltry amounts of data and problems with such tiny scale seem to pose challenging problems when done in the cloud.


Such a sophomoric response. It does not matter how large your storage use is exactly. The point is that nobody is going to pay to replicate that data in multiple clouds or within multiple regions of the same cloud provider.

Btw, I'd love to have a link to where I could buy an SD card the size of a pinky nail that holds terabytes of data.


It absolutely matters how large your storage use is. Terabytes of storage is easily manageable on even basic consumer hardware. Terabytes of storage costs just hundreds of dollars if you are not paying the cloud tax.

If you could get resiliency and uptime for an extra hundred dollars a year, that would be a no-brainer for any commercial operation. The byzantine, kafkaesque horror of the cloud results in trivial problems and costs ballooning into nearly insurmountable and cost-ineffective obstacles.
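Back-of-the-envelope, using assumed list prices (roughly $0.023/GB-month for S3 Standard, ~$100 for a consumer 4 TB drive; both are approximations, not quotes):

```python
# Rough cost comparison for ~10 TB, with assumed prices:
# S3 Standard ~ $0.023/GB-month; a consumer 4 TB HDD ~ $100 one-off.
TB = 1000  # GB, decimal terabyte as storage vendors count it

s3_monthly = 10 * TB * 0.023       # 10 TB in S3 Standard
s3_yearly = s3_monthly * 12
hdd_capex = 3 * 100                # three 4 TB drives: 10 TB + redundancy

print(f"S3:  ${s3_monthly:.0f}/month, ${s3_yearly:.0f}/year")
print(f"HDD: ${hdd_capex} one-off (plus power, hosting, labour)")
```

The comparison is unfair in both directions (the HDD figure ignores operations, the S3 figure ignores durability and egress), but it shows the order-of-magnitude gap being argued about.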

These are not hard or costly problems or difficult scales. They have been made hard and costly and difficult.


Your pedantry is just boring. Yes, I used the word terabyte instead of, I guess, something more palatable to you as "large". Fine, s/terabyte/exabyte/.

I work with buckets where single files are >1 terabyte. There's more than one of these files, hence terabytes. I'm not going to do a human-readable summary listing of an entire bucket to get the full size. The actual size is irrelevant to the point. When people are spending 5-6 digits on cloud storage per month, they are not going to do it in multiple places. Period. Maybe the new storage unit should just be monthly cloud spend, but then your pedantry will say nonsense like which cloud server, which storage solution type, blah blah blah.


Ah yes, let us just gloss over 6 orders of magnitude when we are discussing cost-effectiveness and feasibility. What is the difference between $100 and $100,000,000 of spend really? Basically the same thing.


Yes, they exaggerated: it takes several pinky-nail-sized cards to store several TB. Only 1 TB per microSD.


They have them at 2 TB [1] now for just $300. And SanDisk announced 4 TB last year, but I do not see them for sale just yet.

[1] https://shop.sandisk.com/products/memory-cards/microsd-cards...


Bottom line is that AWS gives you the tools to survive this outage within their own ecosystem.

If there's an issue with relying only on AWS it has not been expressed in this outage.


Exactly what tools help make your large volume of data stored in a down region available to other regions without duplicating the monthly storage fees?


You duplicate the fees. But it's the same or worse trying to do multi cloud.


Which is precisely why it's not done


I seem to recall it was fairly common to have read-only versions of sites during a major outage - we did that a lot with deviantART in the early 2000s. Did that fall out of favour, or is it too complex with modern stacks?
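The mechanism itself can be tiny - a flag that lets safe reads through and rejects writes. A minimal sketch with hypothetical names, nothing framework-specific:

```python
# Minimal read-only fallback switch, hypothetical names throughout.
# During an outage, flip READ_ONLY and serve reads from a replica or
# cache while rejecting mutating requests with 503.
READ_ONLY = True
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def handle(method: str, path: str) -> tuple[int, str]:
    if READ_ONLY and method not in SAFE_METHODS:
        return 503, "Site is in read-only mode during an outage"
    if method in SAFE_METHODS:
        return 200, f"stale-but-served content for {path}"
    return 200, "write accepted"
```

The hard part, as the replies note, is not this switch - it's workloads where reads without writes are useless.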


If only everything was a simple website. You're totally ignoring other types of workflows where a read-only fallback would be impossible. Not just impossible, but pointless.


HN does it too, but it's a simple site


I don't think storage cost is the reason, more that it's hard to design for regional failures. DB by itself as one example, cross region read replica usually introduces eventual consistency to a system that'd otherwise be immediately consistent.
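A toy illustration of the eventual-consistency problem, not a real DB client - the replica only sees a write after the lag elapses, so reads routed there can be stale:

```python
import time

# Toy simulation of replica lag. Writes replicated to the standby only
# become visible after `lag` seconds, so a read routed to the replica
# right after a write on the primary returns stale (here: missing) data.
class LaggyReplica:
    def __init__(self, lag: float):
        self.lag = lag
        self.pending: list[tuple[float, str, str]] = []  # (visible_at, key, value)
        self.data: dict[str, str] = {}

    def replicate(self, key: str, value: str) -> None:
        self.pending.append((time.monotonic() + self.lag, key, value))

    def read(self, key: str):
        now = time.monotonic()
        still_pending = []
        for visible_at, k, v in self.pending:
            if visible_at <= now:
                self.data[k] = v  # replication caught up for this write
            else:
                still_pending.append((visible_at, k, v))
        self.pending = still_pending
        return self.data.get(key)

replica = LaggyReplica(lag=0.05)
replica.replicate("balance", "100")
print(replica.read("balance"))   # None: the write is not yet visible
time.sleep(0.06)
print(replica.read("balance"))   # "100": the lag has elapsed
```

Code that assumed read-your-writes breaks exactly in that window, which is why bolting a cross-region replica onto an immediately consistent system is a redesign, not a checkbox.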


Well yeah, but that's why we get paid the big bucks right?


We do; a non-tech company's IT dept, not so much.


Thanks for the helpful reply! Do you think that would still be true if one accepted a constraint that the "down" version of the property served stale data, say 24 hours behind what the user would otherwise have seen?


Yeah, except it would probably be delayed way less than 24h. And then you have to figure out how to merge the data back in afterwards, unless you're OK with losing it permanently. And you'd have to make sure things are handled correctly if other healthy DBs point to records in the failed-over DB that have disappeared.
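One common (and lossy) approach to that merge-back is last-write-wins by timestamp. A hypothetical sketch, assuming every record carries an updated-at value:

```python
# Hypothetical last-write-wins merge for re-integrating writes that
# landed on a failed-over DB after the primary comes back.
def merge_lww(primary: dict, failover: dict) -> dict:
    """Each dict maps key -> (updated_at, value); the newer timestamp wins."""
    merged = dict(primary)
    for key, (ts, value) in failover.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

primary  = {"user:1": (100, "alice@old"), "user:2": (120, "bob")}
failover = {"user:1": (150, "alice@new"), "user:3": (130, "carol")}
print(merge_lww(primary, failover))
```

Note that LWW silently drops the older of two concurrent updates - which is exactly the "OK with losing it permanently" trade-off above - and it does nothing about dangling references from other DBs.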


Data has a lot of gravity.



