
I’m an AWS engineer and I haven’t seen any evidence of engineering layoffs within AWS since early this year. As others have suggested, we generally don’t have ”DevOps Workers” either. There’s definitely a push for AI tools, but from what I’ve seen there’s no indication that it was related to any of this.


"Amazon's AWS cloud computing unit cuts at least hundreds of jobs, sources say" - https://www.reuters.com/business/retail-consumer/amazons-aws...


Yes, but that wasn’t engineers.


Because a lot of the time, not everyone is impacted, as the systems are designed to contain the "blast radius" of failures using techniques such as cellular architecture and [shuffle sharding](https://aws.amazon.com/builders-library/workload-isolation-u...). So sometimes a service is completely down for some customers and fully unaffected for other customers.
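
If anyone wants a feel for how shuffle sharding works, here is a rough toy sketch in Python (the node count, shard size, and hashing scheme are all made up for illustration, not how any real AWS service implements it):

```python
import hashlib
from itertools import combinations

NODES = [f"node-{i}" for i in range(8)]   # hypothetical fleet of 8 nodes
SHARD_SIZE = 2                            # each customer is served by 2 of the 8 nodes

def shard_for(customer_id: str) -> tuple[str, ...]:
    """Deterministically pick a small subset of nodes for a customer."""
    all_shards = list(combinations(NODES, SHARD_SIZE))  # 28 possible 2-node shards
    digest = hashlib.sha256(customer_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(all_shards)
    return all_shards[index]

# Two customers only share a full outage if they land on the exact same shard
# (1 in 28 here), so a poison workload that takes down its shard still leaves
# most other customers with at least one healthy node.
print(shard_for("customer-a"))
print(shard_for("customer-b"))
```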


"there is a 5% chance your instance is down" is still a partial outage. A green check should only mean everything (about that service) is working for everyone (in that region) as intended.

Downdetector reports started spiking over an hour ago but there still isn't a single status that isn't a green checkmark on the status page.


With highly distributed services there's always something failing, some small percentage.


Sure, but you can still put a message up when some <numeric value> exceeds some <threshold value>, e.g. errors are 50% higher than normal (maybe the SLO is that 99.999% of requests are processed successfully).
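
Something like the sketch below is all I mean (Python, with made-up numbers and a hypothetical banner message, just to show the shape of the check):

```python
def status_banner(errors: int, requests: int,
                  baseline_error_rate: float = 0.00001,  # normal: 99.999% success
                  degraded_factor: float = 1.5) -> str:  # "50% higher than normal"
    """Return a hypothetical status message based on the current error rate."""
    if requests == 0:
        return "Available"
    error_rate = errors / requests
    if error_rate > baseline_error_rate * degraded_factor:
        return "The service is currently degraded and some users may see errors"
    return "Available"

print(status_banner(errors=120, requests=1_000_000))  # well above baseline -> degraded message
print(status_banner(errors=8, requests=1_000_000))    # within normal range -> green
```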


Just note that with aggregations like that, it might look as if GCP didn't actually have any issues today.

E.g. it was mostly the us-central1 region that was affected, and within it only some services (e.g. regular instances and GKE Kubernetes were not affected in any region). So if we ask "what percentage of GCP is down", it might well be less than the threshold.

On the other hand, about a month ago (2025-05-19) there was an 8-hour incident with Spot VM instances affecting 5 regions, which was way more important to our company, but it didn't make any headlines.


Just say it: they want to lie to 95% of customers.


> Because a lot of the time, not everyone is impacted

then such pages should report a partial failure. Indeed the GCP outage page lists an orange "One or more regions affected" marker, but all services show the green "Available" marker, which apparently is not true.


There's always a partial outage in large systems, some very small percentage. All clouds should report all red then.


It's not rocket science. Put a message up: "The service is currently degraded and some users may see errors."


They could still show that some issues exist. Their monitoring must know.

The issue is that they don't want to. (For the sake of claiming good uptime, which may even be true for the average user, if most outages affect only small groups.)


That is still 100% an outage and should be displayed as such


Countering advanced bots is a game of economics. Sure, we know that they can solve the captchas, but they usually can’t do so for free. E.g. typical captcha solver services charge around $1 per thousand captchas solved. Depending on the unit economics of a particular bot, that might be cheap or it might completely destroy the business model. I’ve definitely seen a lot of professionally operated bots where they invest a lot of effort into solving the fewest captchas possible to keep the cost down.
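
As a purely hypothetical back-of-envelope of what I mean by unit economics (every number here is invented):

```python
# All numbers are invented; the point is only that solver cost is a per-action tax.
SOLVE_COST = 1.00 / 1000   # roughly $1 per thousand captchas solved via a solver service

def still_profitable(value_per_action: float, captchas_per_action: float) -> bool:
    """Does a bot action remain worth doing after paying for captcha solves?"""
    return value_per_action > captchas_per_action * SOLVE_COST

print(still_profitable(value_per_action=0.50, captchas_per_action=1))    # e.g. a spam signup: yes
print(still_profitable(value_per_action=0.0005, captchas_per_action=1))  # e.g. scraping one cheap page: no
```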

That captchas are completely useless is a popular myth.


It was of course co-discovered by another woman, Lise Meitner, who understood the theory while taking a walk with Otto Frisch and discussing the experimental findings by Otto Hahn. Meitner and Frisch were friends with Hahn and learned about the experiment earlier than most, so it’s likely one of those contingencies of history. There’s a good discussion of exactly how it unfolded in _The Making of the Atomic Bomb_ which is generally a great book and a comprehensive intro to the history of nuclear physics.


I think it’s at least relevant to note that a lot of things relating to autism were completely redefined in DSM-V. DSM-IV had many different diagnoses, such as classic autism, autism spectrum disorder, Asperger’s and PDD-NOS (Pervasive Developmental Disorder - Not Otherwise Specified). All of those were merged into a single diagnosis titled ”Autism Spectrum Disorder”, where the criteria are communication difficulties and stereotypical behavior. My understanding is that this was mostly due to poor diagnostic stability with the prior set of diagnoses. It seems at least plausible that this general simplification of the diagnostic criteria has contributed to an increase in the number of diagnoses. (It’s also worth remembering that any comparison over time has to bundle all of the previously distinct diagnoses to come up with an apples-to-apples comparison.)


Redis latency is around 1ms including network round trip for most operations. In a single-threaded context, waiting on that would limit you to around 1000 operations per second. Redis clients improve throughput by doing pipelining, so a bunch of calls are batched up to minimize network roundtrips. This becomes more complicated in the context of redis-cluster, because calls targeting different keys are dispatched to different cache nodes and will complete in an unpredictable order, and additional client-side logic is needed to accumulate the responses and dispatch them back to the appropriate caller.
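
For reference, here’s roughly what pipelining looks like with the redis-py client (connection details and keys are just placeholders); this is the single-node case, without the cluster routing complexity described above:

```python
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)  # placeholder connection details

# Commands are buffered client-side and flushed in one network round trip on
# execute(), instead of paying ~1ms of latency per individual command.
pipe = r.pipeline(transaction=False)
for i in range(100):
    pipe.set(f"key:{i}", i)
    pipe.get(f"key:{i}")

results = pipe.execute()   # one round trip for all 200 commands
print(results[:4])         # e.g. [True, b'0', True, b'1']
```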


I think our understanding of the inner structure of the earth is another interesting example of something that we’ve deduced scientifically but never directly observed. It surprised me a bit when I first realized that the Earth’s crust had never been pierced (by humans) and that it was all based on indirect observation.


There is a talk by the S3 VP on YouTube which mentions some rough numbers; I think it’s from re:Invent 2019. Also, they mention 100 trillion objects here: https://aws.amazon.com/blogs/aws/amazon-s3s-15th-birthday-it...


When I started working for AWS as an SDE, I was hoping it’d be possible to visit a datacenter. I was surprised to find out that I’d require L11 (!) approval to do so! The only L11 in my reporting chain is Adam Selipsky.

I’m told the AWS data centers have red zones, out of which no hard drive can be taken without being mechanically and violently destroyed first.


I’ve been using self-hosted TTRSS since Google Reader shut down.


Me too. I've been using the Bitnami builds: the VM for ages, but currently the Windows app.

