hdjrudni's comments | Hacker News

Ya, we don't know yet. Still sitting on Zig, but I like what I see so far.

It's deliberately more verbose. I'm not sure how it'd be slower, and it's only a tiny bit more verbose if the language has nice keywords/syntax for you to use. The point is that you want to be explicit when you're choosing to ignore an error.

It's not the same. You have to explicitly declare the errors, and if you want to ignore or propagate them, you have to do so explicitly as well.

You can't invoke a function and pretend it'll never fail.

Also, try/catch with long try blocks and the error handling at the very end is just bad. Which of the statements in the try is throwing? Maybe even more than one? Each should be handled individually and immediately.
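
Roughly what I mean, as a sketch in TypeScript (readConfig/parseConfig/applyConfig are hypothetical helpers, just to show the shape):

    // Hypothetical helpers, declared only so the sketch type-checks.
    declare function readConfig(path: string): Promise<string>;
    declare function parseConfig(raw: string): { port: number };
    declare function applyConfig(cfg: { port: number }): Promise<void>;

    // One long try block: the catch at the end can't tell which of the three calls threw.
    async function updateConfigOpaque(path: string) {
      try {
        const raw = await readConfig(path);
        const cfg = parseConfig(raw);
        await applyConfig(cfg);
      } catch (err) {
        console.error("config update failed somewhere", err); // three failure modes, one vague handler
      }
    }

    // Each call handled individually and immediately, so every failure stays explicit.
    async function updateConfigExplicit(path: string) {
      let raw: string;
      try {
        raw = await readConfig(path);
      } catch (err) {
        throw new Error(`could not read ${path}: ${err}`);
      }

      let cfg: { port: number };
      try {
        cfg = parseConfig(raw);
      } catch (err) {
        throw new Error(`config at ${path} is malformed: ${err}`);
      }

      try {
        await applyConfig(cfg);
      } catch (err) {
        throw new Error(`could not apply new config: ${err}`);
      }
    }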


Oh, I thought you were talking about this self-hosted 1.5B model. You must be talking about the full model as a service?

Does this support stacked area charts? Like https://recharts.github.io/en-US/examples/StackedAreaChart/ ?

That's what I'm using now, but I gave it too much data and it takes like a minute to render, so I'm quite interested in this.
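
For reference, this is roughly what I mean in recharts; a minimal sketch (made-up data), where stacking comes from giving the areas the same stackId:

    import { AreaChart, Area, XAxis, YAxis, Tooltip } from "recharts";

    // Made-up data shape, just for illustration.
    const data = [
      { month: "Jan", desktop: 400, mobile: 240 },
      { month: "Feb", desktop: 300, mobile: 456 },
      { month: "Mar", desktop: 520, mobile: 380 },
    ];

    export function TrafficChart() {
      return (
        <AreaChart width={600} height={300} data={data}>
          <XAxis dataKey="month" />
          <YAxis />
          <Tooltip />
          {/* Areas that share a stackId are stacked on top of each other. */}
          <Area type="monotone" dataKey="desktop" stackId="1" stroke="#8884d8" fill="#8884d8" />
          <Area type="monotone" dataKey="mobile" stackId="1" stroke="#82ca9d" fill="#82ca9d" />
        </AreaChart>
      );
    }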


Not yet - area charts work but stacking isn't implemented. I'll add this today for you :)

It's implied that they intentionally tested it that way, without any assertions on the order. Not by oversight or incompetence, but because they didn't want to bake in a requirement they weren't sure about.

It would be silly to stick that tightly to a 40-year-old standard. They can easily observe the behavior of every other public DNS resolver (they are Cloudflare, so gathering data at that scale should be easy) and see how they return results.

Honestly, though, I'd be surprised if they actually even considered it. Everything about the article says to me that the engineer(s) who caused this problem are desperately trying to deflect blame for not having a comprehensive test suite. Sorry, but you don't go tweaking the order of results for such a long-standing, high-volume, and crucial protocol just because the 40-year-old spec isn't clear about it.


That approach only makes sense if tests are immutable, though. If you're unsure whether the order matters, you should still test for it so you get a reminder to re-check your assumptions when the order changes.
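
Something like this (a sketch with a hypothetical resolveA helper and vitest-style assertions): pin the current order even if you're unsure it matters, so a behavior change fails the test and reminds you to re-check the assumption before loosening it.

    import { test, expect } from "vitest";
    import { resolveA } from "./resolver"; // hypothetical resolver under test

    test("A records come back in the order we currently return", async () => {
      const records = await resolveA("example.com");
      // If this fails, decide whether order actually matters before relaxing it
      // to a set comparison.
      expect(records).toEqual(["192.0.2.1", "192.0.2.2", "192.0.2.3"]);
    });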

I don't know the official process, but as a human who sometimes reads and implements IETF RFCs, I'd appreciate updates to the original doc rather than replacing it with something brand new. Probably with some dated version history.

Otherwise I might go to consult my favorite RFC and not even know it's been superseded. And if it has been superseded by a brand-new doc, now I have to start from scratch again instead of reading the diff or patch notes to figure out what needs updating.

And if we must supersede, I humbly request that a warning be put at the top, linking to the new standard.


At one point I could have sworn they were sticking obsoletion notices in the header, but now I can only find them in the right side-bar:

https://datatracker.ietf.org/doc/html/rfc5245

I agree that it would be much more helpful if it were made obvious in the document itself.

It's not obvious that "updated by" notices are treated any more helpfully than "obsoletes" ones.


And I'm glad they came around to it. Even if everyone else is wrong (I'm not saying they are), sometimes you just have to play along.

Hopefully Cloudflare documenting the expected behavior, and that documentation possibly getting onto the standards track, will make things easier for the next RFC readers.

"Warnings" are like the most difficult thing to 'send' though. If an app or service doesn't outright fail, warnings can be ignored. Even if not ignored... how do you properly inform? A compiler can spit out warnings to your terminal, sure. Test-runners can log warnings. An RPC service? There's no standard I'm aware of. And DNS! Probably even worse. "Yeah, your RRs are out of order but I sorted them for you." where would you put that?

> how do you properly inform?

Through the appropriate channels; in-band and out-of-band.


a content-less tautology

Randomly fail or (increasingly) delay a random subset of all requests.

That sounds awful and will send administrators on a wild goose chase throughout their stack to find the issue, without many clues beyond "this thing is failing at seemingly random times." (I myself would suspect something related to network connectivity; maybe requests are timing out? That idea would lead me in the completely wrong direction.)

It also doesn't give any way to actually see a warning message; where would we even put it? I know for a fact that if my glibc DNS resolver started spitting out errors into /var/log/god_knows_what, it would take me days to find them. At best, the resolver could return some kind of errno, with perror giving us a message like "The DNS response has not been correctly formatted", and then we'd hope the message is caught and forwarded through whatever is wrapping the C library, hopefully into our stderr. And there are so many ways even that could fail.


So we arrive at the logical conclusion: you send errors in Morse code, encoded as seconds/minutes of failures/successes. Any reasonable person would be able to recognize Morse when seeing the patterns on an observability graph.

Start with milliseconds, move on to seconds and so on as the unwanted behavior continues.


As a solo dev who just started his second cluster a few days ago... I like it.

Upfront costs are a little higher than I'd like: I'm paying $24 for a droplet + $12 for a load balancer, plus maybe $1 for a volume.

I could probably run my current workload on a $12 droplet, but apparently Cilium is a memory hog and that makes the smaller droplet infeasible, and it doesn't seem practical to skip the load balancer.

But now I can run several distinct apps on different frameworks and versions of php, node, bun, nginx, whatever, and spin them up and tear them down in minutes, and I kind of love that. And if I ever get any significant number of users I can press a button and scale up or out.

I don't have to muck about with pm2 or supervisord or cronjobs; that's built in. I don't have to muck about with SSL certs/certbot; that's built in.

I have SSO across all my subdomains. That was a little annoying to get running (it took a day and a half to figure out), but it was a one-time thing, and the config is all committed in YAML, so if I ever forget how it works I have something to reference instead of trying to remember 100 shell commands I randomly ran on a naked VPS.

Upgrades are easy too; I can update the distro or whatever package without much fuss.

The main downside is that deploys take a minute or two instead of being sub-second.

It took weeks of tinkering to get a good DX going, but I've happily settled on DevSpace. Again, it takes a couple of minutes to start up and probably oodles of RAM instead of milliseconds, but I can maintain 10 different projects without trying to keep my dev machine in sync with everything.

So some trade-offs but I've decided it's a net win after you're over the initial learning hump.


> I can run several distinct apps running different frameworks and versions

> I don't have to muck about with pm2 or supervisord or cronjobs, that's built in. I don't have to muck about with SSL certs/certbot

But doesn't literally any PaaS or provider with a "run a container" feature (AWS Fargate/ECS, etc.) fit the bill without the complexity, moving parts, and failure modes of K8s?

K8s makes sense when you need a control plane to orchestrate workloads on physical machines - its complexity and moving parts are somewhat justified there because that task is actually complex.

But to orchestrate VMs from a cloud provider, where the hypervisor and control plane already offer all of the above? Why take on the extra overhead of layering yet another orchestration layer on top?


Not the original poster, but I have tried all of that. It's far easier with Kubernetes - just deployment, service, secret & ingress config, and stuff works cleanly in namespaces without any risk of things clobbering each other.
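
Roughly the shape of it, as a sketch (all names made up; the cert-manager annotation assumes cert-manager is handling the TLS certs):

    # Deployment: run the app
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
      namespace: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: web
              image: registry.example.com/myapp:latest   # hypothetical image
              ports:
                - containerPort: 8080
    ---
    # Service: stable in-cluster endpoint for the pods
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
      namespace: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 8080
    ---
    # Ingress: route the hostname to the service, TLS via cert-manager (assumed installed)
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      namespace: myapp
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt
    spec:
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80
      tls:
        - hosts:
            - myapp.example.com
          secretName: myapp-tls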

