Hacker News | pwnna's comments

I actually made this patch a while ago on LineageOS but lost the patch. It is a very invasive change where I filtered for the word "amber" and the French equivalent...


Does this impact the western digital and SanDisk brands too? IIRC those brands got folded into Crucial.


For single-threaded, cooperative multitasking systems (such as JavaScript and what OP is discussing), async mutexes[1] are IMO a strong anti-pattern and a red flag in the code. In this kind of code, everything you execute is "atomic" until you call an await and effectively yield to the event loop. Programming this properly simply requires making sure the state variables are consistent before yielding. You can also reconstruct the state at the beginning of your block, knowing that nothing else can have interrupted your code. Both of these approaches are documented in the OP.

Throwing an async mutex at the lack of atomicity before yielding is basically telling me "I don't know when I'm calling await in this code, so I might as well give up." In my experience this is strongly correlated with the original designer not knowing what they are doing, especially in languages like JavaScript. Even if they did understand the problem, this can introduce difficult-to-debug bugs and deadlocks that would otherwise not appear. You also introduce an event-queue scheduling delay, which can be substantial depending on how often you're locking and unlocking.

IMO this stuff is best avoided and you should just write your cooperative multitasking code properly, but this does require a bit more advanced knowledge (not that advanced, but maybe for the JS community). I wish TypeScript would help people out here, but it doesn't: calling an async function (or even a normal function) does not invalidate type narrowing done on escaped variables, probably for usability reasons, but that is actually the wrong thing to do.
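The "keep state consistent before yielding / re-check it after yielding" idea can be sketched in Python's asyncio (a hedged illustration only; `balance` and `withdraw` are invented names, and `asyncio.sleep(0)` stands in for any real await point):

```python
import asyncio

# Code between awaits is atomic in a cooperative scheduler, so instead of
# taking a mutex we simply re-validate shared state after every yield point.
balance = {"amount": 100}

async def withdraw(amount):
    # Atomic up to the first await: safe to read and decide here.
    if balance["amount"] < amount:
        return False
    await asyncio.sleep(0)  # yields to the event loop; others may run now
    # Re-check: another coroutine may have changed the balance while we slept.
    if balance["amount"] < amount:
        return False
    balance["amount"] -= amount
    return True
```

With two concurrent `withdraw(60)` calls against a balance of 100, only one succeeds, with no lock involved.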

[1]: https://www.npmjs.com/package/async-mutex


Here's a use case - singleton instantiation on first request, where instantiation itself requires an async call (eg: to DB or external service).

    import asyncio

    _lock = asyncio.Lock()
    _instance = None

    async def get_singleton():
      global _instance
      if _instance is not None:
        return _instance
      async with _lock:
        if _instance is None:
          _instance = await costly_function()
      return _instance
How do you suggest replacing this?


The traditional thing would be to have an init() function that is required to be called at the top of main() or before any other methods that need it. But I agree with your point.
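A minimal sketch of that approach, with assumed names (`costly_function` stands in for the DB/external-service call):

```python
import asyncio

# Resolve the singleton once, up front, before anything else runs;
# after that, get_singleton() is a plain synchronous lookup and no
# lock is ever needed.
_instance = None

async def costly_function():  # assumed stand-in for the async dependency
    await asyncio.sleep(0)
    return object()

async def init():
    global _instance
    _instance = await costly_function()

def get_singleton():
    assert _instance is not None, "call init() at the top of main()"
    return _instance

async def main():
    await init()  # required before any get_singleton() call
    return get_singleton()
```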


Now let's say it's an async cache instead of a singleton.


Return the cached item if it exists else spawn a task to update it (doing nothing if the task has already been spawned), await the task and return the cached item.


Thanks, that's a useful trick.


Also:

Developers should properly learn the difference between push and pull reactivity, and leverage both appropriately.

Many, though not all, problems where an async mutex might be applied can instead be simplified with use of an async queue and the accompanying serialization of reactions (pull reactivity).
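A rough sketch of the queue approach with `asyncio.Queue` (all names here are assumed): a single consumer task pulls events and applies them one at a time, so state mutations are serialized without any mutex.

```python
import asyncio

async def consumer(queue, state):
    while True:
        event = await queue.get()
        if event is None:  # sentinel: shut down
            break
        state.append(event)  # the only place state is mutated

async def main():
    queue = asyncio.Queue()
    state = []
    task = asyncio.create_task(consumer(queue, state))
    for event in ("connect", "message", "disconnect"):
        await queue.put(event)  # producers just push; no locking needed
    await queue.put(None)
    await task
    return state
```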


It's kids taking toys off the shelf and playing with words. Gives me a strong Angular vibe, with its misuse of "dependency injection".


How did Angular misuse DI?


I'm not familiar with AngularJS so I did a quick google: https://angular.dev/guide/di#where-can-inject-be-used

It looks eerily similar to the Spring DI framework, yikes.


This is like fighting complexity with even more complexity. Nix and Bazel are definitely not close to actually achieving hermetic builds at scale. And when they break, the complexity of fixing them grows enormously.


What's not hermetic with Nix? Are you talking about running with the sandbox disabled, or macOS quirks? It's pretty damn hard to accidentally depend on the underlying system in an unexpected way with Nix.


My experience with nix, at a smaller scale than what you're talking about, is that it only worked as long as every. single. thing. was reimplemented inside nix. Once one thing was outside of nix, everything exploded and writing a workaround was miserable because the nix configuration did not make it easy.


> every. single. thing. was reimplemented inside nix

That's kinda what hermetic means, though, isn't it? Whether that's painful or not, that's pretty much exactly what GGP was asking for!

> Once one thing was outside of nix, everything exploded and writing a workaround was miserable because the nix configuration did not make it easy.

Nix doesn't make it easy to have Nix builds depend on non-Nix things (this is required for hermeticity), but the other way around is usually less troublesome.

Still, I know what you mean. What languages were you working in?


It was the dev environment for a bunch of wannabe microservices running across node/java/python

And like, I'm getting to the point of being old enough that I've "seen this before"; I feel like I've seen other projects that went "this really hard problem will be solved once we just re-implement everything inside our new system" and it rarely works; you really need a degree of pragmatism to interact with the real world. Systemd and Kubernetes are examples of things that do a lot of re-implementation but are mostly better than the previous.


> Systemd and Kubernetes are examples of things that do a lot of re-implementation but are mostly better than the previous.

I feel the same way about systemd, and I'll take your word for it with respect to Kubernetes. :)

> "this really hard problem will be solved once we just re-implement everything inside our new system" [...] rarely works

Yes. 100%. And this is definitely characteristic of Nix's ambition in some ways as well as some of the most painful experiences users have with it.

> you really need a degree of pragmatism to interact with the real world

Nix is in fact founded on a huge pragmatic compromise: rather than beginning with a new operating system, a new executable format with a new linker, or even a new basic build system (a la autotools or make), Nix's design manages to bring insights and features from programming language design (various functional programming principles and, crucially, memoization and garbage collection) to build systems and package management tools, on top of existing (even aging) operating systems and toolchains.
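The memoization idea can be illustrated with a toy: a build's output path is derived from a hash of its inputs, so identical inputs are built once and then reused from the store. This is a loose analogy for explanation only, not Nix's actual implementation, and all names are invented.

```python
import hashlib

_store = {}  # toy stand-in for /nix/store

def build(name, inputs, builder):
    # The output path is content-addressed by the build's inputs.
    key = hashlib.sha256(repr((name, sorted(inputs))).encode()).hexdigest()[:12]
    path = f"/nix/store/{key}-{name}"
    if path not in _store:      # cache hit means the builder never runs
        _store[path] = builder(inputs)
    return path, _store[path]
```

Two builds with identical inputs share one store path and one builder invocation; changing any input yields a different path.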

I would also contend that the Nixpkgs codebase is a treasure, encoding how to build, run, and manage an astonishing number of apps (over 120,000 packages now) and services (I'd guess at least 1,000; there are some 20,000 configuration options built into NixOS). I think this does to some extent demonstrate the viability of getting a wide variety of software to play nice with Nix's commitments.

Finally, and it seems you might not be aware of this, but there are ways within Nix to relax the normal constraints! And of course you can also use Nix in various ways without letting Nix run the show.[0] (I'm happy to chat about this. My team, for instance, uses Nix to power Python development environments for AWS Lambdas without putting Nix in charge of the entire build process.)

However:

  - fully leveraging Nix's benefits requires fitting within certain constraints
  - the Nix community, culturally, does not show much interest in relaxing those constraints even when possible[1], but there is more and more work going on in this area in recent years[2][3] and some high-profile examples/guides of successful gradual adoption[4]
  - the Node ecosystem's habit of expecting arbitrary network access at build time goes against one of the main constraints that Nix commits to by default, and *this indeed often makes packaging Node projects "properly" with Nix very painful*
  - Python packaging is a mess and Nix does help IME, but getting there can be painful
Maybe if you decide to play with Nix again, or you encounter it on a future personal or professional project, you can remember this and look for ways to embrace the "heretical" approach. It's more viable and more popular than ever :)

--

0: https://zimbatm.com/notes/nix-packaging-the-heretic-way ; see also the community discussion of the post here: https://discourse.nixos.org/t/nix-packaging-the-heretic-way/...

1: See Graham Christensen's 2022 NixCon talk about this here. One such constraint he discusses relaxing, build-time sandboxing, is especially useful for troublesome cases like some Node projects: https://av.tib.eu/media/61011

2: See also Tom Bereknyei's NixCon talk from the same year; the last segment of it is representative of increasing interest among technical leaders in the Nix community on better enabling and guiding gradual adoption: https://youtu.be/2iugHjtWqIY?t=830

3: Towards enabling gradual adoption for the most all-or-nothing part of the Nix ecosystem, NixOS, a talk by Pierre Penninckx from 2024: https://youtu.be/CP0hR6w1csc

4: One good example of this is Mitchell Hashimoto's blog posts on using Nix with Dockerfiles, as opposed to the purist's approach of packaging your whole environment via Nix and then streaming the Nix packages to a Docker image using a Nix library like `dockerTools` from Nixpkgs: https://mitchellh.com/writing/nix-with-dockerfiles


So I don't understand where the meme of blurry super-resolution-based downsampling comes from. If that were the case, what would super-resolution antialiasing[1] be? An image rendered at a higher resolution and then downsampled is usually sharper than an image rendered at the downsampled resolution directly, because it preserves the high-frequency components of the signal better. There are multiple other downsampling-based antialiasing techniques, all of which boost the signal-to-noise ratio. Does this not work for UI as well? Most of it is vector graphics. Bitmap icons will need to be updated, but the rest of the UI (text) should stay sharp.

I know people mention 1-pixel lines (perfectly horizontal or vertical). Then they multiply by 1.25 or whatever and say: look, 0.25 pixel is a lie, therefore fractional scaling is fake (the sway documentation mentions this to this day). This doesn't seem to hold in practice outside this very niche mental exercise. At sufficiently high resolution, which is the case for the displays we are talking about, do you even want 1-pixel lines? They would be barely visible; I have this problem now on Linux. Further, if the line is draggable, the click zone becomes too small as well. You probably want something of a certain physical dimension, which will take multiple pixels anyway. At that point you probably want some antialiasing that you won't be able to see anyway. Further, single-pixel lines don't have to be exactly the color the program prescribed. Most of the perfectly horizontal and vertical lines on my screen are grey-ish already. Some AA artifacts will shift their color slightly, but I don't think that will have a material impact. If all this holds, then super-resolution should work pretty well.

Then really what you want is something as follows:

1. Super-resolution scaling for most "desktop" applications.

2. Give the native resolution to some full-screen applications (games, video playback), and possibly give the native resolution of a rectangle on screen to applications like video players. This avoids rendering at a higher resolution and then downsampling, which can introduce information loss for these applications.

3. Now do this on a per-application basis, instead of a per-session basis. No Linux DE implements this. KDE implements per-session, which is not flexible enough: you have to set it for each application on launch.
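The averaging that supersampling relies on can be shown with a toy box filter (a hedged illustration; it assumes a 2D list of grayscale values rendered at 2x the target resolution):

```python
def downsample_2x(img):
    """Downsample a 2x-resolution grayscale image by averaging 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]
```

A 1-pixel-wide white line at 2x comes out as a half-intensity grey column at the target resolution, which is exactly the "lines become grey-ish" effect described above.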

[1]: https://en.wikipedia.org/wiki/Supersampling


> So I don't understand where the meme of the blurry super-resolution based down sampling comes from. If that is the case, what is super-resolution antialiasing

It removes jaggies by using lots of little blurs (averaging).


Neat! I worked on a similar formal verification of Ghostferry, a zero-downtime data migration tool that powers the shard-balancing tool at Shopify, also using TLA+:

https://github.com/Shopify/ghostferry/blob/main/tlaplus/ghos...

I was also able to find a concurrency bug with TLC before a single line of code was written, which saved a lot of time. It took about 4 weeks to design and verify the system in spec and about 2 weeks to write the initial code version, which mostly survived to this day and reasonably resembles the TLA+ spec. To my knowledge (I no longer work there) the correctness of the system was never violated and it never had any sort of data corruption. That would have been a much harder feat without TLA+.


Devil's advocate: "my bridge fell down because I didn't know the concrete didn't meet spec" still seems like negligence?


There is the Kawai Novus NV5, which is a digital piano with the action and soundboard of a real upright piano and enough speakers to sound almost exactly like a real piano. There are also some new Roland models I haven't tried. Many dealers lump these into their acoustic piano offerings and don't market them differently, because they are that good.

See https://m.youtube.com/watch?v=4DaaafyAUqA and https://m.youtube.com/watch?v=oLsPK2ATJcY. He is a pianist and he bought a Novus NV5 to replace his own upright piano...


If I want to learn about this field and am only somewhat familiar with the classical approach to robotics, how should I get started?


I am covering this on my blog, Encyclopedia Autonomica.


Take a hint from the title and ask ChatGPT?

It will give you a list of really obvious "why didn't I think of that" ways to get started.


Comparing gas vs. electric seems incorrect; the comparison should be with induction instead. It is far more energy efficient (although not necessarily more cost efficient). It produces no combustion byproducts like gas does (which your air-quality meter may or may not be able to detect), and it is much faster and gives you better control. It is also safer, with no flames or leaks, and less likely to burn you. It is the way to cook in 2024 imo.

