Literally every country worldwide does this. The question is simply to what extent and for which countries. The whole difference between being a native and an alien is the rights you get. It's not a human right to be able to freely go into any country you please.
> The whole difference between being a native and an alien is the rights you get. It's not a human right to be able to freely go into any country you please.
The first step for genocide is to dehumanize people.
They're not humans, they're aliens. Therefore it's fine if we treat them as filth and throw them away (or gas them).
It's interesting you got downvoted, perhaps for the sentence
> The whole difference between being a native and an alien is the rights you get.
A knee-jerk and uncharitable reading might make this look bad, but it does require an uncharitable reading. It is clear what you mean.
However, the claim
> It's not a human right to be able to freely go into any country you please.
is not false. The idea that open borders are a good thing is a very odd idea. It seems to grow out of a hyperindividualistic and global capitalist/consumerist culture and mindset that doesn't recognize the reality of societies and cultures. Either that, or it is a rationalization of one's own very domestic and particular choices, for example.

In any case, uncontrolled migration is well-understood (and rather obviously!) as something damaging to any society and any culture. In hyperindividualistic countries, this is perhaps less appreciated, because there isn't really an ethnos or cohesive culture or society. In the US, for example, corporate consumerism dominates what passes as "culture" (certainly pop culture), and the culture's liberal individualism is hostile to the formation and persistence of a robust common good as well as a recognition of what constitutes an authentic common good. It is reduced mostly to economic factors, hence globalist capitalism. So, in the extreme, if there are no societies, only atoms and the void, then who cares how the atoms go?
The other problem is that public discourse operates almost entirely within the confines of the false dichotomy of jingoist nationalism on the one hand and hyperindividualist globalism on the other (with the respective variants, like the socialist). There is little recognition of so-called postliberal positions, at least some of which draw on the robust traditional understanding of the common good and the human person, one that both jingoist nationalism and hyperindividualist globalism contradict. When postliberalism is mentioned, it is often smeared with false characterization or falsely lumped in with nihilistic positions like the Yarvin variety...which is not traditional!
Given the ongoing collapse of the liberal order - a process that will take time - these postliberal positions will need to be examined carefully if we are to avoid the hideous options dominating the public square today.
Pardon me if I’m misreading it, but this sounds like disinformation: no examples in your comment, just a lot of abstract reasoning unmoored from facts.
> uncontrolled migration is well-understood (and rather obviously!) as something damaging to any society and any culture.
The US was built on unrestricted immigration for a long time. Was that destructive? I guess so if you count Native Americans, but not to the US as a nation.
Capitalism wants closed borders for labor and open borders for capital. That's how they can squeeze labor costs while maximizing profits. The US is highly individualistic but wants closed borders, so how does your reasoning align with the news?
I mean, naturally there's Doom on a vape [0], so as far as I'm concerned, that box is already ticked. Someone should coin a good name for the law that every hardware device with a screen eventually gets a Doom port.
This is the only reasonable way to ever do this: it requires no effort, just copy-paste one of the examples and you're done. My only gripe is that the most secure option isn't the first example in the repo. Limit access to the actor, put it behind the debug-only flag, and you're good to go. Still, I remove it once I don't need it anymore, since it feels a bit too sketchy with secrets available.
Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there. No matter what I'm hosting, it's a lot more convenient to not have to worry about that even for a second.
> Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there
Are you sure that it isn't just port scanners? I get perhaps hundreds of connections to my SMTP server every day, but they are just innocuous connections (hello, then disconnect). I wouldn't worry about that unless you see repeated login attempts, in which case you may want to deploy Fail2Ban.
Port scanners don't try to ssh into my server with various username/password combinations.
I prefer to hide my port instead of using F2B for a few reasons.
1. Log spam. Looking through my audit logs for anything suspicious is horrendous when there are megabytes of login attempts going back days.
2. F2B has banned me in the past due to various oopsies on my part. Which is not good when I'm out of town and really need to get into my server.
3. Zero days may be incredibly rare in ssh, but maybe not so much in Immich or any other relatively new software stack being exposed. I'd prefer not to risk it when simple alternatives exist.
Besides the above, using Tailscale gives me other options, such as locking down cloud servers (or other devices I may not have hardware control over) so that they can only be connected to, but not out of.
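That kind of lock-down can be sketched in a Tailscale ACL policy (HuJSON, which allows comments and trailing commas); the tag names here are illustrative, not from my actual tailnet:

```json
{
  "tagOwners": {
    "tag:trusted": ["autogroup:admin"],
    "tag:cloud":   ["autogroup:admin"],
  },
  "acls": [
    // Trusted devices may initiate connections *to* the cloud server...
    { "action": "accept", "src": ["tag:trusted"], "dst": ["tag:cloud:*"] },
    // ...but since no rule lists tag:cloud as a src, the cloud server
    // cannot initiate connections to anything else on the tailnet.
  ],
}
```

Tailscale ACLs are default-deny, so omitting the server from every `src` is all it takes.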
You can tweak rate thresholds for F2B, so that it blocks the 100-attempts-per-second attackers, but doesn't block your three-attempts-per-minute manual fumbling.
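A minimal `jail.local` sketch of that kind of tuning (the thresholds are illustrative, not recommendations):

```ini
[sshd]
enabled = yes
# Ban only after 10 failures within a 60-second window, i.e. an automated
# burst; three slow manual typos per minute never trip it.
maxretry = 10
findtime = 60
# How long the ban lasts, in seconds
bantime = 3600
# Never ban these addresses (e.g. your own; 203.0.113.7 is an example IP)
ignoreip = 127.0.0.1/8 203.0.113.7
```

`ignoreip` is the real safety net for the locked-out-while-traveling problem mentioned above.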
This is a good reason not to expose random services, but a wireguard endpoint simply won't respond at all if someone hits it with the wrong key. It is better even than key based ssh.
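For illustration, a minimal `wg0.conf` sketch (keys elided, addresses illustrative). A packet that doesn't authenticate against a known peer key is silently dropped, so to a scanner the port looks closed:

```ini
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# Only packets that pass the handshake against this key get any reply
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```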
What do you mean, Russia has been doing the same thing for most of the war? The success relies on you controlling the territory, or at least territory close enough, so the results vary.
In a war zone, any large high-power jammer will be like a supernova in the darkness, visible to detectors from tens of kilometers away. So it's gonna be immediately destroyed.
Iran's protesters can't find or destroy the jammers, though.
Isn't Iran doing this from the air? That would be far more effective. In a contested space with AA everywhere, that wouldn't be feasible (i.e., large parts of Ukraine).
I am an RF ignoramus. It all seems like black magic to me. I have seen "80% packet loss" being thrown around in these discussions, and also that it is just GPS spoofing.
My main question is: is there anything novel happening here? What is the actual range of disruption?
If you think cocaine and marijuana are comparable/interchangeable with heroin, you might want to educate yourself on the topic a bit more before trying to make a quip.
This is, in a way, why it's nice that we have companies like Red Hat, SUSE and so on. Even if you might not like their specific distros for one reason or another, they've found a way to make money while contributing back for everything they've received. Most companies don't do that.
Red Hat contributes to a broad spectrum of Linux packages, drivers, and of course the kernel itself [1].
One example is virtualization: the virtio stack is maintained by Red Hat (afaik). This is a huge driver behind the “democratization” of virtualization in general, allowing users and small companies to access performant virt without selling a kidney to VMware.
Also, Red Hat contributes to or maintains all of the components involved in OpenShift and OpenStack (one of which is virtio!).
Why should Red Hat be expected to contribute to Gentoo? A distro is funded by its own users. What distro directly contributes to another distro if it’s not a derivative or something?
Red Hat primarily contributes code to the kernel and various OSS projects, paid for by the clients on enterprise contracts. A paying client needs something and it gets done. Then the rest of us get to benefit by receiving the code for free. It’s a beautiful model.
If you look at lists of top contributors, Red Hat (along with the usual suspects in enterprise) are consistently at the top.
As others mentioned, Red Hat (and SUSE) have been amazing for the overall Linux community. They give back far more than what the GPL requires of them. Nearly every one of their paid "enterprise" products has a completely free and open source version.
For example:
- Red Hat Identity Management -> FreeIPA (i.e. Active Directory for Linux)
- Red Hat Satellite -> The Foreman + Katello
- Ansible ... Ansible.
- Red Hat OpenShift -> OKD
- And more I'm not going to list.
OKD was a mess when I tried to use it years ago. The documentation was just a 1:1 copy-paste of the OpenShift docs despite significant differences in installation. It really wanted you to use OLM, but the upstream operators like Maistra (the Istio-based upstream of Red Hat Service Mesh) were often so out of date in the catalog as to be incompatible with the current version of OKD. I raised the issue on GitHub, and a Red Hat employee replied that they were not happy with the situation at the time, but to keep asking to show there was interest. I switched to Talos instead for a more vanilla k8s where I could actually get a service mesh installed.
Not really comparable to my experience running Keycloak, where the upstream documentation is complete, or FreeIPA, where it's identical to IdM and you can just use the Red Hat docs. Those are both excellent pieces of software we are lucky to have.
It looks like they're second to Intel, at least by the LF's metric. That said, driver code tends to take up a lot of space compared to other areas. Just look at the mass of AMD template garbage here: https://github.com/torvalds/linux/tree/master/drivers/gpu/dr...
Intel has long been a big contributor--mostly driver stuff as I understand it. (Intel does a lot more software work than most people realize.) Samsung was pretty high on the list at one point as well. My grad school roommate (now mostly retired though he keeps his hand in) was in the top 10 individual list at one point--mostly for networking-related stuff.
SUSE/openSUSE is innovating plenty of stuff that other distros find worth imitating; e.g., CachyOS and Omarchy, as Arch derivatives, felt that openSUSE-style btrfs snapshots were pretty cool.
It's a rock-solid distro, and if I had a use for enterprise support, I'd probably look into SLES as a pretty serious contender.
The breadth of what they're doing seems unparalleled, i.e. they have rolling release (Tumbleweed), delayed rolling release (Slowroll) which is pretty unique in and of itself, point release (Leap), and then both Tumbleweed and Leap are available in immutable form as well (MicroOS, and Leap Micro respectively), and all of the aforementioned with a broad choice of desktops or as server-focused minimal environments with an impressively small footprint without making unreasonable tradeoffs. ...if you multiply out all of those choices it gives you, it turns into quite a hairy ball of combinatorics, but they're doing a decent job supporting it all.
As far as graphical tools for system administration go, YaST is one of the most powerful and they are currently investing in properly replacing it, now that its 20-year history makes for an out-of-date appearance. I tried their new Agama installer just today, and was very pleased with the direction they're taking.
...so, not quite sure what you're getting at with your "Back in the day..." I, too, remember the days of going to a brick-and-mortar store to buy Linux as a box set, and it was between RedHat and SuSE. Since then, I think they've lost mindshare because other options became numerous and turned up the loudness, but I think they've been quietly doing a pretty decent job all this time and are still beloved by those who care to pay attention.
SUSE has a lot of ex-Red Hatters at high levels these days. Their CEO ran Asia-Pacific for a long time and North America commercial sales for a shorter period.
SUSE has always been pretty big in Europe but never was that prominent in North America except for IBM mainframes, which Red Hat chipped away at over time. (For a period, SUSE supported some mainframe features that Red Hat didn't--probably in part because some Red Hat engineering leadership was at least privately dismissive of the whole idea of running Linux on mainframes.)
I'm sorry but this is just completely disconnected from reality. Wayland is being successfully used every single day. Just because you don't like something doesn't mean it's inherently bad.
Red Hat certainly burns a lot of money in service of horrifyingly bad people. It's nice we get good software out of it, but this is not a funding model to glorify. And of course, American businesses not producing open source is the single most malignant force on the planet.
I guess Debian, SUSE, Canonical, etc. get that email from Red Hat and just go along with it. We better make the switch; we don't want our ::checks notes:: competitor mad at us.
Maybe. The background of my comment: at the end of the '90s I worked at a company doing professional audio on Windows. We had multiple cards, with multiple inputs and outputs, different sampling frequencies, channels, bits per sample... The API was trivial. I learned it in an hour.
Fast-forward to last year: I was working with OpenGL (on Linux) and thought, "I will add sound." Boy... I was smashed by the zoo of APIs, subsystems one on top of another, lousy documentation... Audio, which for me was WAY easier than video, was suddenly way more complicated. From the userland POV, last year I also wanted to make a kind of BT speaker with a Raspberry Pi, and that was also a terrible experience.
So, I don't know... maybe I should give PipeWire a try. At the time, I was done after fighting with ALSA and PulseAudio; at the first problem, I killed it.
I don't know that Red Hat is a positive force. They seem to be on a crusade to make the Linux desktop incomprehensible to the casual user, which I suppose makes sense when their bread and butter depends on people paying them to fix stuff, instead of fixing it themselves.
This, despite the fact that Rocky, Alma, Oracle Enterprise Linux, etc exist because of the hard work and money spent by Red Hat.
And what are those companies doing to fix this issue you claim Red Hat causes? Nothing. Because they like money, especially when all you have to do is rebuild and put your name on other people’s hard work.
And what exactly is incomprehensible? What exactly are they doing to the Linux desktop that makes it so people can't fix their own problems? Isn't the whole selling point of Rocky and Alma, for most integrators, that it's so easy you don't need Red Hat to support it?
I think it's fair to say that Red Hat simply doesn't care about the desktop--at least beyond internal systems. You could argue the Fedora folks do to some degree but it's just not a priority and really isn't something that matters from a business perspective at all.
Can you name a company which does care about the linux desktop? Over the years i’m pretty sure redhat contributed a great deal to various desktop projects, can’t think of anyone who contributed more.
Well Red Hat did make a go at a supported enterprise desktop distro for a time and, as I wrote, Fedora--which Red Hat supports in a variety of ways for various purposes--is pretty much my default Linux distro.
So I'm not being critical. Yes, Red Hat employees do contribute to projects that are most relevant to the desktop even if doing so is not generally really the focus of their day jobs. And, no, other companies almost certainly haven't done more.
Certainly, Ubuntu used to be friendlier to new would-be Linux desktop users for a variety of reasons. (And we could get into some controversial decisions/directions it's taken but I won't.) I'm sure lots of people still run Ubuntu although Canonical is less prominent these days. My impression is that Canonical was sort of a passion project of Mark Shuttleworth's and they're just a lot lower key at this point.
It's not just systemd, though. You have to look at the whole picture, like the design of GNOME or how GTK is now basically a GNOMEy toolkit only (and if you dare point this out on reddit, ebassi may go ballistic). They kind of take more and more control over the ecosystem and singularize it for their own control. This is also why I see "Wayland is the future", in part, as a means to leverage away even more control. The situation is not the same, as xorg-server is indeed mostly just in maintenance work by a few heroes such as Alanc, but Wayland is primarily, IMO, an IBM/Red Hat project. Lo and behold, GNOME was the first to mandate Wayland and abandon Xorg, just as it was the first to slap systemd into the ecosystem too.
The usual semi conspiratorial nonsense. GNOME is only unusable to clickers that are uncomfortable with any UI other than what was perfected by windows 95. And Wayland? Really? Still yelling at that cloud?
I expect people will stop yelling about Wayland when it works as reliably as X, which is probably a decade away. I await your "works for me!" response.
I don't get your point. People regularly complain that Wayland has lots of remaining issues and there are always tedious "you're wrong because it works perfectly for me!" replies, as if the fact that it works perfectly for some people means that it works perfectly for everyone.
These days Wayland is MUCH smoother than X11, even with an Nvidia graphics card. With X11, I occasionally had tearing issues or other weird behavior. Wayland fixed all of that on my gaming PC.
NixOS is anything but a light abstraction (I say this as a NixOS user).
Tbh, it feels like NixOS is convenient in large part because of systemd and all the other crap you have to wire together for a usable (read: compatible) Linux desktop. Better to have a fat programming language, runtime, and collection of packages that exposes one declarative interface.
Much of this issue is caused by the integrate-this-grab-bag-of-tools-someone-made approach to system design, which of course also has upsides. Red Hat seems to be really helping amplify the downsides by providing the money to make a few mediocre tools absurdly big, tho.
How is it not a light abstraction? If you're familiar with systemd, you can easily understand what the snippet below is doing even if you know nothing about Nix.
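For reference, a minimal sketch of the kind of snippet being discussed, assuming the sops-nix module (for the `sops.secrets` option) and an rclone sync job; the service and path names are illustrative:

```nix
{ config, pkgs, ... }:
let
  # sops-nix decrypts the secret at activation time; .path points at the
  # decrypted file under /run/secrets, not at anything in the Nix store.
  rcloneConf = config.sops.secrets."rclone.conf".path;
in
{
  sops.secrets."rclone.conf" = { };

  # A systemd oneshot service plus timer, generated from this one attrset
  systemd.services.backup-sync = {
    description = "Push backups to remote storage";
    startAt = "daily";
    serviceConfig = {
      Type = "oneshot";
      ExecStart = "${pkgs.rclone}/bin/rclone --config ${rcloneConf} sync /srv/backups remote:backups";
    };
  };
}
```

If you know what `[Service]`, `Type=oneshot`, and `OnCalendar=` mean, the mapping is one-to-one.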
NixOS lets you build the abstractions you want and mix them with abstractions provided by others; the single `sops` secret line illustrates this point extremely well, as `sops` is not yet part of NixOS.
Other package managers also provide some abstraction over the packages, and would likely see the same systemd configuration abstracted the same way in post-install scripts. Yet, the encrypted file for `rclone.conf` would come as a static path in `/etc`.
You could summarize NixOS as having moved the post-install script logic to before the installation, yet this tiny detail gives you the additional ability to mix post-install scripts and assert consistency ahead of making changes to the system.
Hah, I just wrote something similar today to periodically push backups from my NAS to another server.
I agree the systemd interface is rather simple (just translate nix expression to config file).
But NixOS is a behemoth: completely change the way every package is built, introduce a functional programming language and a filesystem standard to somehow merge everything together, then declare approximately every package to ever exist in this new language, and add a boatload of extra utilities and infra on top.
I was referring to working with systemd specifically on NixOS. But yes, the Nix ecosystem is not easy to learn, but once it clicks there is no going back.
Not easy to learn is a bit of a red herring, imo. It's also a disproportionate amount of stuff to hold in your head once you have learned it, for what it is.
An OS is, first of all, a set of primitives to accomplish other things. What classic worse-is-better Unix does really well is do just enough to let you get on with whatever those things are. Write some C program to gather some simulation data, pipe its output to awk or gnuplot to slice it. Maybe automate some of that workflow with a script or two.
Current tools can do a bit more, and can sometimes do it more nicely or rigorously, but you lose the brutal simplicity of a bunch of tools all communicating with the same conventions and interfaces. Instead you get a bunch of big systems, each with its own conventions and poor interop. You've got systemd and the other Redhat-isms with their custom formats and bad CLI interfaces. You've got every programming language with its own N package managers.
A bunch of useful stuff sure, but encased in a bunch of reinvented infrastructure and conventions.
Granted I have not used this library myself, so this is not coming from experience, but this type of copy does not instill confidence:
let count = track(0);
<button onClick={() => @count++}>{@count}</button>
No useState, ref(), .value, $:, or signals.
You could replace `track` with `useState`, or `@` with `$`, and it's pretty much the same thing. Whether you use explicit syntax or magic symbols you have to look up to understand is a matter of preference, but this does not really set it apart from any other library.
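To make that comparison concrete, here's a toy sketch (emphatically not this library's actual implementation) of what a `track()`-style cell boils down to: a current value plus change listeners, with the `@count++` sugar desugaring to a get/set pair:

```typescript
// A toy reactive cell: hold a value, notify subscribers on every write.
type Listener<T> = (value: T) => void;

function track<T>(initial: T) {
  let value = initial;
  const listeners: Listener<T>[] = [];
  return {
    get: () => value,
    set: (next: T) => {
      value = next;
      listeners.forEach((fn) => fn(value)); // re-render hooks go here
    },
    subscribe: (fn: Listener<T>) => {
      listeners.push(fn);
    },
  };
}

// Roughly what the compiler sugar expands to for `@count++`
const count = track(0);
count.set(count.get() + 1);
count.set(count.get() + 1);
console.log(count.get()); // 2
```

Swap the names and you have the core of useState, signals, or Svelte stores; the differences are in the sugar and the scheduling, not the data structure.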
Use Hammerspoon [0][1], it comes with a lot of macOS integrations out of the box and you write Lua, which takes zero effort to pick up and use. For me a big benefit is that you don't need to touch Xcode at all.
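For a taste, a minimal `init.lua` sketch using Hammerspoon's hotkey and window APIs (the keybinding choice is arbitrary):

```lua
-- ~/.hammerspoon/init.lua: snap the focused window to the left half of
-- its screen when cmd+alt+left is pressed.
hs.hotkey.bind({ "cmd", "alt" }, "left", function()
  local win = hs.window.focusedWindow()
  if not win then return end
  local f = win:screen():frame()
  win:setFrame({ x = f.x, y = f.y, w = f.w / 2, h = f.h })
end)
```

Reload the config from the Hammerspoon menu bar icon and the binding is live; no build step involved.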