This article does not make sense at all. I do not... know why it is even here, and why some other commenters are inserting additional "points" into the article just to make it seem sane. :)
Writing a SATA-to-NVMe adapter is a nonsensical endeavor. It is like writing a RoCE-to-HTTPS adapter. Makes no sense at all. And I'm not even talking about voltages etc.
Any NVMe disk can be connected even over PCIe 3.0 x1, so there is plenty of capability on the DESKTOP computers he is "managing".
And given what he is writing and how he is writing it, it is unbelievable that he cannot seem to understand what a SAS expander is, etc.
No, it makes as much sense as USB-NVMe, which does exist, and as the article mentions, so does the other direction (a PCIe SATA controller in the shape of an M.2 SSD), but there's just not enough of a market for NVMe-SATA yet. They're just block-device protocols, and the conversion between them is well-defined.
A bidirectional example is IDE/SATA, for which plentiful cheap adapters in both directions (one IC automatically detects its role) exist; IDE host to SATA device, or SATA host to IDE device.
For another "directional" example, it's worth noting that SATA to MMC/(micro/mini)SD(HC/XC)/TF adapters exist which let you use those cards (often multiple, even in RAID!) as a SATA drive, but the opposite direction, exposing a SATA drive as an SD card, does not (yet).
USB is on the same "OSI model" layer as PCIe. SATA is not.
PCIe SATA controllers in M.2 SSD form are a thing, as are native M.2 SATA SSDs, as are SATA controllers on an M.2 card with SATA connectors capable of connecting 4-6 disks (ASM1166). So I do not see the point you want to make there.
SATA -> memory card is a solution for the embedded market of the 2000s, not today; it's for refurbishments or efforts to keep using old embedded stuff. And again it has nothing to do with the guy's point, it is an absolutely different use case. He is talking about servers (servers with a lot of drives have expanders)!!!! An M.2 NVMe to CFexpress extender is something else entirely, so it depends highly on what EXACTLY we are talking about!!!!
Simple reason why it is nonsense: how much does a 256 GB M.2 SSD cost? So just use that.
Or use an M.2 to PCI-E 4X/1X riser card (ADT-Link K42ST), connect a standard, ubiquitous SATA/SAS/NVMe HBA/RAID card into it, and use any freaking disks.
or
an M.2 Key M to SFF-8643 adapter plus a cable to connect it to something like the H3platform Falcon 4118... which is "just" a PCIe switch + PSU + connectors.
or
an M.2 to PCIe adapter and an HBA with optics to connect to an ARRAY 5 miles away.
The OSI model makes no sense when applied to USB vs PCIe vs SATA :)
But if you really insisted, SATA and USB would occupy layers 1 and 2 while PCIe goes all the way to 3, as would ancient FireWire. PCIe (and FireWire) support bus mastering; USB does not. USB/SATA devices are purely host-polled and unable to initiate any transfers; a PCIe device can talk to any other PCIe device without host help.
I don't think that's true. USB is accessible in lots of places where SATA and PCIe are not, i.e. as external connectors. Yes, eSATA is a thing, but eSATA on a machine where you can't also use USB or PCIe?
Or in other words, SATA->NVMe would at best serve users unwilling to upgrade their legacy racks while USB->NVMe has plenty of non-legacy use cases.
Do not encourage nonsense, pls :) It is a totally pointless endeavor because you CAN still buy NEW SATA disks... so why the.. would you need 60-dollar converters added on top of an essentially identical disk??? So no, SATA -> NVMe is total nonsense and provides absolutely nothing in technical terms to anybody in any situation.
RHEL and friendly clones like AlmaLinux already run totally without X11!! Only Wayland. So there is no need to remove X11 after installing the distro. (On some distros it is not even possible; the distro will break.) So finally we can have "clean" distros. It can provide a nicer experience.
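If you want to double-check that a box is really running a pure Wayland session, a quick sketch (assuming a systemd-logind login where XDG_SESSION_TYPE and XDG_SESSION_ID are exported):

  echo $XDG_SESSION_TYPE                            # prints "wayland" in a Wayland session, "x11" otherwise
  loginctl show-session "$XDG_SESSION_ID" -p Type   # Type=wayland, as seen by logind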
Both LLVM and GCC are supported by processor manufacturers directly. Yes, Apple and Intel have their own LLVM versions, but as long as they don't break compatibility with GCC and don't explicitly prevent porting, I don't see a problem.
I personally use the GCC suite exclusively though, and while LLVM is not my favorite compiler, we can thank them for spurring the GCC team into action to improve their game.
> ... and while LLVM is not my favorite compiler, we can thank them for spurring the GCC team into action to improve their game.
Exactly. I think people have forgotten just how poor GCC was 15 years ago. Both teams are doing excellent work. Even M$ has been upping its game with its compiler!
Can you be more explicit? Is it because they are optimizing too much for a single platform in a way that isn't generalizable to other compilers or architectures? What's your specific gripe?
A commercial enterprise is dropping support for older CPU architectures in its newer OSes so it can improve the average performance of the deployed software?
Don't see how that's controversial. It's something that doesn't matter to their customers or their business.
The newest x86_64-v1 server is more than a decade old now, and I'm not sure -v2 is deprecated. RockyLinux 9 is running happily on -v2 hardware downstairs.
Oh, -v2 is deprecated for RH10. Not a big deal, honestly.
From a fleet perspective, I prefer that more code uses more advanced instructions on my processors. Efficiency possibly goes up on hot code paths. What's not to love?
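If you want to see which x86-64 microarchitecture level a given box actually supports, a rough check (assuming glibc 2.33 or later, which prints its hwcaps search list; older glibc won't show this):

  /lib64/ld-linux-x86-64.so.2 --help | grep -E 'x86-64-v[0-9]'   # e.g. "x86-64-v3 (supported, searched)"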
Rocky Linux is in cahoots with Oracle; do not support that with anything, not even with words. Go with AlmaLinux if you need Red Hat under a different name, but for the love of anything good in this world, boycott everyone friendly with Ellison.
In our case, OS selection is done on a case-by-case basis, and we don't take sides. The deprecation of v2 has no practical implications for us, either.
It's the same on the personal level. I use the OS that is most suitable for the task at hand, and the root OS (Debian / RedHat / etc.) doesn't matter. I'm equally comfortable with all of them.
No, v1. I mean, you haven't been able to buy an x86_64-v1 server for a decade now, and if you have one, there's a very slim chance it's still working unless it's new old stock.
If it has seen any decent amount of workload during its lifetime, it probably has a couple of ICs which have reached the end of their electronic life and are malfunctioning.
A newer, smaller (physically, i.e. the process geometry, not the capacity) SSD "cell" = more times per year you have to rewrite (refresh) that cell / the whole disk so you do not lose data, anyway.
Any sane person uses an FS / system with dedup in it, so you can have 7+4+12 snapshots of 5 TB of data taking only 7 TB of space, etc.
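As a rough ZFS illustration (pool/dataset names are made up), a snapshot only consumes space for blocks that have changed since it was taken:

  zfs snapshot tank/data@before-update                         # near-instant, costs ~nothing until data diverges
  zfs list -t snapshot -r -o name,used,referenced tank/data    # USED = unique blocks pinned by the snapshot, REFERENCED = logical size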
You want snapshots; for example, Manjaro Linux (Arch-based) uses BTRFS, which is capable of snapshots. So before every update it makes a snapshot, and if the update fails, you can just select the last working state in GRUB and go back...
AlmaLinux uses BTRFS too, but I'm not sure if they have this functionality as well.
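A minimal manual sketch of the same idea (assuming / is a BTRFS subvolume and /.snapshots exists; Manjaro's own tooling like Timeshift automates this around updates):

  btrfs subvolume snapshot -r / /.snapshots/pre-update-$(date +%F)   # read-only snapshot of the root subvolume before updating
  btrfs subvolume list /                                             # list the subvolumes/snapshots you could roll back to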
ZFS, bcache, BTRFS, checksums
dm-integrity inside the Linux kernel can provide checksums for essentially any FS. Just "lvcreate --type raidN --raidintegrity y ..." and you have checksums + RAID in Linux (fuller sketch below).
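A fuller sketch, assuming a recent LVM (2.03+) and made-up VG/LV names and sizes:

  lvcreate --type raid1 --raidintegrity y -L 100G -n data_lv my_vg   # RAID1 LV with dm-integrity checksums underneath
  lvs -o+integritymismatches my_vg/data_lv                           # shows any checksum mismatches detected so far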
Intel RSTe / VROC is integrated directly into your CPU/chipset. You just set it up in the "BIOS", and Linux, BSD, and Windows will boot / install on top of it with no fuss.
Or: every Linux distro and BSD has ZFS available (quick sketch after this list),
or every Linux distro has LVM RAID available,
or BTRFS has RAID 1, 0, 10,
or Windows has its own software RAID, just open the Storage Spaces / Disk Management console,
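For instance, a minimal sketch of the ZFS and BTRFS variants (device names are placeholders, and these commands destroy whatever is on those disks):

  zpool create tank mirror /dev/sdb /dev/sdc        # ZFS mirror, checksums on by default
  mkfs.btrfs -d raid1 -m raid1 /dev/sdd /dev/sde    # BTRFS RAID1 for both data and metadata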
So the whole unraid / nonraid thing is just a nonsensical waste of effort for everyone.
Why would I invest time and effort into a technology from a small team if I can have technology supervised and maintained by the Linux kernel devs?? Makes no sense.
And the things I mentioned here have been around for longer than unraid/nonraid has existed. So it was nonsensical from the start.