Someone needs to gain physical access to the ballot after voting in order to erase it. If they can do that, they can just as well invalidate it with a pen, or simply tear it up.
On the other hand, disappearing ink has been around for a long time.
You're right, of course - I was completely mixing up in my mind what r-value reference parameters actually do, thinking that they need to be moved into, when the whole point is that they don't: they're just a reference.
The state of a moved-from value is valid but unspecified (note, not undefined). IIRC the spec says vector must be `empty()` after a move. So all implementations do the obvious thing and revert back to an empty vector.
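To make that concrete, here's a minimal sketch using only the standard library: binding an argument to a `T&&` parameter moves nothing by itself; the move only happens when you explicitly `std::move` out of the reference, and afterwards the source is in that valid-but-unspecified (in practice empty) state.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Taking a T&& parameter does not move anything by itself: inside the
// function the parameter is just a named reference to the caller's object.
void sink(std::vector<std::string>&& v) {
    // Nothing has moved yet; the caller's vector is still intact here.
    std::vector<std::string> local = std::move(v);  // the move happens here, explicitly
    std::cout << "local holds " << local.size() << " elements\n";
}

int main() {
    std::vector<std::string> words{"a", "b", "c"};
    sink(std::move(words));  // std::move is only a cast; sink decides whether to move
    // words is now valid but unspecified; common implementations leave it empty,
    // and calling size() or clear() on it is fine either way.
    std::cout << "words.size() after move: " << words.size() << "\n";
}
```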
A private monopoly sounds like a great idea. A profit incentive for access to social media definitely won't result in the price of these tokens skyrocketing to extract as much money as possible.
It doesn't even have to be a private monopoly, it can be a public service.
For example, in Quebec, liquor stores are run by a government corporation, the "Société des alcools du Québec" (SAQ), and legal cannabis is managed by the "Société québécoise du cannabis" (SQDC).
I don't see why other restrictions can't follow the same pattern?
Generally these kinds of private monopolies also have public-set prices.
Which is a huge disaster for expensive things (like your power bill), but is much less of one for a token that takes 50 cents of human labour and 0.5 cents of computing to produce.
Only because people aren't putting in the effort to build their binaries properly. You need to link against the oldest glibc version that has all the symbols you need, and then your binary will actually work everywhere(*).
We are using Nix to do this. It’s only a few lines of code. We build a gcc 14 stdenv that uses an old glibc.
But I agree that this should just be a simple target SDK flag.
I think the issue is that the Linux community is generally hostile towards proprietary software, and it’s less of an issue for FLOSS because it can always be recompiled against the latest glibc.
But to link against an old glibc version, you need to compile on an old distro, in a VM. And you'll have a rough time if some part of the build depends on a tool too new for your VM. It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.
> It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.
It's definitely not easy, but it's possible: using the `.symver` assembly (pseudo-)directive you can specify the version of the symbol you want to link against.
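For illustration, here's roughly what that looks like for a single symbol. This sketch assumes an x86-64 target where the old `memcpy@GLIBC_2.2.5` version exists; check which versions your oldest target glibc actually exports (e.g. with `objdump -T` on its libc) before pinning anything.

```cpp
#include <cstring>

// Tell the assembler/linker that references to "memcpy" in this translation
// unit should bind to the GLIBC_2.2.5 version node rather than the newest
// default version provided by the build machine's glibc.
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

void copy_buffer(char* dst, const char* src, std::size_t n) {
    std::memcpy(dst, src, n);  // resolves against memcpy@GLIBC_2.2.5 at link time
}
```

The catch is that you have to do this for every versioned symbol you pull in, which is why people usually reach for an old sysroot or build environment instead.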
Ok, so you agree with him except where he says “in a VM” because you say you can also do it “in a container”.
Of course, you both leave out that you could do it “on real hardware”.
But none of this matters. The real point is that you have to compile on an old distro. If he left out “in a VM”, you would have had nothing to correct.
I'm not disagreeing that glibc symbol versioning could be better. I raised it because this is probably one of the few valid use cases for containers where they would have a large advantage over a heavyweight VM.
But it's like complaining that you might need a VM or container to compile your software for Win16 or Win32s. Nobody is using those anymore. Nor really old Linux distributions. And if they do, they're not really going to complain about having to use a VM or container.
As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.
But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task. Go figure.
> But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task.
But does the lack of a stable ABI have any (negative) effect on the reliability of the platform?
Only for people who want to use it as a desktop replacement for Windows or macOS, I guess? There is no end of people complaining they can't get their wifi or sound card or trackpad working on (insert-obscure-Linux-distribution-here).
Like many others, I have Linux servers with 2000-3000 days of uptime. So I'm going to say no, it doesn't, not really.
>As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.
You must really be behind the times. Arch and Gentoo users wouldn't complain because an old game doesn't run. In fact, the exact opposite would happen: it's not implausible for an Arch or Gentoo user to end up compiling their code against a five-hour-old release of glibc, thereby maximizing glibc incompatibility with every other distribution.
Glibc strives for backwards (but not forwards) compatibility, so barring exceptions (which are extremely rare, but nobody is perfect), using a newer glibc than the one something was built against does not cause any issues; only using an older glibc would.
I think even calling it a "design" is dubious. It's an attribute of these systems that arose out of circumstance; nobody ever sat down and said it should be this way. Even Torvalds complaining about it doesn't mean it gets fixed. It's not analogous to Steve Jobs complaining about something, because Torvalds is only in charge of one piece of the puzzle, and the whole image that emerges from all these different groups only loosely collaborating with each other isn't going to be anybody's ideal.
In other words, the Linux desktop as a whole is a Bazaar, not a Cathedral.
> In other words, the Linux desktop as a whole is a Bazaar, not a Cathedral.
This was true in the 90s, not the 2020s.
There are enough moneyed interests that control the entirety of Linux now. If someone at Canonical or Red Hat thought a glibc version translation layer (think WINE, but for running software that targets Linux systems older than the last breaking glibc change) was a good enough idea, it could get implemented pretty rapidly. Instead of Win32+WINE being the only stable ABI on Linux, Linux itself could offer the most stable ABI on Linux.
I don’t understand why this is the case, and would like to understand. If I want only functions f1 and f2, which were introduced in glibc versions v1 and v2, why do I have to build with v2 rather than v3? Shouldn’t the symbols be named something like glibc_v1_f1 and glibc_v2_f2 regardless of whether you’re compiling against glibc v2 or glibc v3? If it is instead something like “compiling against vN uses symbols glibc_vN_f1 and glibc_vN_f2” combined with glibc v3 providing glibc_v1_f1, glibc_v2_f1, glibc_v3_f1, glibc_v2_f2 and glibc_v3_f2… why would it be that way?
It allows (among other things) the glibc developers to change struct layouts while remaining backwards compatible. E.g. if function f1 takes a struct as argument, and its layout changes between v2 and v3, then glibc_v2_f1 and glibc_v3_f1 have different ABIs.
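As a sketch of the mechanism (all names here are hypothetical, not glibc's actual symbols or version nodes), this is roughly how a shared library can keep exporting the old ABI under one version node while new links pick up the new one:

```cpp
// libfoo.cpp -- glibc-style symbol versioning across a struct layout change.
// Build sketch: g++ -shared -fPIC libfoo.cpp -Wl,--version-script=foo.map -o libfoo.so

// Old layout, as shipped in version 1 of the hypothetical library.
struct foo_v1 { int id; };

// New layout in version 2 -- the extra field changes the ABI of f1.
struct foo_v2 { int id; long timestamp; };

extern "C" {
// Kept around so binaries linked against the old library keep working.
int f1_old(foo_v1* f) { return f->id; }

// Current implementation, picked up by newly linked binaries.
int f1_new(foo_v2* f) { return f->id + static_cast<int>(f->timestamp); }
}

// Bind both implementations to the exported name "f1" under different version
// nodes; the "@@" marks the default version that new links resolve to.
__asm__(".symver f1_old, f1@VERS_1");
__asm__(".symver f1_new, f1@@VERS_2");

// foo.map (the linker version script) would define the nodes:
//   VERS_1 { global: f1; local: *; };
//   VERS_2 { global: f1; } VERS_1;
```

Old binaries record the version they were linked against in their dynamic symbol table, so they keep resolving to `f1@VERS_1` even after the library moves on.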
Individual functions may have a lot of different versions. They only get a new version if there is an ABI change (so you may have e.g. f1_v1, f1_v2, f2_v2, f2_v3 as symbols in v3 of glibc), but there's no easy way to say 'give me v2 of every function'. If you compile against v3 you'll get f2_v3 and f1_v2, and so it won't work on v2.
Why are they changing? And I presume there must be disadvantages to staying on the old symbols, or else they wouldn’t be changing them—so what are those disadvantages?
Depends on the function. One example is memcpy - the API specifies that the source and destination must not overlap, but the original implementation in glibc didn't care. When they wanted to optimize the implementation, they decided to introduce a new version of memcpy rather than breaking older binaries that inadvertently relied on the existing behavior, even though that was never guaranteed by the API. Old binaries keep getting the older but slower memcpy, new binaries get the optimized memcpy.
Other examples are preserving bug compatibility, standards being revised to require incompatible behavior, or adding safety features that require extra (hidden) arguments to be passed to functions automatically, which changes their ABI.
> x86-64 is typically limited to about 5 instructions
Intel's Lion Cove decodes 8 instructions per cycle and can retire 12. Intel's Skymont can even do 9 instructions per cycle with its triple decoder, and that's without a µop cache.
AMD's Zen 5, on the other hand, has a 6K-entry µop cache that lets it sustain 8 instructions per cycle, but still only a 4-wide decoder for each hyper-thread.
And yet AMD is still ahead of Intel in both performance and performance-per-watt. So maybe this whole instruction decode thing is not as important as people are saying.
> littered with too many special symbols and very verbose
This seems kinda self-contradicting. Special symbols are there to make the syntax terse, not verbose. Perhaps your issue is not with how things are written, but that there's a lot of information for something that seems simpler. In other words a lot of semantic complexity, rather than an issue with syntax.
I think it's also that Rust needs you to be very explicit about things that are very incidental to the intent of your code. In a sense that's true of C, but in C worrying about those things isn't embedded in the syntax, it's in lines of code that are readable (but can also go unwritten or be written wrong). In the GCed languages Rust actually competes with (outside the kernel) — think more like C# or Kotlin, less like Python — you do not have to manage that incidental complexity, which makes Rust look 'janky'.
> Apple demonstrated to the world that it can be extremely fast and sip power.
Kinda. Apple Silicon sips power when it isn't being used, but under a heavy gaming load it's pretty comparable to AMD. People report 2 hours of battery life playing Cyberpunk on Macs, which matches the Steam Deck. It's only in lighter games that Apple pulls ahead significantly, and that really has nothing to do with it being ARM.
Not for Linux they're not. IIRC audio and camera don't work, and the firmware is non-redistributable, so you need to mooch it off a Windows partition. On top of that, the performance on Linux hasn't been great either.