> Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that GC/no-GC distinction is not very meaningful given how different the tradeoffs that different GC algorithms make are.
That's not the issue. The issue is calling anything with opt-in reference counting a GC language. You're just fudging definitions to get to your desired talking point. By that definition, C is a GC language too, since it can be equipped with a GC library like Boehm's.
> That it's not as easily quantifiable doesn't make it any less real.
It makes it more subjective and easier to bias. Rust has a clear purpose: to put a stop to memory-safety errors. What does "it's painful to use" even mean? Painful like Lisp compared to Haskell, or like C compared to Lisp?
> For example, it would be hard to distinguish between Java and Haskell.
It would be possible to objectively distinguish between Java and Haskell, as long as they aren't feature-by-feature compatible.
If you can make a program whose halting depends on that feature, you can prove you're in a language with that feature.
> If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust.
Yeah, because you're fighting a strawman. Having a safe language is a precondition, but it's not enough: I also want it to be as performant as C.
Second, even if your goal is to move to ATS, developing an ATS-like language isn't going to help. You need a critical mass of people to move there.
> Calling anything with opt-in reference counting a GC language
Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC". And it does. Saying that it's "opt in" when most Rust programs use it (albeit to a lesser extent than Java or Go programs, provided we don't consider Rust's special case of a single reference to be GC) is misleading.
> Rust has a clear purpose. To put a stop to memory safety errors.
Yes, but 1. other languages do it, too, so clearly "stopping memory errors" isn't enough, 2. Rust does it in a way that requires much more use of unsafe escape hatches than other languages, so it clearly recognises the need for some compromise, and 3. Rust's safety very much comes at a cost.
So its purpose may be clear, but it is also very clear that it makes tradeoffs and compromises, which implies that other tradeoffs and compromises may be reasonable, too.
But anyway, having a very precise goal makes some things quantifiable, but I don't think anyone thinks that's what makes a language better than another. C and JS also have very clear purposes, but does that make them better than, say, Python?
> Having a safe language is a precondition but not enough. I want it to be as performant as C as well... You need a mass of people to move there.
So clearly you have a few prerequisites, not just memory safety, and you recognise the need for some pragmatic compromises. Can you accept that your prerequisites and compromises might not be universal and there may be others that are equally reasonable, all things considered?
I am a proponent of software correctness and formal methods (you can check out my old blog: https://pron.github.io), and I've learnt a lot over my decades in industry about the complexities of software correctness. When I choose a low-level language to switch to away from C++, my prerequisites are: a simple language with no implicitness (I want to see every operation on the page), because I think it makes code reviews more effective (the effectiveness of code reviews has been shown empirically, though not its relationship to language design), and fast compilation, which lets me write more tests and run them more often.
I'm not saying that my requirements are universally superior to yours. I, too, place a high emphasis on correctness (which extends far beyond mere memory safety); it's just that my conclusions, and perhaps my personal preferences, lead me to prefer a different path from yours. I don't think anyone has any objective data to support the claim that my preferred path to correctness is superior to yours or vice versa.
I can say, however, that in the 1970s, proponents of deductive proofs warned of an impending "software crisis" and believed that proofs are the only way to avoid it (as proofs are "quantifiably" exhaustive). Twenty years later, one of them, Tony Hoare, famously admitted he was wrong, and that less easily quantifiable approaches turned out to be more effective than expected (and more effective than deductive proofs, at least of complicated properties). So the idea that an approach is superior just because it's absolute/"precise" is not generally true.
Of course, we must be careful not to extrapolate and generalise in either direction, but my point is that software correctness is a very complicated subject, and nobody knows what the "best" path is, or even if there is one such best path.
So I certainly expect a Rust program to have fewer memory-safety bugs than a Zig program (though probably more than a Java program), but that's not what we care about. We want the program to have the fewest dangerous bugs overall. After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection. Do I expect a Rust program to have fewer serious bugs than a Zig program? No, and maybe the opposite (or maybe the same), due to my preferred prerequisites listed above. The problem with saying that we should all prefer the more "absolute" approach (even though it could harm less easily quantifiable aspects) because it's at least absolute in whatever it does guarantee, is that this belief has already been shown to not be generally true.
(As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one. The main tradeoff is RAM footprint. In fact, the cornerstone of tracing algorithms is that they can reduce the cost of memory management to be arbitrarily low given a large-enough heap. In practice, of course, different algorithms make much more complicated pragmatic tradeoffs. Basic refcounting collectors primarily optimise for footprint.)
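To make that tradeoff concrete, here's a back-of-envelope sketch in Rust (my own illustration with made-up numbers, not something from this thread): for an idealised copying collector, whose work per cycle is proportional only to the live set, the CPU cost per second is roughly the work per collection times the number of collections per second, so growing the heap drives the cost toward zero.

```rust
// Idealised model of a copying/tracing collector's CPU cost.
// Assumption (hypothetical numbers below): work per collection is
// proportional to the live set, and a collection is needed each time
// the headroom (heap - live set) fills up with new allocations.
fn gc_cpu_cost(live_set: f64, heap: f64, alloc_rate: f64) -> f64 {
    let headroom = heap - live_set;           // free space between collections
    let collections_per_sec = alloc_rate / headroom;
    live_set * collections_per_sec            // cost ∝ live * alloc / headroom
}

fn main() {
    let live = 1.0;  // GB of live data (assumed)
    let alloc = 0.5; // GB/s allocation rate (assumed)
    for heap in [2.0, 4.0, 8.0, 16.0] {
        println!(
            "heap {:>4} GB -> relative GC cost {:.3}",
            heap,
            gc_cpu_cost(live, heap, alloc)
        );
    }
}
```

The cost falls monotonically as the heap grows, which is the "arbitrarily low given a large-enough heap" property; real collectors, of course, make much messier tradeoffs.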
> Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC".
Ok, semantics aside, my point still stands: C also has a GC. See Boehm GC. And before you complain that Rc is part of std, I'll point out that std is optional and is on track to become a freestanding library.
> Can you accept that your prerequisites and compromises might not be universal
Not the way hardware is moving, which is to say more emphasis on more cores and with no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.
> As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one
Yeah, but it has a negative impact on memory. As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.
> After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection.
Sure, but those that see fewer UAF errors have more time to deal with logic errors. Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
> C also has a GC. See Boehm GC. And before you complain RC is part of std I will point that std is optional and is on track to become a freestanding library.
Come on. The majority of Rust programs use the GC. I don't understand why it's important to you to debate this obvious point. Rust has a GC and most Rust programs use it (albeit to a much lesser extent than Java/Python/Go etc.). I don't understand why it's a big deal.
You want to add the caveat that some Rust programs don't use the GC and it's even possible to not use the standard library at all? Fine.
> Not the way hardware is moving, which is to say more emphasis on more cores and with no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.
This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades and is continuing to do so.
> As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.
What you say about RAM prices is true, but it still doesn't change the economics of RAM/CPU sufficiently. There is a direct correspondence between how much extra RAM a tracing collector needs and the amount of available CPU (through the allocation rate). Regardless of how memory management is done (even manually), reducing footprint requires using more CPU, so the question isn't "is RAM expensive?" but "what is the relative cost of RAM and CPU when I can exchange one for the other?" The RAM/CPU ratios available in virtually all on-prem or cloud offerings are favourable to tracing algorithms.
If you're interested in the subject, here's an interesting keynote from the last International Symposium on Memory Management (ISMM): https://youtu.be/mLNFVNXbw7I
> Sure, but those that see fewer UAF errors have more time to deal with logic errors.
I think that's a valid argument, but so is mine. If we knew the best path to software correctness, we'd all be doing it.
> Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
I understand that's something you believe, but it's not supported empirically, and as someone who's been deep in the software correctness and formal verification world for many, many years, I can tell you that it's clear we don't know what the "right" approach is (or even that there is one right approach) and that very little is obvious. Things that we thought were obvious turned out to be wrong.
It's certainly reasonable to believe that the Rust approach leads to more correctness than the Zig approach, and some people believe that; it's equally reasonable to believe that the Zig approach leads to more correctness than the Rust approach, and some people believe that. It's also reasonable to believe that different approaches are better for correctness in different circumstances. We just don't know, and there are reasonable justifications in both directions. So until we know, different people will make different choices, based on their own good reasons, and maybe at some point in the future we'll have some empirical data that gives us something more grounded in fact.
> Come on. The majority of Rust programs use the GC.
This part is false. You make a ridiculous statement and expect everyone to just nod along.
I could see this being true iff you say all Rust UI programs use "RC".
> This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades
Without ever-increasing memory/CPU, you're going to have to squeeze more performance out of the stone (i.e. out of more or less unchanging memory and CPUs).
GC will be a mostly unacceptable overhead in numerous instances. I'm not saying it will be fully gone, but I don't think the current crop of C-likes is accidental either.
> I understand that's something you believe, but it's not supported empirically
> Stable and high-quality changes differentiate Rust. DORA uses rollback rate for evaluating change stability. Rust's rollback rate is very low and continues to decrease, even as its adoption in Android surpasses C++.
So for similar patches, you see fewer errors in new code. And the overall error rate still favors Rust.
> Without ever increasing memory/CPU, you're going to have to squeeze more performance out of the stone (more or less unchanging memory/CPUs).
The memory overhead of a moving collector is related only to the allocation rate. If the available memory/CPU ratio is sufficient to cover that overhead, which in turn helps save more costly CPU, it doesn't matter whether the relative cost of RAM falls (and it hasn't even fallen; you're simply speculating that one day it could).
> I'm not saying it will be fully gone
That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years with no reversal in trend. This is like saying that more people will find the cost of typing a text message unacceptable so we'll see a rise in voicemail messages, but of course text messaging will not be fully gone.
Even embedded software is increasingly written in languages that rely heavily on GC. Now, I don't know the future market forces, and maybe we won't be using any programming languages at all but LLMs will be outputting machine code directly, but I find it strange to predict with such certainty that the trend we've been seeing for so long will reverse in such full force. But ok, who knows. I can't prove that the future you're predicting is not possible.
> It's supported by Google's usage of Rust.
There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing, and therefore in the total reduction of bugs, and you said that maybe a complex language like Rust, with lots of implicitness but also temporal memory safety could perhaps have a positive effect on other bugs, too, in comparison. What you linked to is something about Rust vs C and C++. Zig is at least as different from either one as it is from Rust.
> And the overall error rate still favors Rust.
Compared to C++. What does it have to do with anything we were talking about?
> That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years
I wish I knew what you mean by programs relying primarily on GC. Does that include Rust?
Regardless, extrapolating current PL trends that far is a fool's errand. I'm not looking at current social/market trends but at the limits of physics and hardware.
> There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing
No, let me remind you:
> > [snip] Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
> I understand that's something you believe, but it's not supported empirically
We were talking about how not having to worry about UB allows for easier defect catching.
> Compared to C++.
Overall, I think using C++ with all of its modern features should be in the same ballpark of safety and speed as Zig, with Zig having better compile times. Even if it isn't a 1-to-1 comparison with Zig, we have other examples like Bun vs Deno, where Bun incurs more segfaults (per issue).
Also, I don't see how much of Zig's design could really assist code reviews and testing.
No. Most memory management in Rust is not through its GC, even though most Rust programs do use the GC to some extent.
> I'm not looking at current social/market trends but limits of physics and hardware.
The laws of physics absolutely do not predict that the relative cost of CPU to RAM will decrease substantially. Unforeseen economic events may always happen, but they are unforeseen. It's always possible that current trends would reverse, but that's a different matter from assuming they are likely to reverse.
> Overall, I think using C++ with all of its modern features should be in the ballpark of safe/fast as Zig, with Zig having a better compile time.
I don't know how reasonable it is to think that. If Rust's value comes from eliminating spatial and temporal memory safety issues, surely there's value in eliminating the more dangerous of the two, which Zig does as well as Rust (but C++ doesn't).
But even if you think that's reasonable for some reason, I think it's at least as reasonable to think the opposite, given that in almost 30 years of programming in C++, by far my biggest issue with the language has been its complexity and implicitness, and Zig fixes both. Given how radically different Zig is from C++, my preference for Zig stems precisely from its solving what is, to me, the biggest issue with C++.
> Also don't see how much of Zig design could really assist code reviews and testing.
Because it's both explicit and simple. There are no hidden operations performed by a routine that do not appear in that routine's code. In C++ (or Rust), to know whether there's some hidden call to a destructor/trait, you have to examine all the types involved (to make matters worse, some of them may be inferred).
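A minimal Rust sketch of what I mean by hidden operations (my own toy example, not from the thread): nothing in the text of `scope_demo` calls a function at the closing brace, yet the compiler inserts a call to `Drop::drop` there, and you only discover that by inspecting the type's impls.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A type whose destructor records that it ran (hypothetical name `Noisy`).
struct Noisy {
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Noisy {
    fn drop(&mut self) {
        self.log.borrow_mut().push("implicit drop ran");
    }
}

fn scope_demo() -> Rc<RefCell<Vec<&'static str>>> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _guard = Noisy { log: Rc::clone(&log) };
        log.borrow_mut().push("last visible statement");
        // no call appears here, yet Drop::drop runs at this brace
    }
    log
}

fn main() {
    let log = scope_demo();
    // the drop happened after the last visible statement, invisibly
    assert_eq!(
        *log.borrow(),
        vec!["last visible statement", "implicit drop ran"]
    );
}
```

In Zig, by contrast, the equivalent cleanup would have to appear as an explicit `defer` statement in the routine's own text.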
> No. Most memory management in Rust is not through it's GC, even though most Rust programs do use the GC to some extent.
Most? You still haven't proved that. So most Rust programs mostly use GC, yet it's not a GC language; those are some very mind-contorting definitions.
> The laws of physics absolutely do not predict that the relative cost of CPU to RAM will decrease substantially.
The laws of physics do absolutely tell you that more computation means more heat. Approaching the size of atoms is another no-go. That's why chip densities have stalled and are being kept on life support via chip stacking and gate redesigns; the "2 nm process" is mostly a marketing term (https://en.wikipedia.org/wiki/2_nm_process), as the actual gate is around 45x20 nm.
Not to mention that when you work at small scales with the way atoms behave (i.e. their random nature), small irregularities mean low yields.
These limits put a soft cap on any exponential curve, and a hard cap in the form of a literal singularity.
> I don't know how reasonable it is to think that.
Why not? With modern collections (std::vector, std::span, and std::string) and modern pointers (std::unique_ptr, std::shared_ptr) you get decent memory safety.
> Because it's both explicit and simple.
Being a simple language doesn't guarantee a lack of complexity in implementations (see Brainfuck). The question is how much language complexity buys implementation simplicity. C++, of course, has neither, because it started with a backwards-compatibility goal (one it did abandon at some point).
By Zig's explicitness, you mean everything is public? I've seen that stuff backfire spectacularly, because you don't get any encapsulation, which means maximum coupling.
> So most Rust programs mostly use GC, yet it's not a GC language; those are some very mind-contorting definitions.
I don't think these definitions are very meaningful. In memory management literature, any technique that reclaims heap memory after a heap object is not reachable is called "garbage collection". Call it a "GC language" or not, it collects heap memory using techniques in the GC literature after objects become unreachable using reference counting with a special construct for the single reference case.
There isn't too much that you can learn just by saying "GC", because the memory/CPU tradeoffs can be more different between two GCs than between one GC and some memory management style in C. So the debate on terminology is less substantial and more about how different people colloquially refer to things with different terms. But Rc/Arc are a very common, established, and traditional GC implementation (and a simple one if you ignore the large and rather elaborate implementation of malloc/free we have these days in the runtime, which is necessary for decent performance on multicore machines).
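A minimal sketch of that last point (my own illustration): `Rc<T>` is textbook reference-counting garbage collection, in that the heap object is reclaimed at exactly the moment it becomes unreachable, when the last strong reference is dropped.

```rust
use std::rc::Rc;

// Returns the strong-reference count after cloning a handle and again
// after dropping the clone (function name is mine, for illustration).
fn refcount_demo() -> (usize, usize) {
    let a = Rc::new(String::from("heap object"));
    let b = Rc::clone(&a);                 // second handle; count goes to 2
    let after_clone = Rc::strong_count(&a);
    drop(b);                               // count back to 1
    let after_drop = Rc::strong_count(&a);
    (after_clone, after_drop)
    // when `a` goes out of scope here, the count hits 0 and the String
    // is reclaimed: collection at the moment of unreachability
}

fn main() {
    assert_eq!(refcount_demo(), (2, 1));
}
```

Like any naive refcounting collector, `Rc` cannot reclaim cycles on its own, which is why std also provides `Weak`; that limitation, too, is a classic property of this family of GC algorithms.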
> They put a soft cap on any exponential curve. And hard cap by placing a literal singularity.
How does any of this predict that processing will become cheaper relative to memory? Note that the trend over the past 4 decades has been the opposite.
> Being a simple language doesn't guarantee lack of complexity in implementation (see Brainfuck).
I never claimed that every simple language is easy to understand, but I do find Zig simple and easy to understand.
> By Zig's explicitness, you mean everything is public?
What I meant was that there are no calls/operations performed in a subroutine that aren't visible in the text of the subroutine. This is very important to me in low-level programming. Of course, different people from different domains may have different preferences. Much of my low-level programming was in safety-critical hard realtime, and Zig just appeals more to how I like to think about control over the hardware and about correctness. It's not universal, and I'm sure Rust appeals more to other low-level programmers.