I have been thinking about this myself. I'm working on some custom dictionaries for words I discover from my corpus of movie subtitles. Which I'm sure is not a new idea, but it's fun, because it gives me a dictionary that only contains the words that people "actually use", and with "real" example sentences. (words in quotes because movie dialogue isn't 100% as real as I'd like.)
I'm sure this is not a remotely new idea, but I'm having fun with it. I also like that I can see how common every form of every word is. I was surprised to learn that almost none of the most common words are nouns. And in my internal tools I can filter by movies released a certain date to track changes, which is neat.
if your movie collection is big enough that might be really useful for language learning. Create your own frequency lists and common phrases.
I would be curious how it stacks up against the written word.
I mean all words were added to a dictionary because someone was using them. It's just that they may not be used by people in your particular region or time.
rust would be pretty unusable without references. affine lambda calculus isn’t even Turing complete. however, you’re right that a borrow checker is unnecessary, as uniqueness types (the technical term for types that guarantee single ownership) are implemented in clean and idris without a borrow checker. the borrow checker mainly exists because it dramatically increases the number of valid programs.
Supporting single-ownership in a language doesn't mean you can't have opt-in copyability and/or multiple-ownership. This is how Rust already works, and is independent of the borrow checker.
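As a small sketch of that point: in today's Rust, both opt-in copyability (`Copy`) and opt-in shared ownership (`Rc`) sit on top of the default move semantics, and neither involves the borrow checker (illustrative types, not from any particular codebase):

```rust
use std::rc::Rc;

// Move-only by default: ownership transfers on assignment.
struct Token(u32);

// Opt-in copyability: deriving Copy makes assignment duplicate the value.
#[derive(Clone, Copy)]
struct Pixel(u8, u8, u8);

fn main() {
    let t = Token(1);
    let t2 = t; // `t` is moved; using `t` afterwards would not compile
    let p = Pixel(1, 2, 3);
    let p2 = p; // `p` is copied; both `p` and `p2` remain usable
    let _ = (t2.0, p.0, p2.0);

    // Opt-in shared (multiple) ownership via reference counting.
    let shared = Rc::new(String::from("hello"));
    let also_shared = Rc::clone(&shared);
    assert_eq!(Rc::strong_count(&also_shared), 2);
}
```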
If we consider a Rust-like language without the borrow checker, it's obviously still Turing-complete. For functions that take references as parameters, instead you would simply pass ownership of the value back to the caller as part of the return value. And for structs that hold references, you would instead have them hold reference-counted handles. The former case is merely less convenient, and the latter case is merely less efficient.
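A minimal illustration of the ownership-threading pattern described above (hypothetical function names): instead of borrowing, the function takes the value and hands it back alongside its result.

```rust
// With references, a function can borrow a value:
fn len_borrow(s: &String) -> usize {
    s.len()
}

// Without references, the same API threads ownership through the
// return value instead: take the String, give it back with the result.
fn len_owned(s: String) -> (String, usize) {
    let n = s.len();
    (s, n)
}

fn main() {
    let s = String::from("hello");
    let (s, n) = len_owned(s); // the caller gets `s` back
    assert_eq!(n, 5);
    assert_eq!(len_borrow(&s), 5);
}
```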
Well, it's not quite that easy because someone still has to test the agent's output and make sure it works as expected, which it often doesn't. In many cases, they still need to read the code and make sure that it does what it's supposed to do. Or they may need to spend time coming up with an effective prompt, which can be harder than it sounds for complicated projects where models will fail if you ask them to implement a feature without giving them detailed guidance on how to do so.
Definitely, but that's kind of my point: the maintainers are still going to be way better at all of that than some random contributor who just wants a feature, vibe codes it, and barely tests it. The maintainers already know the codebase, they understand the implications of changes, and they can write much better plans for the agent to follow, which they can verify against. Having a great plan written down that you can verify against drastically lowers the risk of LLM-generated code.
You can do all the steps I mentioned as a random contributor. I've done it before. But I agree that donations are better than just prompting claude "implement this feature, make no mistakes" and hoping it one-shots it. Honestly, even carefully thought-out feature requests are much more valuable than that. At least if the maintainer vibe-codes it they don't have to worry that you deliberately introduced a security vulnerability or back door.
Yeah. You cannot achieve native performance with web apps, but most tasks are simple enough that wasm is plenty fast. Whether a frame takes 7ms or 1ms to generate, the user can't tell the difference.
I think cloud-first design is natural because webapps have nowhere good to store state. On Safari, which is the only browser that matters for many web developers, everything can be deleted at any time. So if you don't want to have a horrible user experience, you have to force your users to make an account and sync their stuff to the cloud. Then, the most natural thing to do is to just have the user's frontend update when the backend updates (think old-school fully-SSR'd apps). You can do much better than that with optimistic updates but it adds a lot of complexity. The gold standard is to go fully local-first, but to really do that right requires CRDTs in most cases, which are their own rabbit hole. (That's the approach I take in my apps because I'm a perfectionist, but I get why most people wouldn't think it's worth it)
With the files API, apps could actually replicate the microsoft word experience of drafting a file and saving it to your desktop and praying that your hard drive doesn't fail, but despite offering great benefits in terms of self-custody of data it was never a great user experience for most people.
> With the files API, apps could actually replicate the microsoft word experience of drafting a file and saving it to your desktop and praying that your hard drive doesn't fail,
Even without the files API, web apps can use local storage to duplicate that experience (and some do, mostly free, extremely casual games), with the added risk of your data being lost because your disk became too full or some other event caused the local storage to be cleared.
I once ran out of disk space while Chrome was running and, despite me clearing the space again shortly after, the damage was already done and Chrome had already decided to wipe all my local storage and cookies. It didn't keep it in memory to save again once there was space, it just deleted it all permanently.
I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python. It just does not match my experience at all. I've been a professional rust developer for about three years. Every time I look at python code, it's doing something insane where the function argument definition basically looks like line noise with args and kwargs, with no types, so it's impossible to guess what the parameters will be for any given function. Every python developer I know makes heavy use of the repl just to figure out what methods they can call on some return value of some underdocumented method of a library they're using. The first time I read pandas code, I saw something along the lines of df[df["age"] < 3] and thought I was having a stroke. Yet python has a reputation for being easy to learn and use. We have a python developer on our team and it probably took me about a day to onboard him to rust and get him able to make changes to our (fairly complicated) Rust codebase.
Don't get me wrong, rust has plenty of "weird" features too, for example higher rank trait bounds have a ridiculous syntax and are going to be hard for most people to understand. But, almost no one will ever have to use a higher rank trait bound. I encounter such things much more rarely in rust than in almost any other mainstream language.
The language itself is not more complex to onboard. For Scala, also not. It feels great to have all these language features at one's disposal. The added complexity is in how expert code is written: the experts are empowered and productive, but their practices heighten the barrier to entry for newcomers. Note that they might also expertly write more accessible code to avoid the issue, and in that case I agree (though I can't compare with Python; I've never used it).
Hm, you claim that Rust and Scala are not more complex to onboard than Python... but then you say you never used Python? If that's the case, how do you know? Having used both, I do think Rust is harder to onboard, just because there is more syntax that you need to learn. And Rust is a lot more verbose. And that's before you are exposed to the borrow checker.
Well, the parent wrote "I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python." And you wrote "The language itself is not more complex to onboard." So... to contrast Rust with Scala, I think it's clearer to write "The language itself is not more complex to onboard _than Scala_."
To that, I completely agree! Scala is one of the most complex languages, similar to C++. In terms of complexity (roughly the number of features) / hardness to onboard, I would have the following list (hardest to easiest): C++, Scala, Rust, Zig, Swift, Nim, Kotlin, JavaScript, Go, Python.
I see the confusion. ChadNauseam mentioned Python in reply to another comment of mine, where I mentioned Gleam. In your hardest-to-easiest list, Gleam is perhaps even easier than Python. They literally advertise it as "the language you can learn in a day".
Thanks a lot! I wasn't aware of Gleam; it really does seem simple. I probably wouldn't say "learn in a day", and I'm not sure it's simpler than Python, but it's statically typed, and that necessarily adds some complexity.
> I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python.
Most people conflate "complexity" and "difficulty". Rust is a less complex language than Python (yes, it's true), but it's also much more difficult, because it requires you to do all the hard work up-front, while giving you enormously more runtime guarantees.
Doing the hard work up front is easier than doing it while debugging a non-trivial system. And there are boilerplate patterns in Rust that allow you to skip the hard work while doing throwaway exploratory programming, just like in "easier" languages. Except that then you can refactor the boilerplate away and end up with a proper high-quality system.
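As a sketch of one such boilerplate pattern (illustrative code, not from the parent comment): clone freely while exploring to sidestep ownership questions, then swap the clones for borrows once the design settles.

```rust
fn main() {
    let words = vec!["a".to_string(), "bb".to_string()];

    // Exploratory style: clone eagerly and ignore ownership entirely.
    let longest_clone: String = words
        .iter()
        .cloned()
        .max_by_key(|w| w.len())
        .unwrap();
    assert_eq!(longest_clone, "bb");

    // Refactored later: borrow instead of cloning; same result, no extra
    // allocation, and the compiler verifies the borrow is sound.
    let longest_ref: &String = words.iter().max_by_key(|w| w.len()).unwrap();
    assert_eq!(longest_ref, "bb");
}
```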
You didn't mention parametric polymorphism, which is incredibly useful and important to the language. I'm guessing you intentionally excluded async, but describing it as "not that useful" would just be wrong: there is a large class of programs that can be expressed very simply using async rust but would be very complicated to express in sync rust (assuming equivalent performance).
No, stackful coroutines require a runtime. Not going to work on embedded, which is where async rust shines the strongest.
If you don't care about embedded that is fine. But almost all systems in the world are embedded. "Normal" computers are the odd ones out. Every "normal" computer has several embedded systems in it (one or more of SSD controller, NIC, WiFi controller, cellular modem, embedded controller, etc). And then cars, appliances, cameras, routers, toys, etc have many more.
It is a use case that matters. To have secure and reliable embedded systems is important to humanity's future. We need to turn the trend of major security vulnerabilities and buggy software in general around. Rust is part of that story.
Stackful coroutines are brittle and don't compose as cleanly as stackless coroutines. As a default language primitive, the latter is almost always the robust choice. Most devs should be using stackless coroutines for async unless they can articulate a technical justification for introducing the issues that stackful coroutines bring with them.
I've implemented several stackful and stackless async engines from scratch. When I started out I had a naive bias toward stackful, but over time I have come to appreciate that stackless is the correct model, even if it seems more complicated to use.
That said, I don't know why everyone uses runtimes like tokio for async. If performance is your objective then not designing and writing your own scheduler misses the point.
I understand what that is but I just don’t care. I am guessing the vast majority of people using rust also don’t care. Justifying the decision to create this mess by saying it is for embedded makes no sense to me.
Also don’t understand why you would use rust for embedded instead of c
Embedded systems vastly outnumber classical computers. Every classical computer has several embedded systems in it. As do appliances, cars, etc. So yes, they are an incredibly important use case for securing our modern infrastructure.
"Tight control over memory use" sounds wrong considering every single allocation in rust is done through the global allocator. And pretty much everything in rust async is put into an Arc.
I don't understand what kind of use case they were optimizing for when they designed this system. Don't think they were optimizing only for embedded or similar applications where they don't use a runtime at all.
Using stackful coroutines, having a trait in std for runtimes, and passing that trait into async functions would, in my opinion, be much better than having the compiler transform entire functions and then layering more and more complexity on top to solve the complexities that this decision created.
> "Tight control over memory use" sounds wrong considering every single allocation in rust is done through the global allocator.
In the case of Rust's async design, the answer is that that simply isn't a problem when your design was intentionally chosen to not require allocation in the first place.
> And pretty much everything in rust async is put into an Arc.
IIRC that's more a tokio thing than a Rust async thing in general. Parts of the ecosystem that use a different runtime (e.g., IIRC embassy in embedded) don't face the same requirements.
I think it would be nice if there were less reliance on specific executors in general, though.
> Don't think they were optimizing only for embedded or similar applications where they don't use a runtime at all.
I would say less that the Rust devs were optimizing for such a use case and more that they didn't want to preclude such a use case.
> having a trait in std for runtimes and passing that trait around into async functions
Yes, the lack of some way to abstract over/otherwise avoid locking oneself into specific runtimes is a known pain point that seems to be progressing at a frustratingly slow rate.
I could have sworn that that was supposed to be one of the improvements to be worked on after the initial MVP landed in the 2018 edition, but I can't seem to find a supporting blog post, so I may be confusing this with one of the myriad other sharp edges Rust's async design has.
> > And pretty much everything in rust async is put into an Arc.
> IIRC that's more a tokio thing than a Rust async thing in general. Parts of the ecosystem that use a different runtime (e.g., IIRC embassy in embedded) don't face the same requirements.
Well, if you're implementing an async rust executor, the current async system gives you exactly 2 choices:
1) Implement the `Wake` trait, which requires `Arc` [1], or
2) Create your own `RawWaker` and `RawWakerVTable` instances, which are gobsmackingly unsafe, including `void*` pointers and DIY vtables [2]
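For concreteness, a minimal sketch of option 1: `Wake` is only implemented for `Arc<W>`, so even a do-nothing waker pays for an atomic refcount (assumes Rust 1.68+ for `std::pin::pin!`; `NoopWaker` is an illustrative name):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A waker that does nothing. A real executor's `wake` would re-enqueue
// the task; even this no-op version must live behind an `Arc`.
struct NoopWaker;

impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    // The safe route: `Waker: From<Arc<W: Wake>>` requires the Arc.
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);

    // Poll a trivial future to completion; the future itself lives on
    // the stack, with no heap allocation beyond the Arc.
    let mut fut = pin!(async { 42 });
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(42));
}
```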
Sure, but those are arguably more like implementation details as far as end users are concerned, aren't they? At least off the top of my head I'd imagine tokio would require Send + Sync for tasks due to its work-stealing architecture regardless of whether it uses Wake or RawWaker/RawWakerVTable internally.
I find it interesting that there's relatively recent discussion about adding LocalWaker back in [0] after it was removed [1]. Wonder what changed.
I don't think this would be considered capital gains if it's being paid to him. You typically pay income taxes on your income, if it's in the form of money given to you by your employer.
I'm having fun using it to make websites. Rust→WASM works really well. Definitely a very enjoyable way to make web apps. I've been trying to think how I can contribute to the ecosystem, seeing as I enjoy it so much. Rust gives you control over memory that is impossible to replicate in JavaScript, which allows much more performant code.
I don't understand why. People spend money on other things besides housing. Because people spend money on multiple things, it doesn't make much sense to say that our index of inflation should track just one thing. I mean, if the price of food and healthcare tripled, I think you would probably say that the inflation metrics should go up.
Ofc, focusing on just one thing is very convenient for people who want to tell a particular story. (inflation is so bad! look at housing! there's so much deflation! look at food and TVs!)
I think it's because housing is the biggest expenditure for my family. Like I said, you should build your own index, rather than using the CPI or other people's indexes. Similarly, life changes can increase expenditures too, e.g. having a child prompts a family to buy a house instead of staying in a condo.
For my family, housing is easily the primary expenditure: around 6,000 CAD monthly, while food plus vehicle amount to less than 2,500 CAD. For a similar family in the same area with no vehicle, I estimate that housing probably takes at least half of their expenditure.
Yes, that would be stupid, which is why it doesn't work that way. The basket is weighted according to how much people spend on each item. Eggs are not weighted the same as rent.
The CPI does have a problem with not updating the basket as frequently as it could, which means it doesn't catch substitution effects and tends to overstate inflation.
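As a toy illustration of the weighting (made-up numbers, not actual CPI weights): an item's price spike moves the index only in proportion to its expenditure share, so a 60% jump in eggs barely registers next to a 5% rent increase.

```rust
fn main() {
    // Hypothetical two-item-plus-remainder basket; weights are
    // expenditure shares. (Real CPI weights come from consumer
    // expenditure surveys.)
    let items = [
        // (name, weight, fractional price change)
        ("rent", 0.30, 0.05),             // 30% of spending, up 5%
        ("eggs", 0.01, 0.60),             // 1% of spending, up 60%
        ("everything else", 0.69, 0.02),  // up 2%
    ];

    // Headline inflation is the weighted sum of the price changes.
    let inflation: f64 = items.iter().map(|(_, w, dp)| w * dp).sum();

    // 0.30*0.05 + 0.01*0.60 + 0.69*0.02 = 0.0348, i.e. about 3.5%,
    // despite eggs rising 60%.
    assert!((inflation - 0.0348).abs() < 1e-9);
}
```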
They may be beautiful, but the fact remains that if you could produce and sell a Patek Philippe Nautilus for $200, no one would be interested in it. The same is not true for most other beautiful objects.
Well firstly, they don't charge $200 for them because they can't produce them for $200. But the point I'm making is he seems to be trying to say they aren't beautiful. He says he's describing this "dark" world or "strange" watches. I do actually think he probably thinks the watches look strange. I don't think he thinks they're beautiful, maybe he'll find a brand to fall in love with one day. I doubt it because he seems to have too much of himself invested in this. But the people buying them don't think they're strange, they think they're beautiful. I don't go out telling everyone that they shouldn't buy a Ferrari because my Honda Civic can do the same job.
You're confusing the price of something with how much it costs to make it. Prices are just a made up number. Hopefully, the amount someone will pay you for the watch you made is more than it costs you to make it, and you have a sustainable business, but the funny thing about capitalism is that is not at all guaranteed. If the company wants to juice sales, they'll have a limited time discount. Or how about when the company is bankrupt and out of business? Then there's a fire sale and the price of something is pennies on the dollar. So they could sell the watches for $200, or they could give them away for free, or they can charge $100k, or they could barter for them. It's all a matter of business.
I think you are missing his point: the items are desirable because of the brand. The stories, the movie stars, the songs, and so on.
They don’t possess a universal, objectively valuable beauty that motivates the desire. If they did, fakes would be equally desirable, and they are not.
I have a set of very expensive handmade Japanese irons (golf clubs). I assure you I did not buy them for social influence or clout. In fact, nobody ever really sees them except me. I bought them because of the craftsmanship and how truly beautiful they are. They make me smile.