> In order to use this, you need to both install the extension in your browser and install a small program (separate from the browser) on your computer. This extra program is called the "native stub".
Idk. Sure it has security advantages, but that sounds like too much of a hassle for 90% of users.
Because of government pressure. It was delisted by lots of exchanges purely based on government fear of privacy and independence, not any technical or demand reasons.
The CEXes that do list it are essentially a trap. As soon as you do something with XMR, they start freezing your account and demanding all sorts of KYC/AML. That was my experience after playing with it: I pulled out a couple hundred dollars and did nothing with it other than put it back on an exchange.
> interestingly, it's one of the least-used least-hyped options. It's as though we didn't actually want privacy in our money system.
There is lots of interest from individuals. But governments all around the world have done their best to suppress it. They indeed do not like privacy and independence. They are the ones who sued and pressured exchanges into delisting Monero.
> if you're a developer who wants to exploit the multiplicative factor of a truly flexible and extensible programming language with state of the art features from the cutting-edge of PL research, then maybe give Lean a whirl!
Does not sound that appealing to me. Sounds like little consistency and having to learn a new language for every project.
> I host meetups for indie founders, and several attendees earn their living through solo businesses. When I go to conferences like Microconf, I meet lots more.
I'm not claiming that all indie founders are successful. I'm disputing the claim that almost all indie founders are struggling by saying I regularly meet indie founders who are successful. Not like driving exotic cars successful, but making a good living, in some cases with income on par with mid-to-senior FAANG dev jobs.
You don't need to trust anyone. GPT 5.4 xhigh is available and you can test it for $20, to verify it is actually able to find complex bugs in old codebases. Do the work instead of denying AI can do certain things. It's a matter of an afternoon. Or, trust the people that did this work. See my YouTube video where I find tons of Redis bugs with GPT 5.4.
I did not claim or deny anything. You cited the model card; I just pointed out that it is not a reliable source. If you have better sources, like your YT video, you should cite those instead.
You are claiming something: that the model card is not reliable, therefore it's as useful as nothing. Sowing doubt without a possible solution adds little value to the conversation. Moreover, your rebuttal is unsubstantiated.
Guys, think about all the security vulnerabilities you're aware of; now, think about how many of those you know how to technically reproduce. Now imagine that you actually don't know how to reproduce most of them and will never actually be able to judge the result.
Well, just because these are all AI people doesn't mean they've verified enough of the output of these models to actually back up the significant security implications they're advertising.
And benchmarks can easily be gamed by overfitting. Yet here we are with the top HN comment on the HN Mythos thread outlining its benchmarking performance gains.
The whole discussion started out as an attempt to disprove/verify Anthropic's (model card) claims.
He also transfers the logic of their claims to the actual real world. You can say that model cards are marketing garbage, but then you have to prove that experienced programmers are not significantly better at security.
> You have to prove that experienced programmers are not significantly better at security.
That has not been my experience. It's true that they are "better at security" in the sense that they know to avoid common security pitfalls like unparameterized SQL, but essentially none of them have the ability to apply their knowledge to identify vulnerabilities in arbitrary systems.
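For context, the unparameterized-SQL pitfall mentioned above looks like this (a minimal sketch using Python's built-in sqlite3; the table, column, and input are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the input rewrite the query.
rows = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()
# The injected OR clause matches every row, so the admin role leaks.

# Safe: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
# No user is literally named "' OR '1'='1", so nothing matches.
```

Knowing to always use the second form is exactly the kind of pitfall-avoidance experienced devs pick up; it's a different skill from auditing an unfamiliar system for where the first form is hiding.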
An expert-level human doesn't have to be an expert in every programming category. A webdev wouldn't spot a use-after-free; a systems engineer wouldn't know about CSRF. That is, assuming neither researches security beyond their field. Requiring a programmer to apply their knowledge to an arbitrary system is asking too much. On the other hand, an LLM can be expert-level in every programming field, able to spot and combine vulnerabilities creatively. That is all pretty hard, and I don't think a security expert with vast knowledge would say "that's easy".
My point is that more experienced programmers are better at security on average, not that they are security experts.
I would think pwn2own competitions would signal the opposite. I'm consistently amazed at how a unique combination of exploits can be chained into a larger exploit, often in ways that most wouldn't even consider. I think it takes a level of knowledge, experience, creativity, and paranoia to be really good with security issues all around as a person.
> essentially none of them have the ability to apply their knowledge to identify vulnerabilities in arbitrary systems.
I've found it to be the opposite. Many of them do have the ability to apply their knowledge in that fashion. They're just either not incentivised to do so, or incentivised to not do so.