
I think they are saying what you want them to say. In the past they got a bunch of AI slop and now they are getting a lot of legit bug reports. The implication being that the AI got better at finding (and writing reports of) real bugs.

> This is flat-earther level

Ok, so do you have a counterexample?


Here's mine. It's not big or important (at all!) but I think it is a perfectly valid app that might be useful to some people. It's entirely vibe-coded including code, art and sounds. Only the idea was mine.

https://apps.apple.com/us/app/kaien/id6759458971


This is horrible. Children of that age should not be glued to a computer screen. If handing your kids over to the care of a bot is your idea of parenthood, I'm sure glad I'm not your kid.

The exact point of the app is to be as un-sticky as possible. I deliberately used calm colours, slow transitions, and a simple gameplay routine with a limited shelf life, after seeing how other apps for kids were designed like fruit machines.

If you simply think that children should never be exposed to screens, then I can sympathise with that point of view, but I think it's better to introduce them in a thoughtful and limited way.

Your last sentence is unnecessarily overblown and inflammatory, and adds nothing useful to the discussion.


Yes and no [0]. There's no chance I'm the only one. And no, it's not a chatbot or automation tool or anything else that's "selling shovels", it's an end product. I've had multiple people reach out to me organically with how much it has helped them, reviews are very good and so on.

But really, you don't even need this counterexample because it's trivial. It's like a C fanatic saying "No useful software can be made using Python", and then asking for a counterexample. Take all useful small applications created. Here's one, Maccy [1]. There's zero reason every line of its code has to have been written by hand rather than prompted. Maybe some of it in fact was. It's a nifty little app, does its job well.

[0] https://news.ycombinator.com/item?id=47477440

[1] https://maccy.app/


Are you saying Maccy was vibe-coded or that it was written in Python? I don't think either is true. I've definitely been using it (you're right, it's great!) since before vibe-coding was a thing. And looking at the GitHub it seems to be 100% Swift.

> It's like a C fanatic saying "No useful software can be made using Python", and then asking for a counterexample

At which point you could provide them many, many counterexamples?

I like AI coding assistants as much as the next red-blooded SWE and find them incredibly useful and a genuine productivity booster, but I think the claims of 10/100/1000x productivity boosts are unsupported by evidence AFAICT. And I certainly know I'm not 10x as productive nor do any of my teammates who have embraced AI seem to be 10x more productive.


Can you give me any new (i.e. released in 2026) app that does something useful? There just aren't many good app ideas left, after all...


That has some strong "Everything that can be invented has been invented" vibes.

If that were true then all these AIs would be useless. Who needs them to build something that already exists?


"Everything that can be invented has been invented"

Ah, my favorite entirely made-up quote.

Apocryphally attributed to the U.S. Patent Office Commissioner in 1899.


I wrote my own note-sharing app using free Claude. It's self-hosted, allows for non-simultaneous editing by multiple users (uses locks), has no user passwords, and shows all notes in a list. Very simple app, overall. It's one Go file and one HTML file. I like it; it's exactly what I want for sharing notes like shopping and todo lists with my partner.

The AI wouldn't have been able to do it by itself, but I wouldn't have been arsed to do it alone either.


Current, a brand-new hand-coded RSS reader for i(Pad)OS/macOS, is one of the best apps I've ever used. Seriously. I gladly purchased it and use it every day now (with Feedbin as the backend).


This will obviously depend on which implementation you use. Using the Rust arrow-rs crate you at least get panics when you overflow max buffer sizes. But one of my enduring annoyances with Arrow is that it uses signed integer types for buffer offsets and the like. I understand why it has to be that way, since the format is intended to be cross-language and not all languages have unsigned integer types, but it does lead to lots of very weird bugs when you are working in a native language and casting back and forth between signed and unsigned types. I spent a very frustrating day tracking down this one in particular: https://github.com/apache/datafusion/issues/15967
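To make the failure mode concrete, here's a minimal Rust sketch of the general shape of the bug (illustrative only, not the actual arrow-rs or DataFusion code): offset arithmetic on signed values that unexpectedly goes negative and is then cast with `as` doesn't fail, it silently wraps into an enormous unsigned value.

    fn main() {
        // Arrow-style offsets are signed (i32). Suppose offset arithmetic
        // unexpectedly goes negative somewhere:
        let start: i32 = 10;
        let end: i32 = 7; // a corrupted or mis-sliced offset
        let len = end - start; // -3

        // `as usize` never fails, it wraps:
        let len_usize = len as usize;
        println!("{}", len_usize); // 18446744073709551613 on 64-bit

        // A checked conversion would surface the bug immediately instead:
        assert!(usize::try_from(len).is_err());
    }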


There's an old saying among cyclists, attributed to Greg LeMond: "It doesn't get easier, you just go faster."


> Keep in mind, our parents (age specific) and/or their parents parents paid for news and didn't question that setup

I don't think this is quite right. Our parents paid for the newspaper, but the newspaper was basically the internet of its time. That is where they got sports scores, movie/TV listings, etc. The fact that this was bundled with hard news was mostly a side effect.


> To have better performance in benchmarks

Yes, exactly.


> Except a consumer can discard an unprocessable record?

It's not the unprocessable records that are the problem; it's the records that are very slow to process (for whatever reason).


> If it were written with async it would likely have enough other baggage that it wouldn't fit or otherwise wouldn't work

I'm unclear what this means. What is the other baggage in this context?


In context (embedded programming, which in retrospect is still too big of a field for this comment to make sense by itself; what I meant was embedded programming on devices with very limited RAM or other such significant restrictions), "baggage" is the fact that you don't have many options when converting async high-level code into low-level machine code. The two normal things people write into their languages/compilers/whatever (the first being much more popular, and there do exist more than just these two options) are:

1. Your async/await syntax desugars to a state machine. The set of possible states might only be runtime-known (JS, Python), or it might be comptime-known (Rust, old-Zig, arguably new-Zig if you squint a bit). The concrete value representing the current state of that state machine is only runtime-known, and you have some sort of driver (often called an "event loop", but there are other abstractions) managing state transitions.

2. You restrict the capabilities of async/await to just those which you're able to statically (compile-time) analyze, and you require the driver (the "event loop") to be compile-time known so that you're able to desugar what looks like an async program to the programmer into a completely static, synchronous program.

On sufficiently resource-constrained devices, both of those are unworkable.

In the case of (1) (by far the most common approach, and the thing I had in mind when arguing that async has potential issues for embedded programming), you waste RAM/ROM on a more complicated program involving state machines, you waste RAM/ROM on the driver code, you waste RAM on the runtime-known states in those state machines, and you waste RAM on the runtime-known boxing of events you intend to run later. The same program (especially in an embedded context where programs tend to be simpler) can easily be written by a skilled developer in a way which avoids that overhead, but reaching for async/await from the start can prevent you from hitting your goals for the project. It's that RAM/ROM/CPU overhead that I'm talking about in the word "baggage."
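As a rough illustration of what (1) means in practice (a hypothetical sketch, not what any particular compiler actually emits), an `async fn` effectively becomes an enum-based state machine plus whatever driver has to keep polling it:

    // What you write (sketch):
    // async fn read_sensor() -> u16 {
    //     let raw = wait_for_adc().await;
    //     raw * 2
    // }

    // Roughly what it desugars to: an enum carrying the live state,
    // plus a poll function that some driver/executor must keep calling.
    enum Poll<T> {
        Ready(T),
        Pending,
    }

    enum ReadSensor {
        WaitingForAdc,
        Done,
    }

    impl ReadSensor {
        fn poll(&mut self, adc_ready: Option<u16>) -> Poll<u16> {
            match self {
                ReadSensor::WaitingForAdc => match adc_ready {
                    Some(raw) => {
                        *self = ReadSensor::Done;
                        Poll::Ready(raw * 2)
                    }
                    None => Poll::Pending,
                },
                ReadSensor::Done => panic!("polled after completion"),
            }
        }
    }

The hand-written embedded equivalent is often just a flag checked in the main loop: no enum, no driver, and no per-task state that has to live in RAM.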

In the case of (2), there are a couple potential flaws. One is just that not all reasonable programs can be represented that way (it's the same flaw with pure, non-unsafe Rust and with attempts to create languages which are known to terminate), so the technique might literally not work for your project. A second is that the compiler's interpretation of the particular control flow and jumps you want to execute will often differ from the high-level plan you had in mind, potentially creating more physical bytecode or other issues. Details matter in constrained environments.


That makes sense. I don't know anything about embedded programming really, but I thought that it fundamentally requires async (in the conceptual sense), so you have to structure your program as an event loop no matter what. Wasn't the alleged goal of Rust async to be zero-cost, in the sense that the transformation of a future ends up being roughly the state machine you would write by hand anyway? Of course async itself requires a runtime, and I get why something like Tokio would be a non-starter in embedded environments, but you can still hand-roll the core runtime and structure the rest of the code with async/await, right? Or are you saying that the generated code, even without the runtime, is too heavy for an embedded environment?


> fundamentally requires async (in the conceptual sense)

Sometimes, kind of. For some counter-examples, consider a security camera or a thermostat. In the former you run in a hot loop because it's more efficient when you constantly have stuff to do, and in the latter you run in a hot loop (details apply for power-efficiency reasons, but none that are substantially improved by async) since the timing constraints are loose enough that you get no benefit from async. One might argue that those are still "conceptually" async, but I think that misses the mark. For the camera, for example, a mental model of "process all the frames, maybe pausing for a bit if you must" is going to give you much better results when modeling that domain and figuring out how to add in other features (between those two choices of code models, the async one buys you less "optionality" and is more likely to hamstring your business).

> zero-cost

IMO this is a big misnomer, especially when applied to abstractions like async. I'll defer async till a later bullet point, looking instead at simpler abstractions.

The "big" observation is that optimization is hard, especially as information gets stripped away. Doing it perfectly seemingly has an exponential cost (active research problem to reduce those bounds, or even to reduce constant factors). Doing it approximately isn't "zero"-cost.

With perfect optimization being impossible for all intents and purposes, you're left with a world where equivalent units of code don't have the same generated instructions. I.e., the initial flavor of your code biases the generated instructions one way or another. One way of writing high-performance code then is to choose initial representations which are closer to what the optimizer will want to work with (basically, you're doing some of the optimization yourself and relying on the compiler to not screw it up too much -- which it mostly won't (there be dragons here, but as an approximate rule of thumb) because it can't search too far from the initial state you present to it).

Another framing of that is that if you start with one of many possible representations of the code you want to write, it has a low probability of giving the compiler the information it needs to actually optimize it.

Let's look at iterators for a second. The thing that's being eliminated with "zero-cost" iterators is logical instructions. Suppose you're applying a set of maps to an initial sequence. A purely runtime solution (if "greedy" and not using any sort of builder pattern) like you would normally see in JS or Python would have explicit "end of data" checks for every single map you're applying, increasing the running time with all the extra operations needed to support the iterator API for each of those maps.

Contrast that with Rust's implementation (or similar in many other languages, including Zig -- "zero-cost" iterators are a fun thing that a lot of programmers like to write even when not provided natively by the language). Rust recognizes at compile-time that applying a set of maps to a sequence can be re-written as `for x in input: f0(f1(f2(...(x))))`. The `for x in input` thing is the only part which actually handles bounds-checking/termination-checking/etc. From there all the maps are inlined and just create optimal assembly. The overhead from iteration is removed, so the abstraction of iteration is zero-cost.
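To put some (purely illustrative) Rust on that, the chained version and the hand-fused loop below end up compiling to essentially the same thing, which is all "zero-cost" is promising here:

    fn f0(x: u32) -> u32 { x + 1 }
    fn f1(x: u32) -> u32 { x * 3 }
    fn f2(x: u32) -> u32 { x ^ 0xFF }

    fn chained(input: &[u32]) -> Vec<u32> {
        // Three logical "maps", but no per-map end-of-data checks survive.
        input.iter().map(|&x| f0(x)).map(f1).map(f2).collect()
    }

    fn hand_fused(input: &[u32]) -> Vec<u32> {
        // The shape the compiler effectively reduces the above to.
        let mut out = Vec::with_capacity(input.len());
        for &x in input {
            out.push(f2(f1(f0(x))));
        }
        out
    }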

Except it's not, at least not for a definition of "zero-cost" the programmer likely cares about (I have similar qualms about safe Rust being "free of data-races", but those are more esoteric and less likely to come up in your normal day-to-day). It's almost always strictly better than nested, dynamic "end of iterator" checks, but it's not actually zero-cost.

Taking as an example something that came up somewhat recently for me, math over fields like GF(2^16) can be ... interesting. It's not that complicated, but it takes a reasonable number of instructions (and/or memory accesses). I understand that's not an every-day concern for most people, but the result will illustrate a more general point which does apply. Your CPU's resources (execution units, instruction cache, branch-prediction cache (at several hierarchical layers), etc) are bounded.

Details vary, but when iterating over an array of data and applying a bunch of functions, even when none of that is vectorizable, you very often don't want codegen with that shape. You instead want to pop a few elements, apply the first function to those elements, apply the second function to those results, etc, and then proceed with the next batch once you've finished the first.

The problems you're avoiding include data dependencies (it's common for throughput for an instruction to be 1-2/cycle but for latency to be 2-4 cycles, meaning that if one instruction depends on another's output it'll have to wait 2-4 cycles when it could in theory otherwise process that data in 0.5-1 cycles) and bursting your pipeline depth (your CPU can automagically resolve those data dependencies if you don't have too many instructions per loop iteration, but writing out the code explicitly guarantees that the CPU will _always_ be happy).
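A sketch of that shape difference (illustrative only; the functions are made up and the real win depends on the actual instruction mix): instead of finishing each element before touching the next, run a small batch through each stage in turn, so the independent work is explicit rather than something the out-of-order core has to rediscover inside its window.

    fn stage_a(x: u64) -> u64 { x.wrapping_mul(0x9E3779B97F4A7C15) }
    fn stage_b(x: u64) -> u64 { x ^ (x >> 31) }

    // Element-at-a-time: each element walks the whole dependency chain
    // stage_a -> stage_b before the next one starts (in source order).
    fn per_element(data: &mut [u64]) {
        for x in data.iter_mut() {
            *x = stage_b(stage_a(*x));
        }
    }

    // Batched: run stage_a across four elements, then stage_b across the
    // same four. The four chains are visibly independent, so their
    // latencies can overlap no matter how deep the per-element chain is.
    fn batched(data: &mut [u64]) {
        let mut chunks = data.chunks_exact_mut(4);
        for chunk in &mut chunks {
            for x in chunk.iter_mut() {
                *x = stage_a(*x);
            }
            for x in chunk.iter_mut() {
                *x = stage_b(*x);
            }
        }
        for x in chunks.into_remainder() {
            *x = stage_b(stage_a(*x));
        }
    }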

BUT, your compiler often won't do that sort of analysis and fix your code's shortcomings. If that approximate layout of instructions doesn't exist in your code explicitly then the optimizer won't solve for it. The difference in performance is absolutely massive when those scenarios crop up (often 4-8x). The "zero-cost" iterator API won't yield that better codegen, since it has an output that the optimizer can't effectively turn into that better solution (yet -- polyhedral models solve some similar problems, and that might be something that gets incorporated in modern optimizers eventually -- but it doesn't exist yet, it's very hard, and it's illustrative of the idea that optimizers can't solve all your woes; when that one is fixed there will still exist plenty more).

> zero-cost async

Another pitfall of "zero-cost" is that all it promises is that the generated code is the same as what you would have written by hand. We saw in the iterator model that "would have written" doesn't quite align between the programmer and the compiler, but it's more obvious in their async abstraction. Internally, Rust models async with state machines. More importantly, those all have runtime-known states.

You asked about hand-rolling the runtime to avoid Tokio in an embedded environment. That's a good start, but it's not enough (it _might_ be; "embedded" nowadays includes machines faster than some desktops from the 90s; but let's assume we're working in one of the more resource-constrained subsets of "embedded" programming). The problem is that the abstraction the compiler assumes we're going to need is much more complicated than an optimal solution given the requirements we actually have. Moreover, the compiler doesn't know those requirements and almost certainly couldn't codegen its assumptions into our optimal solution even if it had them. If you use Rust async/await, with very few exceptions, you're going to end up with both a nontrivial runtime (might be very light, but still nontrivial in an embedded sense), and also a huge amount of bloat on all your async definitions (along with runtime bloat (RAM+CPU) as you navigate that unnecessary abstraction layer).

The compiler definitely can't strip away the runtime completely, at least for nontrivial programs. For sufficiently simple programs it does a pretty good job (you still might not be able to afford supporting the explicit state machines it leaves behind, but whatever, most machines aren't _that_ small), but past a certain complexity level we're back to the idea of zero-cost abstractions not being real because of optimization impossibility: once you use most of the features you might want to use with async/await, you find that the compiler can't fully desugar even very simple programs, and fully dynamic async (by definition) obviously can't exist without a runtime.

So, answering your question a bit more directly, my answer is that you usually can't fix the issue by hand-rolling the core runtime, since it won't be abstracted away (resulting in high RAM/ROM/CPU costs), and even in code simple and carefully constructed enough that it is abstracted away, you're still left with full runtime state machines, which are themselves overkill for most simple async problems. The space and time those take up can be prohibitive.
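For context on what "hand-rolling the core runtime" looks like at its absolute smallest (a sketch that assumes no real wakers, timers, or scheduling, which is exactly the stuff that stops it from staying this small), here's roughly the minimal executor in Rust:

    use core::future::Future;
    use core::pin::Pin;
    use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

    // A waker that does nothing: clone/wake/drop are all no-ops.
    fn noop_raw_waker() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { noop_raw_waker() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(core::ptr::null(), &VTABLE)
    }

    // Drive a single future to completion by polling in a spin loop.
    fn block_on<F: Future>(mut fut: F) -> F::Output {
        // SAFETY: `fut` is a local we never move after pinning.
        let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
        let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
        let mut cx = Context::from_waker(&waker);
        loop {
            match fut.as_mut().poll(&mut cx) {
                Poll::Ready(out) => return out,
                Poll::Pending => core::hint::spin_loop(),
            }
        }
    }

Even with the driver this thin, the state machines the compiler generates for your `async fn`s (and the RAM to hold them) are still there.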


Right, because this would deadlock. But it seems like Zig would have the same issue. If I am running something in an evented IO system and then I try to do some blocking IO inside it, I will get a deadlock. The idea that you can write libraries that are agnostic to the asynchronous runtime seems fanciful to me beyond trivial examples.


Honestly I don't see how that is different from how it works in Rust. Synchronous code is a proper subset of asynchronous code. If you have a streaming API then you can have an implementation that works in a synchronous way with no overhead if you want. For example, if you sometimes already have the whole buffer in memory, you can just use it directly and the stream will work exactly like the loop you would write in the sync version.
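A sketch of what I mean (using the futures-core Stream trait; the type and names are just illustrative): a stream backed by data that's already in memory is Ready on every poll, so driving it never actually suspends and degenerates into the plain loop from the sync version.

    use std::collections::VecDeque;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    use futures_core::Stream;

    // A "stream" over records that are already fully buffered in memory.
    struct InMemory<T> {
        items: VecDeque<T>,
    }

    impl<T: Unpin> Stream for InMemory<T> {
        type Item = T;

        fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<T>> {
            // Never Pending: the consuming task never yields, so this runs
            // exactly like iterating over the buffer synchronously.
            Poll::Ready(self.get_mut().items.pop_front())
        }
    }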


serde is a pull parser and it would take significant modification to convert it into an incremental push parser without blocking a thread.
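Roughly the shape difference being pointed at (hypothetical trait and event names, not serde's actual API):

    // Illustrative event type; not serde's.
    enum Event {
        StartObject,
        Key(String),
        Value(String),
        EndObject,
    }

    // Pull style (serde's shape): the parser drives the reads, so if the
    // next bytes haven't arrived yet, the only option is to block the thread.
    trait PullParser {
        fn next_event(&mut self) -> Option<Event>;
    }

    // Push style: the caller feeds bytes as they arrive and the parser emits
    // whatever events it can, keeping the partial state internally.
    trait PushParser {
        fn feed(&mut self, bytes: &[u8], on_event: &mut dyn FnMut(Event));
    }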

