OsrsNeedsf2P's comments

Haven't investigated that much so there might be some big catch I'm missing, but I really wish people would stop making custom OSes and just help out an existing project

I wish everyone thought exactly like me too; it'd be great getting exactly what I wanted, and basically the world would become a utopia overnight.

In the worst cases you get people who just want to say they made an OS by slapping their name on an existing distro after changing the default background image or making a couple of tweaks. That's a lot easier than contributing something meaningful to an existing project.

More charitably, it's faster and a lot less complicated to modify a distro to your liking than to try to get a major distribution to cater to your whims and philosophy. Making your version available to others gives them the option of easily installing your new features/modifications as-is, or using whatever parts they want. That's the joy of FOSS. Even when the changes are modest, if they are shared and even a little useful, then someone else can incorporate them or build off of them.


What do they need help with?

I work in the space. This article would not have been published if the team responsible were on the chopping block.

How is progressive discovery not more expensive due to the increased number of steps?

I assume because the discovery is branching. If an agent using the GitHub CLI needs to make an issue, it can check the help message for the issue sub-command and go from there; it doesn't need to know anything about pull requests, or pipelines, or account configuration, etc., so it doesn't query those subcommands.

Compare this to an MCP, where my understanding is that the entire API usage is injected into the context.
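The branching can be sketched with a toy help tree. The `gh`-style commands and help strings below are illustrative stand-ins, not the real CLI's surface: the point is that only the help messages along one path enter the context.

```python
# Sketch of branching discovery: the agent expands only the help text
# for the subcommand it actually needs, so unrelated branches (pr,
# workflow, config) never enter the context.
HELP_TREE = {
    "gh": {
        "_help": "usage: gh <issue|pr|workflow|config>",
        "issue": {
            "_help": "usage: gh issue <create|list|close>",
            "create": {"_help": "usage: gh issue create --title T --body B"},
        },
        "pr": {"_help": "usage: gh pr <create|merge|review> ..."},
        "workflow": {"_help": "usage: gh workflow <run|list> ..."},
        "config": {"_help": "usage: gh config <get|set> ..."},
    }
}

def discover(path):
    """Walk one branch of the help tree, collecting only the help
    messages along the way (what the agent would put in context)."""
    node, seen = HELP_TREE, []
    for part in path:
        node = node[part]
        seen.append(node["_help"])
    return seen

context = discover(["gh", "issue", "create"])
# Three help strings enter the context; pr/workflow/config stay out.
```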


> How is progressive discovery not more expensive due to the increased number of steps?

Why not run the discovery (whether MCP or CLI) in a subagent that returns only the relevant tools? I mean, discovery can be done on a local model, right?


In short: JSON. Plain prose or markdown is way more token-efficient than JSON. I think that responding in JSON was always a mistake in the spec; it should have been free-form text (which could then be JSON if required).
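A rough way to see the overhead, using character counts as a stand-in for tokens. The field names here are just illustrative spec-style wrapping, not the actual protocol schema:

```python
import json

# The same tool result expressed as spec-style JSON vs free-form markdown.
rows = [{"name": "auth.py", "lines": 120}, {"name": "db.py", "lines": 340}]

as_json = json.dumps({"type": "tool_result", "content": [
    {"type": "text", "text": f"{r['name']}: {r['lines']} lines"} for r in rows
]})

as_markdown = "\n".join(f"- {r['name']}: {r['lines']} lines" for r in rows)

print(len(as_json), len(as_markdown))  # the JSON wrapper dominates
```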

It depends on what your "currency" is: inference cost vs. models getting dumber/slower with a fuller context.

Our app is a desktop integration and last year we added a local API that could be hit to read and interact with the UI. This unlocked the same thing the author is talking about - the LLM can do real QA - but it's an example of how it can be done even in non-web environments.

Edit: I even have a skill called release-test that does manual QA for every bug we've ever had reported. It takes about 10 hours to run but I execute it inside a VM overnight so I don't care.


I got a Windows MCP setup running in a sandbox, so it can look at screenshots, see the UIA tree, and click things either by coordinate or by UIA element.

I let it run overnight against a Windows app I was working on, and that got it from mostly not working to mostly working.

the loop was

1. Look at the code and specs to come up with tests
2. Predict the result
3. Try it
4. Compare the prediction against the result
5. File a bug report, or call it a success

and then switch to bug fixing, and go back around again. Worked really well in Gemini CLI with the giant context window.
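The loop above can be sketched as follows; `predict` and `execute` are hypothetical stand-ins for the LLM prediction and the UIA-driven run, and the fake app below exists only to demonstrate the flow:

```python
# Minimal sketch of the overnight predict/try/compare loop.
def run_loop(tests, predict, execute):
    bugs, passes = [], []
    for test in tests:
        expected = predict(test)   # step 2: predict the result
        actual = execute(test)     # step 3: try it
        if actual == expected:     # step 4: compare prediction vs result
            passes.append(test)
        else:                      # step 5: file a bug report
            bugs.append((test, expected, actual))
    return passes, bugs

# Toy demo: a fake "double the input" app that is buggy for negatives.
def fake_execute(t):
    x = t["input"]
    return x * 2 if x >= 0 else x  # bug: negatives not doubled

def fake_predict(t):
    return t["input"] * 2

tests = [{"input": 3}, {"input": -1}]
passes, bugs = run_loop(tests, fake_predict, fake_execute)
# One pass, one bug filed for the negative case.
```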


What to do after getting scammed: get scammed by this guy next

Wow, something I can answer!

I've been working on Ziva[0], an AI plugin for Godot that's explicitly for game development. Game development is hard and a different paradigm than regular coding, so there may be a learning curve, but we're working to flatten that out.

Happy to answer any questions!

[0] https://ziva.sh


Recently found out about this project when my friend made a RuneScape web client that works in the browser; it reminded me of the low-friction gaming back in 2006 ^^

Still working on bringing AI agents to Godot. We recently hit 1k MRR.

Product link: https://ziva.sh/


What's your MRR?

Good questions, happy to answer all of them:

Revenue: Comes from client projects — I run a small tech agency (SaaS builds, AI integrations, brand websites). The agents handle lead gen and content so I can focus on delivery. MindThread (Threads automation SaaS) is also generating subscription revenue.

Multiple accounts: MindThread is a legitimate SaaS product using Meta's official Threads API. Clients manage their own accounts through it. The agents post to my company's own accounts, not fake ones.

Bot policies: We use official APIs only (Meta Threads API, Discord API). No scraping, no unofficial endpoints. Content is AI-assisted but goes through quality gates (generate → self-review → rewrite if score < 7/10).

Actual product/service: ultralab.tw — 7 product lines including MindThread (Threads automation), UltraProbe (AI security scanner), SaaS development, and AI integration services.


The comment you're replying to only asked a single question, which you actually failed to answer.

My monthly side income now covers 20x my Claude subscription plus a pork rice bowl with an extra egg — and it only took three months, without touching my main job hours. Honestly that's kind of insane, which is why I had to share this.

tl;dr Hetzner has the best performance for the price. But Hetzner also just bumped prices by like 30%.
