And as a bonus: GPT is slow. I'm doing a lot of RE (IDA Pro + MCP), and even when 5.4 gives slightly better guesses (rarely, but it happens), it takes 2x-4x longer. So it's just easier to iterate with Opus.
I've been messing around with Claude, Codex, and Kimi, even for reverse engineering at https://decomp.dev/ - it's a ton of fun.
It's great because matching bytes gives a scoring function that's easy for the models to understand and make progress on.
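To make the "scoring function" point concrete, here's a toy byte-matching score (my sketch, not decomp.dev's actual metric):

```python
def match_score(candidate: bytes, target: bytes) -> float:
    """Fraction of byte positions where the recompiled output matches
    the original binary; a length mismatch counts against the score."""
    n = max(len(candidate), len(target))
    if n == 0:
        return 1.0  # both empty: trivially matched
    matches = sum(a == b for a, b in zip(candidate, target))
    return matches / n
```

The model can then be asked to maximize this number: any edit that flips more bytes toward the target is measurable progress, which is exactly what makes the task legible to an LLM.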
This. People drastically underestimate how much more useful a lightning-fast, slightly dumb model is compared to a super smart but mega slow one. Sure, you may need to bust out the beef now and then, but for the overwhelming majority of work the fast, stupid model is a better fit.
There's a lot of 'single serve' software being written now with AI: people using Claude Code to make stuff that solves problems they have. It's wild watching people who don't know how to code use it this way. Even if the solutions look awkward by traditional software engineering standards, that doesn't matter to someone who just wants their problem solved, so long as it works. I'm a software engineer by trade and don't know shit about ML, but I wanted a nice tool for RLHF / DPO on Z-Image, so I'm building one with Claude. So far it can use ComfyUI to generate the image pairs, lets you pick A vs. B, and can then start a training run with layer offloading enabled so it fits in 16GB VRAM. I haven't finished a training run yet, but steps are increasing and loss is changing, so... I dunno. I see lots of software being created that wasn't before.
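For anyone curious what a DPO tool actually optimizes: a minimal sketch of the standard DPO objective on one A-vs-B preference pair (pure Python; not the commenter's code, and the log-probabilities and beta value here are placeholder inputs):

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for a single preference pair.

    Compares how much more the policy prefers the chosen sample over
    the rejected one, relative to a frozen reference model.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # -log(sigmoid(beta * margin)): small when the policy's preference
    # for the chosen sample has grown relative to the reference.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

Picking "A vs. B" in the UI just labels which sample is `chosen`; the training loop then nudges the model so this loss goes down across pairs.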
These are all local, though - if ideas were all that mattered, we'd see widely available ones, too.
I am not seeing them. (I would love to be proven wrong, because "how well does this work for not-one-off software" is a really important question for me)
You can do it on Wasmer's workers; their latest WASM/Python approach is pretty solid (compatibility, performance). It's sad to say, but after four years of "beta" Python support on CF Workers, it's still ugly. I dunno who was responsible for such neglect, but even with the latest changes it's a total fiasco.
It's only a comparison of Django-related third-party packages (and SSR itself); it would be a bit strange to compare against a different language/stack and/or framework.
With the focus on LiveView, I think it's interesting to see how the runtime influences the results. Django and Phoenix have very different concurrency models.
Six years ago, when I was working with a Phoenix API, we were measuring responses in microseconds on local dev machines, and under 5 ms in production with zero optimization. In comparison, the identical Django app had a 50 ms floor.
If it's only about the Django ecosystem, fair enough. But if it's about pushing the limits of how fast you can server-side render Doom, then there are more possibilities to be tested :)
I've never met a single person who used Replit/Lovable/Bolt/v0; everyone I know either has custom harnesses around top-tier labs' models or uses their dev tools (cc, codex, etc). It feels like Manus & Co are a very niche thing.
It's okay till it's not. Everyone I know who had Celery in production was regularly looking for a replacement (custom or third-party). Too many moving pieces and nuances (config × logic × backend), too many unresolved problems deep in its core (we've seen some ghosts you can't debug), and too much codebase to understand or hack on. At some point we managed to stabilize it (a bunch of magic tricks and patches) and froze every related piece; it worked well under pressure (thanks, RabbitMQ).
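One concrete example of the config × logic × backend coupling (my illustration, not the commenter's setup; the broker URL and timeout are placeholders). With a Redis broker, two independently reasonable Celery settings interact badly:

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

# Ack only after the task body finishes, so a crashed worker
# doesn't silently lose the task.
app.conf.task_acks_late = True

# Redis-transport-specific: how long an unacked task stays invisible
# before the broker redelivers it to another worker.
app.conf.broker_transport_options = {"visibility_timeout": 3600}
```

The gotcha: any task that runs longer than `visibility_timeout` gets redelivered and executed a second time even though nothing crashed, so your task logic, your config, and your choice of backend all have to be reasoned about together.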
How is the WASM target looking nowadays? I tried it in 2023 and early 2024, and it was far from complete (JS interop, poor documentation scattered across GitHub issues, and so on). I still can't find a single trustworthy source of documentation on how to proceed. C# would look great at the edge (Cloudflare Workers).