Hacker News | joecarpenter's comments

Great analysis!

The Go binary was also compromised, but there's almost no information about what the compromised binary did. Did it drop a Python script? Did it do direct scanning?

If the Trivy Docker image was used, what's the scope? (It does not include Python.)


Reverse engineering with LLMs is very underrated for some reason.

I'm working on a hobby project: reverse-engineering a 30-year-old game. Passing a single function's disassembly + Ghidra decompiler output + external symbol definitions, RAG-style, to an agent with a good system prompt does wonders, even with inexpensive models such as Gemini 3 Flash.

Then chain the decompilation agent's outputs to a coding agent, and the produced code can be semi-automatically integrated into the codebase. Rinse and repeat.

The decompiled code is sometimes wrong, but given cleaned-up disassembly with external symbols annotated and correct function signatures, the output looks more or less like it was written by a human rather than mechanically decompiled.
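The prompt-assembly step can be sketched in a few lines. This is a hypothetical illustration of the workflow described above: the function names, system prompt wording, and sample inputs are all made up, not taken from a real pipeline.

```python
def build_prompt(disassembly: str, decomp_output: str, symbols: dict[str, str]) -> str:
    """Combine one function's disassembly, Ghidra decompiler output, and
    externally known symbol signatures into a single agent prompt."""
    symbol_block = "\n".join(f"{name}: {sig}" for name, sig in sorted(symbols.items()))
    return (
        "You are reverse-engineering one function from a 1990s game.\n"
        "Known external symbols:\n"
        f"{symbol_block}\n\n"
        "Disassembly:\n"
        f"{disassembly}\n\n"
        "Ghidra decompiler output (may contain errors):\n"
        f"{decomp_output}\n\n"
        "Rewrite this as clean, human-style C with correct signatures."
    )

# Hypothetical usage: one function's context gathered RAG-style.
prompt = build_prompt(
    disassembly="push ebp\nmov ebp, esp\n...",
    decomp_output="undefined4 FUN_00401000(void) { ... }",
    symbols={"rle_unpack": "int rle_unpack(const uint8_t *src, uint8_t *dst)"},
)
```

The point is only that each request is scoped to a single function plus its resolved externals, which keeps even small models on track.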


I've found that Gemini models often produce pseudocode that seems good at first glance but is typically wrong or incomplete, especially for larger or more complex functions. It might produce pseudocode for 70% of the function, then silently drop the last 30%. Or it might elide the inside of switch blocks or if statements, only including a comment explaining what should happen.

By contrast, Claude Opus generally outputs actual code that includes more of the original functionality. Even Qwen3-30B-A3B performs better than Gemini, in my experience.

It's honestly really frustrating. The huge context size makes the Gemini family seem like a boon for this task, since P-Code is very verbose and eats into the headroom needed for the model's response.


In my case I'm decompiling into C, and it does a pretty good job at the translation. There were situations where it missed an important implementation detail, though. For example, there is an RLE decompressor, and Gemini generated plausible but slightly incorrect code. Gemini 3 Pro was not able to find the bug and produced code similar to Gemini 3 Flash's.

The bug was one-shotted by GPT 5.2.
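RLE is a good example of why this happens: the formats hide control-byte and off-by-one details that are easy for a model to gloss over. A minimal sketch of a PackBits-style decoder (an assumed, illustrative format — NOT the game's actual one), where the classic traps are the count-minus-one encoding and the 0x80 no-op byte:

```python
def rle_decode(data: bytes) -> bytes:
    """Decode PackBits-style RLE. Control byte c means:
    0..127   -> copy the next c+1 bytes literally
    129..255 -> repeat the next byte (257 - c) times
    128      -> no-op (a detail decoders often get wrong)
    """
    out = bytearray()
    i = 0
    while i < len(data):
        c = data[i]
        i += 1
        if c < 128:
            out += data[i:i + c + 1]  # the "+1" is the classic off-by-one trap
            i += c + 1
        elif c > 128:
            out += bytes([data[i]]) * (257 - c)
            i += 1
        # c == 128: skip entirely
    return bytes(out)
```

Drop the `+1`, or treat 0x80 as a run, and the output is plausible-looking but subtly wrong — exactly the failure mode described above.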


Isn't it the opposite? From the link: "Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct."

Gemini 3 Flash scored +13 in the test: more correct answers than incorrect.
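One plausible reading of that scale (an assumption for illustration, not the benchmark's published formula) is score = (correct − incorrect) / total × 100:

```python
def omniscience_style_score(correct: int, incorrect: int, total: int) -> float:
    # Assumed formula for a -100..100 scale where 0 means correct == incorrect.
    # Multiply before dividing to keep the arithmetic exact for these examples.
    return (correct - incorrect) * 100 / total

# Balanced answers land at 0; a surplus of correct answers goes positive.
balanced = omniscience_style_score(40, 40, 100)   # 0.0
surplus = omniscience_style_score(50, 37, 100)    # 13.0
```

Under that reading, +13 simply means correct answers outnumber incorrect ones by 13 points of the total.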


Nope, lower is better; compared to recent OpenAI models, this is bad. I'm looking at the AA-Omniscience Hallucination Rate.


One thing I don't understand is why Gemini Pro seems much cheaper than Gemini Flash in the scatter graph.


Well, there's also mine: https://github.com/VectorOps/know, with some details on what it does and how: https://vectorops.dev/blog/post-1/


Ah, the memories! Around the fall of the Soviet Union, there was an IBM-compatible clone called Poisk. It was not 100% compatible with the IBM PC, had 128 KB of built-in RAM (extensible to 640 KB with a card), had CGA graphics with composite output only, no floppy interface without an add-on, etc. But it was cheap, like really cheap, and only needed a TV and a tape recorder to get going. I'd say Poisk was the #2 home PC after the gazillion inexpensive ZX Spectrum clones.

The article mentions that the tape interface was rarely used; that was definitely not the case in the (ex-)USSR.

Anyway, having spent so much time with a Poisk and its cassette interface after the ZX Spectrum, I can still distinguish PC from ZX tapes just by listening to them; they have slightly different tonality.


Which is really weird, because the code is reverse-engineered and I'm 99% sure the author used the Hex-Rays Decompiler (judging by the variable names, etc.).


Typography is horrible in this one.


They're being used side by side and running in separate processes.

For Campus Bubble, the Tornado app is a relatively simple push broker. Whenever something happens, the Flask app pushes a notification to the broker, and the broker pushes it out in fan-out fashion to subscribers.

I even wrote another blog post a while ago about a possible approach: http://mrjoes.github.io/2013/06/21/python-realtime.html
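The fan-out shape is simple enough to sketch framework-free. In the real setup the broker is the Tornado process and subscribers are long-lived connections; the names below are illustrative, not from the actual codebase:

```python
from collections import defaultdict
from typing import Callable

class PushBroker:
    """Minimal fan-out broker: one inbound publish, N outbound deliveries."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, channel: str, callback: Callable[[str], None]) -> None:
        # In the real setup this would register a WebSocket/long-poll client.
        self._subscribers[channel].append(callback)

    def publish(self, channel: str, message: str) -> None:
        # Fan-out: a single push from the web app reaches every subscriber.
        for cb in self._subscribers[channel]:
            cb(message)

broker = PushBroker()
received: list[str] = []
broker.subscribe("news", received.append)
broker.subscribe("news", received.append)   # two subscribers, one channel
broker.publish("news", "something happened")  # the "Flask" side pushes once
```

The split works because the blocking web app only ever does one cheap publish call, while the broker holds all the long-lived connections.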


Wonderful article. Btw, you might find the Pushpin project interesting. It runs as a proxy so you don't need to split your app in two.


Can't speak much on Flask, but I've seen Tornado used similarly with Django.


There were requests to create read-only views, but they're domain-specific and can be done with the existing machinery.


I'd have thought that viewing the details of a model instance (attributes and some relationships) would be a very generic use case? But I probably just don't understand how to do this easily with flask-admin as it is. Is there a good place in the documentation/examples to start looking?


I think the general idea is to merge 'update' and 'view' into the same thing, so that you "view" by looking at the "update" form without changing anything. I've seen this in several places, though I'm not partial to the idea. I like having a separation between 'I am making changes' and 'I am viewing', just so you don't end up with accidental updates.
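The merge idea can be sketched framework-free: render the same form for both modes and just lock every input in "view" mode. All names here are hypothetical, for illustration only; this is not flask-admin's API.

```python
def render_field(name: str, value: str, readonly: bool) -> str:
    """Render one input; read-only mode disables it to prevent edits."""
    ro = " readonly disabled" if readonly else ""
    return f'<input name="{name}" value="{value}"{ro}>'

def render_form(fields: dict[str, str], readonly: bool) -> str:
    # One template serves both modes; "view" is just the locked variant.
    return "\n".join(render_field(n, v, readonly) for n, v in fields.items())

edit_html = render_form({"title": "Hello"}, readonly=False)
view_html = render_form({"title": "Hello"}, readonly=True)
```

The trade-off is exactly the one above: you save a template, but the "view" page still looks like something you could accidentally submit.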


Right, yeah, that explains things. I don't mind this when it's just for myself, but actual users definitely get scared when they see the editable fields and the submit buttons...


Uh, that's the typical rhetoric about the default Bootstrap interface: as if anyone who uses the default Bootstrap skin is just lazy and didn't really care, and is thus unprofessional.

Personally, I think the Bootstrap interface is good enough, but its widespread adoption led to the perception I mentioned above.


I understand, and that's why I'm a bit annoyed: it's a very developer-centric attitude that the sentence above propagates.

Personally, no end-user has ever asked me to make their software/web admin look unique. A portfolio site? Sure. But an admin page? Nothing beyond their official color/logo in the header.

Uniqueness is just not a concern with "normal" people, and with good reason as I mentioned above. Still, congrats on the nice work on the site/post.


The fact that Bootstrap is widespread, and therefore perceived by some as unprofessional, does not make a user of it unprofessional, nor does it make the framework unprofessional.

Are Django or Flask unprofessional since they are widespread?


Unfortunately, you can't compare development frameworks to UI frameworks. No one knows you're a dog on the Internet, but anyone without technical knowledge can see a site built with the default Bootstrap skin and say: hey, I've seen this before.

Bootstrap is a good, clean framework; it is _great_ for developers. But just because it is so widespread and highly visible, there's some prejudice around it.

Nonetheless, I'm using Bootstrap in my projects and am happy with it.


I think this prejudice only exists within this community, where if you aren't a special snowflake you're not cool. I'd guess 99% of web traffic has no clue what Bootstrap is, what it looks like by default, or how it differs when it is skinned. I would guess, more than anything, that they just think: this is what a website looks like nowadays.

I'm happy to use it too, with very minimal tweaking, because it looks good out of the box (way more pleasant than browser defaults), it works well, and it's fairly well maintained.


While that is true, I think most users will be oblivious to the commonalities; as developers, we are familiar not only with Bootstrap but also with the myriad developer-built sites that are more likely to use it.

