
Using Opus 4.5 through VS Code/Copilot gives so much better results than anything else I’ve tried that I kept paying even when they briefly made it a 3x token rate.

I like the interaction flow better than with Gemini 3 or Codex, though I can’t quite quantify why. The amount of explanation/supporting material in Opus’s output feels just right to me.


Interesting. I treat VS Code Copilot as a junior-ish pair programmer, and get really good results for function implementations. Walking it through the plan in smaller steps, telling it up front that we’ll build up to the end state (e.g. “first let’s implement attribute x, then we’ll add filtering for x later”), and explicitly using planning modes and prompts all let me move much faster, keep a good understanding of how the code works, and produce much higher-quality work (tests, documentation, commit messages).

I feel like, if a prompt for a function implementation doesn’t produce something reasonable, then it should be broken down further.

I don’t know how others define “vibe-coding”, but this feels like a lower-level approach. The times I’ve tried automating more, letting the models run longer, I haven’t liked the results. I’m not interested in going more hands-free yet.


We have a low-tech version of something like this in South Australia: we pay the wholesale rate for electricity, which updates at 5 minute intervals. During the day when there’s oversupply of wind and solar, the rate is super low or even negative, which we take advantage of to charge an EV (and we’ll be adding a home battery soon).

The power company can integrate with car chargers and battery controllers to control all of this automatically, though we don’t bother - just check the app for the cheapest/greenest times and schedule the car to charge then.
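
For anyone curious, the automated version is basically a thermostat for price. A rough sketch of the logic (the price feed and charger objects here are made up for illustration, not our actual provider’s API):

    import requests  # assumes the retailer exposes a JSON spot-price feed

    PRICE_FEED = "https://example.com/api/spot-price"  # hypothetical endpoint
    CHARGE_BELOW = 0.05  # $/kWh; charge whenever the 5-minute price dips under this

    def current_price_per_kwh():
        # Wholesale price updates every 5 minutes and can go negative
        # when there's excess wind and solar on the grid.
        return requests.get(PRICE_FEED, timeout=10).json()["price_per_kwh"]

    def maybe_charge(charger):
        # `charger` stands in for whatever start/stop API your EVSE exposes.
        if current_price_per_kwh() <= CHARGE_BELOW:
            charger.start()
        else:
            charger.stop()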

It’s allowed us to switch to an EV without even really noticing any extra power cost for charging it.


We’re not talking about AI writing books about the systems, though. We’re talking about going from an undocumented codebase to a decently documented one, or one with 50% coverage going to 100%.

Those orgs that value high-quality documentation won’t have undocumented codebases to begin with.

And let’s face it, like writing code, writing docs does have a lot of repetitive, boring, boilerplate work, which I bet is exactly why it doesn’t get done. If an LLM is filling out your API schema docs, then you get to spend more time on the stuff that’s actually interesting.


A much better option is to use docstrings[0] and a tool like doxygen to extract an API reference. Domain explanations and architecture can be compiled later from design and feature docs.

A good example of the kind of result I mean is the Laravel documentation[1] and its associated API reference[2]. I don't believe AI can help with this.

[0]: https://en.wikipedia.org/wiki/Docstring

[1]: https://laravel.com/docs/12.x

[2]: https://api.laravel.com/docs/12.x/
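
To make the docstring point concrete, this is roughly the shape of comment those extractors consume (a made-up Python function, but doxygen, Sphinx and friends all work from something like this):

    def clamp(value: float, low: float, high: float) -> float:
        """Constrain `value` to the closed interval [low, high].

        Args:
            value: The number to constrain.
            low: Lower bound of the interval.
            high: Upper bound; must be >= `low`.

        Returns:
            `low` if value is below the interval, `high` if above,
            otherwise `value` unchanged.

        Raises:
            ValueError: If `low` is greater than `high`.
        """
        if low > high:
            raise ValueError("low must not exceed high")
        return max(low, min(value, high))

The extractor turns these into the browsable API reference, and the hand-written domain and architecture docs sit alongside it.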


My understanding of this idea is that once the universe reaches a state of maximum entropy (this is the “heat death” of the universe, where everything is a uniform, undifferentiated cloud of photons), time stops being meaningful because there can be no change from moment to moment. In a sense, time _is_ the change from low to high entropy - if you don’t have any entropy gradient, you can’t have any time either.


I've always rejected the idea that time is entropy change.

First, in many local processes entropy moves from high to low (e.g. life). Nobody says that time is moving backwards for living things. Entropy only increases if you consider the larger system the process is embedded in as well. So this idea that entropy is time is something that only applies to the entire universe?

It's true that we don't see eggs unbreaking, or broken coffee cups flying off the floor and reassembling. This increase in entropy seems to give an "arrow" of time, but to my mind this view (ironically) confuses cause with effect.

If you have any causal system (cause preceding effects) then you will always see this type of entropic increase, by simple statistics. There are just many, many more ways for things to be scrambled and high entropy than ordered and low entropy.

So yes, entropy does tend to increase over time, but that's an effect of being in a causal system, not the system itself. At least, that's my view.
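
To put a toy number on "many, many more ways" (my example, not rigorous stat mech): treat 100 gas molecules as coin flips for which half of a box they sit in, and count the arrangements.

    from math import comb

    N = 100  # 100 particles, each in the left or right half of a box

    all_left = comb(N, 0)        # exactly 1 way to have every particle on the left
    half_half = comb(N, N // 2)  # number of ways to get a 50/50 split

    print(all_left)    # 1
    print(half_half)   # 100891344545564193334812497256, about 1e29

Pick a configuration at random and you essentially always land in the scrambled bulk, which is all the "arrow" amounts to on this view.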


Could you expand on your comment that life has entropy moving from high to low? Doesn't aging increase the entropy in our biological system? I have always thought that we are at our most structured in the early phases of conception with entropy increasing constantly as we age.


Life is essentially a process of creating order (lower entropy), building complex cells and so on using energy and matter from its environment.

Perfectly true that entropy gets us in the end as we age, as the system breaks down and cannot sustain itself any longer. Although if we could fix those systems, there's no reason in principle we couldn't halt aging entirely.


I took it as capital-L Life moving from high to low. As evolution continues, Life seems to evolve toward ever lower-entropy/more-ordered organisms (as more complex organisms depend on the systems created by the simpler organisms before them).

I am slightly blending the concept of entropy and complexity. But "ordered complexity" is how I imagine it.


I don’t think entropy ever moves from high to low overall; it only ever distills some local low out of a higher-entropy area, and in doing so, the overall entropy increases.

It works a bit like air conditioning: yeah, you can make one room cold, but only by making more heat outside the room. The overall temperature of the system increases.


I can’t resist: https://youtu.be/i6rVHr6OwjI?feature=shared (The a capella Billy Joel meta-cover “Entropic Time”)


Nice!


This sounds sort of like the "if a tree falls in a forest and no one hears it, did it make a sound?" question.

If time passes and there's no observable difference, did it pass? I guess it makes no meaningful difference, but it doesn't really answer the underlying question of whether some variable is advancing or not.


I enjoy this thought experiment.

If nobody logs in to a multiplayer game, does the game world still exist?

Sure, there are files sitting on a server somewhere waiting to be read when the first user logs in, and there may even be a physics engine polling abstract data structures for updates, but the game world doesn't render without players present, their computers bringing all this data into a coherent structure.

Also, for an extra existential kick, realize that it renders /independently/ in the GPU/CPU/RAM of each player's computer.


I remember the book "Now: The Physics of Time" by Richard Muller (a Berkeley physics professor) touching on the subject of entropy linked to time, but I never got to finish the book and sadly can't provide more insight.


Stuff can still happen after the heat death. The universe will keep expanding, and quantum foam will keep foaming.

Heat death just implies no work can be done. Time still flows.


And potentially leads to things like Boltzmann Brains, given enough time! Quantum fluctuations can still create wildly improbable things, even if only briefly.


The “resettable” aspect seems to be a crucial part of true “time-lock” encryption… rather than what feels like a “proof of work” mode where an amount of computation stands in as a proxy for “time elapsed”. But regardless of how good a proxy it is, “time elapsed” is not really what we want - we don’t want a lock that takes “at least n seconds of effort” to open, we want a lock that will not open before a timestamp.

It feels like a true time-lock solution should be impossible to unlock before a particular date, but trivial afterwards, rather than assuming the opener started work on it at the moment it was released and has been burning cycles at some maximal rate since then.

I don’t think the universe contains unfakeable timestamps, which seem to be a requirement for that true solution - it feels like it’s not compatible with relativity, so maybe proxies are the best we can do on a fundamental level.
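
For concreteness, the computation-as-proxy construction I have in mind is the Rivest-Shamir-Wagner style puzzle: the creator shortcuts through the secret factorisation, while the opener is forced into t squarings that can’t be parallelised. A toy, deliberately insecure sketch:

    # Toy RSW-style time-lock puzzle (tiny parameters, illustration only).
    p, q = 104723, 104729   # real puzzles use large secret RSA primes
    n = p * q
    phi = (p - 1) * (q - 1)
    t = 1_000_000           # forced number of sequential squarings
    secret = 123456789      # value being locked; must be < n

    # Creator: knowing phi(n), builds the mask with two fast modular exponentiations.
    mask = pow(2, pow(2, t, phi), n)   # 2^(2^t) mod n, via Euler's theorem
    puzzle = (secret + mask) % n

    # Opener: without phi(n) there is no shortcut; square t times in sequence.
    x = 2
    for _ in range(t):
        x = x * x % n
    assert (puzzle - x) % n == secret

Which is exactly the weakness being pointed out: the opener’s clock starts whenever they start squaring, so it bounds effort spent, not calendar time.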


You can turn watch history off, and then you don’t get any recommendations at all - just the channels you subscribe to. YouTube doesn’t like it, and goes out of its way to make the apps feel a bit broken without it, but I much prefer to curate my own list of channels without an algorithm trying to keep me watching longer than I mean to.


Driving any car like that will result in increased stress on components and wear. A lot of EVs have high-end sports-car levels of acceleration, and those aren’t known for being low-maintenance.


The difference to me is how that wear is applied. For most cars, including performance vehicles, you can minimize the impact by waiting until the car is warmed up, using the correct oil weight and changing it regularly, changing the air filter, etc. With an electric vehicle, the wear seems more integral to simply using it than it does with an ICE.


Yeah, that’s true - it’s integral because an EV retains all of its components throughout its lifespan.

If we consider an ICE car’s fuel as a “component”, then it’s an interesting comparison: fuel is basically maximally degraded - it accumulates as much “wear” as possible - and then it’s jettisoned, so what remains attached to the vehicle is comparatively less worn.


I don’t agree that they’re just props: they’re analog controllers and so the form factor and manner of their use directly affects the music being played. Even the controllers that are just “big heavy knobs on bearings” are fundamentally different to a typical knob on a synthesiser in terms of the signal they produce and the sound that ends up being made.


Vacuum clamping would get tricky when the surface you’re trying to slurp onto is also the thing you’re actively cutting holes in, right?

