This prevents Claude from directly reading certain files, but it doesn't stop Claude from running a command that dumps the file to stdout and then reading the output. Claude will just "cat" the file if it decides it wants to see it.
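For concreteness, here's the kind of gap being described: a minimal sketch assuming Claude Code's settings-file deny rules (the exact rule syntax is from my memory of the docs, so treat it as illustrative):

    {
      "permissions": {
        "deny": ["Read(./.env)"]
      }
    }

    $ cat .env   # runs via the Bash tool, so the Read deny rule never fires

Denying the Read tool alone doesn't help; you'd also have to deny the relevant Bash patterns (or sandbox the shell itself) to actually keep the file contents out of the context.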
To me, the docs answer it pretty clearly: the defined directories persist until you call destroy().
The part that's unclear to me is how billing works for a sleeping sandbox's disk. Container disks are ephemeral and don't survive sleep [2], yet the sandbox pricing points you to the container pricing, which says "Charges stop after the container instance goes to sleep".
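To make the ambiguity concrete, a rough TypeScript sketch of the lifecycle in question, assuming a Cloudflare Sandbox-style SDK (getSandbox, exec, the env binding, and the /data path are my assumptions; destroy() is what the docs describe):

    import { getSandbox } from "@cloudflare/sandbox";

    export default {
      async fetch(req: Request, env: any) {
        const sandbox = getSandbox(env.Sandbox, "job-42");
        await sandbox.exec("echo hello > /data/out.txt"); // defined dir, persists
        // ...later the container sleeps: compute charges stop here,
        // but is the persisted disk still billed while asleep?
        await sandbox.destroy(); // persistence, and presumably billing, end here
        return new Response("done");
      },
    };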
I'll stop you right there. I've been using Claude Code for almost a year on production software with pretty large codebases, both multi-repo and monorepo.
Claude is able to create entire PRs for me that are clean, well written, and maintainable.
Can it fail spectacularly? Yes, and it does sometimes. Can it be given good instructions and produce results that feel like magic? Also yes.
For finicky issues like that, I often find that in the time it takes to write a prompt with the necessary context, I could have just made the one-line tweak myself.
In a way, that's still helpful, especially if the act of putting the prompt together brought you to the solution organically.
Beyond that, 'clean', 'well written', and 'maintainable' are all relative terms here. In a low-quality, mega-legacy codebase, the results are gonna be dogshit without an intense amount of steering.
> For finicky issues like that, I often find that in the time it takes to write a prompt with the necessary context, I could have just made the one-line tweak myself.
I don't run into this problem. Maybe the type of code we're working on is just very different. In my experience, if a one-line tweak is the answer and I'm spending a lot of time tweaking a prompt, then I might be holding the tool wrong.
Agree on those terms being relative. Maybe a better way of putting it is that I'm very comfortable putting my name on it, deploying to production, and taking responsibility for any bugs.
I decided to give Perplexity another try a few days ago, and it still seems to hallucinate things. Given the exact same tasks/prompts, both Claude and ChatGPT got the facts correct.
Perplexity uses those same models when "deep research" is off, so I don't see how the results would be any different. I haven't had any problems with it. Claude should be good, but they rate-limit their desktop app and site so heavily that it's been almost unusable every time I've tried.
Doesn't support the "from anywhere" part, but the resume strategies are pretty cool. [0]
[0] https://github.com/pchalasani/claude-code-tools#aichat-sessi...
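For anyone who hasn't seen them, Claude Code also ships basic resume flags (the linked tool layers fancier strategies on top of these):

    $ claude --continue     # reopen the most recent session in this directory
    $ claude --resume       # pick an older session from an interactive list
    $ claude --resume <id>  # jump straight to a specific session id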