Yeah, I have the same issue. Even with a file of several thousand lines, they will "forget" earlier parts of the file they're still working in, resulting in mistakes. They don't need full awareness of the context, but they do need a summary of it so they can go back and review the relevant sections.
I have multiple things I'd love LLMs to attempt to do, but the context window is stopping me.
I do take that as a sign to refactor when it happens, though. Even setting aside LLM compatibility with the codebase, refactoring large files cuts down on merge conflicts.
In fact, I've found LLMs are reasonable at the simple task of refactoring a large file into smaller components, with documentation on what each portion does, even if they can't get the full context immediately. Doing this then helps the LLM later. I'm also of the opinion we should be making codebases LLM compatible. So if it happens, I direct the LLM that way for 10 minutes and then get back to the actual task once the codebase is in a more reasonable state.
I'm trying to use LLMs to save time and resources; "refactor your entire codebase so the tool can work" is the opposite of that, regardless of how you rationalize it.
Right, but the discussion we're having here is context size. I, and others, are saying that the current context size limits where the tool can actually be useful.
Replies of "well, just change the situation so context doesn't matter" are irrelevant and off-topic. The rationalizations even more so.
A huge context is a problem for humans too, which is why I think it's fair to suggest maybe the tool isn't the (only) problem.
Tools like Aider create a repo map that indexes the code into a small context, which I think is similar to what we humans do when we try to understand a large codebase.
I'm not sure if Aider can then load only portions of a huge file on demand, but it seems like that should work pretty well.
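To make the "map first, fetch on demand" idea concrete, here's a rough sketch assuming a Python codebase and using only the stdlib ast module. The function names are my own illustration, not Aider's actual API: one pass condenses a file into one line per definition, and a second call pulls in a single definition's source only when the model asks for it.

  # Rough sketch: compact code map + on-demand symbol loading.
  # Hypothetical helpers for illustration, not Aider's implementation.
  import ast

  def build_code_map(path: str) -> str:
      """Condense a file into one line per top-level definition."""
      source = open(path, encoding="utf-8").read()
      entries = []
      for node in ast.parse(source).body:
          if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
              args = ", ".join(a.arg for a in node.args.args)
              entries.append(f"def {node.name}({args})  # line {node.lineno}")
          elif isinstance(node, ast.ClassDef):
              entries.append(f"class {node.name}  # line {node.lineno}")
              for item in node.body:
                  if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                      entries.append(f"  def {item.name}(...)  # line {item.lineno}")
      return "\n".join(entries)

  def load_symbol(path: str, name: str) -> str:
      """Pull just one definition's source on demand, not the whole file."""
      source = open(path, encoding="utf-8").read()
      for node in ast.walk(ast.parse(source)):
          if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
              if node.name == name:
                  return ast.get_source_segment(source, node) or ""
      return ""

As I understand it, Aider actually builds its repo map with tree-sitter across the whole repository rather than parsing one file like this, but the principle is the same: give the model the map, let it name the symbols it cares about, and only then feed it the full source for those.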
As someone who's worked with both more fragmented/modular codebases (smaller classes, shorter files) and ones with files that span thousands of lines (sometimes even double-digit thousands), I very much prefer the former and hate the latter.
That said, some of the models out there (Gemini 2.5 Pro, for example) support a 1M-token context; it's just going to be expensive and will still probably confuse the model somewhat when it comes to the output.
Interestingly, this issue has pushed me to refactor and modularize code that I should have addressed a long time ago but didn't have the time or stamina to tackle. Because the LLM can't handle the full context, I've had it help me refactor instead (it seems to be very good at this in my experience), and that has led to cleaner, more modular code that the LLMs can handle better.