Without hiring/being a cloud expert, it's hard to be sure that you didn't leave some door wide open due to a configuration error. Both approaches offer more than enough opportunities to royally screw up.
You are correct that a Linux installation is ineligible for support from Microsoft. Not that that means anything for private usage.
Also, Linux has a great track record for not dropping support for older hardware. I think that is a lot more informative than whatever statement Microsoft's legal team has managed to come up with.
The same way you know that your browser session secrets, bank account information, crypto private keys, and other sensitive information is never uploaded. That is to say, you don't, really - you have to partially trust Microsoft and partially rely on folks that do black-box testing, network analysis, decompilation, and other investigative techniques on closed-source software.
Your focus on startup speed feels really alien to me. When working on a project I just keep vscode open. I reboot maybe once a week and starting vscode again takes about a second, and then maybe 10s of seconds of background processing, depending on the project size, for the language server to become fully operational. That's more than good enough for me.
I've done a lot of shell-driven development in the 00s though, and I remember it did involve frequently firing up vim instances for editing just a single file. I no longer understand the appeal of that approach. Navigating between files (using fuzzy search or go-to-definition) is just a lot faster and more convenient.
> starting vscode again takes about a second, and then maybe 10s of seconds of background processing
Yet I'm doing the same thing instantly or near instantly.
I don't reboot often and I'm still lazy and will leave projects open often, but honestly, have you considered that your workflow is an adaptation to the wait time?
> Navigating between files (using fuzzy search or go-to-definition) is just a lot faster and more convenient.
I agree? But why do you think people don't fuzzy search in vim? Or the terminal? There have been tools for this for a very long time. Fzf is over a decade old, and it wasn't the first.
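The core of fzf-style filtering is just subsequence matching: the query's characters must appear in the candidate in order, not necessarily adjacent. A toy sketch in Python (illustrative only; fzf itself also scores and ranks matches):

```python
def fuzzy_match(query: str, candidate: str) -> bool:
    """True if query's characters appear in candidate in order.
    This is the filtering step of fzf-style fuzzy search (no scoring)."""
    chars = iter(candidate.lower())
    # 'c in chars' consumes the iterator up to the match, so order is enforced.
    return all(c in chars for c in query.lower())

files = ["src/main.rs", "src/editor/buffer.rs", "README.md"]
print([f for f in files if fuzzy_match("sed", f)])  # matches src/editor/buffer.rs
```

Real tools then rank the surviving candidates (fzf prefers consecutive runs and word boundaries), but the filter itself is this simple.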
If you're using vim as an IDE (which is of course perfectly doable), then why does it matter whether startup time is 50 or 1000 ms? You typically leave it running.
> Yet I'm doing the same thing instantly or near instantly.
Does vim somehow allow LSP servers to index faster? Or are you not actually doing the same thing?
Why are you leaving them running? Because they are slow to load?
Yes, Neovim supports LSP and it is very very fast.
I'm not sure why any of this is surprising. We're talking about the same company that is speeding up its file browser by loading it at boot time rather than fixing the actual fucking problems. Why is it surprising that everything else they make is slow and bloated as shit (even more so as they're shoving AI into everything)?
The point of LSP is that neovim is using the same servers for this as vscode. So I guess you work on smaller projects or with languages that have faster (usually meaning less fully featured) LSP servers.
LazyVim includes a bunch of pre-configured plugins that turn NeoVim into an IDE. Fuzzy search by filename, search by text, file explorer, go to definition, go to reference... Even debugging and unit test runners, it's all there. Yet when I'm at the command line and I need to make a quick edit to one file, e.g. `nvim ~/.bashrc`, I don't pay the startup cost of waiting for 50 plugins I'm not going to use. So it's the best of both worlds.
They're not using the remote VM as a server but as the development machine though. You don't want to have to git commit and push every time you need to run or even type-check your code.
I think what GP describes is actually a pretty okay solution for orgs that don't want to provide their devs with local admin privileges.
You can develop locally if you want to, and lots of people do, but it’s community support. The environment that someone else is obligated to fix for you is the remote one (which they can do by blowing away the container and then you recover your state from Git).
Doing actual journalism is expensive and not many people are still willing to pay to read the news. These companies are definitely not printing money. That's why billionaires can buy them on the cheap. Not for the expected profit, but for the influence it brings them.
If you can, please support serious journalism with your subscription dollars!
The problem is that it's in large part a zero-sum game. People are hopefully not going to buy more medicine because of pharmacy ads. So the excessive spend on ads is driving up costs for everybody, and generating insane profits for a handful of companies.
I have no solution to offer though. Just thinking out loud... what effects would a cap on advertising budgets relative to total expenditures have? It would force companies to spend their money mostly on actually creating value for the customer, instead of just selling the hell out of a mediocre offering.
> Furthermore, all of the major LLM APIs reward you for re-sending the same context with only appended data in the form of lower token costs (caching).
There's a little more flexibility than that. You can strip off some trailing context before appending new context. This lets you keep the 'long-term context' minimal while still making good use of the cache.
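The reason this works: providers cache by exact prefix, so anything up to the first changed token is a hit, and only the tail is re-billed at the full rate. A toy model of that in Python (illustrative only; real APIs cache at block granularity, and the token lists here are made-up placeholders):

```python
def cached_prefix_len(prev_tokens: list, next_tokens: list) -> int:
    """Length of the shared prefix between two requests -- roughly what a
    provider-side prompt cache can reuse. Simplified: real caches work on
    fixed-size blocks of an exact prefix, not per-token."""
    n = 0
    for a, b in zip(prev_tokens, next_tokens):
        if a != b:
            break
        n += 1
    return n

# Append-only: the entire previous request is a cache hit.
base = ["sys", "doc1", "doc2", "q1"]
appended = base + ["a1", "q2"]
print(cached_prefix_len(base, appended))  # 4, i.e. len(base)

# Strip trailing context, then append: still a hit up to the edit point.
trimmed = base[:-1] + ["q2"]  # drop "q1", append "q2"
print(cached_prefix_len(base, trimmed))  # 3
```

So as long as your edits only touch the tail, you keep most of the caching discount while keeping the stable prefix small.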