I have been thinking about this. How do I make my git setup on my laptop secure? Currently, I have my ssh key on the laptop, so if I want to push, I just use git push. And I have admin credentials for the org. How do I make it more secure?
1) Get 1Password, 2) use 1Password to hold all your SSH keys and authorize SSH access [1], 3) use 1Password to sign your Git commits and set up your remote VCS to validate them [2], 4) use GitHub OAuth [3] or the GitHub CLI's Login with HTTPS [4] to do repository push/pull. If you don't like 1Password, use BitWarden.
With this setup there are two different SSH keys: one for access to GitHub and one for commit signing. But you don't use either to push/pull to GitHub; you use OAuth (over HTTPS). This combination provides the most security (without hardware tokens), and 1Password and the OAuth apps make it seamless.
Do not use a user with admin credentials for day to day tasks, make that a separate user in 1Password. This way if your regular account gets compromised the attacker will not have admin credentials.
Okay, great advice, thanks. I'm already using Bitwarden and found out they have an SSH agent feature too [1]. I've tried LastPass, Bitwarden, and 1Password, and I prefer Bitwarden (good UX, very affordable).
One approach I started using a couple of years ago was storing SSH private keys in the TPM and using them via PKCS#11 in the SSH agent.
One benefit of Microsoft requiring them for Windows 11 support is that nearly every recent computer has a TPM, either hardware or emulated by the CPU firmware.
It guarantees that the private key can never be exfiltrated or copied. But it doesn't stop malicious software on your machine from doing bad things from your machine.
So I'm not certain how much protection it really offers in this scenario.
That's what I do. For those of us too lazy to read the article, tl;dr:
ssh-keygen -t ed25519-sk
or, if your FIDO token doesn't support Edwards curves:
ssh-keygen -t ecdsa-sk
tap the token when ssh asks for it, done.
Use the ssh key as usual. OpenSSH will ask you to tap the token every time you use it: silent git pushes without you confirming it by tapping the token become impossible. Extracting the key from your machine does nothing — it's useless without the hardware token.
Looks like on the server side this can be mitigated somewhat by the MaxStartups¹ setting for OpenSSH or equivalent behavior for other services that support SSH auth (e.g., Git forges like GitHub):
MaxStartups
Specifies the maximum number of concurrent unauthenticated
connections to the SSH daemon. Additional connections
will be dropped until authentication succeeds or the
LoginGraceTime expires for a connection. The default is
10:30:100.
Alternatively, random early drop can be enabled by
specifying the three colon separated values
start:rate:full (e.g. "10:30:60"). sshd(8) will refuse
connection attempts with a probability of rate/100 (30%)
if there are currently start (10) unauthenticated
connections. The probability increases linearly and all
connection attempts are refused if the number of
unauthenticated connections reaches full (60).
So it looks like it's possible to support ControlMaster while still somewhat hampering mass-cloning thousands of repos via SSH key without reauthenticating.
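For the client side, ControlMaster is configured in ~/.ssh/config; a sketch (the host and timeout values are illustrative):

```shell
# ~/.ssh/config — multiplex connections so repeated git operations
# reuse one already-authenticated session instead of re-handshaking
Host github.com
  ControlMaster auto
  ControlPath ~/.ssh/cm-%r@%h-%p
  ControlPersist 10m
```

With this in place, the first connection authenticates (token tap and all) and subsequent git operations within the persistence window ride the same master connection.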
Admittedly I'd put this more in the category of making endpoint compromise easier to detect than that of actually preventing any particular theft of data or manipulation of systems. But it might still be worth doing! If it means only a few dozen or only a hundred repos get compromised before detection instead of a few thousand, that's a good thing.
Besides all that (or MaxSessions, as another user mentions), if an attacker compromises a developer laptop and can only open those connections as long as the developer is online, that's one thing. But a plaintext key that they can grab and reuse from their own box is obviously an even sweeter prize!
"The SSH key on my YubiKey is useless to attackers" is obviously the wrong way to think about this, but using a smartcard for SSH keys is still a way to avoid storing plaintext secrets. It's good hygiene.
There is no defense against a compromised laptop. You should prevent this at all cost.
You can make it a bit more challenging for the attacker by using secure enclaves (like a TPM or YubiKey), enforcing signed commits, etc., but if someone has compromised your machine, they can do whatever you can.
Enforcing signing off on commits by multiple people is probably your only bet. But if you have admin creds, an attacker can turn that off, too. So depending on your paranoia level and risk appetite, you need a dedicated machine for admin actions.
It's more nuanced than that. Modern OSes and applications can, and often do, require re-authentication before proceeding with sensitive actions. I can't just run `sudo` without re-authenticating myself; and my ssh agent will reauthenticate me as well. See, e.g., https://developer.1password.com/docs/ssh/agent/security
The malware can wait until you authenticate and perform its actions then in the context of your user session. The malware can also hijack your PATH variable and replace sudo with a wrapper that includes malicious commands.
It can also just get lucky and perform a 'git push' while your SSH agent happens to be unlocked. We don't want to rely on luck here.
Really, it's pointless. Unless you are signing specific actions from an independent piece of hardware [1], the malware can do what you can do. We can talk about the details all day long, and you can make it a bit harder for autonomously acting malware, but at the end of the day it's just a finger exercise to do what they want to do after they compromised your machine.
Do you have evidence or a reproducible test case of a successful malware hijack of an ssh session using a Mac and the 1Password agent, or the sudo replacement you suggested? I assume you fully read the link I sent?
I don't think you're necessarily wrong in theory -- but on the other hand you seem to discount taking reasonable (if imperfect) precautionary and defensive measures in favor of an "impossible, therefore don't bother" attitude. Taken to its logical extreme, people with such attitudes would never take risks like driving, or let their children out of the house.
You get the idea. It can do something similar with the git binary and hijack `git commit` so that it amends whatever it wants, and you will happily sign it and push it using your hardened SSH agent.
You say it's unlikely, fine, so your risk appetite is sufficiently high. I just want to highlight the risk.
It could have created a bash alias then. And I don't think a dev wants to be restricted in creating executables. Again, if a dev can do it, so can the malware.
A compromised laptop should always be treated as fully compromised. However, you can take steps that drastically reduce the likelihood of bad things happening before you can react (e.g. disable accounts/rotate keys).
Further, you can take actions that inherently limit the ability for a compromise to actually cause impact. Not needing to actually store certain things on the machine is a great start.
You can add a gpg key and subkeys to a yubikey and use gpg-agent instead of ssh-agent for ssh auth. When you commit or push, it asks you for a pin for the yubikey to unlock it.
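A sketch of the plumbing for that, assuming GnuPG 2.1+ (file locations are the standard defaults):

```shell
# ~/.gnupg/gpg-agent.conf — let gpg-agent speak the ssh-agent protocol
enable-ssh-support

# In your shell profile: point SSH clients at gpg-agent's socket
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpgconf --launch gpg-agent
```

After that, `ssh` and `git push` go through gpg-agent, which prompts for the YubiKey PIN as described.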
I store my SSH key in 1Password and use the 1Password SSH agent. This agent asks for access to the key(s) with Touch ID, either for each access or for each session, etc. One can also whitelist programs, but I think that reduces the security.
There is the FIDO feature, which means you don't need to fiddle with GPG at all. You can even use an SSH key as a signing key to add another layer of security on the GitHub side by only allowing signed commits.
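Concretely, Git can sign commits with an SSH key (including a hardware-backed -sk key); a sketch, assuming your public key lives at ~/.ssh/id_ed25519.pub (substitute the path to your -sk key):

```shell
# Tell git to sign with SSH instead of GPG
git config --global gpg.format ssh

# The *public* key file identifies which key does the signing
git config --global user.signingkey ~/.ssh/id_ed25519.pub

# Sign every commit by default
git config --global commit.gpgsign true
```

On the GitHub side you then upload the same public key as a "signing key" so the forge can mark your commits verified.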
You can set up your repo to disallow pushing directly to branches like main, and require MFA to use the org admin account, so something malicious would need to push to a benign branch and separately get merged into the branch that deploys come from.
There's nothing wrong with pushing to main, as long as you don't blindly treat the head of the main branch as production-ready. It's a branch like any other; Git doesn't care what its name is.
They can't with git by itself, but if you're also signed in to GitHub or BitBucket's CLI with an account able to approve merges they could use those tools.
I’ve started to get more and more paranoid about this. It’s tough when you’re running untrusted code, but I think I’ve improved this by:
Not storing SSH keys on the filesystem, and instead using an agent (like 1Password's) to mediate access
Not storing dev secrets/credentials on the filesystem, and instead injecting them into processes with env vars or other mechanisms. Your password manager may have a way to do this.
Developing in a VM separate from your regular computer usage. On Windows you get this anyway through WSL, and similar setups exist for other OSes.
This is what agents are for. You load your private key into an agent so you don't have to enter your passphrase every time you use it. Agents are supposed to be hardened so that your private key can't be easily exfiltrated from them. You can then configure `ssh` to pass requests through the agent.
There are lots of agents out there, from the basic `ssh-agent`, to `ssh-agent` integrated with the macOS keychain (which automatically unlocks when you log in), to 1Password's (which is quite nice!).
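The basic flow with plain `ssh-agent` looks like this (the throwaway key path and missing passphrase are for illustration only; a real key belongs in ~/.ssh with a passphrase, or in one of the agents above):

```shell
# Start an agent for this shell and export its environment variables
eval "$(ssh-agent -s)"

# Throwaway demo key with no passphrase, for illustration only
ssh-keygen -t ed25519 -N '' -f /tmp/demo_key -q

# Hand the private key to the agent; ssh will now sign via the agent
ssh-add /tmp/demo_key

# List what the agent currently holds
ssh-add -l
```

On macOS, `ssh-add --apple-use-keychain` additionally stores the passphrase in the login keychain, which is what gives you the unlock-on-login behavior.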
This is a good defense for malware that only has read access to the filesystem or a stolen hard drive scenario without disk encryption, but does nothing against the compromised dev machine scenario.
This seems to be the standard thing people miss. All the things that make security more convenient also make it weaker. They boast about how "doing thing X" makes them super secure, pat themselves on the back, and call it done, completely ignoring the other avenues they left open.
A case like this brings this out a lot. A compromised dev machine means that anything that doesn't require a separate piece of hardware asking for your interaction is not going to help. And the more interactions you require to tighten security, the more tedious it becomes, and you're likely to just instinctively press the fob whenever it asks.
Sure, it raises the bar a bit because malware has to take it into account and if there are enough softer targets they may not have bothered. This time.
Classic: you only have to outrun the other guy. Not the lion.
Like, I see the comment about the Keychain integration and all that. But in the end I fail to see (without further explanation, though I'm eager to learn if there's something I'm unaware of) how this is different from what I am saying.
Like yes, my ssh key has a passphrase of course. Which is different from my system one actually. As soon as I log into the system I add the key, which means entering the passphrase once, so I don't have to enter it all the time. That would get old real fast. But now ssh can just use my key to do stuff and the agent doesn't know if it's me or I got compromised by npm installing something. And if you add a hardware token you "just have to tap" each time that's a step back into more security but does add tedium. Depending on how often my workflow uses ssh (or something that uses the key) in the background this will become something most people just blindly "tap" on. And then we are back towards less security but with more setup steps, complications and tedium.
I saw the "or allow for a session", which is a step towards security again, because I may be able to allow a script that does several things with ssh with a single tap, which is great of course. Hopefully that cuts the taps down so much that I don't just blindly tap on every request for it. Like the 1password thing you mentioned. If I do lots of things that make it "ask again" often enough I get pushed into "yeah yeah, I know the drill, just tap" security hole.
Keep in mind that not every agent is so naive as to allow a local client to connect to it without reauthenticating somehow.
1Password, for example, will, for each new application, pop up a fingerprint request on my Mac before handling the connection request and allow additional requests for a configurable period of time -- and, by default, it will lock the agent when you lock your machine. It will also request authentication before allowing any new process to make the first connection. See e.g. https://developer.1password.com/docs/ssh/agent/security
You memorize it, or keep it in 1Password. 1Password can manage your SSH keys, and 1Password can/does require a password, so it's still protected with something you know + something you have.
Passphrases, when strong enough, are fine when they are not traversing a medium that can be observed by a third party. They're not recommended for authenticating a secure connection over a network, but they’re fine for unlocking a much longer secret that cannot be cracked via guessing, rainbow tables, or other well known means. Hell, most people unlock their phones with a 4 digit passcode, and their computers with a passphrase.
> when they are not traversing a medium that can be observed by a third party
Isn't that why all those security experts are pushing for SSL everywhere and 30 second certificate expiration? To make the medium unobservable by a third party?
If you believe them, passphrases should be okay over fiber you don't control too.
One thing I forgot to mention is what the trust relationship looks like. Passphrases used for authentication are known by both parties and could be leaked by the other side or stolen from them, while private keys remain only available to you. With public key authentication, the other party only has your public key, which is freely shareable.
And yes, we all know that 2FA, passkeys, etc. are all better than passphrases, and that layer 3 wire encryption is important.
I’m merely responding to your blanket assertion that passphrases aren’t “secure enough,” but sometimes they are.
My SSH keys aren't on my computer: they're safely hidden on a hardware token, behind a secure element, like a Yubikey.
Devices like the YubiKey exist precisely because computers aren't to be trusted. Their reason for being is to offer a minimal attack surface.
When I git fetch/pull/push I just do it. But it requires me to physically use my Yubikey. It's not 100% foolproof but it's way better than having SSH keys only protected by a password.
So Git over SSH, on a Git/SSH server that supports Yubikeys.
Not a perfect defense, but sufficient to make your key much harder to exploit: Use a Yubikey (or similar) resident SSH key, with the Yubikey configured to require a touch for each authentication request.
I wouldn't say that's better. Now your .config directory contains a GitHub token that can do more than just repo pull/push, and it is trivially exfiltrated. Though a similar thing could be said for browser cookies.
Password-protect your key (preferably with a good password that is not the same one you use to log in to your account). If you use a password, the key is encrypted; otherwise it's stored in plaintext, and anybody who manages to get hold of your laptop can steal the private key.
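For example (shown here with a throwaway key so it's self-contained; in practice you'd run the second command against your existing ~/.ssh key and type the passphrases interactively):

```shell
# Throwaway unprotected key, for demonstration only
ssh-keygen -t ed25519 -N '' -f /tmp/demo_key2 -q

# -p re-encrypts the existing key in place under a passphrase;
# -a 100 raises the bcrypt KDF rounds so offline guessing is slower
ssh-keygen -p -a 100 -P '' -N 'long random passphrase' -f /tmp/demo_key2
```

After this, the on-disk file is useless without the passphrase; `ssh-keygen -y` (which derives the public key from the private one) will refuse to read it without the correct passphrase.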
You can't use Warp+ to control your egress point, unlike many other VPN services, so you can't use it to bypass geographic blocks. However, since Warp+ (not Warp) routes you within Cloudflare's network (using their Argo routing from participating datacenters), I'd guess GP gets a more stable and faster connection to their server than the public internet would provide.
Is there a way to use SSM without using the access keys? I feel using access keys is incredibly not secure because rotating keys is a hassle and people might not do them all the time.
Also, can you restrict access to SSH through SSM to certain IPs?
Maybe I might have missed these, so any help would be appreciated.
> Is there a way to use SSM without using the access keys? I feel using access keys is incredibly not secure because rotating keys is a hassle and people might not do them all the time.
Technically there is, you can use federated login. Might not be very convenient, depending on your identity provider.
A solution I use, while not technically "not using access keys", is storing them in the system credential store with aws-vault [0]. It works on Windows, Linux, and Mac, and you can combine this with multi-factor auth.
> Also can u restrict access to ssh through ssm to certain ips?
Yes, with an IAM policy. The policy below requires connecting with an MFA and from a specific IP range. It only allows connecting to a specific instance.
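A sketch of such a policy (the account ID, region, instance ID, and CIDR range are placeholders; substitute your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0",
      "Condition": {
        "Bool": { "aws:MultiFactorAuthPresent": "true" },
        "IpAddress": { "aws:SourceIp": "198.51.100.0/24" }
      }
    }
  ]
}
```

The `aws:MultiFactorAuthPresent` condition denies sessions from credentials that weren't obtained with MFA, and `aws:SourceIp` pins callers to the given range.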
I think I qualify as a poweruser by some standards, I tend to have 100's of tabs open and I've never left Firefox for speed reasons or others because it has held up extremely well over the years. My machine has a lot of RAM so maybe that's one reason things have been quick, and the main drive is an SSD. Is there anything specific about the 57 release that makes you feel caused the speed bottle-neck to disappear?
57 is lightning fast. Like, almost uncomfortably so on my machine.
Broadly, this is because Quantum is much further along in Nightly than on the other channels. Specifically, they're more aggressive with multithreaded settings, multiprocess is enabled, and the Quantum CSS component is turned on, too.
It also helps that we compare to Chrome. On Linux, Chrome doesn't even do GPU rendering, so the HN crowd probably has a disproportionately bad experience with it.
> 57 is lightning fast. Like, almost uncomfortably so on my machine
As others have said here, I just installed Firefox Nightly after reading this, and you're 100% correct. I think it might even be faster than Chromium for me.
Wait, what? I know that hw accelerated video decoding is generally not available on linux chrome, but I'm pretty sure that lots of other things are. From my chrome://gpu
Firefox has always been unbearably slow on Linux for me. It's the main reason I swapped to Chrome in the first place. I tried switching back to Firefox about 6 months ago, same problem of extremely slow and laggy UI. Interesting to hear 57 is faster now, it may be worth trying again.
Have you tried using a fresh profile, just to narrow the possible variables?
If you've checked both of those things, and you can characterize the slowness in the context of specific operations (e.g. what "laggy" means, precisely), then I'd recommend filing a bug and working with the developers to track down the problem. Because that is not normal.
Firefox 57 should be a much, much improved experience. e10s with multiple processes started rolling out to users in Firefox 54 and tons of small performance issues have been fixed in 55-57.
On Linux, manually enabling GPU acceleration as user ac29 describes below can make a big difference. Unfortunately, a lot of Linux GPU drivers have issues that prevent acceleration from being enabled by default for some users.
Same here, it's why I switched to Vivaldi. I had to restart Firefox every day or so and it still lagged. I've read the other recent post on here showing that 100+ tabs add almost no startup burden in FF57 compared to previous builds. I should try it again.
For those who don't notice the speed, multiprocess may be disabled due to certain addons. Go to about:support to check whether it is enabled. The addon compatibility checker addon can tell you which addons are the offenders. For me 1Password was the big blocker, but they recently released a beta that works with the new API. Sorry for the lack of links, I'm on mobile.
Oh interesting. Firefox still seems slow and a bit of a memory hog compared to Chrome on my machine. Sure enough I checked and three add-ons are marked as legacy (and presumably disabled the multiprocess stuff): uBlock Origin, Websocket Disabler, and NoScript.
I may have to try Firefox again once 57 is in portage. A few months back I switched to Vivaldi, begrudgingly. I really liked Firefox, but the speed issues were becoming unbearable.
Vivaldi has been a lot faster and mostly an alright experience, and I've watched many of the issues I've found get addressed in recent builds. Still, there are many things I miss about Firefox, not to mention supporting that community and the browser I've used for well over a decade (including back when it was Phoenix, the original Mozilla, and the old Netscape 6 that preceded them).
How has the nightly been for you in terms of stability? Been thinking of switching to it from beta. The memory issues on MacOS are killing me, and I believe there are fixes in nightly now.
IME, desktop nightly hasn't ever outright crashed (less lucky with mobile nightly). Sometimes I notice minor rendering glitches on various sites, but I also have Servo's CSS engine enabled which might be doing it (it's not yet the default).
Well, it used to crash a lot at some point: when e10s was first activated by default in Nightly (in 2013, IIRC), it crashed several times a day. I stopped using Nightly for a year afterwards.