The downside of this (at least in my personal view) is that it's a regression from the elevated security you got with non-resident FIDO/U2F MFA.
The moment you go "passkey" and have to use a system like the one you suggest, you need to trust software based storage of long term credentials.
That isn't the case with a hardware FIDO2/U2F token, which has unlimited capacity for non-resident MFA credentials: the server holds a wrapped key handle that the token decrypts and uses locally to sign login attempts.
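The "unlimited capacity" property falls out of the fact that the token stores nothing per site: it can re-derive each credential's key material from its device master secret plus the credential ID the server hands back at login. A heavily simplified sketch of that idea (real FIDO2 key wrapping is more involved; all names here are made up for illustration):

```python
import hmac
import hashlib
import os

# Simplified illustration of non-resident credentials: the device keeps
# only one master secret, and every per-site key is re-derived on demand
# from the credential ID the server stores. (Not the real FIDO2 scheme.)

DEVICE_MASTER_SECRET = os.urandom(32)  # fixed inside the token at manufacture

def make_credential(relying_party_id: str) -> bytes:
    # The "credential ID" the server stores acts as a wrapped handle:
    # a nonce plus an authentication tag binding it to this device and site.
    nonce = os.urandom(16)
    tag = hmac.new(DEVICE_MASTER_SECRET,
                   nonce + relying_party_id.encode(),
                   hashlib.sha256).digest()
    return nonce + tag  # server keeps this; the device keeps nothing

def derive_signing_seed(credential_id: bytes, relying_party_id: str) -> bytes:
    nonce, tag = credential_id[:16], credential_id[16:]
    expected = hmac.new(DEVICE_MASTER_SECRET,
                        nonce + relying_party_id.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("credential not created by this token")
    # Seed for the per-site signing key, re-derived on every login.
    return hmac.new(DEVICE_MASTER_SECRET, credential_id, hashlib.sha256).digest()
```

Because nothing per-credential lives on the device, capacity is unbounded; resident (discoverable) credentials give that up by storing state on the token.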
I liked that FIDO seemed to get towards hardware backed security modules for login, without cognitive load of worrying about number of sites and yubikey slot capacity. Resident Webauthn keys limit the number of sites you can have, and push you towards software based solutions (so you lose out on doing the crypto on the single purpose, limited platform that's dedicated to generating those signatures).
I agree that it's annoying that there's now a limit on the amount of credentials you can store on hardware keys. But while older Yubikeys only support 25 resident keys, models with firmware 5.7 onwards support 100. That probably makes it feasible to exclusively store passkeys in hardware.
https://www.yubico.com/blog/empowering-enterprise-security-a...
However, I don't know whether it's possible to delete only a single resident key you no longer need.
Yeah, a fair point (though if you can't manage keys one by one that seems a massive usability issue and oversight with no safe path to resolution).
This adds another step that needs to be considered for a user, as finite storage means a whole edge case to handle (registration fails when slots are full), with no simple actionable step to take ("which account would you like to never be able to log into again?" or "sorry, you need to wipe this key and lose everything, or buy another one").
I feel there is a usability aspect of FIDO2 (for non-resident MFA) that is being overlooked - the paradigm was simple - a physical key you don't lose, and you can have multiple keys. The gotcha was no way to replicate backup keys, which becomes fairly difficult for users. But hey - passkeys launched with no export or migration process between closed device ecosystems!
From my perspective though, I won't use passkeys until I get sufficient control over them to be allowed to decide if I want to make them "resident" or not. (I don't want resident keys!!)
I want to use non-resident keys everywhere as a hardware-backed second factor that is phishing resistant, without capacity limitations (so zero cognitive burden on whether to use or not).
It feels like a regression for passkeys to be forgetting about what (for me at least) was the core basic use-case of FIDO2 - as a highly secure second factor for someone who already can manage storage of secrets in software, and just wants high assurance phishing resistant MFA during their conventional login process.
I'm honestly very annoyed with Yubico that they just froze their product line-up circa 2018 and pretend the major changes in firmware (5.2, 5.7) don't matter at all and don't warrant a separate SKU.
TOTP codes are phishable and replayable in real time - both via the web (visiting the wrong site, which asks for a TOTP and relays it within a few seconds), and via social engineering over the phone ("give us one of the codes to prove it's you and we can keep your account safe").
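The relay works because a TOTP code isn't bound to who submits it - anyone who learns the code within the same time step can submit the identical value. A minimal RFC 6238 implementation (SHA-1, 6 digits, 30-second steps) makes the window concrete:

```python
import hmac
import hashlib
import struct

# Minimal RFC 6238 TOTP. A code phished at second 35 is still valid at
# second 59, because both fall in the same 30-second counter step -
# plenty of time for an automated proxy to replay it.

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = unix_time // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"       # RFC 6238 test secret
victim_code = totp(secret, 35)         # victim reads this out / types it
attacker_code = totp(secret, 59)       # attacker replays 24 seconds later
```

Phishing-resistant schemes like FIDO2 avoid this by signing a challenge that includes the origin, so a relayed response fails verification.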
Adding number matching or similar helps ensure that the same user who initiates the session is the one approving it - an issue that surfaced when people discovered that Microsoft (among others) would send push messages to authenticate a login, and that users, if spammed late at night with constant requests, would often eventually hit Allow to stop the notifications.
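The mechanism itself is tiny - a rough sketch of what number matching adds over a plain allow/deny push (function names are hypothetical):

```python
import secrets

# Sketch of push "number matching": the login screen displays a random
# challenge number, and approval only succeeds if the person holding the
# phone types the same number - proving they can see the login screen,
# not just that they tapped Allow to silence a notification storm.

def start_push_login() -> int:
    # Shown on the machine attempting the login.
    return secrets.randbelow(100)

def approve_push(displayed_number: int, typed_number: int) -> bool:
    # Evaluated by the server when the authenticator app responds.
    return displayed_number == typed_number
```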
Which? is a UK brand, and in the UK it's fairly common to see the postcode (or rather the first couple of characters) used to determine which regional programming a user wants, given the (to some extent historical) TV regions and channel numbering.
I've seen "always offline" pre internet era satellite TV set top boxes do this, and use the first 2 or 3 characters of the postcode to work out the correct region and show the correct programme guide information and channel numbering.
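That lookup is about as simple as it sounds - something like the following, where the prefix-to-region mapping shown is purely illustrative, not the real table any box ships:

```python
# Hypothetical sketch of the lookup such a set top box might do: map the
# leading characters of a UK postcode's outward code to a TV region.
# The prefixes and region names below are illustrative examples only.
REGION_BY_PREFIX = {
    "G": "STV Central",
    "CF": "ITV Cymru Wales",
    "NE": "ITV Tyne Tees",
}

def region_for_postcode(postcode: str) -> str:
    outward = postcode.strip().upper().split()[0]
    # Try the longest prefix first (2 characters, then 1).
    for length in (2, 1):
        region = REGION_BY_PREFIX.get(outward[:length])
        if region:
            return region
    return "default"
```

Note that nothing here needs the full postcode, let alone a network connection - which is the point about offline boxes doing it.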
I imagine that on a modern internet enabled device this might also pre-select some of the "watch later" apps, based on your region.
Doesn't mean there's no wider issue about data gathering and exfiltration over the internet, but just to add some regional context as this postcode point might not be as egregious as it sounds at first.
Readeck saves an archived copy of the links you save (where it can).
From their docs,
"Every bookmark is stored in a single, immutable, ZIP file. Parts of this file (HTML content, images, etc.) are directly served by the application or converted to a web page or an e-book when needed."
The issue so far seems to be that most OSs don't really have an effective way to restrict that file to a single application. User-oriented filesystem permissions don't work, as all software runs "as" the user.
If you assume there's a way to restrict permissions by application (a bit like TCC on Mac for certain folders), you need to then go down a rabbit-hole of what matcher you use to decide what is a "single application" - Mac OS can use developer Team ID (i.e. app signature identity), or similar. You wouldn't want to rely on path or binary name, as those could be spoofed or modified by a rogue app.
So in short, in a multi-user OS, generally the filesystem (aside from Mac OS, under certain circumstances) is fairly widely readable by other software running as the current user. At least in my experience, Mac OS is the desktop OS that is closest to having some level of effective protections against apps accessing "everything" owned by the user (but belonging to other apps).
When running a browser performance benchmark, generally not - the ad block extension adds an overhead to the page. I saw this when experimenting with Orion Browser on Mac, which uses the Webkit engine, but adds support for many Firefox and Chrome web extension APIs.
In experimenting with that, I noticed that enabling many extensions during benchmarks could easily affect scores. Even just an ad blocker like uBO had a measurable impact on a benchmark, from my recollection.
Orion is using a web engine that was never really designed to support what they are trying to make it do. Obviously it's just code in the end but a lot of the changes will be made for convenience rather than in ways that make sense if you look at WebKit holistically, because the team is just too small to actually do that. So it is natural that performance suffers.
One other potential area of variability could come from browser extensions - I imagine that users who compare browser power performance are more technical than the median user, and are more likely to run browser extensions (e.g. ad blockers, etc).
Given Chrome has a larger and more extensive collection of extensions, perhaps users who see differences are running more browser extensions in their Chrome installation, which impacts on performance/power usage?
Certainly interesting to see these assumptions put to the test though, and get some data around them. While it looks like it may have fallen behind again a little, I noticed Firefox's browsing performance on Speedometer caught up for a while, contrary to what I had thought/assumed.
A good point - perhaps the focus is too heavy on paperwork or "measurable compliance".
From experience in this sector though, I think the real issue is a lack of technical awareness and competency with enough breadth to extend into the "digital" domain - often products like these are developed by people from the "power" domain (who don't necessarily recognise off the top of their head that 512-bit RSA is a #badthing and not enough to use to protect aggregated energy systems that are controllable from a single location).
Clearly formal diplomas/certificates are not needed for that - some practical hands-on knowledge and experience would help a lot there.
When a product gets a network interface on it, or runs programmable firmware, we should hear discussions about A/B boot, signatures, key revocation, crypto agility to enable post quantum cryptography algorithms, etc. Instead, the focus will be on low-cost development of a mobile app, controlled via the lowest-possible-cost vendor server back-end API that gets the product shipped to market quickly.
Let's not even go near the "embedded system" mindset of not patching and staying up to date - embedded systems are a good place to meet Linux 2.4 or 2.6, even today... Vendors ship whatever their CPU chipset vendor gives them as a board support package, generally as a "tossed over the wall" lump of code.
I doubt many of these issues (which seem to be commercial/price driven) will be resolved through paperwork, as you say.
In the rest of the tech industry, what you did to get your diploma gives you about 18 months of momentum. If you haven’t learned multiple new technologies by that point, you’re in trouble. Success in this industry means perpetually redeveloping your own skills, and liking it.
How someone would wave a 20 year old piece of paper as evidence that they know how to use solar tech that was developed last year, I don’t know.
I mean, electrical engineering teaches you a lot of the math, physics, control systems theory, and power systems knowledge that guides the design and operating characteristics of power systems devices like inverters. Sure, EE doesn't help with cybersecurity per se, but inverters and solar panels existed 20 years ago, so I feel like my 20 year old electrical engineering degree is pretty darn relevant
It certainly does - if you remain current then not a lot has really changed.
If you understand the principles of control systems and how an electrical grid works, this is broadly "just" a grid stability concern.
To some extent this feels like an issue of IoT-ification of things that we otherwise understood just fine! Maybe the real issue is how we blend cyber security knowledge into other sectors, and quantify and ensure it is present?
Fair enough, the parent comment mentioned “solar tech” and old pieces of paper, and silly me didn't realize that the power systems side is a given and the problem is essentially hooking it up to computers and the internet to gain a modicum of convenience.
Given the challenges of techniques like TLS interception (defeated by pinning and other good security features), about the only measure I can see left is network isolation.
You can set up a local network that has no WAN connectivity. For almost anything else, it's difficult to verify even the most basic security properties. Certifying is another step up (although you could argue certifying is just a third party saying something passed a finite list of tests) - the real challenge is defining a meaningful certification scheme.
The trouble is that these set out principles, but it's hard to validate those principles without having about the same amount of knowledge as required to build an equivalent system in the first place.
If you at least know the system is not connected to a WAN, you can limit the assurance required (look for WiFi functionality, new SSIDs, and attempts to connect to open networks), but at a certain point you need to be able to trust the vendor (else they could put a hard-coded "time bomb" into the code for the solutions they develop).
I don't see much value in the academic/theoretical approaches to verification (for a consumer or stakeholder concerned by issues like these), as they tend to operate on an unrealistic set of assumptions (i.e. source code or similar levels of unrealistic access) - the reality is it could take a few days for a good embedded device hacker to even get binary firmware extracted from a device, and source code is likely a dream for products built to the lowest price overseas and imported.
We're not just talking about random consumer hardware: with security issues like these, I don't see why closed source software would not just be banned.
They don't need to break into separate sites though - the issue at hand is that a single failure in the centralised "control plane" from the vendor (i.e. the API server that talks to consumers' apps) can be incredibly vulnerable.
Here's a recent example where a 512-bit RSA signing key was being used to sign JWTs, allowing a "master" JWT to be minted, giving control of every system on that vendor's platform.
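To see why a weak signing key is catastrophic: a JWT's only protection is the signature over `base64url(header) + "." + base64url(payload)`. Once the 512-bit modulus is factored (feasible in days on rented hardware), the attacker can sign any claims they like. A sketch of the exact byte string a recovered key would sign - the claim names here are hypothetical, not from the actual incident:

```python
import base64
import json

# Sketch of JWT forging given a recovered private key. The signature
# covers only header.payload; nothing stops an attacker who holds the
# key from choosing arbitrary claims. Claim names are made up.

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def signing_input(header: dict, claims: dict) -> str:
    # The exact string the (factored) RSA private key would sign.
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())

forged = signing_input(
    {"alg": "RS256", "typ": "JWT"},
    {"sub": "admin", "scope": "all-devices", "exp": 9999999999},
)
# Appending a signature made with the recovered key yields a token every
# backend trusting that 512-bit public key will accept.
```

Current guidance (e.g. NIST) treats anything under 2048-bit RSA as unacceptable for exactly this reason.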