
Crunch time in those days sucked. I remember mandatory nights and weekends and the managers ordering in pizza for everyone.

Yes. See attacks like Pegasus.

Almost every modern SoC has efuse memory. For example, this is used for yield management - the SoC will have extra blocks of RAM and expect some % to be dead. At manufacturing time they will blow fuses to say which RAM cells tested bad.
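
To make that concrete, here's a toy sketch of how firmware might consume that repair info (addresses, layout, and names below are all invented for illustration; real SoCs expose fuses through vendor-specific fuse controllers, usually consumed by the boot ROM or the memory controller itself):

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical layout: one 32-bit fuse word, one bit per RAM block,
       where a blown bit means "this block failed manufacturing test". */
    #define FUSE_RAM_REPAIR_ADDR  0x40001000u  /* made-up address */
    #define NUM_RAM_BLOCKS        32u

    static inline uint32_t fuse_read_ram_repair(void)
    {
        return *(volatile uint32_t *)FUSE_RAM_REPAIR_ADDR;
    }

    /* True if the given RAM block was fused off as bad at the factory. */
    bool ram_block_is_bad(unsigned block)
    {
        if (block >= NUM_RAM_BLOCKS)
            return true;  /* out of range: treat as unusable */
        return (fuse_read_ram_repair() >> block) & 1u;
    }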

100%. If you steal a phone from the Apple Store, they just remotely brick it.


Very hard. FIB (focused ion beam) is the only known way to do this, but even then, it's the type of thing where you start with a pile of SoCs and expect to maybe get lucky with one in a hundred. A FIB machine also costs millions of dollars.

Open the case and pogo-pin a flash programmer directly onto the pins of the flash chip.

Sophisticated actors (think state-level actors like a border agent who insists on taking your phone to a back room for "inspection" while you wait at customs) can and will develop specialized tooling to help them do this very quickly.


It depends. Usually there are enough of these "knobs" that adding a ball to the package for each one would be crazy expensive at volume.

Most SoCs of even moderate complexity have lots of redundancy built in for yield management (e.g. anything with RAM expects some % of the RAM cells to be dead on any given chip) and use fuses to keep track of that. If you had to have a strap per RAM block, it would not scale.


I grew up poor in the US. It was not super awesome, but not as bad as the article would make you fear. The public schools (and activities tied to them) were great, even in my "bad" district. Libraries were everywhere and very accessible, and the libraries in my schools were giant and frequently used. I never went hungry a day in my life, at times thanks to food stamps. It was possible to find cheap enough housing to survive on low income without government aid.

The biggest problem, by far, was medical care. I saw a dentist for the first time in my 20s. Any medical problem felt like a disaster that could put us on the street if not managed carefully. I'm very envious of Canada on this front.

Interestingly, I have a similar feeling of gratitude toward the US as the author has toward Canada. Food stamps, and eventually tuition waivers and scholarships, let me break out of poverty. I'm so, so grateful I had those opportunities.

Like the author, I feel we could do a hell of a lot better in a lot of ways (especially lately!), but the core we have is still pretty dang good and I still feel lucky for having access to it.


It'd be ideal if the phone manufacturer had a way to delegate trust and say "you take the risk, you deal with the consequences" - unlocking the bootloader used to be this. Now we're moving to platforms treating any unlocked device as uniformly untrusted, because of all of the security problems your untrusted device can cause if they allow it inside their trust boundary.

We can't have nice things because bad people abused them :(.

Realistically, we're moving to a model where you'll have to have a locked down iPhone or Android device to act as a trusted device to access anything that needs security (like banking), and then a second device if you want to play.

The really evil part is that things that don't need security (like, say, reading a website without a login - just establishing a TLS session) might go away for untrusted devices as well.


> We can't have nice things because bad people abused them :(.

You've fallen for their propaganda. It's a bit off topic from the OnePlus headline, but as far as bootloaders go, we can't have nice things because the vendors and app developers want control over end users. The Android security model is explicit that the user, vendor, and app developer are each party to the process and can veto anything. That's fundamentally incompatible with my worldview and I explicitly think it should be legislated out of existence.

The user is the only legitimate party to what happens on a privately owned device. App developers are to be viewed as potential adversaries who might attempt to take advantage of you. To the extent that you are forced to trust the vendor, they have the equivalent of a fiduciary duty to you - they are ethically bound to see your best interests carried out to the best of their ability.


> That's fundamentally incompatible with my worldview and I explicitly think it should be legislated out of existence.

The model that makes sense to me personally is that private companies should be legally required to be absolutely clear about what they are selling you. If a company wants to make a locked-down device, that should be their right. If you don't want to buy it, that's your absolute right too.

As a consumer, you should be given the information you need to make the choices that are aligned with your values.

If a company says "I'm selling you a device you can root", and people buy the device because that's what was advertised, the company should be on the hook to uphold that promise. The nasty thing in this thread is the potential rug pull by OnePlus, especially as they have kind of marketed themselves as the alternative to companies that lock their devices down.


I don't entirely agree, but neither would I be dead set against such an arrangement. Consider that (for example) while private banks are free not to do business with you, at least in civilized countries there is a government-associated bank that will always do business with anyone. Mobile devices occupy a similar space; there would always need to be a vendor offering user-controllable devices. And we would also need legal protections against app authors, given that (for example) banking apps currently pick and choose which device configurations they will run on.

I think it would be far simpler and more effective to outlaw vendor-controlled devices. Note that this wouldn't prevent the existence of some sort of opt-in key escrow service where users voluntarily turn over control of the root of trust to a third party (possibly the vendor themselves).

You can already basically do this on Google Pixel devices today. Flash a custom ROM, relock the bootloader, and disable bootloader unlocking in settings. Control of the device is then held by whoever controls the keys at the root of the flashed ROM, with the caveat that if you can log in to the phone you can re-enable bootloader unlocking.


>and then a second device if you want to play.

With virtualization this could be done with the same device. The play VM can be properly isolated from the secure one.


How is that supposed to fix anything if I don't trust the hypervisor?

It's funny, GP framed it as "work" vs "play" but for me it's "untrusted software that spies on me that I'm forced to use" vs "software stack that I mostly trust (except the firmware) but BigCorp doesn't approve of".


Then yes, you will need another device. Same if you don't trust the processor.

> Same if you don't trust the processor.

Well I don't entirely, but in that case there's even less of a choice and also (it seems to me) less risk. The OEM software stack on the phone is expected to phone home. On the other hand there is a strong expectation that a CPU or southbridge or whatever other chip will not do that on its own. Not only would it be much more technically complex to pull off, it should also be easy to confirm once suspected by going around and auditing other identical hardware.

As you progress down the stack from userspace to OS to firmware to hardware there is progressively less opportunity to interact directly with the network in a non-surreptitious manner, more expectation of isolation, and it becomes increasingly difficult to hide something after the fact. On the extreme end a hardware backdoor is permanently built into the chip as a sort of physical artifact. It's literally impossible to cover it up after the fact. That's incredibly high risk for the manufacturer.

The above is why the Intel ME and AMD PSP solutions are so nefarious. They normalize the expectation that the hardware vendor maintains unauditable, network capable, remotely patchable black box software that sits at the bottom of the stack at the root of trust. It's literally something out of a dystopian sci-fi flick.


OTP memory is a key building block of any secure system and is likely already present in any device you own.

Any kind of device-unique key is likely rooted in OTP (via a seed or PUF activation).

The root of all certificate chains is likely hashed in fuses to prevent swapping out cert chains with a flash programmer.
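
As a sketch of what that pinning can look like (assuming SHA-256 from mbedTLS or any equivalent, plus a made-up fuse_read_root_hash() accessor; real parts do this in the boot ROM with vendor-specific fuse hardware):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdbool.h>
    #include "mbedtls/sha256.h"

    /* Hypothetical accessor: copies the 32-byte root-key hash out of fuses. */
    extern void fuse_read_root_hash(uint8_t out[32]);

    /* Accept a root public key stored in external flash only if its hash
       matches the hash burned into fuses at manufacturing time. */
    bool root_key_is_trusted(const uint8_t *root_pubkey, size_t len)
    {
        uint8_t computed[32], fused[32];
        if (mbedtls_sha256(root_pubkey, len, computed, 0) != 0)
            return false;  /* 0 = SHA-256 (not SHA-224) */
        fuse_read_root_hash(fused);
        /* A real implementation would use a constant-time compare here. */
        return memcmp(computed, fused, sizeof(fused)) == 0;
    }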

It's commonly used for anti-rollback as well - the biggest news here is that they didn't have this already.

If there's some horrible security bug found in an old version of their software, they have no way to stop an attacker from loading up the broken firmware to exploit your device? That is not aligned with modern best practices for security.
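
A minimal sketch of fuse-based anti-rollback (names and addresses invented for illustration): treat a bank of fuse bits as a monotonic counter, where the number of blown bits is the lowest firmware security version the device will still accept.

    #include <stdint.h>
    #include <stdbool.h>

    #define FUSE_ROLLBACK_BANK_ADDR  0x40001010u  /* made-up address */

    static inline uint32_t fuse_read_rollback_bank(void)
    {
        return *(volatile uint32_t *)FUSE_ROLLBACK_BANK_ADDR;
    }

    /* Count blown bits; each one permanently raises the version floor. */
    static unsigned popcount32(uint32_t x)
    {
        unsigned n = 0;
        while (x) { n += x & 1u; x >>= 1; }
        return n;
    }

    /* Called after signature verification: reject images older than the
       fused floor, even though their signatures are still valid. */
    bool firmware_version_acceptable(uint32_t image_security_version)
    {
        return image_security_version >= popcount32(fuse_read_rollback_bank());
    }

Once a fixed release has shipped, the updater blows one more bit, and the old signed-but-broken image can never be installed again.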


> they have no way to stop an attacker from loading up the broken firmware to exploit your device

You mean an attacker with physical access to the device plugging in over USB or UART, or an attacker who downgrades the firmware to an older version so they can use the exploit in that version?


Sure. Or the supply chain attacker (who is perhaps a state-level actor, if you want to think really spicy thoughts) selling you a device on Amazon that you think is secure, but that they messed with when it passed through their hands on its way to you.

The state level supply chain attacker can just replace the entire chip, or any other part of the product. No amount of technical wizardry can prevent this.

Modern devices try to prevent this by cryptographically entangling the firmware on the flash to the chip - e.g. encrypting it with a device-unique key from a PUF. So if you replace the chip, it won't be able to decrypt the firmware on flash, or boot.
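
Very roughly, the key derivation looks something like this (names below are invented; a real design keeps the PUF output and derived keys inside a hardware key ladder instead of ever handing them to software):

    #include <stdint.h>
    #include <string.h>
    #include "mbedtls/sha256.h"  /* any SHA-256 works for this sketch */

    /* Hypothetical HAL call that reconstructs the 32-byte device-unique PUF seed. */
    extern void puf_read_seed(uint8_t seed[32]);

    /* Derive the key used to encrypt firmware at rest on external flash. */
    void derive_fw_wrap_key(uint8_t key_out[32])
    {
        uint8_t buf[32 + 16];
        puf_read_seed(buf);
        memcpy(buf + 32, "fw-at-rest-key-1", 16);  /* domain-separation label */
        /* key = SHA-256(puf_seed || label); a real design would use a proper
           KDF like HKDF rather than a bare hash. */
        (void)mbedtls_sha256(buf, sizeof(buf), key_out, 0 /* 0 = SHA-256 */);
    }

Move the flash to a different chip and that chip's PUF reconstructs a different seed, so the image on flash no longer decrypts.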

The evil of this type of attack is that the firmware with an exploit would be properly signed, so the firmware update systems on the chip would install it (and encrypt it with the PUF-based key) unless you have anti-rollback.

Of course, with a skilled enough attacker, anything is possible.


If this were the reason, downgrading would wipe the device, not brick it permanently.

> You mean an attacker with physical access to the device plugging in over USB or UART

... which describes US border controls or police in general. Once "law enforcement" becomes part of one's threat model, the balance of a lot of trade-offs changes entirely.


That's an example of an evil maid attack. On laptops it's prevented automatically by secure boot, or manually by encryption and checking fingerprints, not by bricking the whole device.
