Hacker News | least's comments

https://html.spec.whatwg.org/multipage/nav-history-apis.html...

The spec kind of goes into it, but aside from the whole issue of SPAs needing to behave like individual static documents, the big thing is that it's a place to store state. Some of this can be preserved through form actions and anchor tags, but some cannot.

Let's say you are on an ecommerce website. It has a page for a shirt you're interested in. That shirt has different variations - color, size, sleeve length, etc.

If you use input elements and a form action, you can save the state that way, and the server redirects the user to the same page but with additional query parameters in the URL. You now have a link to that specific variation for you to copy and send to your friend.

Would anyone really ever do that? Probably not. More than likely there'd just be an add to cart button. This is serviceable, but it's not necessarily great UX.

With the History API you can replace the URL with one that embeds the state of the shirt, so that when you link it to your friend it is exactly the one you want. Or you can bookmark it to come back to later. Or you can bookmark multiple variations without having to interact with the server at all.

Similarly, on that web page you have an image gallery for the shirt. Without the History API, maybe you click on a thumbnail and it opens a preview, which is a round trip to the server and a hard reload. Then you click next. Same thing. New image. Then again. And again. And each time you are adding a new item to the history stack. That might be fine or even preferred, but not always! If I want to get back to my shirt, I now have to navigate back several pages because each image has been added to the stack.

If you use the History API, you can add a new URL to the stack when you open the image viewer. Then as you navigate it, you update it to point to the specific image, which gives the user the ability to link to that specific image in the gallery. If you want to go back when you're done, you only have to press back once because we weren't polluting the stack with a new history entry on each image change.
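As a rough sketch of that gallery pattern (the element ids and the `?image=` parameter are made up for illustration), the URL-building logic can live in a plain function, with the History API wiring shown in comments:

```javascript
// Build the gallery URL for a given image index; null means "no viewer open".
// Kept as a pure function so the logic is easy to test outside a browser.
function galleryUrl(basePath, imageIndex) {
  return imageIndex === null ? basePath : `${basePath}?image=${imageIndex}`;
}

// Browser wiring (hypothetical element ids):
//
// let index = 0;
// thumbnail.onclick = () => {
//   // One pushState when the viewer opens: exactly one new history entry.
//   history.pushState({ viewer: true }, '', galleryUrl('/shirt', index));
// };
// nextButton.onclick = () => {
//   // replaceState on each image change: the entry is updated, not duplicated,
//   // so a single "back" press closes the viewer.
//   history.replaceState({ viewer: true }, '', galleryUrl('/shirt', ++index));
// };
```

The split between one pushState (opening the viewer) and repeated replaceState (moving through images) is what keeps the stack clean while still making every image linkable.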


Thanks for the detailed and thoughtful reply! I agree that in both of the scenarios you mentioned, this API does provide better usability.

I guess what feels wrong to me is the implicitness of this feature, I'm not sure whether clicking on something is going to add to history or not (until the back button breaks, then I really know).


The History API is pretty useful. It creates a lot of UX improvement opportunities when you're not polluting the stack with unnecessary state changes. It's also a great way to store state so that a user may bookmark or link something directly. It's straight up necessary for SPAs to behave how they should behave, where navigating back takes you back to the previous page.

This feels like a reasonable counter-measure.


Yeah but all of this is a symptom of a broader problem rather than reasons why the history API is useful.

SPAs, for example, require so many hacks to work correctly that I often wonder to myself if they’re not really just a colossal mistake that the industry is too blinded to accept.


As a user, I really don't care about the supposed purity or correctness of a website's tech stack. When I click "back" I want to go back to what I think the previous page was.

As a user, I don’t really care about the building materials used in construction. But that doesn’t mean builders should cut corners.

A building collapse and a poorly built website UI are completely different in terms of actual risk.

A building collapsing isn’t the only way people are affected by choices in construction. But if you want to talk about worst case scenarios then I can pick out some examples in IT too:

We constantly see people’s PII leaked on the internet, accounts hacked and money stolen, due to piss poor safeguards in the industry. And that’s without touching on the intentional malpractice of user tracking.

And yes, this is a different issue, but it’s another symptom of the same problem. Tech businesses don’t give a shit, and developers make excuses about how it’s not life or death. Except our bad choices do still negatively affect people’s lives even if we try to convince ourselves it doesn’t.


Could you provide some examples of the hacks you're referring to?

State management, URL fragment management, reimplementing basic controls...

One that I hate the most is that they first reimplement tabular display with a soup of divs, then because this is slow as a dog, they implement virtualized display, which means they now need to reimplement scrolling, and because this obviously breaks CTRL+F, they end up piling endless hacks to fix that - assuming they bother at all.

The result is a page that struggles to display 100 rows of data. Contrast that with regular HTML, where you can shove 10 000 rows into a table, fully styled, without a noticeable performance drop. A "classical" webpage can show a couple of megabytes' worth of data and still be faster and more responsive than a typical SPA.
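That claim is easy to try for yourself. A minimal sketch (the columns are made up): build one big HTML string for a 10,000-row table and assign it to `innerHTML` once, and the browser handles layout, scrolling, and Ctrl+F natively:

```javascript
// Build a plain 10,000-row table as a single HTML string.
// One innerHTML assignment is far cheaper than 10,000 separate DOM
// insertions, and native scrolling and Ctrl+F just work.
function buildTableHtml(rowCount) {
  const rows = [];
  for (let i = 0; i < rowCount; i++) {
    rows.push(`<tr><td>${i}</td><td>Item ${i}</td></tr>`);
  }
  return `<table><thead><tr><th>#</th><th>Name</th></tr></thead>` +
         `<tbody>${rows.join('')}</tbody></table>`;
}

// In a browser:
// document.getElementById('out').innerHTML = buildTableHtml(10000);
```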


Sounds like you're referring to some specific examples of poorly implemented apps rather than the concept of SPAs as a whole.

For your example, the point of that div soup is that it enables behaviours like row/column drag&drop reordering, inline data editing, realtime data syncing and streaming updates, etc. - there is no way to implement that kind of user experience with just html tables.

There's also huge benefit to being able to depend on clientside state. Especially if you want your apps to scale while keeping infra costs minimal.

I get the frustrations you're talking about, but almost all of them are side effects of solutions to very real UX problems that couldn't be solved in any other way.

And to be clear, I'm not saying that people building SPAs when all they needed was a page showing 10,000 rows of static data isn't a problem. It's just a people problem, not an SPA problem.


> all of them are side effects of solutions to very real UX problems that couldn't be solved in any other way.

Except they had been solved in other ways and the problem was people insisted on using web technologies to emulate those other technologies even when web technologies didn’t support the same primitives. And they chose that path because it was cheaper than using the correct technologies from the outset. And thus a thousand hacks were invented because it’s cheaper than doing things properly.

Then along comes Electron, React Native and so on and so forth. And our hacks continue to proliferate, memory usage be damned.


> And they chose that path because it was cheaper than using the correct technologies from the outset

No, otherwise they would not need all those hacks. The web stack makes it cheap (fast and easy) to build an MVP, but since the very primitives required to fully implement the requirements are not even there, they end up implementing tons of ugly hacks held together by duct tape. All because they thought they could iterate fast and cheap.

It's the same story with teams picking any highly dynamic language for an MVP and then implementing half-baked typing on top of it when the project gets out of MVP stage. Otherwise the bug reproduction rate outpaces fixing rate.


Having done native and web frontends, they are different.

I prefer the capabilities of native frameworks but I prefer the web box model.

Sizing stuff in native frameworks is nice until it isn’t.


I’ve done both too. And I honestly don’t like the box model.

But I will admit I’ve focused more on desktop than mobile app development. And the thing about sizing stuff is that it’s a much easier problem for desktop than for mobile apps, which are full screen and have to cope with a multitude of screen sizes and orientations.


>> I get the frustrations you're talking about, but almost all of them are side effects of solutions to very real UX problems that couldn't be solved in any other way.

Any other way? Just build a web app with emscripten. You can do anything.

For a while GTK had an HTML5 backend so you could build whole GUI apps for web, but I think it got dropped because nobody used it.


> rather than the concept of SPAs as a whole.

This is the whole concept of the SPA - make a page behave like multiple pages. The premise itself requires breaking absolutely everything that assumes content is static.

> There's also huge benefit to being able to depend on clientside state. Especially if you want your apps to scale while keeping infra costs minimal.

Um... I'm old enough to remember the initial release of node, where the value proposition was that since you cannot trust client data anyway and have to implement thorough checking both client and server side, why not implement that once.
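That value proposition can be sketched in a few lines (the validation rules here are made up): a single validation function runs in the browser for instant feedback and again on the Node server as the authoritative check:

```javascript
// One validator, two call sites: the browser uses it for instant feedback,
// and the server re-runs it on every request because client data can't be
// trusted. Writing it once is the "validate once" value proposition.
function validateOrder(order) {
  const errors = [];
  if (!Number.isInteger(order.quantity) || order.quantity < 1) {
    errors.push('quantity must be a positive integer');
  }
  if (typeof order.sku !== 'string' || order.sku.length === 0) {
    errors.push('sku is required');
  }
  return errors;
}

// Client: call validateOrder() before submitting to show errors inline.
// Server (Express-style, hypothetical route):
// app.post('/order', (req, res) => {
//   const errors = validateOrder(req.body);
//   if (errors.length) return res.status(400).json({ errors });
//   // ... process the order
// });
```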

> I get the frustrations you're talking about, but almost all of them are side effects of solutions to very real UX problems that couldn't be solved in any other way.

Let me introduce you to our lord and savior native app


If you don't manage the history properly in your SPA, pressing the back button could take the user out of the app entirely.

If you don't let web developers manage history/state like this, we'd be going back to the inefficient world of, "every forward/back movement loads a whole page." (With lots of unnecessary round trip messages between the client and server while the user waits for everything to load).

Basically, the ability to manage history is a user-centric feature. It makes the experience better for them.
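A minimal sketch of what managing history properly looks like in an SPA (the route table and render functions are hypothetical): push an entry when the user navigates in-app, and listen for popstate so the back button re-renders the previous view instead of leaving the app:

```javascript
// Tiny route table mapping paths to render functions (hypothetical pages).
const routes = {
  '/': () => 'home',
  '/shirts': () => 'shirt list',
};

// Resolve a path to its rendered view, falling back to a 404 view.
function render(path) {
  const view = routes[path];
  return view ? view() : 'not found';
}

// Browser wiring:
// function navigate(path) {
//   history.pushState({}, '', path);   // new entry, so "back" works
//   show(render(path));                // re-render without a full page load
// }
// window.onpopstate = () => show(render(location.pathname));
```

The pushState call is what keeps the back button meaningful inside the app, while the popstate listener is what lets forward/back movement skip the full-page round trip.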


> If you don't manage the history properly in your SPA, pressing the back button could take the user out of the app entirely.

Yes. And that should be the default behavior: browser buttons should take you through the browser's history. If you keep an in-app state and want the user to navigate through it, you should provide in-app buttons.

Nobody complains that the browser's close button quits the browser instead of the app it's showing, or that the computer's power button shuts down the whole OS and not only the program in the foreground.

Users must be educated. If they have learned that left means "back" and right means "forward", that a star (sometimes a heart) means "remember this for me", and that an underlined checkmark means "download", then understanding the concept of encapsulation shouldn't be too much for them.


> Yes. And that should be the default behavior: browser buttons should take you through the browser's history. If you keep a in-app state and want the user to navigate through it, you should provide in-app buttons.

The Back and Forward buttons on a web browser are the navigation for the web. If you click a link on a static html page it will create a new entry. If you click back, it'll take you back. If you press forward, you will navigate forward.

We should not be creating a secondary set of controls that does the same thing. This is bad UX, bad design, and bad for an accessible web.

> Nobody complains that the browser's close button quits the browser instead of the app it's showing, or that the computer's power button shuts down the whole OS and not only the program in the foreground.

It does close the app it's showing because we have tabs. If you close a tab, it'll close the app that it's showing. If you close the browser, which is made up of many tabs, it closes all of the tabs. Before tabs, if you closed a window, the web page you were on would close as well. It does what is reasonably expected.

If on your web application you have a 'link' to another 'page' where it shows a change in the view, then you'd expect you would be able to press back to go back to what you were just looking at. SPAs that DON'T do that are the ones that are doing a disservice to the user and reasonable navigation expectations.

> Users must be educated. If they have learned that left means "back" and right means "forward", that a star (sometimes a heart) means "remember this for me", and that an underlined checkmark means "download", then understanding the concept of encapsulation shouldn't be too much for them.

They should not have to be 'educated' here. The mental model of using the back and forward buttons to navigate within a webpage is totally fine.


> It's also a great way to store state so that a user may bookmark or link something directly.

Can you unpack this please? AFAIK history stack is not preserved in the URL, therefore it cannot be preserved in a bookmark or a shared link.


Probably referring to using pushState (part of the History API) to update the URL to a bookmarkable fragment URL, or even to a regular path leading to a created document.

https://developer.mozilla.org/en-US/docs/Web/API/History/pus...

> The new history entry's URL. Note that the browser won't attempt to load this URL after a call to pushState(), but it may attempt to load the URL later, for instance, after the user restarts the browser.


Sure. I'm not speaking about preserving the full history stack in the URL, just storing state. Apologies in advance if my explanation for what I mean is something you already understand.

This can be as simple as having a single checkbox with a checked/unchecked state.

When you load the webpage, the JavaScript can pull in the URL parameters with URLSearchParams (https://developer.mozilla.org/en-US/docs/Web/API/URLSearchPa...). If the URL parameter you set is 'on', then the checkbox, which is unchecked by default, can be set to on.

You have your checkbox:

    <input type="checkbox" id="check">
And then you have your javascript:

    const check = document.getElementById('check');
  
    // get state of checkbox from URL parameter
    check.checked = new URLSearchParams(location.search).get('state') === 'on';
    
    // add event listener to call history api to alter the URL state.
    check.onchange = () => { history.replaceState(null, '', check.checked ? '?state=on' : '?state=off'); };

The history.replaceState() replaces the URL in your history with the one including the URL parameter, so if a user were to bookmark it, it would store that and reload it when they revisit the webpage.

If I used history.pushState(), each time I clicked on the checkbox a new item would be added to the stack. For a checkbox this is almost certainly a bad idea, because your browser history is going to be polluted pretty quickly if you happen to click it multiple times.

pushState can be useful when it matches the user expectations, though, like if it is an SPA and the user clicks on an internal link to another section of the site, they'd expect to be able to go back to the previous page, even though we're still on the same actual html page.

So you would not be preserving the entire history stack. You can sort of do this by encoding state changes into another url parameter, but the behavior isn't entirely consistent between browsers. It also does require, as far as I know, an explicit action from the user for it to actually affect their navigation. So a website couldn't just add 1000 entries to the user's history on load without some explicit interaction on the web page.

Once the user interacts, though, it does seem like it opens up a lot of opportunity to abuse it, intentionally or not. You can asynchronously push thousands of entries into the browser history without blocking interactivity of the site. You can even continue to push state to the URL from other inputs while doing so.


It’s kind of like modal editing. Your 99% is enter to send because it’s a chat program. You’re sending mostly quick messages, where adding a chorded input to send is just adding extra work to that mode.

When you enter a code block, that assumption changes. You are now in a “long text” mode where the assumptions have shifted and you are more likely to want to insert a new line than to send the message.

I think people that have used tables or a spreadsheet and a text editor kind of understand modal editing and why we shift behaviors depending on the context. Pressing tab in a table or spreadsheet will navigate cells instead of inserting a tab character. Pressing arrow keys may navigate cells instead of characters in the cell. Pressing enter will navigate to the cell below, not the first column of the next row. It’s optimized for its primary use case.

I think if the mode change was more explicit it’d maybe be a better experience. Right now it is largely guessing what behavior someone wants based off the context of their message but if that mismatches the users expectations it’s always going to feel clumsy. A toggle or indicator with a keyboard shortcut. Can stick the advanced options inside the settings somewhere if a power user wants to tinker.
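The guessing described above can be reduced to one small decision function (the rules here are an assumption about how such a chat input might behave): Enter sends in normal mode, but inserts a newline inside an unclosed code fence or when Shift is held:

```javascript
// Decide what Enter should do in a chat input.
// An odd number of ``` fences means the cursor is inside a code block,
// so Enter should insert a newline rather than send.
function enterAction(draft, shiftHeld) {
  const fenceCount = (draft.match(/```/g) || []).length;
  const inCodeBlock = fenceCount % 2 === 1;
  return shiftHeld || inCodeBlock ? 'newline' : 'send';
}

// Browser wiring (hypothetical input element and sendMessage function):
// input.onkeydown = (e) => {
//   if (e.key === 'Enter' && enterAction(input.value, e.shiftKey) === 'send') {
//     e.preventDefault();
//     sendMessage(input.value);
//   }
// };
```

Making the mode explicit, as suggested above, would amount to surfacing `inCodeBlock` in the UI instead of silently branching on it.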


> I think people that have used tables or a spreadsheet and a text editor kind of understand modal editing and why we shift behaviors depending on the context.

I don't have spreadsheet software nearby, but I remember the cell is highlighted differently if you're in insert mode or navigation mode. Just like the status line in Vim lets you know which mode you're in.


Babel features are kind of a moot point if you’re just talking about the syntax, which seems to be the purpose of the post. Most of the reason to use org mode is tied to emacs.

There’s no reason you couldn’t do something similar with markdown code blocks if someone were so inclined. But that’s tool dependent, not syntax.

I sort of agree with Karl’s point about there being too many standards of markdown, but I doubt org mode would have survived the same level of popularity without suffering the same fate.

It doesn’t help that there is no standard for org mode. You can only really use and take advantage of its power in emacs. It isn’t susceptible to lossy transformations because there’s only one real org mode editor.


Well, but I am not aware of anyone having come up with a good syntax to do babel things in Markdown. Markdown and Org Mode also set out to serve different purposes. For a quick and dirty text Markdown might suffice, but the babel stuff and spreadsheet stuff enable a lot of use cases that Markdown simply doesn't cater to. We already have the implementation of all these nice things in Emacs. If we were to replicate them for some markdown dialect, they would probably be done half-right, before someone actually manages to get literate programming right for various languages, including what code to translate to, how to wrap or not wrap the code that is inside blocks, sessions, output formats, etc. We might as well use what we have with Emacs. There is probably a way to call Emacs' functionality from outside of Emacs, to treat it as a library.

But not all is well with Org Mode syntax either. Many git hosts have only a very rudimentary implementation of a parser, and writing a parser for it is not actually that easy. Its dynamic nature requires at least a 2-step approach of parsing and then later checking all kinds of definitions at the top of a file and further processing the document according to that. Its power comes at that cost. That's probably why we have so many Markdowns, but only one Org Mode (OK, maybe a few, counting Vim and VSCodium plugins, which achieve a feature subset).

I will say though, that org mode syntax is much better suited for writing technical documentation than markdown. The only issue is that not so many people know it or want to learn it, and I don't know a way to change that. Perhaps that effort to have the org mode syntax separately defined (https://gitlab.com/publicvoit/orgdown/-/blob/master/doc/Over...) by the same author will help create more support for the format in various tools.


I agree you would need to specify the markdown to allow more implementations. https://github.com/jgm/djot would make a good DSL inside languages; combine that with compile-time execution so that blocks can auto-recalculate and you have a more available mechanism than emacs/org in other languages.


It's more likely it has to do with all the work they're doing to get the WebExtension API to work with WebKit, which is a main selling feature for the macOS version - using Firefox and Chrome extensions in a WebKit-powered browser.


And especially for the iOS version, where there are not many options for using extensions in other browsers. The only browser there that can use uBlock Origin, afaik. On macOS it is a bit too buggy for me for daily use, ymmv.


The difference between those is the person is actually using this text editor that they built with the help of LLMs. There's plenty of people creating novel scripts and programs that can accommodate their own unique specifications.

If a programmer creating their own software (or contracting it out to a developer) would be a bespoke suit and using software someone or some company created without your input is an off the rack suit, I'd liken these sorts of programs as semi-bespoke, or made to measure.

"LLMs are literally technology that can only reproduce the past" feels like an odd statement. I think the point they're going for is that it's not thinking and so it's not going to produce new ideas like a human would? But literally no technology does that. That is all derived from some human beings being particularly clever.

LLMs are tools. They can enable a human to create new things because they are interfacing with a human to facilitate it. It's merging the functional knowledge and vision of a person and translating it into something else.


compilers can only produce machine code. so unoriginal.


The expansion cards seem pretty gimmicky to me. You're replacing a hub with... a bunch of hubs with one port each. I know it opens up to some third party modules (this one seems particularly cool: https://github.com/LeoDJ/FW-EC-DongleHiderPlus) but for the most part you are getting less connectivity than other laptops. You don't even get an audio jack without taking up one of your expansion slots (edit: on the Framework 16; the 13 includes it).

If the expansion slots were larger then they could have maybe facilitated something like getting 2 USB-A ports in exchange for the one USB-C, which feels like an actual thing to consider. As it is, it just doesn't feel like you're gaining anything. If you're carrying any additional expansion cards with you, you lose the only advantage it has over buying a hub, which can turn that one USB-C slot into multiple USB-A ports, ethernet, HDMI, audio, an SD card reader, etc.


For what it's worth, the 13 does have an audio jack. It's only the 16 that requires an addin card for that.

I get where you're coming from, though. For me, the whole package was worth it, but that's probably not true for everyone.


> There's no reason why Framework cannot be that successful in 10 years time.

They don't have the resources nor is their scope large enough. Could that change in 10 years? Maybe, but probably not. I'm not even sure it's something they would want to replicate. Retail costs a lot of money and the benefits to it are quite limited. Similarly a service network that would be comparable to one of the larger PC manufacturers would also be very expensive.

> Furthermore, when Framework might become that successful, no need to buy a full new laptop, you can just buy the stuff that failed and move on. And if that does happen, then experience with Framework promises to be much better than experience with Macbook.

The experience you're describing still involves a person opening up their laptop to replace whatever the failed part is, assuming they even know what the failed part is. I'm qualified to do that sort of diagnostics on a computer, and depending on what it is, it'd still be more downtime than going to buy/getting a loaner laptop in most cases.

I'm not saying people can't learn that but I know that people won't.


If that were the case then why isn’t Firefox more successful on Android? Apple blocking other browser engines on iOS is the only thing preventing a complete hegemony of the web by Google/Blink.


Different platforms, different tastes.

Facebook's Threads app has more activity on iOS than Android[1].

---

[1] https://pxlnv.com/linklog/threads-android-ranking/


I could be entirely off base, but I would expect Android to be more likely to have more users that would go out of their way to use a non-default web browser, given that it seems to be favored by people who like customizing things. The relative openness of the platform invites a different demographic.

On the other hand, the default on Android is Chrome so there may be less motivation to change since it's the 'default' platform to target. But if Apple opened up iOS to other browsers, the likely outcome would not be Firefox gaining market share but Chrome completely taking over.

I do not like that iOS doesn't allow for alternative engines, but I appreciate that it's basically the only thing that even somewhat reins Google in.


> But you're telling me privacy preserving solutions to combat illicit content are impossible?

Yes. You cannot have a system that positively associates illicit content with an owner while preserving privacy.


Thanks for the reply, but you are exactly the audience my post is for. Because you say that, we will lose what little figments of privacy and freedoms we have left.

Apple tried and made good progress. They had bugs which could be resolved but your insistence that it couldn't be done caused too much of an uproar.

You can have a system that flags illicit content with some confidence level and have a human review that content. You can make sure any model or heuristic used is publicly logged and audited. You can anonymously flag that content to reviewers, and when deemed as actually illicit by a human, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.

These are just some of the things that are possible that I came up with in the last minute of typing this post. Better and more well thought out solutions can be developed if taken seriously and funded well.

However, your response of "Yes." is materially false; law makers will catch on to that and discredit anything the privacy community has been advocating. Even simple heuristics that aren't using ML models can have a higher "true positive" rate of identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy. Because as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.


> Because you say that, we will lose what little figments of privacy and freedoms we have left.

I understand that you seem to think that adding systems like this will placate governments around the world but that is not the case. We have already conceded far more than we ever should have to government surveillance for a false sense of security.

> You can have a system that flags illicit content with some confidence level and have a human review that content. You can make sure any model or heuristic used is publicly logged and audited. You can anonymously flag that content to reviewers, and when deemed as actually illicit by a human, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.

What about this is privacy preserving?

> However, your response of "Yes." is materially false; law makers will catch on to that and discredit anything the privacy community has been advocating. Even simple heuristics that aren't using ML models can have a higher "true positive" rate of identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy. Because as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.

It's not "materially false." Bringing a human into the picture doesn't do anything to preserve privacy. If, like in your example, a parent's family photos with their children flag the system, you have already violated the person's privacy without just cause, regardless of whether the people reviewing it can identify the person or not.

You cannot have a system that is scanning everyone's stuff indiscriminately and have it not be a violation of privacy. There is a reason why there is a process where law enforcement must get permission from the courts to search and/or surveil suspects - it is supposed to be a protection against abuse.


> I understand that you seem to think that adding systems like this will placate governments around the world but that is not the case. We have already conceded far more than we ever should have to government surveillance for a false sense of security.

You have an ideological approach instead of a practical one. It isn't governments that are demanding it. I am demanding it of our government, I and the majority. I don't want freedoms paid for by such intolerable and abhorrent levels of ongoing injustice. It isn't a false sense of security; for the victims it is very real. Most criminals are not sophisticated. Crime prevention is always about making it difficult to do crime, not waving a magic wand and making crime go away. I'm not saying let's give up freedoms, but if your stance is there is no other way, then freedoms have to go away. But my stance is that the technology is there; it's just slippery slope fallacy thinking that's preventing it from getting implemented.

> What about this is privacy preserving?

Persons aren't identified before a human reviews and confirms that the material is illicit.

You have to identify yourself to the government to drive and place a license plate connected to you at all times on your car. You have to id yourself in most countries to get a mobile phone sim card, or open a bank account. Dragnet surveillance is what I agree is unacceptable except as a last resort, it isn't dragnet if algorithms flag it first, and it isn't privacy invading if false hits are never associated with individuals.

> you have already violated the person's privacy without just cause, regardless of whether the people reviewing it can identify the person or not.

There is just cause: the material was flagged as illicit. In legal terms, it is called probable cause. If a cop hears what sounds like a gunshot in your home, he doesn't need a warrant; he can break in immediately and investigate because it counts as exigent circumstances. The algorithms flagging content are the gunshots in this case. You could be naked in your house and it will be a violation of privacy, but acceptable by law. If you said that after review they should get a warrant from a judge, I'm all for it.

It is materially false, because the scanning can be done without sending a single byte off the device. The privacy intrusion happens not at the time of scanning, but at the time of verification. To continue my example: the cop could have heard you playing with firecrackers; you didn't do anything wrong, but your door is now broken and you were probably naked too, which means privacy violated. This is acceptable by society already.

The false positive rates for cops seeing/hearing things, and for eyewitness testimony, are very high in case you're not aware. By comparison, Apple's CSAM scanner was very low.

> There is a reason why there is a process where law enforcement must get permission from the courts to search and/or surveil suspects

As stated above, so long as the scanning is happening strictly on-device, you're not being surveilled. When there is a hit, humans can review the probable cause, a judge can issue a warrant for your arrest or a search warrant to access your device.
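For concreteness, the kind of on-device check being described can be sketched roughly like this. This is a simplified illustration, not Apple's actual NeuralHash/private-set-intersection design; the hash set is a placeholder (using the SHA-256 of `b"hello"` as a stand-in entry), and real systems use perceptual hashes rather than cryptographic ones so that re-encoded or slightly altered copies still match:

```python
import hashlib

# Placeholder "known illicit content" database. In a real deployment this
# would hold perceptual hashes supplied by a clearinghouse, not SHA-256.
KNOWN_HASHES = {
    # SHA-256 of b"hello", standing in for a real database entry
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def scan_on_device(file_bytes: bytes) -> bool:
    """Hash the file locally and compare against the known set.

    Nothing is transmitted during the scan itself; only a match
    would ever be escalated for human review.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES
```

In this sketch, files for which `scan_on_device` returns `False` are never surfaced to anyone; only matches would trigger the review-then-warrant process described above.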

Another solution might be to scan only at transmission time of the content, not at capture and storage (still not good enough, but this is the sort of conversation we need, not the plugging of ears).

Let's take a step back. Another solution might be to restrict every content publishing on the internet to people positively identifying themselves.


> You have an ideological approach instead of a practical one.

It's both. We can save a whole lot of time and money not wasting resources on security theater and reallocate it towards efforts that actually make society better and safer.

> It isn't governments that are demanding it. I am demanding it of our government, I and the majority.

> I don't want freedoms paid for by such intolerable and abhorrent levels of ongoing injustice. It isn't a false sense of security, for the victims it is very real.

No, it still is a very false sense of security. Intercepting illicit material online doesn't actually stop the crime from being committed nor does it dissuade people from distributing it.

> Most criminals are not sophisticated. Crime prevention is always about making it difficult to do crime, not waving a magic wand and making crime go away.

Sure, but the 'criminals' that are distributing illicit material online are already going to lengths, sometimes very technical, to distribute it anonymously.

> I'm not saying let's give up freedoms, but if your stance is there is no other way, then freedoms have to go away.

You are saying let's give up freedoms. Let's drop any sort of notion that you care about freedom, because you do not. I'm not saying that it's an invalid world view; your reasoning for wanting to eradicate those freedoms is rational and well intentioned, but you are begging for authoritarianism nonetheless.

I don't think there's any sort of agreement to be had here. Fundamentally I cannot agree with the notion that everyone must concede their personal liberties and privacy in order to capture a few more stupid criminals.

> But my stance is that the technology is there, it's just slippery slope fallacy thinking that's preventing it from getting implemented.

No, it's actually just a slippery slope. There is no fallaciousness in the logic here, because we've already witnessed the erosion of our rights for this purpose over and over again, and they continue to push for even more degradation of those rights.

> Persons aren't identified before a human reviews and confirms that the material is illicit.

This is already a violation of privacy. Share all of your personal photos with hacker news if you disagree. We don't know who you are, after all, so it's not a violation of your privacy, right?

> There is just cause, the material was flagged as illicit. In legal terms, it is called probable cause. If a cop hears what sounds like a gunshot in your home, he doesn't need a warrant, he can break in immediately and investigate because it counts as an exigent circumstance. The algorithms flagging content are the gunshots in this case. You could be naked in your house and it will be a violation of privacy, but acceptable by law. If you said after review, they should get a warrant from a judge, I'm all for it.

In legal terms, probable cause is what you need to make an arrest or before obtaining a search warrant. The "gunshot" exception isn't probable cause. It's an emergency exception that allows for a warrantless search because there is an independent, externally observable signal of imminent harm i.e. an emergency situation.

The algorithms are not the 'gunshot' here. It is not searching in response to some sort of external signal like a gunshot or hearing someone screaming or even seeing someone getting attacked. It is the search itself - it only produces a flag marking someone as suspicious because it has already examined someone's private files. The "probable cause" was produced by conducting the search. That is backwards.

It is equivalent, in your analogy, to a cop opening every front door in the neighborhood to look inside and then saying they now have probable cause because they saw something suspicious. The search already happened.

> It is materially false, because the scanning can be done without sending a single byte off the device. The privacy intrusion happens not at the time of scanning, but at the time of verification.

You do not need to transmit information for it to be a violation of privacy. If a cop opens your filing cabinet, looks through your folders, and leaves everything exactly where he found it, he has still already intruded by examining your private material.

The suspicion of criminal activity must precede the search. Simply possessing digital files isn't a basis for individual suspicion - you are treating everyone as a suspect that deserves no protection.

> To continue my example, the cop could have heard you playing with firecrackers, you didn't do anything wrong but your door is now broken and you were probably naked too, which means privacy violated. This is acceptable by society already.

Society accepts warrantless entry only when there is an actual emergency. The reason a gunshot or firecrackers can justify it is that they are external signals: they do not require the police officer to enter the home in order to detect them.

Society does not accept random entries just to look for problems.

And just to get ahead of it, a machine performing the search doesn’t change anything. A search is defined by what’s being examined, not who (or what) is doing the examining. If the government sent a robot into your home that didn’t know your name and only alerted authorities if it found something illegal, it would still be a search. The fact that it’s automated doesn’t make it any less of an intrusion.


Let me post a longer reply later. But for your last point, we do have automated, machine-generated alarms in the form of smoke detectors. We're legally required to have them in our homes. Firefighters might do the breaking in, though, instead of cops, but still agents of the government. Is it more accurate to treat internet access the same as public road access? It's a regulated privilege, instead of a right. You can't tint your windows too much, so that cops can look in when you're driving on public roads, for example.


With this logic, you could justify embedding cameras in every private space of someone’s home. The feed could be sent to a server running an automatic algorithm that flags potential crimes. If something suspicious appears, authorities would be alerted and an independent review would determine whether a crime occurred.

I have no doubt in my mind if we did that it would certainly be a huge win for law enforcement, detecting crimes and gathering evidence to help catch criminals. Why stop there, though? Why not require everyone live in glass apartments like in the novel We?

These aren't big leaps from what you're proposing. You are advocating for mass surveillance with the assumption that these systems won't be abused despite countless examples of surveillance being misused by those in power.

Comparing scanning all of someone's digital files to smoke detectors is absurd.


You have a good point, but is a phone equal to your private home, or is it similar to a car (where you are required to have transparent glass windows). Is it a right or a privilege?

But to challenge your argument further, if the majority are fine with having cameras in their homes that don't transmit unless a crime is detected, isn't that just democracy?

What's getting lost in this discussion might be the fact that the majority of people don't care that much about privacy, especially when heinous crimes are involved. Furthermore, the equivalent would be house builders installing cameras in homes, not home owners being required to install one. But a reasonable compromise might be scanning content being transmitted instead of stored?


> You have a good point, but is a phone equal to your private home, or is it similar to a car (where you are required to have transparent glass windows). Is it a right or a privilege?

We regulate the operation of motor vehicles because they pose an immediate safety risk. As in, the use of one could reasonably result in injury or death. A phone is not something you could reasonably expect to be used to create immediate harm (injury, death) and you wouldn't regulate one as such. That's not to say that aspects of it can't be regulated, but the fact that it can be a tool used to generate harm does not make it itself particularly dangerous.

> But to challenge your argument further, if the majority are fine with having cameras in their homes that don't transmit unless a crime is detected, isn't that just democracy?

Yes, which is why we avoid direct democracy pretty much everywhere in the world. But rights aren't something that can be taken away by a vote. Only protections against a government violating your rights can. If you could vote away your rights then pretty much every authoritarian government would be wholly justified in their abusive actions.

> What's getting lost in this discussion might be the fact that the majority of people don't care that much about privacy, especially when heinous crimes are involved. Furthermore, the equivalent would be house builders installing cameras in homes, not home owners being required to install one. But a reasonable compromise might be scanning content being transmitted instead of stored?

Most people don't care about a lot of things. That's another reason why we don't have most people writing legislation. There are tons of things I have extremely limited knowledge about that someone else feels very strongly about and vice versa. The majority of people feeling apathetic towards something isn't an indicator that the majority is correct.


> We regulate the operation of motor vehicles because they pose an immediate safety risk.

That's not the legal reasoning as I recall. It is because they use public roads. They are just as unsafe when you drive them on a racing circuit or on your ranch, but traffic laws only apply on public roads. Same with your post mail being scanned and searched, or your baggage at airlines; it isn't just for safety, and no warrant is needed. They look for contraband, customs violations, etc. too. It is because you are engaging in a privileged activity.

> Yes, which is why we avoid direct democracy pretty much everywhere in the world.

News to me, I thought it was because of practicality. I think you mean pluralistic?

> If you could vote away your rights then pretty much every authoritarian government would be wholly justified in their abusive actions.

Maybe a clear definition of digital rights is what is missing? But explain to me why your right to privacy is more important than the rights of victims. If victimization was rare, that would be one argument, but it is frequent, and something can be done to reduce it. From what I understand, the scanning methods Apple proposed are differential, your privacy won't be violated unless there is a match.

Going back to my earlier point, you have rights. But those rights can only be protected by the government so long as the security of its people remains intact. Every right we have is taken away when it comes to "national security risk", for example. Is a potential terrorist attack any worse in terms of security compared to the very real impact of CSAM against the most innocent members of society? If there was a terrorist attack impending and the only way to stop it was by scanning everyone's phones, guess what? It is already the law that the government can do that.

> Most people don't care about a lot of things. That's another reason why we don't have most people writing legislation. There are tons of things I have extremely limited knowledge about that someone else feels very strongly about and vice versa. The majority of people feeling apathetic towards something isn't an indicator that the majority is correct.

They don't write legislation, but they determine what legislation gets written. They vote based on promises of legislation; they may not care about details, but they care about outcomes. In this case, "not caring" is about outcomes, not the technicalities of legislation. As a matter of policy, the voters don't care. And lawmakers have a duty to reflect the sentiment of their constituents.

Even if it comes down to taking away the rights of minority voters, it may not be as simple as legislation, but constitutional amendments exist, and it all comes down to how many people want that change. We could literally have something insane like slavery back again within a year given enough popular sentiment.

The Patriot Act has been getting renewed since its inception, now almost a quarter of a century ago, across multiple administrations, and with bipartisan support. That is the will of the people in effect.


Except that it is not materially false. Only in a perfect society will your “system that flags illicit content” not become a system that flags whatever some authoritarian regime considers threatening, and subverting public logging/auditing is similarly trivial to a motivated authoritarian. All your hypothetical solutions rely on humans, who are notoriously susceptible to being influenced by either money or being beaten with pipes, and on corporations, who are notoriously susceptible to being influenced by things that influence their stock price.

The Pleyel’s corollary to Murphy’s law is that all compromises to individuals’ rights made for the sake of security will eventually be used to further deprive them of those rights.

(I especially liked the line “You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.”)


This is already the case with other means of communication; the internet isn't that special. If you don't trust your government, do something else about it.

We rely on eye witness testimony and human juries all the time. The innocence project has a long list of people that spent decades in prison because of this.

The solution to authoritarian regimes is to not have one, not tolerate cp on the internet.


> The solution to authoritarian regimes is to not have one, not tolerate cp on the internet.

Perhaps the problem doesn't have a binary solution.


I think it does, but not having such a regime has lots of implementation complexities. Either you have one or you don't, so binary.


> The solution to authoritarian regimes is to not have one

The solution to not being poor is being rich. You could apply that logic to a lot of things: have this thing instead of that thing. Using your example above of "differential privacy scanning":

Differential privacy is a property of a dataset meaning you can’t tell an individual was part of a dataset. If it’s traceable back to the individual device it’s not differentially private.
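As a concrete illustration of that property, the textbook toy example is randomized response: each individual's answer is noisy enough to be deniable, yet the population rate is still recoverable from the aggregate. This is a sketch; the 50/50 coin parameters and function names are my own choices, not any specific system's design:

```python
import random

def randomized_response(truth: bool, rng: random.Random) -> bool:
    """Flip a coin: heads, answer truthfully; tails, answer uniformly
    at random. Any single answer is plausibly deniable."""
    if rng.random() < 0.5:
        return truth
    return rng.random() < 0.5

def estimate_true_rate(answers: list[bool]) -> float:
    """Invert the noise: P(yes) = 0.5 * true_rate + 0.25."""
    observed = sum(answers) / len(answers)
    return (observed - 0.25) / 0.5

rng = random.Random(42)
population = [True] * 300 + [False] * 700   # true rate is 30%
answers = [randomized_response(t, rng) for t in population]
estimate = estimate_true_rate(answers)      # recovers roughly 0.30
```

No one looking at a single answer can tell whether that person answered truthfully, which is exactly why a per-device match that traces back to an individual is a different animal from this kind of aggregate guarantee.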

I think at this point you're just trying to say "don't have this thing have that thing instead" as a response to anything.


> You could apply that logic to a lot of things.

Certainly you can. The solution to being poor is not being poor. How? That is a different story, but ultimately, the solution to being poor must be not being poor, otherwise it isn't a solution, right? And of course it is a reductive take, but it is nevertheless correct. Solutions that don't result in poor people no longer being poor are not solutions. Solutions that don't involve not having an authoritarian regime are not solutions to that problem either.

Your solution to authoritarian regimes is not fighting CSAM; you made the CSAM problem worse, and it does not prevent authoritarian regimes. An authoritarian regime does not need your permission to scan your phone. And most human governments in history qualify as authoritarian, and they didn't need phones, let alone scanning of phones.

> I think at this point you're just trying to say "don't have this thing have that thing instead" as a response to anything.

I'm saying: "If you don't like apples, don't eat apples. Don't talk about how we need to kill all the bees and worms that help apple trees reproduce".

> Differential privacy is a property of a dataset meaning you can’t tell an individual was part of a dataset.

Yeah, that's correct. And that's a violation of an individual's privacy... how?

What would it take for you to consider scanning of phones a valid solution? Mass murder, global nuclear war, pandemic containment? Is it a question of not understanding the harm being done? My frustration is that, ok, let's not scan phones. What's your solution? You have none. Your solution is to do nothing and accept things the way they are. If I said let's verify everyone's ID before they can access the internet, is that acceptable? Let's ban Tor and VPNs instead, is that acceptable? What is your solution? Can you at least agree that we should aggressively be working on a solution? We have people training LLMs to generate CSAM and you hear not a peep out of all these companies and devs working on the tech. Just slap our knees and declare "welp, that's unfortunate".

I don't care what governments do. If it takes an authoritarian regime to stop this insanity, I'm all for it. I'll be royally screwed, it will be a nightmare. But if that is the cost, so be it. This is how authoritarians gain power, by the way: you have the apathetic educated and ruling classes, and the masses crying for change, and they will actually solve the problem but destroy everything else along the way. I'm telling you that if I, someone who is relatively aware and informed of the risks of privacy loss, of the tech underlying the systems we use, if I am saying this, imagine what the majority of people would say.

It took one 9/11 attack to get us the Patriot Act; if someone used Tor on their rooted Android phone to do something worse, phone scanning would be the least of your concerns. And the public would support it. You need a solution because the public demands it, at the cost of privacy if required. But it is for technologists to devise a mechanism that solves the problem without costing us privacy.

