Hacker News | khalic's comments

This study assumes everybody is oblivious to contamination, and explicitly says they can't differentiate. Not useful and bordering on the tautological

The non-trivial part isn't contamination per se, it's that the contaminant is chemically and spectroscopically similar enough to evade standard discrimination

Misleading title

Just… keep coding?


Really cool trivia, thanks for sharing :D


The LLM ban is unenforceable, they must know this. Is it to scare off the most obvious stuff and have a way to kick people off easily in case of incomplete evidence?


It is enforceable; I think you mean to say that it cannot be prevented, since people can attempt to hide their usage? Most rules and laws are like that: you proscribe some behavior, but that doesn't prevent people from doing it. Therefore you typically need to also define punishments:

> This policy is not open to discussion, any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed, and any attempt to bypass this policy will result in a ban from the project.


What happens when the PR is clear, reasonable, short, checked by a human, and clearly fixes, implements, or otherwise improves the code base and has no alternative implementation that is reasonably different from the initially presented version?


If you're going to set a firm "no AI" policy, then my inclination would be to treat that kind of PR in the same way the US legal system does evidence obtained illegally: you say "sorry, no, we told you the rules and so you've wasted effort -- we will not take this even if it is good and perhaps the only sensible implementation". Perhaps somebody else will eventually re-implement it later without looking at the AI PR.


How funny would it be if the path to actually implement that thing is then cut off because of a PR that was submitted with the exact same patch. I'm honestly sitting here grinning at the absurdity of it. Some things can only be done a certain way. Especially when you're working with 3rd party libraries and APIs. The name of the function is the name of the function. There's no way around it.


It follows the same reasoning as when someone purposefully copies code from a codebase into another where the license doesn't allow. Yes it might be the only viable solution, and most likely no one will ever know you copied it, but if you get found out most maintainers will not merge your PR.


That's why I said "somebody else, without looking at it". Clean-room reimplementation, if you like. The functionality is not forever unimplementable, it is only not implementable by merging this AI-generated PR.

It's similar to how I can't implement a feature by copying-and-pasting the obvious code from some commercially licensed project. But somebody else could write basically the same thing independently without knowing about the proprietary-license code, and that would be fine.


The trick is getting people to believe you.


You not realizing how ridiculous this is, is exactly why half of all devs are about to get left behind.

Like, this should be enshrined as the quintessential “they simply, obstinately, perilously, refused to get it” moment.

Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.


> Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.

Well that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.


Sorry, but this is user error.

1) Most people still don't use TDD, which absolutely solves much of this.

2) Most people end up leaning too heavily on the LLM, which, well, blows up in their face.

3) Most people don't follow best practices or designs, which the LLM absolutely does NOT know about NOR does it default to.

4) Most people ask it to do too much and then get disappointed when it screws up.

Perfect example:

> you can't trust anything they put out

Yeah, that screams "missing TDD that you vetted" to me. I have yet to see it not try to correctly pass a test that I've vetted (at least in the past 2 months). Learn how to be a good dev first.


> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.

No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.


This is a non-argument. All of the cloud LLMs are going to move to things like micronuclear. And the scientific advances AI might enable may also help avoid downstream problems from the carbon footprint.


I wasn't gesturing to the energy/environmental impacts of AI.


The problem is that even if the code is clear and easy to understand AND it fixes a problem, it still might not be suitable as a pull request. Perhaps it changes the code in a way that would complicate other work in progress or planned and wouldn't just be a simple merge. Perhaps it creates a vulnerability somewhere else or additional cognitive load to understand the change. Perhaps it adds a feature the project maintainer specifically doesn't want to add. Perhaps it just simply takes up too much of their time to look at.

There are plenty of good reasons why somebody might not want your PR, independent of how good or useful to you your change is.


How would you tell that it's LLM-generated in that case?

If the submitter is prepared to explain the code and vouch for its quality then that might reasonably fall under "don't ask, don't tell".

However, if LLM output is either (a) uncopyrightable or (b) considered a derivative work of the source that was used to train the model, then you have a legal problem. And the legal system does care about invisible "bit colour".


It's (c) copyright of the operator.

For one simple reason. Intention.

Here's some code for example: https://i.imgur.com/dp0QHBp.png

Both sides written by an LLM. Both sides written based on my explicit prompts explaining exactly how I want it to behave, then testing, retesting, and generally doing all the normal software eng due diligence necessary for basic QA. Sometimes the prompts are explicitly "change this variable name" and it ends up changing 2 lines of code no different from a find/replace.

Also I'm watching it reason in real time by running terminal commands to probe runtime data and extrapolate the right code. I've already seen it fix basic bugs because an RFC wasn't adhered to perfectly. Even leaving a nice comment explaining why we're ignoring the RFC in that one spot.

Eventually these arguments are kinda exhausting. People will use it to build stuff, and the stuff they build ends up retraining it, so we're already hundreds of generations deep on the retraining, and talking about licenses at this point feels absurd to me.


I think you need to read the report from the US Copyright office that specifically says that it's *not* (c) copyright of the operator.

It doesn't matter if the "change this variable name" instruction ends up with the same result as a human operator using a text editor.

There is a big difference between "change this variable name" and "refactor this code base to extract a singleton".


You may as well be the MPAA right now throwing threats around sharing MP3s. We're past the point of caring and the laws will catch up with reality eventually. The US copyright office says things that get turned over in court all the time.


Tell me, how have laws “caught up with” “the [RIAA…] throwing threats around sharing MP3s?” So far as I know that’s still considered copyright infringement and the person doing it, if caught, can be liable for very substantial statutory damages.

It sounds like you really can’t handle being told “no, you can’t use an LLM for this” by someone else, even if they have every right to do so. You should probably talk to your therapist about that.


lol, ask the software industry whether or not they're "past the point of caring" about the licenses on their software.

Whether it's an OSS license or a commercial license, both are dependent on copyright as the underlying IP Right.

The courts have so far (in the US) agreed with the Copyright office's reasoning.

Use an LLM as a tool, mostly OK.

Use it to create source from scratch, no copyright as the author isn't human.

Use it to modify existing software, the result is only copyright on whatever original remains.


The entire industry is right now encouraging LLM use all day everyday at big corps including mine. If your argument is the code we are producing isn't copyright of our employers you won't get very far. Call it the realpolitik of tech if you want.


This is where most reasonable people would say “OK, fine”

CLEARLY, a lot of developers are not reasonable


It is entirely reasonable for a project to require you to attest that the thing you are contributing is your own work.

The unreasonable ones are the ones with the oppositional-defiant “You can’t tell me I can’t use an LLM!” reaction.


It IS their own work.

The simplest refutation of your point of view is, who or what is responsible if the work submission is wrong?

It will always be the person’s, never the computer’s. Conveniently, AI always acts as if it has no skin in the game… because it literally and figuratively doesn’t… so for people to treat it like it does, should be penalized


If it’s the output of an LLM, it’s not their own work.


Who prompted the LLM?

Who vetted the output?

Who ensured there was adequate test coverage?

Who insisted on a certain design?

Who is to blame if it's bad code? That is the same entity that is responsible, and the same entity that "did it"

tl;dr your stance is full of poop, my dude


“I looked up the topic on Wikipedia and I highlighted the text and I selected copy and I selected paste so I don’t see how this is plagiarism.”

That’s what you sound like.


You sound like someone who has literally zero understanding as to why that is a ridiculous comparison.

There are a thousand and one ways that I participate when building something with LLM assistance. Everything from ORIGINATING AN IDEA TO BEGIN WITH, to working on a thorough spec for it, to ensuring tests are actually valid, to asking for specific designs like hexagonal design, to specific things like benchmarks... literally ALL OF THE INITIATIVE IS MINE, AND ALL OF THE SUCCESS/FAILURE CONSEQUENCES ARE MINE, AND THAT IS ULTIMATELY ALL THAT MATTERS

Please head towards a different career if you now have a stupid and contrived excuse not to continue working with the machines, because you sound like a whining child

And you're not answering the question, because you know it would end your point: WHO OR WHAT IS RESPONSIBLE IF THE CODE SUCCEEDS OR FAILS?


I started working in the industry when you were able to buy a Lisp Machine new and have been studying AI even longer, and I’ve been very successful in it. I not only know what I’m talking about, I have the experience to back it up.

You sound like someone who’s deeply in denial about exactly how the LLM plagiarism machines work. You really do sound like a student defending themselves against a plagiarism charge by asserting that since they did the work of choosing the text to put into their essay and massaging the grammar so it fit, nobody should care where it came from.


By that definition, every single human who wrote a paper after reading a source document is a “plagiarism machine”

and I’m 53 and well remember Symbolics from freshman year at Cornell, in fact my application essay to it was about fuzzy logic (AI-tangential) and probably got me in, so I too am quite familiar

i’m also quite good at debate. the flaw in your logic is that plagiarism requires accountability and no machine can be accountable, only the human that used it, ergo, it is still the work of the human, because the human values, the human vets, the human initiates, and the human gains or loses based on the combined output, end of story; accelerated thought is still thought, and anyway, if a machine can replicate thought, then it wasn’t particularly original to begin with


and your stance is not your own if you got the LLM to stand for you. ;-P

human prompting != human production


Yes, what happens when the murder looks like a heart attack? This isn't hypothetical, some assassinations occur like this. That doesn't make murder laws unenforceable.

Lots of people try to get away with perfect crimes and sometimes do. That doesn't make the rule unenforceable, it just highlights the limits of human knowledge in the face of a dishonest person. Hence the escalations for trying to destroy evidence of crimes or in this case to work around the AI policy. Here, instead of just closing your PR, they ban you if you try to hide it.


I think the bigger point about enforcement is not whether you're able to detect "content submitted that is clearly labelled as LLM-generated", but that banning presumes you can identify the origin, i.e., any individual contributor must be known to have (at most) one identity.

Once identity is guaranteed, privileges basically come down to reputation — which in this case is a binary "you're okay until we detect content that is clearly labelled as LLM-generated".

[Added]

Note that identity (especially avoiding duplicate identity) is not easily solved.


You can slap on any punishment clause you want, but verifying LLM-origin content without some kind of confession is shaky at best outside obvious cases like ChatGPT meta-fingerprints or copy-paste gaffes. Realistically, it boils down to vibes and suspicion, unless you force everyone to record their keystrokes while coding, which only works if you want surveillance. If the project ever matters at scale, people will start discussing how enforceability degrades as outputs get more human-like.


There’s this thing called “honor” where if you tell someone that they need to affirm their contribution is their own work and not created with an LLM, most people most of the time will tell the truth—especially if the “no LLMs” requirement is clearly stated up front.

You’re basically saying that a “no-LLMs” rule doesn’t matter, because dishonorable people exist. That’s not how most people work, and that’s not how rules work.

When we encounter a sociopath or liar, we point them out and run them out of our communities before they can do more damage, we don’t just give up and tolerate or even welcome them.


Unenforceable means they can't actually enforce it, since they can't discriminate high-quality LLM code from hand-typed code


Well, unenforceable isn't a synonym for undetectable or awkward. Their policy indicates that they are aware of this difficulty: if you admit to using AI then they close your pull request, if you do not admit to using AI but evidence later surfaces that you did then they ban you. They can enforce this.

The hope here is the same hope as most laws: that lies eventually catch up to people. That truth comes to light. But sure, in the meanwhile, there are always dishonest people around trying to flout rules to varying degrees of success. Some are caught right away, some live their entire lives without it catching up to them. That doesn't make the rule unenforceable, that just highlights the limits of rules: it requires evidence that can be hard to come by.


This is the dream of the sociopathic slopmonger.

Real people in the real world understand that rules don’t simply cease to exist because there’s no technical means of guaranteeing their obedience. You simply ask people to follow them, and to affirm that they’re following them whether explicitly or implicitly, and then mete out severe social consequences for being a filthy fucking liar.


Keep wishing, in the meantime some people have to deal with the real world and plan accordingly


I suspect this is for now just a rough filter to remove the lowest effort PRs. It likely will not be enough for long, though, so I suspect we will see default deny policies soon enough, and various different approaches to screening potential contributors.


Any sufficiently advanced LLM-slop will be indistinguishable from regular human-slop. But that’s what they are after.

This heuristic lets the project flag problematic slop with minimal investment, avoiding the cost of reviewing low-quality, low-effort, high-volume contributions, which should be near ideal.

Much like banning pornography on an artistic photo site, perfect application of the rule at its borderline is far less important than the filtering power "I know it when I see it" provides in the standard case. Plus, smut peddlers aren't likely to set an OpenClaw bot-agent swarm loose arguing the point with you for days and then posting blog and Medium articles attacking you personally for "discrimination".


Probably just an attempt to stop low effort LLM copy pasta.


A sign to point at when someone posts "I asked AI to fix this and got this". You can stop reading, and stop any arguments, right there. Saves a lot of time and effort.


Speed limits are unenforceable. You'll never catch everyone speeding so why even bother trying.


> The LLM ban is unenforceable

Just require that the CLA/Certificate of Origin statement be printed out, signed, and mailed with an envelope and stamp, where besides attesting that they appropriately license their contributions ((A)GPL, BSD, MIT, or whatever) and have the authority to do so, that they also attest that they haven't used any LLMs for their contributions. This will strongly deter direct LLM usage. Indirect usage, where people whip up LLM-generated PoCs that they then rewrite, will still probably go on, and go on without detection, but that's less objectionable morally (and legally) than trying to directly commit LLM code.

As an aside, I've noticed a huge drop off in license literacy amongst developers, as well as respect for the license choices of other developers/projects. I can't tell if LLMs caused this, but there's a noticeable difference from the way things were 10 years ago.


> As an aside, I've noticed a huge drop off in license literacy amongst developers

What do you mean by this? I always assumed this was the case anyway; MIT is, if I'm not mistaken, one of the most used licenses. I typically had a "fuck it" attitude when it came to the license, and I assume quite a lot of other people shared that sentiment. The code is the fun bit.


The chardet debacle is probably one of the most recent and egregious.


> I always assumed this was the case anyway; MIT is, if I'm not mistaken, one of the most used licenses

No, it wasn't that way in the 2000s, e.g., on platforms like SourceForge, where OSS devs would go out of their way to learn the terms and conditions of the popular licenses and made sure to respect each other's license choices, and usually defaulted to GPL (or LGPL), unless there was a compelling reason not to: https://web.archive.org/web/20160326002305/https://redmonk.c...

Now the corporate-backed "MIT-EVERYTHING" mindvirus has ruined all of that: https://opensource.org/blog/top-open-source-licenses-in-2025


... you think it was a good time?

Not being able to publish anything without sifting through all the libs' licenses? Remembering legalese, jurisprudence, and edge cases, on top of everything else?

MIT became ubiquitous because it gives us peace of mind


You have to go through all the dependencies anyway, to roughly judge their quality, and the activity of their maintainers. Quickly looking at the license doesn't take any more effort.


Totally realistic expectation


Considering you have to list all used open source software, their authors, and their licenses in your UI anyway, sure.

Or how are you handling that?

Sure, sometimes you can automate some of it, but you'll still have to manually check the attributions are correctly done.


> ... you think It was good time?

Yes, as do, probably, most people who remember it.


Sarcasm? Nobody will contribute with a complex signing process like that, and it doesn't guarantee anything in the end; it's like a high-tech pinky swear


Lots of projects have had requirements like this for years, usually to prevent infection by (A)GPL's virality, or in the case of the FSF, so they can sue on your behalf, or less scrupulously, so the project can re-license itself or dual license itself in the future should the maintainers opt to. (This last part was traditionally the only part that elicited objections to CLAs.)

> it's like a high tech pinky swear

So is you attesting that you didn't contribute any GPL'd code (which, incidentally, you arguably can't do if you're using LLMs trained on GPL'd code), and no one seemed to have issues with that, yet when it's extended to LLMs, the concern trolling starts in earnest. It's also legally binding.


They’re all muscle inside, fascinating


That was my first impression. But then I thought about humans, and concluded ants have less muscle as a % of volume.

Less muscle than us by ANY measure would be mind-blowing for beings who can carry up to 100 times their weight, compared to the ~1-2× we can.

I can't give any estimate of muscle as a % of weight; I don't know how heavy the chitin armor/exoskeleton is.


https://sites.nd.edu/biomechanics-in-the-wild/2024/11/06/ant...

"Scaling laws indicate that smaller organisms, such as ants, benefit from a greater strength-to-size ratio, partly due to the favorable scaling of muscle cross-sectional area relative to body mass. A 2023 study by Clemente and Dick emphasizes that while larger organisms have more total muscle mass, the strength per unit mass decreases with increasing size due to scaling effects."

To me it hit home when I recognized that the high-frequency 200Hz+ wing beat of, say, a bee - which we tend to think of as extremely "fast" - still has a wing tip speed of only about 5 m/s, which is actually very slow and thus extremely efficient, orders of magnitude more efficient than, say, helicopter blades (or even small drone props) with tip speeds around 200 m/s. (Note: that is even without taking into account the different air viscosity, i.e. Reynolds number, at that scale. The different air and fluid viscosities at those scales, and the different relative scales of surface tension and capillary forces - microfluidics, so to speak - are also what insects are heavily optimized for and take advantage of.)
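
The ~5 m/s figure checks out on a napkin, using assumed (not measured) bee numbers: a flapping wing sweeps roughly its stroke amplitude twice per beat, so tip speed ≈ 2 × amplitude × frequency × wing length.

```python
import math

# Back-of-the-envelope check of the wing-tip-speed claim, with assumed
# numbers: tip speed ~ 2 * stroke amplitude (rad) * beat frequency * wing length.
beat_hz = 230                  # assumed bee wingbeat frequency
wing_length_m = 0.007          # assumed ~7 mm wing
stroke_rad = math.radians(90)  # assumed ~90 degree stroke amplitude

tip_speed = 2 * stroke_rad * beat_hz * wing_length_m
print(f"~{tip_speed:.1f} m/s")  # a few m/s, vs ~200 m/s for rotor tips
```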

To illustrate - similar to bee efficiency, at the scale of the human body - a human-powered helicopter with a 150-foot span: https://www.youtube.com/shorts/zNrCbcQVmuE . Note that while efficiency similar to the bee wing's is achieved at that scale, human power density is lower than the bee's, so it can fly only for the short periods during which a top cyclist can produce his highest power.

The scaling laws also dictate that an exoskeleton is more efficient than an internal skeleton for small bodies like insects', and that breathing through a decentralized system of spiracles beats centralized internal lungs.


Specifically the head, and it is that big for that reason. I mean, it is logical, yet I had never even thought about that until seeing those images. They are basically working and killing machines.


You should see bull ants and stinging, meat-eating black ants perfectly peeling an unripe mango. Both the finesse of the work and the fact that they eat precisely the part which I don't.

(Sometimes they also eat the pulp. But they always eat more peel than anything)


Local rat does something similar to citrus trees. Eats the peel clean off leaving the skinless fruit still on the tree. The fruit doesn’t even rot right away as it is still connected to the tree.


I see the industry found its new buzzword: humanoid robots

Slap a "head" on an industrial machines and watch investors go brrrrrrrrr


ugh, qwen, I wish they'd use an open data model for this kind of project


I’m sorry in what world is age restriction effective at keeping teens away from alcohol? Are you from the 60s?


No, saying that e2e encryption makes users _less_ safe is completely dishonest, nothing is fine about this.

The logic of "anything is better than before" is also fallacious.


Depends on your definition of "safe". Imagine an adult DMs a nude photo to a minor (or other kinds of predation).

If it's E2EE, no one except the sender and receiver know about this conversation. You want an MITM in this case to detect/block such things or at least keep record of what's going on for a subpoena.

I agree that every messaging platform in the world shouldn't be MITM'd, but every messaging platform doesn't need to be E2EE'd either.


The receiver has a proven and signed bundle that they can attach to an abuse report, so the evidence has even stronger weight. They can already decrypt the message, and they can still report it.


Yes, but this leaves reporting by the minor as the only way to identify this behavior. I'm not saying I trust TikTok to only do good things with access to DMs, but I think it's a fair argument in this scenario to say that a platform has a better opportunity to protect minors if messages aren't encrypted.

I'm not saying no E2E messaging apps should exist, but maybe it doesn't need to for minors in social media apps. However, an alternative could be allowing the sharing of the encryption key with a parent so that there is the ability for someone to monitor messages.


> I think it's a fair argument in this scenario to say that a platform has a better opportunity to protect minors if messages aren't encrypted

Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant? People are wary of this sort of thing not because they think law enforcement is more effective when it is constrained; how easily crimes can be prosecuted is only one dimension of safety.

> However, an alternative could be allowing the sharing of the encryption key with a parent

Right, but this is worlds apart from "sharing the encryption key with a private company", is it not?


> Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant?

This is a false equivalency. I don't have to use TikTok DMs if I want E2EE. I don't have a choice about laws that allow the police to violate my rights. I'm not claiming that all E2EE apps should be banned.

> Right, but this is worlds apart from "sharing the encryption key with a private company", is it not?

Exactly why I suggested that as a possible alternative.


> This is a false equivalency.

I'm not making an equivalency. I'm just trying to get you to think how something that is at surface level true is not necessarily a "fair argument".

> I don't have to use TikTok DMs if I want E2EE.

I don't know why you think this is a convincing argument. It is currently illegal to tap people's phone lines, but when phones were invented it obviously was not illegal. It became illegal in part because people had a reasonable expectation of privacy when using the phone. They also have a reasonable expectation of privacy when using TikTok DMs - that's why people call them "private messages" so often!

> Exactly why I suggested that as a possible alternative.

My point is that you are offering these as alternatives when they are profoundly different proposals. It is like me saying I am pro forced sterilization and then offering as an alternative "we could just only allow it when people ask for it". That's a completely different thing! Having autonomy over your online life as a family rather than necessarily as an individual is totally ok. Surrendering that autonomy is not.


> Surrendering that autonomy is not.

Then you can avoid using platforms that do not offer E2EE.


> Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant?

Police can access your home with a warrant.

Police cannot access your E2EE DMs with a warrant.


Not answering my question!

> Police cannot access your E2EE DMs with a warrant.

They can and do, regularly. What they can't do is prevent you from deleting your DMs if you know you're under investigation and likely to be caught. But refusing to give up encryption keys and suspiciously empty chat histories with a valid warrant is very good evidence of a crime in itself.

They also can't prevent you from flushing drugs down the toilet, but somehow people are still convicted for drug-related crimes all the time. So - yes, obviously, the police could prosecute more crimes if we gave up this protection. That's how limitations on police power work.


> What they can't do is prevent you from deleting your DMs if you know you're under investigation and likely to be caught

If you are pretty confident you're under investigation, then this might be obstruction of justice, and that's pretty illegal.


> But refusing to give up encryption keys and supiciously empty chat histories with a valid warrant is very good evidence of a crime in itself.

Uh, it absolutely isn't? WTF dystopian idea is this?


It certainly can be - destruction of evidence is a crime. If they can prove you destroyed evidence, even if they can't prove that the destroyed evidence incriminates you, that's criminal behaviour. For instance if it's known by some other means you have a conversation history with person X, but not whether that conversation history is incriminating, and then when your phone is searched the conversation history is completely missing, that is strong evidence of a crime.


And they shouldn't be able to. Police accessing DMs is more like "listening to every conversation you ever had in your house (and outside)" than "entering your house".


>Police cannot access your E2EE DMs with a warrant.

Well, they kind of can if they nab your cell phone or another device that has a valid access token.

I think it's kind of analogous to the police getting at one's safe. You might have removed the contents before they got there but that's your prerogative.

I think this results in acceptable tradeoffs.


Yes, that is a fair argument and most countries allow the use of surveillance cameras in public for this reason.


in public is the operative word (and surveillance cameras in public are extremely recent and very controversial, so not as strong an argument as you might be thinking)


> I'm not saying no E2E messaging apps should exist, but maybe it doesn't need to for minors in social media apps. However, an alternative could be allowing the sharing of the encryption key with a parent so that there is the ability for someone to monitor messages.

The problem with that idea is that you are implying E2EE should require age verification. Everyone should have access to secure end-to-end encryption.


> The problem with that idea is that you are implying E2EE should require age verification.

I can understand why you might draw that conclusion, but I would not personally support this.


Are you suggesting all messaged photos should be scanned, and potentially viewed by humans, in case it depicts a nude minor? Because no matter how you do that, that would result in false positives, and either unfair auto-bans and erroneous reports to law enforcement (so no human views the images), or human employees viewing other adults' consensual nudes that were meant to be private. Or it would result in adult employees viewing nudes sent from one minor to another minor, which would also be a major breach of those minors' privacy.

There is a program whereby police can generate hashes based on CSAM images, and then those hashes can be automatically compared against the hashes of uploaded photos on websites, so as to identify known CSAM images without any investigator having to actually view the CSAM and further infringe on the victim's privacy. But that only works vs. already known images, and can be done automatically whenever an image is uploaded, prior to encryption. The encryption doesn't prevent it.
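The matching step described above can be sketched roughly like this. This is a simplified illustration, not any vendor's actual implementation: real systems such as PhotoDNA use perceptual hashes that survive re-encoding, whereas a cryptographic hash like the one below only matches byte-identical files. All names here are made up.

```python
import hashlib

# Hypothetical list of hashes of known illegal images, supplied by law
# enforcement. Investigators never need to re-view the images; only the
# hashes are distributed to platforms.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def is_known_csam(image_bytes: bytes) -> bool:
    """Check an uploaded image against the known-hash list.

    On an E2EE platform this check would run client-side, before the
    image is encrypted for transport, which is why encryption does not
    prevent this kind of matching.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

print(is_known_csam(b"known-bad-image-bytes"))  # True: matches the list
print(is_known_csam(b"holiday-photo-bytes"))    # False: unknown image
```

The limitation mentioned in the comment falls directly out of this design: a novel image has no entry in the hash set, so only previously catalogued material can be detected.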

Point being, disallowing encryption sacrifices a lot, while potentially not even being that useful for catching child abusers in practice.

I'm sure some offenders could be caught this way, but it would also cause so many problems itself.


> Are you suggesting all messaged photos should be scanned, and potentially viewed by humans, in case it depicts a nude minor?

No, I was not suggesting that.


SimpleX handles this by sending the decryption keys when the receiver reports the message.


Similarly WhatsApp, it's the reporting user's app which forwards the messages, not the server accessing these on its own (allegedly).

It's clearly possible.
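A rough sketch of that reporting flow, with every name hypothetical and a toy XOR stream standing in for a real E2E cipher: the server only ever stores ciphertext, and plaintext reaches moderators solely because the reporting client attaches its own locally decrypted copy.

```python
def e2e_encrypt(plaintext: str, key: bytes) -> bytes:
    # Toy stand-in for a real E2E cipher (XOR with a repeating key),
    # just to make the point that the server cannot read the payload.
    data = plaintext.encode()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def e2e_decrypt(ciphertext: bytes, key: bytes) -> str:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(ciphertext)).decode()

# The server relays and stores only ciphertext; it never sees the key.
server_store = []
key = b"shared-secret-between-the-two-clients"

server_store.append(e2e_encrypt("abusive message", key))

# Reporting: the *recipient's client* decrypts locally and forwards the
# plaintext as part of the abuse report. No server-side decryption,
# no escrowed keys, so E2EE stays intact for everyone else.
report = {
    "reported_ciphertext": server_store[0],
    "plaintext_from_reporter": e2e_decrypt(server_store[0], key),
}
print(report["plaintext_from_reporter"])  # "abusive message"
```

The design choice here is that moderation relies on one of the two legitimate endpoints volunteering its copy, which is exactly what the SimpleX and WhatsApp schemes described above do.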


Ugh. The kids aren't even safe from the people making, and enforcing laws. This argument should be long over for anyone with eyes or ears.


Keeping children safe and prosecuting are two different concepts, only vaguely related. So no, being able to track pdfs doesn't make children safer. What keeps them safe is teaching them safe communication habits and keeping them away from things like TikTok.

We shouldn't make the world a worse place for everyone because some parents can't take care of their children.


>Keeping children safe and prosecuting are two different concepts, only vaguely related.

See also: That time the FBI took over a CSAM site and kept it running so they could nab a bunch of users.


Not necessarily saying what they did was right, but I think there's a strong utilitarian argument to be made that what they did in that case was, in fact, the best way to keep children safe.

What's more dangerous? CSAM on the internet? Or actual child predators running loose?


That stuff spreads and re-spreads just like anything else people download off the internet. There's a pretty strong argument for shutting it down right away. IIRC most users were outside jurisdiction.


Even if one more person was prosecuted it was worth it. If you shut down an illegal website a new one will show up a month later, with the same people involved, and you achieved nothing.


What was the rate of child exploitation in the GDR?


Imagine Hamas are your government and want to figure out who's gay. You don't want a MITM in case they can do this.

Pick your definition of safe.


In that case, don't use TikTok DMs to discuss your sexuality. I think it is strange that people feel they have to be able to talk about sensitive topics over every interface they can get their hands on.

Similarly, in "traditional" media you may not want to hold such a private conversation on a radio broadcast. Perhaps you would rather discuss it on the phone or over snail mail, as there is more of an expectation of privacy on those mediums.


Right, but it currently isn't a sensitive topic - homosexuality is, as of 2026, broadly legal in the United States. That's a relatively new state of affairs, historically speaking, and one which Afghanistan shared as recently as 2021.


I'm commenting in the context of the conversation, not in a vacuum. You could just as (in fact, much more) easily say that children shouldn't be on apps with private messaging enabled. That would help a lot more, and then we could keep e2ee.


> there is more of an expectation of privacy on those mediums

What does the "p" in "pm" stand for?


Excuse me, I confused "private messages" (PM) with "direct messages" (DM).

I will update above


I don't think you confused anything, except for the terminology the platform uses. There is an obvious expectation of privacy when sending direct messages!


Hasn't been true ANYTIME IN HISTORY. Hell it was well understood even by children that no conversation you had on the telephone was truly private. That's why cyphers were invented.


What are you talking about? It is illegal to tap people's phone lines or to interfere with mail. Are you saying people don't have a reasonable expectation of privacy even when it's illegal to be spied on?


'Illegal' doesn't really mean anything in this, or any other, day and age when you are talking about the very rich, the very powerful, or the state.

The good thing about e2ee is that it probably makes the list of those with the ability to decrypt things encrypted e2e somewhat smaller. Fact is, hacking can get to those keys (i.e. if a state actor zero-click exploits your phone, they are going to be able to get your private key and the messages in memory).


> 'Illegal' doesn't really mean anything in this

This is a thread arguing about what the law should be.

> Fact is hacking can get to those keys.

Everything made by humans is fallible.


it stands for "not a public timeline post"


It should be obvious from how contrived your wording is that nobody thinks of them this way.


This is fine if you have TLS encryption and the platform is not local.

Sure, they can fabricate some evidence and get access to your messages, in which case, valid point.


It's a kind of Trojan horse propaganda in my opinion.

Users get used to the argument with TikTok and then apply it to other platforms.

Put it this way: why wouldn't those same arguments apply to any platform (if you believed them)?


Well, having no E2E encryption is safer than having half-baked E2E encryption that has a backdoor and can be decrypted by the provider.

As for TikTok's stance, I think they just don't want to get involved with the Chinese government over encryption (and give users a false sense of privacy).


It makes certain users less safe in certain situations.

E2E makes political activists and anti-Chinese dissidents safer, at the cost of making children less safe. Whether this is a worthwhile tradeoff is a political, not a technical, decision, but if we claim that there are any absolutes here, we just make sure that we'll never be taken seriously by anybody who matters.


Claiming e2e makes children less safe is flat out dishonest. And the irony of you criticising “absolutes” after trying to pass one is just delicious.


What are children at risk of, when E2EE is used?

What are children at risk of, when E2EE is not used?


> What are children at risk of, when E2EE is used?

Potential exposure to abusive adults.

> What are children at risk of, when E2EE is not used?

State-sanctioned violence.


This is the argument they can’t have…

