Not gonna lie, I'm having to actively fight the aversion I feel when reading something that was "all written by Claude". It's so hard to check whether it was done properly or is pure garbage that I don't even take the time to check.
I know this position is wrong, but it feels hard to spend my time on something that someone else might not have spent the time to create
> I know this position is wrong, but it feels hard to spend my time on something that someone else might not have spent the time to create
I don't think that position is wrong. I felt similarly when tutoring a high-school student recently. They didn't do any work themselves, they were forced by their parent to come to me two days before a test. I offered to help, but when I realized that the student didn't care enough to study by themselves, I basically lost all motivation to help them.
It feels the same with AI-generated content. If the "creator" didn't care to spend time on it, why should I spend mine?
Of course you can, but shell scripting really fucking sucks.
One moment you have a properly quoted JSON string, the next moment you have a list of arguments. Oops, you need to escape the value for this program to interpret it right, but another program needs it double-escaped: is that \, \\, or \\\\? Which subset of shell scripting are we doing? Fish? Modern Linux bash? macOS-compatible bash? The lowest common denominator? My head is spinning already!
If I want to script something I'm writing Python these days. I've lost too much sleep over all the "interesting" WTF situations you get yourself into with shell scripting. I've never used Hurl but it's been on my radar and I think that's probably the sweet spot for tasks like this.
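One reason Python sidesteps the escaping spiral: you can pass arguments as a list and skip the shell's quoting layer entirely. A minimal sketch (the JSON payload here is made up for illustration):

```python
import shlex
import subprocess

# A payload full of quotes -- exactly the kind of value that becomes
# an escaping puzzle the moment it passes through a shell.
payload = '{"name": "O\'Brien", "note": "say \\"hi\\""}'

# Passing args as a list bypasses the shell, so no quoting layer is
# added or stripped; the program receives the string byte-for-byte.
result = subprocess.run(
    ["printf", "%s", payload],
    capture_output=True,
    text=True,
)
assert result.stdout == payload

# If you *must* build a shell command string, shlex.quote handles
# exactly one level of POSIX quoting for you.
print(shlex.quote(payload))
```

The point being: there's only ever one escaping level to think about, and it's handled by the standard library rather than by counting backslashes.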
Just a note: this was a bit difficult to read given the number of times you repeat the phrase "browser, device, platform, or bot that was making a request to the application".
I feel like an idiot when reading _about_ Haskell.
I have no problem reading and writing Haskell code itself. But for some reason, most articles about it are successful efforts at deliberately complicating otherwise simple topics.
Is everyone ignoring the "holdback" mechanism Google is introducing with this idea, where 5-10% of traffic behaves as if it did not have attestation enabled? I would like to understand how that does not address the "web DRM" concerns, but I can't find an explanation anywhere.
It's very obvious that this is disingenuous; they will use it as an excuse to make the proposal palatable while planning to silently turn the holdback off in a couple of years. It's clearly a "foot in the door" technique, similar to how Apple tried (but ultimately backed down from) introducing local scanning of all content on user devices (in the name of CSAM detection, of course - a palatable scenario - which later opens the door to scanning anything a government entity dislikes).
TL;DR: holdbacks might help with the DRM component specifically, but they can only go one of three ways:
- They can be effective at forcing sites not to rely on attestation, in which case there is no benefit to this proposal, because everyone (including users of browsers like Chrome) will still be subjected to the same invasive backup strategies. You'll still be fingerprinted and tracked no matter what, because even if you're using Chrome, 1 in 20 of your requests arrives without a token and the website will just fall back to its original fingerprinting.
- Or if they aren't effective at forcing sites not to rely on attestation, well... then they haven't solved the DRM problem.
- Finally, attestation might be used primarily to decrease annoying behaviors, which will in practice still make browsing the web so painful for anyone who doesn't use a browser with attestation that they'll eventually switch. Think "you're not on Chrome, so you're going to see 9x the captchas you otherwise would."
You can't simultaneously have "this allows us to trust the client" and "we can't rely on it." One of them has to give. At their best holdbacks would turn this into another tracking vector and would change nothing about the web for the better. More likely, holdbacks will allow sites that would previously be judicious about where they used captchas and blocks around the site to start spamming them everywhere -- because Chrome users will only see 5-10% of those annoyances. And at their worst, sites would just not implement the fallbacks because the attestation signal is still reliable enough.
Holdbacks call the entire motivation of this spec into question, since the whole point of holdbacks is to make it impossible for websites to get rid of the invasive "backup" walls and tracking and captchas that the spec claims to be trying to replace. Blocking ad fraud? Blocking automated requests? WEI only helps with that if websites can trust the signal and block browsers that aren't sending it; otherwise websites are right back to square one trying to prevent fraud. But if they can do better blocking based on that signal, then we're back in DRM territory.
Implementing holdbacks in a way that actually prevents DRM is likely to be fairly challenging. In the most straightforward implementation, websites can simply retry the request until they get an attestation token or until they hit 10 iterations, at which point they'll ban you as normal.
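That retry attack can be sketched in a few lines. This is a toy simulation, not anything from the spec: the function names, the token string, and the assumed 10% holdback rate (the upper end of the 5-10% range) are all made up for illustration.

```python
import random

HOLDBACK_RATE = 0.10  # assumed upper end of the proposal's 5-10% range

def client_response(has_attestation):
    """Simulate one request: an attesting browser omits its token
    only when this request lands in the holdback bucket."""
    if has_attestation and random.random() > HOLDBACK_RATE:
        return "attestation-token"  # made-up token value
    return None

def probe(has_attestation, max_tries=10):
    """Retry until a token shows up; after max_tries token-less
    responses, treat the client as unattested and ban as normal."""
    return any(client_response(has_attestation) for _ in range(max_tries))

# An attesting browser essentially always passes within 10 tries:
# P(all 10 requests in holdback) = 0.10**10, effectively zero.
# A browser with no attestation support never produces a token.
```

Under this model, the holdback only costs the site a handful of extra round trips before it can classify the client with near certainty.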
Statistically profiling users and determining whether or not their browser supports attestation is likely to be fairly easy, unless Google has a much cleverer implementation of holdbacks than they've revealed so far in the spec.
This would be the worst case scenario -- holdbacks would be used as an excuse to push the changes through and sites would simply ignore them and block users based on aggregate stats: you haven't passed an attestation check in the past 30 minutes even though you made 20 different requests that should have had a token attached? Yeah, you're pretty likely on an "unsupported" browser.
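The "aggregate stats" math above is worth making concrete. A back-of-the-envelope sketch, again assuming a 10% holdback rate (the function name and threshold are illustrative, not from the spec):

```python
HOLDBACK_RATE = 0.10  # assumed upper end of the proposal's 5-10% range

def p_never_attested(n_requests):
    """Probability that a genuinely attesting browser lands in the
    holdback bucket on every one of n_requests -- i.e., looks
    unattested purely by bad luck."""
    return HOLDBACK_RATE ** n_requests

# After 20 token-less requests, "attesting but unlucky" is ~1e-20,
# so a site can block such a client with essentially zero false
# positives among attesting users.
print(p_never_attested(20))
```

In other words, the holdback hides the signal on any single request but leaks it almost perfectly over a session, which is exactly the profiling the parent comment describes.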