The headline seems pretty misleading. Here’s what seems to actually be going on:
> Every time you open LinkedIn in a Chrome-based browser, LinkedIn’s JavaScript executes a silent scan of your installed browser extensions. The scan probes for thousands of specific extensions by ID, collects the results, encrypts them, and transmits them to LinkedIn’s servers.
This does seem invasive. It also seems like what I’d expect to find in modern browser fingerprinting code. I’m not deeply familiar with what APIs are available for detecting extensions, but the fact that it scans for specific extensions sounds more like a product of an API limitation (i.e. no available getAllExtensions() or somesuch) vs. something inherently sinister (e.g. “they’re checking to see if you’re a Muslim”).
I’m certainly not endorsing it; I do think it’s pretty problematic, and I’m glad it’s getting some visibility. But I do take some issue with the alarmist framing of what’s going on.
I’ve come to expect this behavior from most websites that run advertising code, and it’s why I run ad blockers.
How is probing your browser for installed extensions not "scanning your computer"?
Calling the title misleading because they didn't breach the browser sandbox misses the point when this is clearly a scenario most people didn't think was possible. Chrome added randomization of extension resource URLs with the change to Manifest V3, so it's clearly not an intended scenario.
> vs. something inherently sinister (e.g. “they’re checking to see if you’re a Muslim”)
They chose to put that particular extension in their target list, how is it not sinister? If the list had only extensions to affect LinkedIn page directly (a good chunk seem to be LinkedIn productivity tools) they would have some plausible deniability, but that's not the case. You're just "nothing ever happens"ing this.
> How is probing your browser for installed extensions not "scanning your computer"?
I think most people would interpret “scanning your computer” as breaking out of the confines of the browser and gathering information from the computer itself. If this was happening, the magnitude of the scandal would be hard to overstate.
But this is not happening. What actually is happening is still a problem. But the hyperbole undermines what they’re trying to communicate and this is why I objected to the title.
> They chose to put that particular extension in their target list, how is it not sinister?
Alongside thousands of other extensions. If they were scanning for a dozen things and this was one of them, I’d tend to agree with you. But this sounds more like they enumerated known extension IDs for a large number of extensions because getting all installed extensions isn’t possible.
If we step back for a moment and ask the question: “I’ve been tasked with building a unique fingerprint capability to combat (bots/scrapers/known bad actors, etc), how would I leverage installed extensions as part of that fingerprint?”
What the article describes sounds like what many devs would land on given the browser APIs available.
To reiterate, at no point am I saying this is good or acceptable. I think there’s a massive privacy problem in the tech industry that needs to be addressed.
But the authors have chosen to frame this in language that is hyperbolic and alarmist, and in doing so I think they’re making people focus on the wrong things and actually obscuring the severity of the problem, which is certainly not limited to LinkedIn.
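For what it’s worth, the enumeration approach described above can be sketched roughly like this. A hedged illustration, not LinkedIn’s actual code: the IDs, resource paths, and the `probeExtensions` helper are all hypothetical, and the underlying trick (requesting a web-accessible resource from each known extension ID) is the historically documented one that Manifest V3’s dynamic resource URLs were designed to break.

```javascript
// Hypothetical sketch: since there is no getAllExtensions() API,
// fingerprinting scripts probe a hardcoded list of known extension IDs.
// Historically this worked by requesting a web-accessible resource from
// each extension and seeing which requests succeed. The IDs/paths here
// are placeholders, not anyone's real target list.
async function probeExtensions(targets, fetchFn = fetch) {
  const found = [];
  for (const { id, resource } of targets) {
    try {
      // Succeeds only if the extension is installed and still exposes
      // this resource at a predictable URL.
      await fetchFn(`chrome-extension://${id}/${resource}`);
      found.push(id);
    } catch {
      // Not installed, or the resource URL is randomized (MV3 use_dynamic_url).
    }
  }
  return found;
}
```

Manifest V3’s `use_dynamic_url` option randomizes these resource URLs per session, which is why this particular enumeration trick has been getting patched out.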
> What the article describes sounds like what many devs would land on given the browser APIs available.
> To reiterate, at no point am I saying this is good or acceptable. I think there’s a massive privacy problem in the tech industry that needs to be addressed.
These two sentences highlight the underlying problem: Developers without an ethical backbone, or who are powerless to push back on unethical projects. What the article describes should not be "what many devs would land on" naturally. What many devs should land on is "scanning the user's browser in order to try to fingerprint him without consent is wrong and we cannot do it."
To put it more starkly: if a developer's boss said "We need to build software for a drone that will autonomously fly around and kill infants," the developer's natural reaction should not be: "OK, interesting problem. First we'll need a source of map data, and a vision algorithm that identifies infants..." Yet our industry is full of this "OK, interesting technology!" attitude.
Unfortunately, for every developer who is willing to draw the line on ethical grounds, there's another developer waiting in the recruiting pipeline more than willing to throw away "doing the right thing" if it lands him a six figure salary.
Fighting against these kinds of directives was a large factor in my own major burnout and ultimately quitting big tech. I was successful for a while, but it takes a serious toll if you’re an IC constantly fighting against directors and VPs just concerned about solving some perceived business problem regardless of the technical barriers.
Part of the problem is that these projects often address a legitimate issue that has no “good” solution, and that makes pushing back/saying no very difficult if you don’t have enough standing within the company or aren’t willing to put your career on the line.
I’d be willing to bet good money that this LinkedIn thing was framed as an anti-bot/anti-abuse initiative. And those are real issues.
But too many people fail to consider the broader implications of the requested technical implementation.
Oh yeah. Must be an anti-fraud/child abuse/money laundering/terrorism/fake news thing. All real problems with no known good solution (to my knowledge, please prove me wrong).
> These two sentences highlight the underlying problem: Developers without an ethical backbone, or who are powerless to push back on unethical projects.
One reason your boss is eager to replace everyone with language models, they won’t have any “ethical backbone” :’)
Many developers overestimate their agency without extremely high labor demand. We got a say because replacing us was painful, not because of our ethics and wisdom. Without that leverage, developers are cogs just like every other part of the machine.
You can't actually push back as an IC. Tech companies aren't structured that way. There's no employment protection of any kind, at least in the US. So the most you can do is protest and resign, or protest and be fired. Either way, it'll cost you your job. I've paid that price and it's steep. There's no viable "grassroots" solution to the problem, it needs to come from regulation. Managers need to serve time in prison, and companies need to be served meaningfully damaging fines. That's the only way anything will get done.
I'm hoping the Ladybird project's new Web browser (alpha release expected in August) will solve some issues resulting from big tech controlling most browsers.
> There's no viable "grassroots" solution to the problem, it needs to come from regulation. Managers need to serve time in prison,
No, yes
Yes, giving these people short (or long, mēh) prison sentences is the only thing that will stop this.
No, the obvious grassroots response is to not use LinkedIn or Chrome. (You mean developers, not consumers, I think. The developers in the trenches should obey if they need their jobs; they are not to blame. It is the evil swine getting the big money and writing the big cheques...)
Yes, what I meant was there's no way ICs will change any of this. Using this or that extension, or choosing not to use some service won't really change anything either. The popular appetite just isn't there. Personally I use a variety of adblockers and haven't had a linkedin or anything for many years, but I fully accept that's an extremist position and most consumers will not behave that way. The only way these companies' behavior will improve is when they are meaningfully, painfully punished for it. There's very little we as consumers or ICs can do until then. Unless of course their risk management fails and they alienate a sufficiently large number of users that it becomes "uncool" to use the product. But all we need to do is look to twitter to see just how bad it'll get before then...
I integrate these kinds of systems in order to prevent criminals from being able to use our ecommerce platform to utilize stolen credit cards.
That involves integrating with tracking providers to best recognize whether a purchase is being made by a bot or not, whether it matches "Normal" signals for that kind of order, and importantly, whether the credit card is being used by the normal tracking identity that uses it.
Even the GDPR gives us enormous leeway to do literally this, but it requires participating in tracking networks that have what amounts to a total knowledge of purchases and browsing you do on the internet. That's the only way they work at all. And they work very well.
Is it Ethical?
It is a huge portion of the reason why ecommerce is possible, and significantly reduces credit card fraud, and in our specific case, drastically limits the ability of a criminal to profit off of stolen credit cards.
Are people better off from my work? If you do not visit our platforms, you are not tracked by us specifically, but the providers we work with are tracking you all over the web, and definitely not just on ecommerce.
No, credit card companies should be made to develop robust solutions to protect themselves from stolen cards being usable. It's not like secure authentication isn't a relatively solved problem. They've obviously managed to foist the problem onto you and make you come up with shitty solutions. But that's bad.
What I'm wondering is if this requires sending the full list of extensions straight to a server (as opposed to a more privacy-protecting approach like generating some type of hash clientside)?
Based on their privacy policy, it looks like Sift (major anti-fraud vendor) collects only "number of plugins" and "plugins hash". No one can accuse them of collecting the plugins for some dual-use purpose beyond fingerprinting, but LinkedIn has opened themselves up to this based on the specific implementation details described.
The SOP of this entire industry is "Include this javascript link in your tag manager of choice", and it will run whatever javascript it can to collect whatever they want to collect. You then integrate in the back end to investigate the signals they sell you. America has no GDPR or similar law, so your "privacy" never enters the picture. They do not even think about it.
This includes things like the motion of your mouse pointer, typing events including dwell times, fingerprints. If our providers are scanning the list of extensions you have installed, they aren't sharing that with us. That seems overkill IMO for what they are selling, but their business is spyware so...
On the backend, we generally get the results and some signals. We do not get the massive pack of data they have collected on you. That is the tracking company's prime asset. They sell you conclusions using that data, though most sell you vague signals and you get to make your own conclusions.
Frankly, most of these providers work extremely well.
Sometimes, one of our tracking vendors gets default blackholed by Firefox's anti-tracking policy. I don't know how they manage to "Fix" that but sometimes they do.
Again, to make that clear, I don't care what you think Firefox's incentives are, they objectively are doing things that reduce how tracked you are, and making it harder for these companies to operate and sell their services. Use Firefox.
In terms of "Is there a way to do this while preserving privacy?", it requires very strict regulation about who is allowed to collect what. Lots of data should be collected and forwarded to the payment network, who would have sole legal right to collect and use such data, and would be strictly regulated in how they can use such data, and the way payment networks handle fraud might change. That's the only way to maintain strong credit card fraud prevention in ecommerce, privacy, status quo of use for customers, and generally easy to use ecommerce. It would have the added benefit of essentially banning Google's tracking. It would ban "Fraud prevention as a service" though, except as sold by payment networks.
Mandating that tracking for anti-fraud be vertically integrated with the payment network seems unnecessary. Surely the law could instead mandate the acceptable uses of such data? The issue at present appears to be the lack of regulation, not scofflaws.
I'm not convinced tracking is the only or even a very good way to go about this though. Mandating chip use would largely solve the issue as it currently stands (at least AFAIK). The card provider doing 2FA on their end prior to payment approval seems like it works just as well in practice.
At this point my expectation is that I have to do 2FA when first adding a new card to a platform. I'm not clear why they should need to track me at that point.
> Even the GDPR gives us enormous leeway to do literally this, but it requires participating in tracking networks that have what amounts to a total knowledge of purchases and browsing you do on the internet. That's the only way they work at all.
That data sounds like it would be very valuable.
But I think if I sell widgets and a prospective customer browses my site, telling my competitors (via a data broker) that customer is in the market for widgets is not a smart move.
How do such tracking networks get the cooperation of retailers, when it’s against the retailers interests to have their customers tracked?
I suspect a lot of retailers simply aren’t aware that that data is being collected and sold off to their competitors (or to ad networks so their competitors can poach their audience)
> These two sentences highlight the underlying problem: Developers without an ethical backbone, or who are powerless to push back on unethical projects. What the article describes should not be "what many devs would land on" naturally. What many devs should land on is "scanning the user's browser in order to try to fingerprint him without consent is wrong and we cannot do it."
I think using LinkedIn is pretty much agreeing to participate in “fingerprinting” (essentially identifying yourself) to that system. There might be a blurry line somewhere around “I was just visiting a page hosted on LinkedIn.com and was not myself browsing anyone else’s personal information”, but otherwise LinkedIn exists as a social network/credit bureau-type system. I’m not sure how we navigate this need to have our privacy while simultaneously needing to establish our priors to others, which requires sharing information about ourselves. The ethics here is not black and white.
If you voluntarily visit my website and my web server sends a response to your IP address, have I “taken” your IP address, or did you give it to me “voluntarily”? What if I log your IP address?
One works for money. And money is important. Ethics isn’t going to pay the mortgage, send the kids to university, and all that other stuff. I’m not going to do things that are obviously illegal. But if I get a requirement that needs to be met, then the company legal team is responsible for the outcome.
In short, you are not going to solve this problem blaming developer ethics. You need regulation. To get the right regulation we need to get rid of PACs and lobbying.
Regulation does not necessarily need to be about deciding what's right and what's wrong. It's about making life better for people. That's supposed to be why we have government. If they are not improving people's lives, why do we even have them? Too many people see the government doing nothing to improve their lives and think there's totally nothing wrong with that.
I fail to see how some of the octogenarians in DC, who have been making a killing for decades trading on market moves that they initiate/regulate themselves, are making life better for your family, or mine.
Because at least half the country thinks that government can't/shouldn't help them, and reliably votes for people who can't/won't make their lives better. We get the government we vote for, and too many people think the government's job is to grief people.
> I think most people would interpret “scanning your computer” as breaking out of the confines the browser and gathering information from the computer itself.
Yes, but I also think that most people would interpret "Getting a full list of all the Chrome extensions you have installed" as a meaningful escape/violation of the browser's privacy sandbox. The fact that there's no getAllExtensions API is deliberate. The fact that you can work around this by scanning for extension IDs is not something most people know about, and the Chrome developers patched it when it became common. So I don't think it's correct to describe this as something everybody would expect and consider totally fine and normal for browsers to allow.
> I also think that most people would interpret "Getting a full list of all the Chrome extensions you have installed" as a meaningful escape/violation of the browser's privacy sandbox
I think that’s a far more reasonable framing of the issue.
> I don't think describing it as something everybody would expect is totally fine and normal for browsers to allow is correct.
I agree that most people would not expect their extensions to be visible. I agree that browsers shouldn’t allow this. I, and most privacy/security focused people I know, have been sounding the alarm about Chrome itself as unsafe if you care about privacy for a while now.
This is still a drastically different thing than what the title implies.
> Yes, but I also think that most people would interpret "Getting a full list of all the Chrome extensions you have installed" as a meaningful escape/violation of the browser's privacy sandbox.
I don't think so, because most people understand that extensions necessarily work inside of the sandbox. Accessing your filesystem is a meaningful escape. Accessing extensions means they have identification mechanisms unfortunately exposed inside the sandbox. No escape needed.
It's extremely unfortunate that the sandbox exposes this in some way.
Microsoft should be sued, but browsers should also figure out how to mitigate revealing installed extensions.
Y'all are letting "most people" carry an awful lot of water for this scummy behavior here.
In my experience, most people - even most tech people - are unaware of just how much information a bit of script on a website can snag without triggering so much as a mild warning in the browser UI. And tend toward shock and horror on those occasions where they encounter evidence of reality.
The widespread "Facebook is listening to me" belief is my favorite proxy for this ... Because, it sorta is - just... Not in the way folks think. Don't need ears if you see everything!
> The widespread "Facebook is listening to me" belief is my favorite proxy for this ... Because, it sorta is - just... Not in the way folks think. Don't need ears if you see everything!
Getting folks to install “like” and “share” widgets all over their websites was a genius move.
> I think most people would interpret “scanning your computer” as breaking out of the confines the browser and gathering information from the computer itself.
That is exactly how I interpreted it, and that is why I clicked the link. When I skimmed the article and realized that wasn't the case, I immediately thought "Ugh, clickbait" and came to the HN comments section.
> To reiterate, at no point am I saying this is good or acceptable. I think there’s a massive privacy problem in the tech industry that needs to be addressed.
100% Agree.
So, in summary: what they are doing is awful. Yes, they are collecting a ton of data about you. But, when you post with a headline that makes me think they are scouring my hard drive for data about me... and I realize that's not the case... your credibility suffers.
Also, I think the article would be better served by pointing out that LinkedIn is BY FAR not the only company doing this...
That sounds problematic and is only supported by people mindlessly agreeing to it. I know someone who got jobs at google and apple with no linkedin, and he wasn't particularly young. What do you do in the face of it? I say quit entirely. It was an easy decision because I got nothing out of it during the entire time I was on it.
After getting laid off at age 52 (2nd time, 1st time day after my 50th birthday, took an inter-company transfer), and searching for a year, applying to maybe 5-10 companies a week, I got my current job (2 years+) through a random LinkedIn button.
> Alongside thousands of other extensions. If they were scanning for a dozen things and this was one of them, I’d tend to agree with you. But this sounds more like they enumerated known extension IDs for a large number of extensions because getting all installed extensions isn’t possible.
To take a step back further: what you're saying here is that gathering more data makes it less sinister. The gathering not being targeted is not an excuse for gathering the data in the first place.
It's likely that the 'naive developer tasked with fingerprinting' scenario is close to the reality of how this happened. But that doesn't change the fact that sensitive data -- associated with real identities -- is now in the hands of MS and a slew of other companies, likely illegally.
> But the authors have chosen to frame this in language that is hyperbolic and alarmist, and in doing so I thing they’re making people focus on the wrong things and actually obscuring the severity of the problem, which is certainly not limited to LinkedIn.
The article is not hyperbolizing by exploring the ramifications of this. It's true that this sort of tracking is going on everywhere, but neither is it alarmist to draw attention to a particularly egregious case. What wrong things does it focus on?
> The gathering not being targeted is not an excuse for gathering the data in the first place.
I’m not saying it is. My point is that they appear to be trying to accomplish something like getInstalledExtensions(), which is meaningfully different from a small and targeted list like isInstalled([“Indeed.com”, “DailyBibleVerse”, “ADHD Helper”]).
One could be reasonably interpreted as targeting specific kinds of users. What they’re actually doing, to your point, looks more like a naive implementation of a fingerprinting strategy that uses installed extensions as one set of indicators.
Both are problematic. I’m not arguing in favor of invasive fingerprinting. But what one might infer about the intent of one vs. the other is quite different, and I think that matters.
Here are two paragraphs that illustrate my point:
> “Microsoft reduces malicious traffic to their websites by employing an anti-bot/anti-abuse system that builds a browser fingerprint consisting of <n> categories of identifiers, including Browser/OS version, installed fonts, screen resolution, installed extensions, etc. and using that fingerprint to ban known offenders. While this approach is effective, it raises major privacy concerns due to the amount of information collected during the fingerprinting process and the risk that this data could be misused to profile users”.
vs.
> “Microsoft secretly scans every user’s computer software to determine if they’re a Christian or Muslim, have learning disabilities, are looking for jobs, are working for a competitor, etc.”
The second paragraph is what the article is effectively communicating, when in reality the first paragraph is almost certainly closer to the truth.
The implications inherent to the first paragraph are still critical and a discussion should be had about them. Collecting that much data is still a major privacy issue and makes it possible for bad things to happen.
But I would maintain that it is hyperbole and alarmism to present the information in the form of the second paragraph. And by calling this alarmism I’m not saying there isn’t a valid alarm to raise. But it’s important not to pull the fire alarm when there’s a tornado inbound.
> But what one might infer about the intent of one vs. the other is quite different, and I think that matters.
That's where we disagree: intent doesn't matter here, because the intent of the person gathering the data is not the same as those who have access to the data. I don't care if the team tasked with implementing this believed they were saving the world, because once this data is in the hands of a big corporation, in perpetuity, and the thousands of people that entails, and it diffuses across advertisers and governments, be it through leaks, backroom deals, or perfectly above-board operations, it makes no difference how it got there.
The two paragraphs given:
> “Microsoft reduces malicious traffic to their websites by employing an anti-bot/anti-abuse system that builds a browser fingerprint consisting of <n> categories of identifiers, including Browser/OS version, installed fonts, screen resolution, installed extensions, etc. and using that fingerprint to ban known offenders. While this approach is effective, it raises major privacy concerns due to the amount of information collected during the fingerprinting process and the risk that this data could be misused to profile users”.
vs.
> “Microsoft secretly scans every user’s computer software to determine if they’re a Christian or Muslim, have learning disabilities, are looking for jobs, are working for a competitor, etc.”
The latter is the tangible effect of the former. The two aren't mutually exclusive, and considering the former has long gone unaddressed in its most charitable form, it only makes sense to use a particularly egregious example of it taken to its natural conclusion to address in courts and the public consciousness.
It is equally “searching your home network” as it is “searching your computer”. This is not searching your computer. It is searching your browser. Being contained to the browser is completely different than having access to the OS behind the browser.
The issue here is that even if the original goal is the first thing, once you have the data you can do that second thing. From where we stand, nothing changes - same information is collected. But now, it's also used for affinity targeting or worse.
> I think most people would interpret “scanning your computer” as breaking out of the confines the browser and gathering information from the computer itself.
Which they would, if they could.
They are scanning users' computers to the maximum extent possible.
> I think most people would interpret “scanning your computer” as breaking out of the confines the browser and gathering information from the computer itself. If this was happening, the magnitude of the scandal would be hard to overstate.
But at the end of the day, the browser is likely where your most sensitive data is.
> Alongside thousands of other extensions. If they were scanning for a dozen things and this was one of them, I’d tend to agree with you. But this sounds more like they enumerated known extension IDs for a large number of extensions because getting all installed extensions isn’t possible.
If that's all it takes to fool you, then it's a pretty trivial way to hide your true intentions.
> making people focus on the wrong things and actually obscuring the severity of the problem, which is certainly not limited to LinkedIn.
No, LinkedIn has much more sensitive data already. Combined with the voracious fingerprinting, this stands out as a particularly dystopian instance of surveillance capitalism.
But the language of "your computer" implies files on your computer, as that is what people commonly mean by it. Merely scanning extensions is not enough.
If it has the ability to scan your bookmarks, or visited site history, that would lend more credence to using the term "computer".
The title ought to have said "LinkedIn illegally scans your browser", and that would make clear what is being done without being sensationalist.
So are fonts. But running Window.queryLocalFonts() is not equivalent to “illegally searching your computer”.
I’m not defending the act of scanning for these extensions, and I’m of the opinion that such an API shouldn’t even exist, but just pointing out that there are perfectly legitimate APIs that reveal information that could be framed as “files installed on your computer” that are clearly not “searching your computer” like the title implies.
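To make the queryLocalFonts() comparison concrete, here's a rough sketch. The helper names are mine; `queryLocalFonts` itself is a real Chromium API, gated behind a permission prompt and a user gesture.

```javascript
// Sketch: a legitimate, permission-gated API that nonetheless reveals
// information about files installed on your computer. The pure reduction
// is split out from the browser-only call so it can be reasoned about
// (and tested) in isolation.
function familiesFromFonts(fonts) {
  // FontData objects expose .family among other fields.
  return [...new Set(fonts.map((f) => f.family))].sort();
}

async function localFontFamilies() {
  // Browser-only: Chromium exposes window.queryLocalFonts() behind a
  // permission prompt; Firefox and Safari don't implement it.
  if (typeof window === 'undefined' || !('queryLocalFonts' in window)) {
    return null;
  }
  return familiesFromFonts(await window.queryLocalFonts());
}
```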
it doesn't have to be files. it could be in memory on the browser. Extensions don't imply files for anyone but the most technical of conversations. Certainly not to the laymen.
Having sensationalist titles should be called out at every opportunity.
> it doesn't have to be files. it could be in memory on the browser.
How'd that work? If it's in memory, the extensions would vanish every time I shut down Chrome? I'll have to reinstall all my extensions every time I restart Chrome?
Have you seen any browser that keeps extensions in memory? Where they ask the user to reinstall their extensions every time they start the browser?
> but the language of "your computer" implies files on your computer, as it would be what people commonly call it. Merely just the extension is not enough.
But the language of "your computer" also implies software on your computer including but not limited to Chrome extensions.
It implies more than just the browser, which is likely why it was used for the post title. If it is exclusively limited to the browser, then "scans your browser" is more correct, and doesn't mislead the reader into thinking something is happening which isn't commonplace on the internet.
Are you defending LinkedIn’s behavior right now or are you just happy to be more technically correct (the best kind of correct!) than those around you? Trying to understand the angle
The browser fingerprinting described is ubiquitous on the internet, used by players large and small. There are even libraries to do this.
Like OP, I don't consider behavior confined to the browser to be my computer. "Scans your browser" is both technically correct and less misleading. "Scans your computer" was chosen instead, to get more clicks.
Something may be bad, but accurately describing why it is bad significantly elevates the discourse.
Eg, someone could use the phrase "Won't someone think of the children?" to describe a legitimately bad thing like bank fraud, but the solutions that flow from the problem that "children are in danger" are significantly different from the solutions that flow from "phishing attacks are rampant".
The two issues in this case aren't quite as different as child endangerment and bank fraud. But if the problem was as the original title describes, the solution would be quite different (better sandboxing) from the actual solution. Which I don't know, but better sandboxing ain't it.
This is just the next iteration of the issues with Linux file permissions, where the original threat model was “the computer is used by many users who need protection from each other”, and which no longer makes much sense in a world of “the computer is used by one or more users who need protection from each other and also from the huge amounts of potentially malicious remote code they constantly execute”.
Scanning your computer is an entirely different thing than scanning browser extensions. By maximizing the expectation via "Illegally searching your computer", the truth suddenly appears harmless.
I personally think it's misleading, and when you start reading it, the page it links to is even more misleading in my opinion.
> Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.
When I read that, I think they have escaped the browser and are checking which applications I have installed on my computer, not which plugins the browser has. Just my 2 cents.
Because "scanning your computer" technically could include scanning plugins, but it could also include scanning your files, your network or your operating system.
While "scanning your browser" would be more accurate and would exclude the interpretation that it scans your files.
The reason the latter is not used is that, even though more precise and more communicative, it would get less clicks.
Lol, lmao even. Lawmakers are banning privacy as fast as they can, this kind of personally identifiable stuff is perfectly aligned with their end goals.
Checking for extensions is barely anything when you consider the amount of system data a browser exposes in various APIs; you can identify someone just by checking what's supported by their hardware, their screen res, what quirks the rendering pipeline has, etc. It's borderline trivial and impossible to avoid if you want a working browser, and if you don't, the likes of Anubis will block you from every site because they'll think you're a VM running a scraper bot.
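The "borderline trivial" claim can be sanity-checked with a little arithmetic. Here is a rough Python sketch; the per-signal entropy numbers below are illustrative assumptions in the spirit of the EFF's Panopticlick research, not measurements:

```python
import math

# Rough entropy estimates (in bits) for common fingerprinting signals.
# These specific values are illustrative assumptions; real-world figures
# vary by population and keep changing as browsers evolve.
signals = {
    "user_agent": 10.0,
    "screen_resolution": 4.8,
    "timezone": 3.0,
    "installed_fonts": 13.9,
    "canvas_rendering_quirks": 8.6,
    "webgl_renderer_string": 5.7,
}

total_bits = sum(signals.values())

# ~33 bits are enough to single out one person among 8 billion.
bits_needed = math.log2(8_000_000_000)

print(f"combined entropy: {total_bits:.1f} bits")
print(f"needed to uniquely identify 1 in 8 billion: {bits_needed:.1f} bits")
```

The point is that a handful of innocuous-looking signals, combined, easily exceed the roughly 33 bits needed to single out one person on Earth.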
In the same way that scanning your microwave for the food you put inside it is not the same as scanning your whole house and reading the letters in your postbox.
Your browser is a subset of your computer and lives inside a sandbox. Breaching that sandbox is certainly a much more interesting topic than breaking GDPR by browser fingerprinting.
> I’ve come to mostly expect this behavior from most websites that run advertising code and this is why I run ad blockers.
Expecting and accepting this kind of thing is why everyone feels the need to run an ad-blocker.
An ad-blocker also isn't full protection. It's a cat-and-mouse game. Novel ideas on how to extract information about you, and influence behavior, won't be handled by ad-blockers until they become known. And even then, it's a question of whether it's worth the dev time for the maker of the ad-blocker you happen to be using, whether that filter list gets enabled... and how much of the web enabling it breaks.
The point was more that the headline frames this as some major revelation about LinkedIn, while the reality is that we’re getting probed and profiled by far more sites than most people realize.
Studies show most people who don't think they're impacted by advertisements are wrong. Advertisements don't just drive you to buy something; they can also be used to create brand recognition, build positive associations, and push the brand to front of mind.
You don’t notice ads when they pop up in front of content? When they lead to nearly full page breaks between paragraphs in an article? When they contain auto-play videos? When the video resizes itself and moves to stay in the viewport as the user scrolls? When so many ads load that the page crashes? When you do a Google search and there is only a single organic result without scrolling?
They introduced ads like a frog into tepid water. The water is now boiling and many still think everything is fine, because at this point it’s all they know.
It’s not a fear, it’s annoyance and resentment. I’m annoyed that the ads make web pages so much worse. I resent that everything being “free” with ads has made it next to impossible for other business models to take hold, and that new companies need to burden themselves with investors, because the expectation is that things online should be free. I’m annoyed that a profile of who I am has been built and sold without my consent and without giving me a cut of the profit. I resent the companies that do this and have no respect for them or their leadership. It’s most certainly not fear of advertisements.
The fear is what will happen to that data, or what may already be happening, if it is controlled by some deceitful individuals or groups.
The fear doesn’t come from the ads, it comes from the invasive data collection that increases the profit of the ads. It’s compounded by the extremely frequent hacks and data leaks that have made it very clear that most of these companies cannot keep the data they collect secure. As such, they have no business collecting and storing it in the first place.
A billboard is an advertisement, so is a magazine ad. The world would be a more aesthetically pleasing place without them, sure, but I don’t go out of my way to avoid them like with the online ads. Billboards and magazines aren’t monitoring me and using hyper-targeted ads. A knitting magazine is going to show ads for knitting stuff. A billboard in Orlando is going to point a driver toward Disney. That’s just fine. Those ads meet people where they are, they don’t follow them around.
I don’t like shopping at Target due to what I’ve read about their data collection and how it’s used. I don’t fear big box stores, I just don’t want to be part of their data set. A store should be a store that profits from the margins of the products they sell. Now, the retail arm is just the front of their advertising or credit card arm of the business, where all the real money is. I don’t want to play that game. I’m a simple man, I want things to be what they are and that’s it.
Excuse the rant.
When I look up diaphimisticophobia, it seems specific about the commercial and their content being the fear. I think most people on HN have an issue with the data collection and use, not the content of the ads themselves.
It's pretty wild that we live in a world where the actual FBI has recommended we use ad blockers to protect ourselves, and if everyone actually listened, much of the Internet (and economy) as we know it would disappear. The FBI is like "you should protect yourself from the way that the third largest company in the world does business", and the average person's response is "nah, that would take at least a couple of minutes of my time, I'll just go ahead and continue to suffer with invasive ads and make sure $GOOG keeps going up".
>the average person's response is "nah, that would take at least a couple of minutes of my time,
As a data point: I, a technical person who tweaks his computer a lot, was against adblocking for moral reasons (as part of a perceived social contract, where the internet is free because of ads). Only later did I change my mind on this, because I became more privacy-aware.
Figure this: you could plaster a page with the most obtrusive ads imaginable without ever showing a cookie banner, as long as they collect no private info.
Most people, including folks on here, think cookie banners are a problem, but they are just an annoying attempt to phish for your agreement. As long as these privacy loopholes exist, we will keep hearing stories like this even from large corporations with much to lose, which means the current privacy regulations do not go far enough.
Beyond just invasive/annoying, ad networks explicitly spread malware and scams/fraud. There's not much incentive for them to clamp down on it, though, as that would cost them money both in lost revenue and in paying for more thorough review.
It'd not even be hard for them to stop it, but they just had to be annoying instead.
When I first started out on the internet, ads were banners. Literally just images and a link that you could click on to go see some product. That was just fine.
However, that wasn't good enough for advertisers. They needed animations, they needed sounds, they needed popups, they needed some way to stop the user from just skimming past and ignoring the ad. They wanted an assurance that the user was staring at their ad for a minimum amount of time.
And, to get all those awful annoying capabilities, they needed the ability to run code in the browser. And that is what has opened the floodgate of malware in advertisement.
Take away the ability for ads to be bundled with executable code and they become fine again. Turn them back into just images, even GIFs, and all of a sudden I'd be much more amenable to leaving my ad blocker off.
> The social contract was "your ads aren't annoying or invasive
Even back in the 1990s the internet was awash with popups, popunders and animated punch-the-monkey banner ads. And with the speed of dial-up, hefty images slowed down page loads too.
You must be a true Internet veteran if you remember a time ads weren’t annoying!
I remember a time before ads. I remember the first time I got "spam" email - email not directly addressed to me that ended up in my inbox. I was very confused for some time about why this email was sent to me.
I remember how I felt the first time I saw an ad come across my browser, it seems so long ago - I guess it was more than a quarter century ago now. I knew it was going to be downhill from there, and it has been.
Well by 2000 the guy at Tripod had already developed pop-up ads. I honestly don't remember ads before the pop-ups, but it must have already been maturing.
I strongly believe in paying journalists but I started blocking ads after nytimes.com served me a Windows malware download from a Doubleclick domain. It couldn’t have harmed my Mac but it was clear that the adtech industry had no interest in cleaning shop if it cost them a dime in revenue.
You mean the internet you pay to access and which was around before the ads were even on it? That internet?
I'm not trying to be mean I'm just trying to historically parse your sentence/belief.
Because for me this is a simplified analogy of what happened on the internet:
a) we opened a club house called the internet in the early 1990s, just after the time of BBSs
b) a few years later a new guy called commercial business turned up and started using our club house and fucking around with our stuff
c) commercial business started going around our club house rearranging the furniture and putting graffiti everywhere saying the internet is here and free because of it. We're pretty sure it might have even pissed in the hallway rather than use the toilet and the whole place is smelling awful.
d) the rest of us started breaking out the scrubbing brushes and mops (ad blockers, extensions, VPNs, etc) trying to clean up after it
e) some of its friends turned up and started repeating something about social contracts and how business and ads built this internet place
f) the rest of us keep crying into our hands just trying to meet up, break out the slop buckets to clean up the vomit in the kitchen and some of us now have to wear gloves and condoms just to share things with our friends and stop the whole place collapsing
Ya, back when 'we' were fucking around on BBS's there was the equivalent of 10 people online at the time.
Quantity is a quality in itself. Your BBS was never going to support a million users. Once people figured out the network effect it was over for the masses. They went where the people are, and we've all suffered since.
Honestly, I still prefer webboards, the closest thing to a BBS, for specific topics like specific car brands/models. WAY better signal-to-noise ratio. Alas, for my car model, all the recent stuff has moved to Fbook. FML.
> a) we opened a club house called the internet in the early 1990s, just after the time of BBSs
"we" is doing a lot of work here. No clubhouse got optical switching working and all that fiber in the ground for example. Beyond POC, the Internet was all commercial interests.
"we" paid ISP's ... which in turn, paid for infrastructure. Some of "we" pay cable providers for internet service, which in turn paid for (in my case) fiber-to-the-curb. Advertising basically supported social media, search engines, etc.
> it was first and foremost a military enterprise, just like GPS
This is sort of like arguing cutlery is a military enterprise. Like yes, that’s where knives came from. But that’s disconnected enough from modern design, governance and other fundamental concerns as to be irrelevant. The internet—and less ambiguously, the World Wide Web—are more commercial than military.
This is moving the goalposts. The commenter above is talking about the enthusiast-populated internet of the late 80s/early 90s, at which point it still wasn't even clear if it was legal to use the internet for commercial purposes. If all you mean to say is that the internet is currently commercialized, yes, that is obviously true, in much the same way that a disgusting ball of decomposing fungus may have once been an apple.
> commenter above is talking about the enthusiast-populated internet of the late 80s/early 90s, at which point it still wasn't even clear if it was legal to use the internet for commercial purposes
Source? Not doubting. But I have a friend who was buying airline tickets through CompuServe in the late 80s/early 90s.
Compuserve was NOT the internet. Compuserve / Prodigy / GEnie were early versions of Facebook. They also inter-operated (email) for some period of time. IIRC.
This is ignoring things like newspapers that were made obsolete by the internet. At some point someone does need to actually pay for the content we see online. That is if we want that content to actually be good.
Not sure why you're talking about "commercial business" being the one inserting ads everywhere, when even niche community-run forums from the 2000s had ads to help pay for their server costs. At the end of the day all this costs money, whether it's paid by ads or direct subscriptions. IMO the problem is more the concentration and centralization of the internet into a handful of sites than advertising itself.
I have expensive online subscriptions to New York Times, Wall Street Journal, and Washington Post. Nevertheless they are FILLED with ads/popups/videos that run automatically/dark patterns. Just saying: there's no refuge.
True, but that doesn’t invalidate what I said about the vast majority of sites that aren’t globally known, prestigious news companies that people are willing to pay an expensive subscription for.
Most publishers of content online are ad supported and struggling, and I want to make sure I’m contributing to their revenue somehow.
I don’t feel bad about blocking ads on sites I pay for though.
The average person — that would be me — thinks "nah, I have no idea how to install an ad blocker or how one works, and I'm afraid I'll screw up my computer."
The crazier part is that it's an official government position, and we (people at large / the government) aren't immediately slapping down the actions of these companies.
Don't worry, soon you'll need to pay every website 5.99 a month because AI is destroying click through rates. The internet will likely be far worse without ads than with ads. Solving the tracking problem doesn't need to be mixed up with blocking ads outright. What's funny is that tracking isn't nearly as meaningful for click through rates on ads as relevance to what's on the page, and yet so much effort is placed onto tracking for the slim improvement it provides.
It would not be 5.99 to access a website because that's not what it costs and that's not what ads yield.
I think people think ads give way, way more money than they actually do. If you're visiting a website with mostly static ads then you're generating fractions of a cent in revenue for that website. Even on YouTube, you're generating mere cents of revenue across all your watch time for the month.
Why does YouTube premium cost, like, 19 dollars a month then? I don't know, your guess is as good as mine.
Point is, you wouldn't be paying 5.99. You could probably pay a dollar or two across ALL the websites you visit and you'd actually be giving them more money than you do today.
But there's no method or structure in place to pay a website a fraction of a cent. Ads are the only way we've found that actually implements a form of microtransactions... paying a tenth of a penny for a sliver of attention.
I don't want to defend ads, but whatever replaces them is going to be very disruptive. Maybe better, but very different.
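The "fractions of a cent" and "a dollar or two across ALL the websites" claims are easy to ballpark. A quick Python sketch with illustrative numbers (real CPMs vary enormously by niche and audience):

```python
# Back-of-the-envelope value of one pageview with display ads.
# The CPM and ads-per-page figures are illustrative assumptions.
cpm = 2.00                    # dollars per 1000 ad impressions
ads_per_page = 3
revenue_per_pageview = cpm / 1000 * ads_per_page

print(f"${revenue_per_pageview:.4f} per pageview")

# A heavy user doing 500 ad-supported pageviews a month generates:
print(f"${revenue_per_pageview * 500:.2f} per month, across all sites combined")
```

At a $2 CPM that works out to 0.6 cents per pageview, or about $3 a month for a fairly heavy user, roughly in line with the comment's "a dollar or two" estimate.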
In 2023 I did a deep dive into the crypto community with two main questions:
- do these people understand the principles of making good products?
- is anyone clearly working towards a microtransaction system that could replace advertising and subscription models?
After attending two conferences, hundreds of conversations and hours spent researching, my conclusion to both questions was no. The community felt more like an ouroboros. It was disappointing.
I don't want to pay NYT a subscription fee, I want to pay them some fraction of a cent per paragraph of article that I load in. Same goes for seconds of video on YouTube, etc.
Apparently I'm alone in this vision, or at least very rare...
I have also done similar research because I wanted to build something to handle microtransactions on a personal website that could scale if adopted to be usable by everyone if they wanted.
I looked at cryptocurrency because it seems like the obvious naive solution. It doesn't work: the cost of the transaction itself far outweighs the value of the transaction when dealing with fractions of a cent. You want an entire network updating ledgers with ~millions of records per ~$1000 moved. The fundamental tech of crypto leans toward slower, higher-value transactions rather than high-volume, small ones. Lots of efforts have been made with some coins to bring the bar of "high value, low volume" down to everyday consumer usage rates and values, but a transaction history at the scale of every ad impression for every person is a tough ask and would perpetually be in an uphill battle against energy costs.
Ultimately, the conclusion I came to is that the service would need to be centralized, and likely treated as cash by not keeping track of history. Centralized company creates "web credits", user spends $5 for 10,000 credits, these credits are consumed when they visit websites. Websites collect a few credits from each user, and cash out with the centralized company. The issue is that since it would cost more to track and store all the transactions than the value of the transactions themselves, you have to fully trust the company to properly manage the balances.
I started building it and since I would be handling, exchanging, and storing real currency - it seemed subject to a lot of regulations. It is like a combination bank and casino.
I've thought about finishing the project with disclaimers that buying credits legally owes the user nothing, and collecting credits legally owes the websites nothing, operating on a trust system. But any smart person would see the potential for a rug pull in that, and I figured there would not be much interest.
The alternative route of adhering to all the banking regulations, to get the proper insurance needed to make the commitments to users and websites that guarantee exchange between credits and dollars, seemed like too much for one person to take on as a free side project.
It would need to be mostly centralized, but keeping track of history would not be hard.
A typical credit is getting paid in, transacted once, and cashed out. And a transaction with a user ID, destination ID, and timestamp only needs 16 bytes to store. So if you want to track every hundredth of a penny individually, then processing a million dollars generates 0.16 terabytes of data. You want to keep that around for five years? Okay, that's around $100 in cost. If you're taking a 1% fee then the storage cost is 1% of your fee.
If your credits are worth 1/20th of a penny, and you store history for 18 months, then that drops the amount of data 17x.
(And any criticisms of these numbers based on database overhead get countered by the fact that you would not store a 10 credit transaction as 10 separate database entries.)
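For what it's worth, the arithmetic above checks out. A quick Python verification, using the assumptions stated in the thread (16-byte records, credits worth a hundredth of a penny, five years of retention as the baseline):

```python
record_bytes = 16
credit_value = 0.0001          # one hundredth of a penny, in dollars
dollars_moved = 1_000_000

transactions = dollars_moved / credit_value   # 10 billion transactions
data_bytes = transactions * record_bytes

print(f"{data_bytes / 1e12:.2f} TB for $1M moved")   # 0.16 TB

# Variant: 1/20th-of-a-penny credits and 18 months of retention
# instead of 5 years shrinks the stored data roughly 17x.
reduction = (0.0005 / 0.0001) * (60 / 18)
print(f"~{reduction:.0f}x less data")
```

So 0.16 TB per million dollars processed, and the cheaper variant really is about a 17x reduction (5x fewer records times 60/18 of the retention window).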
Fair enough on tracking history in the centralized model. I had suspicions there would be hidden costs that might make it too expensive, though I don't think the data storage would be as much of a problem as the cost of writing it to storage.
I wasn't fully envisioning credits only being transacted once before cashout either. I was thinking more along the lines of being able to create something that goes viral: a lot of people use it, you rack up a bunch of credits, and then you can sit on those credits and spend them as you use the internet without ever having to connect to a bank yourself. So people who contribute more than they consume would rack up credits. They could use those credits to enrich their contributions, maybe pay for cloud services, etc.
The credits could form their own mini web economy if it got popular enough. As cool as this would all be if done honestly, I know that if I saw a company telling me to buy web credits to use anywhere on the internet, where websites get to decide how much to charge and charge it automatically when I visit, and where I may not be able to cash out or get my money back if the credit company goes out of business, then I likely wouldn't be buying those credits... so idk.
Even with user to user credits it would take a lot for the number of transactions to go above 2. That would mean more than half the money is going to viral payouts.
And was this assuming you'd only take a cut on the cash going in and out? Because even a 0.1% cut of the transactions would mean you have $1000 to handle the amount of data I described in the last comment.
>And was this assuming you'd only take a cut on the cash going in and out
I think fee needs to be per transaction, maybe not cash flowed per transaction but accrued per transaction.
Say we both self-host a website for our favorite daily game, and I use yours about as much as you use mine. We would transfer roughly the same amount of credits back and forth to each other ad infinitum, but the credit service provider accumulates only expenses with each transaction.
Say someone makes a lot of bot accounts to simulate user traffic and sends each of them credits to use to visit their own site. The host collects the credits from the bots and transfers them back to the bots to keep them running.
You are not alone; people seriously proposed one thing after another in the early 2000s, the same time frame as RSS, roughly. Somehow these proposals were undermined and slow-walked. Mergers and acquisitions in Silicon Valley were aligned with very different things.
>"Ads are the only way we've found that actually implements a form of microtransactions... paying a tenth of a penny for a sliver of attention."
Ads were the path of least resistance, and once entrenched, they effectively prevented any alternative from emerging. Now that we've seen how advertising scales, and how it's ruined our mediascape, we're finally looking at alternatives. Not dissimilar to how we reacted to pollution, once we saw it at scale.
And has roughly 2.7 billion monthly active users. This means the average YouTube user brings in around $1.23 per month. When you consider that CPMs can easily swing by 20x based on how wealthy the user demographic is, and that willingness to pay for a subscription is a strong signal of purchasing power, I would not be at all surprised if a YouTube premium subscription was revenue-neutral for Google.
This may be a hot take but I'd be willing to pay my ISP $10 extra that they would distribute to sites I visit, if it meant zero tracking and ads. I use an ad blocker but I genuinely want to support content creators in a way that doesn't optimize for ads or clicks.
There would need to be a way for ISPs to know which websites are getting my traffic in order to know who to distribute the money to, which I'm not a fan of. But I think something along those lines, with anonymized traffic data, would work a treat.
Well what makes you think the VPN providers are not tracking?
You would have to either self-host your own VPN server somewhere (maybe on a public cloud provider) or if you are truly paranoid, use something like Tor.
They have been subject to warrant requests, and had nothing to turn over. There are only a few vpn providers that I genuinely trust. (Mullvad, airvpn, etc)
Really though, I am not worried about three-letter agencies performing legitimate law enforcement duties. I am worried about corporations hoovering up more data about me than I'd want to reveal, and either using that as a basis to charge me more, or worse, getting hacked so that data is used by bad actors to target me.
Yeah that's the problem (and possibly why such a thing didn't exist).
But I kinda see it like TV. Cable providers know what channels and shows people are watching. Obviously web browsing data is more personal and intimate so it's not the same thing, but it's a good starting point for a thought experiment.
> This may be a hot take but I'd be willing to pay my ISP $10 extra that they would distribute to sites I visit, if it meant zero tracking and ads. I use an ad blocker but I genuinely want to support content creators in a way that doesn't optimize for ads or clicks.
The problem is that both the ISP and the websites would then go "Cool, we're getting $10 a month from them!" for about a minute before they started trying to come up with ways to start showing you ads anyways. With the level of customer appreciation ISPs tend to show, I'm sure they'd have no problem ignoring your complaints and would happily revoke your service if you stopped paying the now $10-higher price per month.
people with something to share, people with something to say, who share and say it because they want to
that's how pamphleteers worked, that's how the Internet worked
at scale, static (CMS-managed) information sites cost effectively nothing even for arbitrary amounts of traffic, and smoothed across a range of people sharing stuff, it approaches zero per person
publishing used to be free with your ISP, and edge CDN used to be (and still is) free to a point (an incredibly high volume point) as well
having people pay something nominal to say things instead of pay far too much in attention-distraction or money to consume things, would put this all back the right way round
I couldn’t disagree with this more if I tried. The biggest benefit of the internet is to make it easier to talk to each other and share ideas. Putting financial gates in front of that ability is hot garbage.
Also, I agree that the platforms and paradigms we have are fucked up, but do believe that people who put work into making something deserve to charge for it if there are folks who’d pay.
The ISP shouldn't necessarily be involved in this process, but some form of syndication does need to happen, and it seems crazy that it hasn't.
The closest we've come is something like Apple News, which allows me to pay for a selected (by them, not me) subset of features on a selected (by them, not me) subset of news sites. Can't somebody do this right?
> The internet will likely be far worse without ads than with ads
Not sure on that. It was far, far better before what drives ads today. I've gotten more value from random people's static HTML pages in 1999, than I ever have from something in the last 25 years.
This just led me to think of news sites, and how they've turned mostly into click-bait farms over the last 10 to 15 years.
Gives me pause. Didn't the king of "doing it online" buy a newspaper, but the end result wasn't an improvement on its fate? If there is any way to make cash from news, shouldn't Bezos have been able to do it??
I would love to get something more akin to a monthly print issue of BYTE, Omni, Starlog, Reality Hackers, WIRED and Dr Dobbs Journal without blinky, shouty ads that cause the content to re-render every 10 seconds.
E-ink is getting cheaper and cheaper; there are a lot of 6" devices for $100. If an 11" screen dropped to $100, that would be a respectable size for a magazine. I cite e-ink as most such devices are distraction-free, or can be, and are very easy on the eyes.
Such content would also suck with flashy ads too.
It's pretty easy tech I think, it's just never hit a flash point. But it could.
We literally had all of this. We had regular, affordable, high quality printed media for every hobby and interest and industry, that you could get delivered to your home address and collect in your own archive if you want, and your local library could do the same.
Those pieces of paper could not track anything about you. They tried, selling their subscriber lists, but that was the best tracking they could provide! You could easily ignore ads, and in return they had to make ads interesting enough in various ways that you might look at them anyway, or they had to make their ads directed at people who went looking for whatever you were selling.
It was an objectively better system in every way.
The Sears catalog was worlds better than Amazon. You weren't going to buy a fraudulent item for one.
Tech is a failure here. It has made so many things worse. It has only served to allow businesses to cut costs while extracting money from every single local community that used to let that cash circulate locally.
I might recommend a middle ground before banning all internet advertising.
What if we limited advertising to images which don't set tracking cookies, so you would get something sort of like banner ads? Maybe say the image has to be served from the same place as the rest of the content, so you don't get to track readers with image trackers.
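Interestingly, part of this already exists as a browser mechanism: a site that wanted to commit to "same-origin images only" could ship a Content-Security-Policy header along these lines (the exact directive values here are just an illustrative sketch):

```http
Content-Security-Policy: img-src 'self'; script-src 'self'
```

That blocks third-party image beacons and ad scripts outright; the catch, of course, is that it relies on the site choosing to restrict itself.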
It turns out that "makes the most money for a small amount of people" is pretty much the same as "makes everything shitty for everyone else". It's time that we either stop accepting "most profitable" as an excuse for making things worse or start regulating/punishing bad behavior until it becomes so costly that it's no longer profitable.
Hardly. I'm the guy upthread, lamenting the current state of things.
But with e-ink, you can be detached. Knowing someone buys a newspaper is hardly a surprising thing. To put it in perspective, a large number of people subscribed to the paper, and it was delivered daily. The same was true of magazines subscriptions. As long as the media is offline (eg, PDF, epub, similar), and the reader OSS, then the tracking and ads aren't an issue still.
--
I don't disagree with how poor things are, but one issue is government moves slowly. Laws being passed today, are the result of trends 20 years ago. For example, in my legal jurisdiction, vendors (eg, Best Buy, big box stores) are responsible for the thing they sell. It's not just "ship it back to manufacturer", for obvious reasons.
Eventually the issues with e-commerce will be dealt with, just as issues with shoddy sellers were dealt with a century ago. Here's an example...
Back in the 50s people would send items through the mail, then demand people pay for them, or pay for return shipping. I'm not kidding. Even when it wasn't easily defensible in civil court, all the legal threats would scare some into paying.
So laws were passed. If you receive something in the mail you didn't order? It's yours. Period.
But this took a decade to happen, if not more.
This is the sort of thing which will happen in this new market.
And yes, Amazon sucks as it is now.
It's really quite fascinating to me how a lot of new markets aren't about anything novel, but instead about terrible behaviour that hasn't been regulated yet. For example, Amazon has the worst customer service in existence. It used to be good, but they now take immense pains to hide all support channels, and where I live, it's a maze of incomprehensible clicks to even attempt to get a chat.
So... I have to call now. Every time. And now they have the same wall of "noise" on the phone, so it's harder to get through there. In the past, I've done chargebacks when I can never reach a company, and that will be the inevitable conclusion here too.
Which shows how incredibly stupid Amazon is, when this household buys $4k of stuff a month from them, and just has edge-case returns sometimes. I'm sure they'll cancel my account first time, and, well, who cares.
When companies get to this level of "screw the consumer", they're at the edge of all ability to improve profits. There's nowhere left to go. I expect Amazon to have issues due to things like this, plus the squeeze on foreign imports, and to crash and burn on its side.
But back to your point? Yes, we should. Or, we should just pass laws which make centralized advertising, that is, the collection of PII, impossible.
Ban all PII? Ban all transactions of PII? And you end advertising as it is.
> If there is any way to make cash from news, shouldn't Bezos have been able to do it??
News only made money when the newspapers could leverage their circulation numbers to run their own ads network. The classifieds section was a money machine. I remember full-page ads in the Washington Post from local car dealerships showing every model they were selling. They likely ran different ads for distribution in other regions, probably 10Xing their money. Google and Facebook killed that.
What Bezos bought was a corpse of a business, but one with strong journalistic credibility, known for historic investigative reporting such as its coverage of the Watergate scandal, that earned public goodwill. He was buying that goodwill and slowly asphyxiating it to align with his own interests.
This sounds possibly better. Aligns the interest of the website more with the users.
Ads are a weird game. People say you're ripping off the website if you adblock, but aren't you ripping off the advertiser if you don't buy the product? If I leave YouTube music playing on a muted PC, someone is losing.
That'd be ideal because it would mean I could browse the internet without ads and just never use AI chatbots. Unfortunately I think ads are only going to spread and what we'll actually end up with is "more ads everywhere".
I would rather pay people and websites for content. I already do this today for journalism orgs and a handful of high value substacks, and I'm happy to pay for more. I'd pay for HN. Free does not scale (with the caveat being orgs like Wikipedia, the Internet Archive, and others who have an endowment behind them and can self fund alongside donations; this, of course, is a model others can adopt); people need to eat, pay rent, etc, and ads are ineffective when everyone can block them.
Ads are a symptom of the problem that people want human generated content for free; they either do not value the content enough to pay for it, or cannot afford it. Ads do not solve for those problems.
No disagreement there, except the early web was not about scale. The sites you visited may have been created by someone as a hobby, a university professor outlining their courses or research, a government funded organization opening up their resources to the public, a non-profit organization providing information to the public or other professionals, or companies providing information and support for their products (in the way they rarely do today).
> people need to eat, pay for rent
Those people were either creating small sites in their spare time, or were paid to work on larger sites by their employer.
There were undoubtedly gaps in the non-commercial web. On the other hand, I'm not sure that commercializing the web filled those gaps. If anything, it is so "loud" that the web of today feels smaller and less diverse than the web of the 1990's.
I agree there are hobbyists, for lack of a better term, who will always share for free "for the love of the game", passion, whatever you want to call it. Nothing stops them from doing this passion or charity work today, the evidence of that is clear from the content we see daily pass through /new here. That was never really ad driven, nor would it be in the future, and numerous mechanisms remain for them to share this content for free with the world. But that is a small minority of today's Internet and consumption of data, information, and content (imho).
How does HN exist? Wealthy benefactors. Do I appreciate it any less? I do not, I am very grateful. But solutions are needed where a wealthy benefactor has not stepped in or does not exist, a commercial business model is untenable, the government does not or will not fund it, and the scale is beyond a single person spending a few hours a week on it for free.
I run into occasional articles, often linked from here, for say economist or ft.com or new york times
I'm not signing up for a subscription for that journal, but paying a small amount for access to that one article is a no brainer. I don't subscribe to a newspaper either, but I'll happily buy one.
The New European did this a decade ago using "agate" (named after the smallest font you'd get in a newspaper), top up with a few quid, then pay for each article.
Sadly didn't catch on. TNE dropped it in 2019[0]. Agate still exists, having been renamed to "axate", but consumers aren't willing to pay with anything other than their time.
While this works for some cohort of consumer, it doesn't work for organizations that need consistent cashflows to pay for consistent expenses, and so, those willing to subscribe on a recurring basis carry the economic burden of sustaining such operations.
Newspapers continue to run ads even after the paywalls went up everywhere a decade or so ago. Once-"premium" offerings like HBO, which were ad-free on cable TV, now have ads on their paid streaming versions. Even with the "premium" subscription tier, there's sponsored/co-branded content. And for some reason, it now has live sports, where they have no control over the ads shown.
The problem was less the scale of supply and more the scale of demand.
In the 19th century, economist William Stanley Jevons found that, as coal became more readily and easily available, demand for it went up. This was counter to the theories of others, and the principle became known as Jevons Paradox.
Jevons Paradox (a concept that is widely misunderstood, especially when it comes to tech and finance bros talking about AI) demonstrates that, as a resource becomes more abundant and easily accessible, demand for that resource rises. As the web took off, people hungered more and more for digital content -- especially as internet accessibility became faster and cheaper.
To keep up -- and to pay for being able to keep up -- increasingly sophisticated monetization models were introduced.
In any case, ad models are one thing. But it's the data brokering that's even more insidious.
The irony is that if internet content were harder to access, the population on the whole wouldn't want it as much.
Now, the culmination of Jevons Paradox has spun itself around a bit in this case. We now live in a world where those profiting off of ad models and data brokering actively try to get people to demand internet content more. (Look no further than the recent social-media-addiction lawsuits.)
>> Sadly you are atypical and the vast majority are freeloaders
> Citation needed.
I think we need to agree upon a definition of freeloader before citing sources to support the claim. I've found that many people who use the word have a much more transactional view of the world than I do.
> I would rather pay people and websites for content.
I do not think that this is a workable model. Firstly, because it leads inevitably to monopolization, because you don't want to pay 50,000 people for content, you want to pay 10 people for content. Secondly, because most content is bad and a waste of time and you don't find out until after you've bought it. Thirdly, and most importantly, is that there's no actual, clear separation between "news" and "advertising."
Content is generated because people who want that content generated sponsor it beforehand, and dictate the conditions under which the delivery of that content will be accepted as a fulfillment of that sponsorship. The people sponsoring that content can have any number of reasons for doing it; it can make them money directly (i.e. I have articles about cats, people who like cats subscribe to my cat website), which if you're a linear thinker you think is the only way, or it can make them money indirectly, maybe by leading consumers to particular products or political stances that they have a stake in.
This is simply the truth. Your preferences don't matter, and it's not a moral question. If you pay for content, you're more valuable to advertise to, not less. A lot of work is put into producing trash that you regret having read or watched, and was really intended to make you support Uganda's intervention in a Zambian election (or whatever.) If you "value" reading it, you've failed an intelligence test. Its value is elsewhere, for the people who paid for it to be written.
What's recently shown itself to scale is small groups of people sponsoring journalists and outlets who put out tons of content for free. The motivation of those sponsors is usually to spread the points of view of the journalists they sponsor widely, because they believe them to be good.
There was never a pay model that supported things that people didn't feel passionate about or entertained by. Newspapers cost less than the paper they were printed on. Television news was always a huge money loser that was invested in to raise the social status and respectability of the network. If you feel passionately about anything, you're far better off paying people to listen, to give you a chance, than to lock away content. Journalism as a luxury good can work, but only for Bloomberg terminals and Stratfor, when it is used to make other lucrative decisions by its buyers.
> orgs like Wikipedia, the Internet Archive, and others who have an endowment behind them
This is simply sponsorships by governments and billionaires. Never ever been any significant shortage of that (the patron saint of this is King Alfonso X.*) All of those people have wide interests that can often be served by paying for media to be produced or distributed. It's where we got our first public libraries from.
For me, the fact that Substack and Patreon almost work is more important, and is something that wouldn't have been as easy without the benefits that the internet brings for the collaboration of distant strangers.
Yes, but the Something Awful registration fee is more like a speedbump to make banned behavior at least a little expensive for the offending users. Most of the revenue comes from completely optional aesthetic purchases: avatars, avatars _for others_, smilies, etc. I suspect it's a whale based economy.
True, I think it was more sort of a natural filter than explicitly revenue for the website.
Still, I would be willing to pay a bit more for a website that I actually like if it's a one-time fee; I actually paid for the "Platinum" membership for Something Awful so that I would have access to search, and a custom icon, so I think the total damage was around $30.
Dunno, I guess I just feel like people will pay for things if those things don't suck. I think the fact that the only way that companies can really compete for people's time is giving it away for free [1] is a testament that most stuff on the internet is actually kind of shit.
[1] yeah I know something something you are the product something something.
Also look up K shaped economies at the same time and you get a better answer.
But the gist of it is, companies do free to play systems that support themselves by a very small portion of their user base spending a very large amount of money. The free/low paying users find themselves with poor/no service as the companies do anything to attract more whales.
K-shaped economies are somewhat related, as you see a very small portion of the participants in an economy make a huge amount of money while everyone else gets poorer.
Whales are the tiny percentage of users who spend large amounts of actual money on bullshit non-products offered by mobile apps and online platforms. AKA suckers.
> The internet will likely be far worse without ads than with ads
This is highly debatable. I wouldn't mind paying a bit for the websites I am using, as there are just a few platforms and some blogs that I would be happy to pay a small amount for.
> soon you'll need to pay every website 5.99 a month
No, I won't. I'll just stop using them. So will almost everyone. I don't think there's a single ad-supported product that would survive by converting to a paid subscription, because they're all so profoundly unnecessary.
I'm happy to see that day. I'm already paying for stuff I need in life. There's no reasons to insist on not paying for the stuff I need in the web. Just kill those spywares stealing my personal actions and information.
I honestly don’t think “with ads” describes what we are experiencing. We are being all but violently fracked for data (and we don’t know what all they’re taking) for them to sell to 3rd parties we don’t know who then use decades of research and tooling + your personal data to psychologically manipulate you into not just buying things, but also into feeling and acting certain ways (socially, politically, etc).
This isn’t Nielsen ratings informing cable networks where to throw up which commercials in certain regions. This is far more dangerous and intense. So the conversation needs to be framed differently than the implied bar of “intrusive/annoying/incessant ads.”
But those websites would have to provide 5.99 a month of value, and many don't.
We used to have "static" banners on sites, that would just loop through a predefined list on every refresh, same for every user, and it worked. Not for millions of revenue, but enough to pay for that phpbb hosting.
The advertisers started with intrusive tracking, and the sites started with putting 50 ads around a maybe paragraph of usable text. They started with the enshittification, and now they have to deal with the consequences.
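The rotation those old banner setups used was trivial. A minimal sketch of the idea (banner file names here are made up for illustration):

```javascript
// Same predefined list for every visitor; each page load just
// advances through it. No tracking, no targeting, no profile.
const banners = [
  "/banners/hosting.gif",
  "/banners/forum-sponsor.gif",
  "/banners/local-shop.gif",
];

// Pick the banner for the nth page load (a random pick works too).
function bannerForLoad(n) {
  return banners[n % banners.length];
}
```

The whole "ad server" was a counter and a modulo; the advertiser got impressions, the site got hosting money, and the user gave up nothing.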
Nary a month goes by that I don't bemoan the loss of BYTE and Dr Dobbs Journal. WIRED is still hanging on, but it's more of a site where tech warehouses in Shenzhen hawk their latest wares.
There was a time when Boing Boing was a decent little print magazine. And the web site went a decade before turning into... whatever the heck it is now.
And Reality Hackers and Mondo 2000 were "guaranteed unreadable," but they were on the bleeding edge of desktop publishing style and technology.
I'm old enough to remember typing BASIC games from COMPUTE! into my C64 and reading about the latest Star Trek film in Starlog.
I sing the praises of Omni, even though it was clear they were probably snorting a lot of cocaine in their offices.
I can't be the only one who remembers Computer Shopper, but I have to admit it was years before I realized they had a bit of content and were more than just an ad sheet for Micro Center.
PC World wasn't my jam, but I respected the role it played. UnixWorld and Info World were more my thing.
And I even read the stories and articles in Playboy in the 70s. Believe it or not, they had some amazing authors publish stories there.
I think the damage is there even if you don't see the ads. News outlets and organizations that used to be magazine publishers focus on lowest common denominator stories they know will get the highest engagement. That usually means sexy anger-bait.
Sure we had that in the print times, but we had a lot more "slow" content that you could sit with and contemplate over a day, week or month.
Even those of us who don't see ads see the structure that the ad-driven internet economy creates. Listicles, clickbait and AI-generated slop web pages, just trying to get more ad impressions. Sure, with an ad blocker I can see the low-quality content without an ad, but without the ad economy hopefully there'd be less incentive to create low-quality content to begin with.
The majority of people use their mobile devices these days to browse the Internet. Installing an ad blocker on your iPhone is a significantly bigger challenge than on desktop.
Use Firefox/Fennec, which allows you to install a variety of the add-ons available on the desktop version, such as UBO, Stylus, ViolentMonkey, Bitwarden, SponsorBlock, etc., or install Brave, which comes with adblock by default. As for iPhone, you can install Brave, which has adblock; I don't think Firefox has add-ons in that version, though, not sure.
I don’t think you can write off Apple or Microsoft just because Thiel made some investment in them.
Being the VC to a company’s round B, C, and D (adding up to maybe 40% ownership/control) is VERY different from simply throwing some money at a trillion dollar company to see some returns.
Firefox on Android supports it without any issue. That would cover a significant enough segment of the population that it might encourage actual change in the industry if people started moving to that platform.
Firefox on Android has approximately 0.5% market share on mobile, less than Opera. I really doubt it's enough to spark any sort of industry-wide change.
I'm not saying that Firefox on Android has significant market share; rather that Android has significant market share, and those users could be served by switching to Firefox solely for the purpose of using an adblocker.
If all Android users did this, something would change.
The point is it’s easy. It’s near frictionless. Unlike a lot of pie in the sky statements I see here like how “easy” it is to install and run Linux (it isn’t), Firefox adoption is truly trivial for any smartphone user and presents a stronger baseline than chrome does. People here often get critical of Firefox/Mozilla, and I totally get it, but compared to Google Chrome it doesn’t, well, compare.
Firefox runs great 99.99% of the time. It’s easy to add extensions. So we should be pushing people to adopt it.
It’s becoming easier on iPhone (even uBlock Origin is now available, albeit only the Lite version), which is nice because the internet is becoming more and more unusable without them.
AdGuard installs through the App Store and integrates seamlessly with Safari. It's not as perfect as some of the desktop class adblockers, but it's free and can be up and running in a couple minutes.
If you're on Android, Firefox supports many full desktop extensions, including uBlock Origin.
There have been mobile Safari ad blockers for 10 years now, free or paid, and many of them can now be unified with desktop Safari. Many alternative iOS browsers include ad blocking directly, since they can't use the Safari plugins (despite all being powered by WebKit).
Not anymore. You can just find one on the app store and install it, almost exactly the same as you do in a browser's extension "store". It won't be as good as uBlock but it certainly works fine even in Safari.
ublock origin lite is straight up on the app store now, should work with any moderately recent version of iOS/iPadOS. Installed this on my family's Apple devices and it works pretty well.
There's also been other adblock apps for a long while, though (adguard comes to mind).
Can't speak for IOS but for android users I highly recommend Firefox for android, since you can install ublock origin within it.
Let's be real, browsing the modern internet is downright impossible without it today.
Every browser should have ad blocking technology included and enabled by default. I do not understand why Apple in particular has not pushed this with Safari, as they like to portray that they care about privacy.
I get why Chrome doesn't, and that's why you should not use it. But Netscape? Edge? What is stopping them?
Browsing the web without an ad blocker is a miserable experience. Users who have never tried or don't know how to set one up would be delighted.
Google pays Apple 20+ billion dollars annually to be the default search engine in Safari. I don't know whether the absence of ad blocking is a stipulation in that deal or not, but I have to imagine that if Apple blocked ads in Safari by default, that deal would not be renewed.
Apple is worth nearly $4T. I think they can afford to take a principled stand here, especially considering the current mood about big tech.
And I don't think Google would lightly give up being the default search engine on the dominant mobile platform in the USA, and significantly more dominant among upper-income users.
At least with Chrome i can use ublock - not so with safari.
The best browser is ofc Firefox but everyone seems to have forgotten that bc of bad publicity or whatever
This implies they had some sinister plan to claim all your data as theirs or something which is ridiculous - they didn’t back down from anything but changed the wording of the legal text to make it easier to parse for non-lawyers.
It’s worse than that. My mom wants to see ads. I thought I was doing her a favor adding her to my pihole but she really likes ads, especially Facebook ads.
It's probably better to let them spy on your highly encrypted traffic going overseas than use a US based service considering that they can march into any US company and start collecting every bit of data (https://en.wikipedia.org/wiki/Room_641A)
> the average person's response is ... I'll just go ahead and continue to suffer with invasive ads
The real reason is that the average person neither suffers with ads nor finds ads invasive, despite what a vocal online minority would have you believe. We just ignore them and get on with life. ::shrug::
Ignoring (post-impact) and moving on is the natural thing to do, but it seems like a stretch to imply that the average person neither suffers or finds ads invasive.
The suffering isn't acute, it's death by a thousand cuts as your mind erodes into a twitchy mess. Look at the comment section of a nice youtube video and see people outraged at getting blasted with an ad at the wrong moment.
Most people don't like ads, but we love the stimulation of the screen more so we suffer them, regardless of the damage done.
>... it seems like a stretch to imply that the average person neither suffers or finds ads invasive.
The average person has never heard of HN. It isn't the case that the average person's experience with today's internet ads is that of having their "... mind erode[s] into a twitchy mess."
The average person doesn't look at the comment section of a nice YouTube video.
>Most [HN] people don't like ads....
Most people don't suffer — at least not consciously — as a result of ads.
I don't know why you're inserting HN into it? We're talking about average people, not nerds with ad-blockers. Are you suggesting that the average person enjoys being interrupted with ads?
> It isn't the case that the average person's experience with today's internet ads is that of having their "... mind erode[s] into a twitchy mess."
Perhaps I was a bit dramatic with my wording, but my point still stands. Since you're flatly denying it, perhaps you have some references? As far as I can tell, all signs point to widespread ADHD increases correlated with computer use, and while that may not be tied directly to ads exclusively, it stands to reason that they're big offenders given their nature of being short, attention-grabbing, context-breaking, non-interactive engagements. There's plenty of studies that support this.
> The average person doesn't look at the comment section of a nice YouTube video.
Um, really?
> Most people don't suffer — at least not consciously — as a result of ads.
My point was it's death by a thousand cuts, boiling the frog, etc. The average attention span has been cut in half over the last 20 years. Also, I'd argue that sensitive people who may already be mentally stressed, which seems to be a growing group, might actually suffer in the short term or immediately.
You've made some strong statements, but I'm having a hard time buying them.
YT made sure adblockers ruin the experience. We really need a good YT alternative, as it has become AI slop (shorts) and most new videos are of real poor quality.
"Ad blockers" nowadays do much more. From the horse’s mouth — uBlock Origin describes itself as a “wide-spectrum content blocker” [1]:
“uBlock Origin (uBO) is a CPU and memory-efficient wide-spectrum content blocker for Chromium and Firefox. It blocks ads, trackers, coin miners, popups, annoying anti-blockers, malware sites, etc., by default using EasyList, EasyPrivacy, Peter Lowe's Blocklist, Online Malicious URL Blocklist, and uBO filter lists. There are many other lists available to block even more [...]
Ads, "unintrusive" or not, are just the visible portion of the privacy-invading means entering your browser when you visit most sites. uBO's primary goal is to help users neutralize these privacy-invading methods in a way that welcomes those users who do not wish to use more technical means.”
Appreciate the clarification. I would clarify to say that the origin story of ad blockers is ads, and the underlying blocking behaviours may not capture everything that fingerprinting does on sites that don't advertise.
Ublock is great, but I am finding fingerprinting that gets past it, and that's what I'm referring to.
I'd like to install uBlock Origin, but when I try, Chrome warns it needs the permission to "Read and change all your data on all websites". That seems excessive, to give that much power to one extension. I currently use no extensions to keep my security posture high.
I never get the fear behind extensions, at least not to the level where you wouldn't use an open-source extension that's extremely well vetted. And even if that isn't good enough for you, choosing to browse the web without using a content blocker is a far, far greater security risk.
I use uBlock Origin with basically every filter list enabled on Brave with their default blocker enabled. I just confirmed that this does not prevent the script from loading and scanning extensions. The browser tools network tab on LinkedIn is absolutely frightening.
NoScript will prevent that script from loading and scanning extensions. JS is required for almost all fingerprinting and malware spread via websites. Keeping it disabled, at least by default, is the best thing you can do to protect yourself.
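For anyone wondering how a plain web page can "see" extensions at all: the widely reported technique is to probe for each extension's web-accessible resources by ID. A minimal sketch, assuming an extension that exposes a known file via `web_accessible_resources` (the extension ID and file name below are hypothetical):

```javascript
// Probe for a specific extension by trying to load one of its
// web-accessible resources. If the fetch succeeds, the extension
// is almost certainly installed; if it fails, it probably isn't.
// (Manifest V3's use_dynamic_url option randomizes these URLs,
// which is what breaks this trick for extensions that opt in.)

// Build the probe URL for a given extension ID and resource path.
function probeUrl(extensionId, resourcePath) {
  return `chrome-extension://${extensionId}/${resourcePath}`;
}

// Try to load the resource; resolves true if it loads.
async function hasExtension(extensionId, resourcePath) {
  try {
    const res = await fetch(probeUrl(extensionId, resourcePath));
    return res.ok;
  } catch {
    return false; // blocked, not installed, or URL randomized
  }
}

// A real scanner would loop this over thousands of
// { extensionId, resourcePath } pairs from a target list.
```

Note this is inference from one published API limitation, which fits the upthread point: there is no `getAllExtensions()`-style API, so a scanner has to enumerate known IDs one by one.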
According to the EFF fingerprinting website, Firefox + uBlock Origin didn't really make my browser particularly unique.
But turning on privacy.resistfingerprinting in about:config (or was it fingerprintingProtection?) would break things randomly (like 3D maps on google for me. maybe it's related to canvas API stuff?) and made it hard to remember why things weren't working.
Not really sure how to strike a balance of broad convenience vs effectiveness these days. Every additional hoop is more attrition.
fingerprint.com seems to be some fingerprinting vendor; they don't even offer a demo without logging in. https://coveryourtracks.eff.org is the EFF's demo site; it's non-profit and doesn't require a login.
I have a lot of browser extensions running and am using Brave as my browser. I have their built in adblocker enabled as well as some of their privacy features turned on in the settings. I am also using a self hosted adblock instance for my DNS servers. I actually appear as random and not unique which is really nice to see. I know Brave does intentionally lean on some of the privacy side of things and it also has options to specifically prevent sites from fingerprinting by blocking things like seeing language preferences. I have to assume it is also doing some things in the backend to try and prevent other fingerprinting methods.
It is not telling you that the test site has never seen you before, because the EFF isn't storing your fingerprint for later analysis and tracking.
It could actually tell you about which real tracking vendors are showing you as "Seen and tracked" so it's pretty annoying they don't do that.
If that site shows you as having a unique fingerprint, I guarantee you are being tracked across the web. I've seen the actual systems in usage, not the sales pitch. I've seen how effective these tools are, and I haven't even gotten a look at what Google or Facebook have internally. Even no name vendors that don't own the internet can easily track you across any site that integrates with them.
The fingerprint is just a set of signals that tracking providers are using to follow you across the internet. It's per machine for the most part, but if you have ever purchased something on the internet, some of the providers involved will have information like your name.
Here is what Google asks ecommerce platforms to send them as part of a Fraud Prevention integration using Recaptcha:
> the EFF isn't storing your fingerprint for later analysis and tracking
Yes they are, quoting that very page:
> Your browser fingerprint appears to be unique among the 312,935 tested in the past 45 days
So clearly they store the information for at least 45 days. This raises the question of what they actually mean by unique. If I change my IP and re-test, I get the same result:
> Your browser fingerprint appears to be unique among the 312,941 tested in the past 45 days
So does that mean that my fingerprint changed, and they can't track me anymore? Or do they mean to tell me that they still track me and I'm still as uniquely identified?
Their methodology and linked articles do not seem to answer this [0] [1]
It's all very complicated, because the fingerprinting needs to be unique enough to identify you while still being "persistent" enough not to identify you as somebody else if you change just one bit of it.
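For a sense of scale: being unique among roughly 313,000 visitors corresponds to about 18 bits of identifying information, which a handful of signals can easily supply. A rough sketch of the arithmetic:

```javascript
// Bits of identifying information needed to single out one
// visitor among n: log2(n).
function bitsToIdentify(n) {
  return Math.log2(n);
}

// EFF's "unique among 312,935 tested" implies roughly this many bits:
const bits = bitsToIdentify(312935); // ~18.3 bits
```

User agent, screen size, timezone, language, and installed fonts together routinely exceed that, which is why "unique" verdicts are so common on these test sites.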
Exactly, because no one in their right mind is going to work in "state". So the "state" is more like 95% "fucking idiots", as you put it, and that is self-reinforcing.
When has infantilizing adults resulted in positive outcomes? What if the group of idiots decide you're the idiot and start making decisions for your own good?
I asked an LLM to create a plan for a 'digital rebirth' in order to minimize privacy harms. It's a lot of work, but increasingly: a worthwhile endeavor.
I disagree, I think we should push back hard on behavior like this. What business is it of LinkedIn's what browser extensions I have installed? I think the framing for this is appropriate.
Why is it possible for a web site to determine what browser extensions I have installed? If there are legitimate uses, why isn't this gated behind a permission prompt, like things like location and camera?
This, to me, seems like the more salient point. A headline like “Major browsers allow websites to see your installed extensions” seems more appropriate here.
We’ve known for a long time that advertisers/“security” vendors use as many detectable characteristics as possible to construct unique fingerprints. This seems like a major enabler of even more invasive fingerprinting, and that seems like the bigger issue here.
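Combining many weak signals into one identifier is straightforward. A minimal sketch, with illustrative signal names and a djb2 hash for brevity (real vendors collect far more signals and use sturdier hashing):

```javascript
// Simple djb2 string hash, kept to 32 bits; illustration only.
function djb2(str) {
  let h = 5381;
  for (const ch of str) {
    h = (h * 33 + ch.charCodeAt(0)) >>> 0;
  }
  return h.toString(16);
}

// Concatenate whatever signals are readable and hash the result.
// Each signal narrows the crowd; together they often pinpoint one browser.
function fingerprint(signals) {
  // Sort keys so the hash is stable regardless of collection order.
  const parts = Object.keys(signals)
    .sort()
    .map((k) => `${k}=${signals[k]}`);
  return djb2(parts.join("|"));
}

// Illustrative signal set; a real script would add canvas output,
// installed fonts, extension probe results, etc.
const id = fingerprint({
  userAgent: "Mozilla/5.0 ...",
  screen: "2560x1440",
  timezone: "America/New_York",
  language: "en-US",
});
```

This is why an extension scan is such an attractive input: a list of which extensions you have installed is a high-entropy signal that changes rarely.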
It's possible to write a headline that directs blames at both parties: "Major Browsers Fail to Block Websites that Invade Your Privacy"
The fact that the website is doing this is a bigger problem than the browser not preventing it. If someone breaks into a house, it's the burglar who is prosecuted, not the company that made the door.
If you scanned LinkedIn's private network, you'd be criminally charged. Why are they allowed to scan yours with impunity? And why is this being normalized?
The best solution is a layered defense: laws that prohibit this behavior by the website and browsers that protect you against bad actors who ignore the law.
> If you scanned LinkedIn's private network, you'd be criminally charged. Why are they allowed to scan yours with impunity? And why is this being normalized?
First, I think it’s a major issue that Chrome is allowing websites to check for installed extensions.
With that said, scanning LinkedIn’s private network is not analogous to what is going on here. As problematic as it is, they’re getting information isolated to the browser itself and are not crossing the boundary to the rest of the OS much less the rest of the internal network.
Problematic for privacy? Yes. Should be locked down? Yes. But also surprisingly similar to other APIs that provide information like screen resolution, installed fonts, etc. Calling those APIs is not illegal. I’m curious to know what the technical legal ramifications are of calling these extension APIs.
If a company leaks my sensitive data, I get some nice junk mail offering me some period of credit monitoring or whatever. So what are browsers doing to prevent this?
The issue should never be 'We want entities to have this data, but only used in some constrained and arbitrary manner whose definition we can't even agree on'; it should instead be 'This data shouldn't be made available to X'.
This is a Chrome thing. It’s a safe bet that if you use Google products you don’t care about privacy anyway. “Google product collects info about you: news at 11.”
Google cares deeply about privacy. Google defines privacy as them not giving your private data that they have collected to anyone else unless you ask them to.
Google cares deeply about privacy. Google defines privacy as them not giving your private data that they have collected to anyone who hasn't paid them for it or can compel them to give it up.
There's a fourth amendment case on the Supreme Court docket (Chatrie v. U.S.) about Google searching a massive amount of user data to find people in a location at a specific time, at police request. The case is about whether the police's warrant warranted such a wide scope of search (if general warrants are allowed).
Point being: Google will 100% give your info to the police, regardless of whether the police have the legal right to it or not, and regardless of whether you actually committed a crime or not.
Bonus points: the federal court that ruled on the case said that it likely violated the fourth amendment, but they allowed the police to admit the evidence anyway because of the "good faith" exception, which is a new one for me. Time to add it to the list of horribly abusable exceptions (qualified immunity, civil asset forfeiture, and eminent domain coming to mind).
The breaking point with me that caused me to de-google myself was finding out that Google was buying Mastercard records in order to cross-reference them with Android phone data. That shit is not okay.
So no compelling here. The police asked for it and Google gave it, either for free or in exchange for money. They didn't say "no" to the police, they didn't wait for a court order.
The bad guy here is Google. And the people that champion data collection by private companies because free market == good.
In that case, the main bad guy was the police, who didn't bother to do even the most basic investigating after "check Google's GPS records to see who was at the house", including "check Google's GPS records to see how long they were there", which would have shown them this was a drive-by. But yeah, Google is absolutely a villain.
Ah yes, I should have said I was describing the official line, not the behaviour. In all fairness the “can compel them to give it up” doesn’t seem to be optional but otherwise, yeah. Agreed.
This only works if the web page knows the random per-install id associated with an extension.
That can only happen if the extension itself leaks it to the web page, and if that happens, scanning isn't necessary since the extension has already revealed itself to the page. The id also doesn't tell you which extension it is unless, again, the extension leaks that to the page.
The attack on Chrome is far more useful for attackers as web pages can scan using the chrome store's extension ID instead.
1. Do a request to `chrome-extension://<extension_id>/<file>`. It's unclear to me why this is allowed.
2. Scan the DOM, look for nodes containing "chrome-extension://" within them (for instance because they link to an internal resource)
It's pretty obvious why the second one works, and that "feels alright" - if an extension modifies the DOM, then it's going to leave traces behind that the page might be able to pick up on.
The first one is super problematic to me though, as it means that even extensions that don't interact with the page at all can be detected. It's unclear to me whether an extension can protect itself against it.
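To make the first method concrete, here's a minimal sketch (not LinkedIn's actual code) of how a page could probe for extensions with static IDs; the extension ID and resource path below are made up:

```javascript
// Hypothetical probe for Chrome extensions with static IDs. A real scanner
// would iterate over thousands of known IDs from the Chrome Web Store.
const probeTargets = [
  // [extensionId, a file the extension exposes as a web-accessible resource]
  ["aaaabbbbccccddddeeeeffffgggghhhh", "icon.png"],
];

// chrome-extension:// URLs are predictable when the extension ID is static.
function probeUrl(extensionId, resource) {
  return `chrome-extension://${extensionId}/${resource}`;
}

async function detectExtensions(targets) {
  const found = [];
  for (const [id, resource] of targets) {
    try {
      // If the fetch succeeds, the extension is installed.
      await fetch(probeUrl(id, resource));
      found.push(id);
    } catch {
      // Network error: extension absent (or resource not web-accessible).
    }
  }
  return found;
}
```

In Firefox, and in MV3 extensions that opt into dynamic URLs, the ID portion is randomized per install or per session, which is exactly what defeats this kind of enumeration.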
> 1. Do a request to `chrome-extension://<extension_id>/<file>`. It's unclear to me why this is allowed.
Big +1 to that.
The charitable interpretation is that this behavior is simply an oversight by Google, a pretty massive one at that, which they have been slow to correct.
The less-charitable interpretation is that it has served Google's interests to maintain this (mis)feature of its browser. Likely, Google or its partners use techniques similar to what LinkedIn/Microsoft use.
This would be in the same vein as Google Chrome replacing ManifestV2 with ManifestV3, ostensibly for performance- and security-related purposes, when it just so happens that ManifestV3 limits the ability to block ads in Chrome… the major source of revenue for Google.
The more-fully-open-source Mozilla Firefox browser seems to have had no difficulty in recognizing the issues with static extension IDs and randomizing them since forever (https://harshityadav.in/posts/Linkedins-Fingerprinting), just as Firefox continues to support ManifestV2 and more effective ad-blocking, with no issues.
> This would be in the same vein as Google Chrome replacing ManifestV2 with ManifestV3, ostensibly for performance- and security-related purposes, when it just so happens that ManifestV3 limits the ability to block ads in Chrome… the major source of revenue for Google.
uBlock Origin Lite (compatible w/ ManifestV3) works quite well for me, I do not see any ads wherever I browse.
The mv3 problem was never about "does it work now". It was about "can it keep up". Ad blocking is a cat and mouse game, and the mouse is kneecapped now. You're being slow boiled.
Well said. I'm glad that ad blockers have managed to develop effective approaches under MV3, but it took a tremendous amount of engineering effort that was only necessary because Google was trying to impose these very large costs on them.
These are web-accessible resources, e.g. images and stylesheets you can reference in generated HTML. Since content scripts operate directly on the same DOM, it’s unclear how you can tell whether an <img> or <link> came from the modification of a content script or from a first-party script. You might argue it’s possible to block these in fetch(), but then you also need to consider leaks in, say, Image’s load event.
This behavior has been improved in MV3, with an option to make the extension URL dynamic to defeat detection:
> Note: In Chrome in Manifest V2, an extension's ID is fixed. When a resource is listed in web_accessible_resources, it is accessible as chrome-extension://<your-extension-id>/<path/to/resource>. In Manifest V3, Chrome can use a dynamic URL by setting use_dynamic_url to true.
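For reference, opting into that looks something like the following in an MV3 manifest.json (the resource path and match pattern here are illustrative):

```json
{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0",
  "web_accessible_resources": [
    {
      "resources": ["images/icon.png"],
      "matches": ["<all_urls>"],
      "use_dynamic_url": true
    }
  ]
}
```

With `use_dynamic_url` set, the resource is served from a per-session dynamic URL rather than the predictable `chrome-extension://<extension-id>/` one, so a page can't probe for it by static ID.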
For widget style services:
If you need the functionality of an extension to operate, then you can check if it's already installed so you don't ask to install it again.
This is better than forcing the extension to announce its presence on every website.
Agreed, but also, permission prompts are way overused and often meaningless to anyone at all, even fellow software engineers. “This program [program.exe] wants to do stuff, yes/no?” How should I know what’s safe to say yes to?
I think Android’s ‘permissions’ early on (maybe it’s improved?) and Microsoft’s blanket ‘this program wants to do things’ authorisation pop up have set a standard here that we shouldn’t still be following.
Generally the whole thing needs to be flipped upside down. Extensions are the easy one: there's no reason a random website should be able to list your installed extensions, zero.
For other capabilities, like the Bluetooth API, rather than querying the browser, assume that the browser can do it and then have the browser inform the user that the site is attempting to use an unsupported API.
> Of course Google is going to back door their browser.
Aside from the fact that other browsers exist, this makes no sense because Google would stand to gain more by being the only entity that can surveil the user this way, vs. allowing others to collect data on the user without having to go through Google's services (and pay them).
To broaden my point, I think we’d find that many websites we use are doing this.
My point isn’t that this is acceptable or that we shouldn’t push back against it. We should.
My point is that this doesn’t sound particularly surprising or unique to LinkedIn, and that the framing of the article seems a bit misleading as a result.
I'd love it if LinkedIn got successfully sued for millions and it resulted in similar lawsuits against every other website that did this sort of thing.
> To broaden my point, I think we’d find that many websites we use are doing this.
Your point of "I think we’d find that many websites we use are doing this" doesn't make LinkedIn's behavior ok!
By your logic, if our privacy rights are invaded, which is illegal in most jurisdictions, does it then become ok because many companies do illegal things?
Absolutely not. At no point am I saying this is ok.
I’m saying that the framing of the article makes this sound like LinkedIn is the Big Bad when the reality is far worse - they’re just one in a sea of entities doing this kind of thing.
If anything, the article undersells the scale of the issue.
> What business is it of LinkedIn's what browser extensions I have installed?
The list of extensions they scan for has been extracted from the code. It was all extensions related to spamming and scraping LinkedIn last time this was posted: Extensions to scrape your LinkedIn session and extract contact info for lead lists, extensions to generate AI message spam.
This doesn’t fit the description of scraping by any normal definition. It’s a classic feature probe structure, where the features happen to be scraping extensions.
I think it’s kind of funny that HN has gone so reactionary against tech companies that the comments here have turned against the anti-spam measures instituted on a website, measures that will never trigger on any of their PCs, because HN users aren’t installing LinkedIn scrape-and-spam extensions.
HackerNews users used to be the type that would do the scraping, so they could Hack the data into whatever format or integration they desired.
It's unfortunate to see folks here who don't support that – interoperability is at the heart of the Hacker Ethic. LinkedIn (along with any other big tech companies locking down and crippling their APIs) is wrong to even try to block it.
Is it an issue of the resources scrapers consume? No: Even ordinary users trying to get API access on a registered persistent account linked to their name are stymied in accessing their own data. LinkedIn simply doesn't want you to access your own data via API, or in any manner that isn't blessed by them. That ain't right.
Accessing other users' LinkedIn data via the API requires their OAuth consent, as it should be. But you are welcome to access your own data via the API.
> The list of extensions they scan for has been extracted from the code. It was all extensions related to spamming and scraping LinkedIn
Not according to the website which says:
The scan doesn’t just look for LinkedIn-related tools. It identifies whether you use an Islamic content filter (PordaAI — “Blur Haram objects, real-time AI for Islamic values”), whether you’ve installed an anti-Zionist political tagger (Anti-Zionist Tag), or a tool designed for neurodivergent users (simplify). Under GDPR Article 9, processing data that reveals religious beliefs, political opinions, or health conditions requires explicit consent. LinkedIn obtains none.
It also scans for every major competitor to Microsoft’s own products — Salesforce, HubSpot, Pipedrive — building company-level intelligence on which businesses use which software. Because LinkedIn knows your name, employer, and role, each scan aggregates into a corporate technology profile assembled without anyone’s knowledge.
Sounds a little like "OpenAI must protect itself against copyright infringement by any means necessary, including copyright infringement of everyone else"
If I had to guess, LinkedIn would be primarily searching for extensions that violate their terms of service (e.g. something that could be used to scrape data). They put a lot of effort into circumventing automated data collection. I could be wrong.
Most sane people don't use linkedin. Only corporate cocksleeves use it and they won't push back against abuse and debasement because they get off to that shit.
This has been covered several times including reverse engineering of the code. The list of extensions they check for doesn’t include common extensions like ad blockers. It’s exclusively full of LinkedIn spamming and scraping type of extensions.
They also logically don’t need to fingerprint these users because those people are literally logging in to an account with their credentials.
By all appearances they’re just trying to detect people who are using spam automation and scraping extensions, which honestly I’m not too upset about.
If you never install a LinkedIn scraper or post generator extension you wouldn’t hit any of the extensions in the list they check for, last time I looked.
It apparently scans for something like "PQC Checker", an extension for checking whether a TLS connection is PQC-enabled. How is that a spam extension? (And that's just a random one I saw.)
Probably compromised extensions or misleading extensions.
It’s common for malware extensions to disguise themselves as something simple and useful to try to trick a large audience into installing them.
That’s why the list includes things like an “Islamic content filter” and “anti-Zionist tagger” as well as “neurodivergent” tools. They look for trending topics and repackage the scraper with a new name. Most people only install extensions but never remove them if they don’t work.
Well, if they have evidence, why don't they report it? Why are these extensions on the store? I'm sure LinkedIn has enough motivation to report it directly to Google.
Also, having a PQC-enabled extension doesn't seem like a good "large user base capture" tactic.
The source code is, as usual, obfuscated React, but that doesn't mean it's malicious...
EDIT: I debugged the extension quickly and it doesn't seem to do anything malicious. It only sends an https://pqc-extension.vercel.app/?hostname=[domain] request to a backend it has permissions for, and it doesn't seem to exfiltrate anything else. Something might get triggered later, but it has very limited permissions anyway, so it doesn't seem to be a malicious extension. (But I'm no expert.)
> well if they have evidence why they dont report it? why are these extensions on the store?
We had a browser extension for our product. A couple times a month someone would clone it, add some data scraping or other malware to it, and re-upload it with the same or similar name.
We set up automated searches to find them. After reporting, it could take weeks to get them removed, sometimes longer. That’s for extensions with clear copyright problems!
The extensions may not be breaking any rules of the extension stores if they’re just scraping a website. Many of the extensions on the list are literally designed to do that as their headline feature.
If you think sending data from a page to a server would disqualify an extension from an extension store then think again. Many of the plugins listed even have semi-plausible reasons for uploading the scraped data, like the “anti-Zionist tagger” extension on the list or the ones that claim to blur things that are anti-Islam. Manufacturing a reason to send data to their servers gives them cover.
I am aware that Google will take a looong time to act. That is why I mentioned that it is LinkedIn (Microsoft) or its contracted fingerprinting/"monitoring" partner who may have more direct ways to report this, if they actually investigate malicious extensions.
But that doesn't really matter. For the sake of the argument, assume the extensions are not malicious (as evidenced e.g. by the PQC one with ?16 users?): does that change the situation?
They're doing a lot more than scanning for "compromised or misleading extensions"; there are a lot of scummy/spammy extensions on the list, but among the extensions included in the list of those they probe are also extensions such as:
- "Highlight multiple keywords in a web page", an extension that re-implements the equivalent Firefox's "Highlight All" findbar button in Chrome—and happens to mention LinkedIn in the description when describing one use case <https://chromewebstore.google.com/detail/ngkkfkfmnclhjlaofbh...>
- "Delayed gratification Research", a study/focus extension created "for OS semester at CODE University of Applied Sciences" to "Temporarily Block distracting websites"—with all of 4 active users <https://chromewebstore.google.com/detail/mmibdgeegkhehbbadeb...>
It's pretty clear that LinkedIn, like many website operators, doesn't think of itself as a source of information that it will send to your UA upon request. It's not even just that they want total visibility into your habits like the worst of the advertising/tracking companies. What they want is as much control as they can manage to wrangle over the experience of what it's like when you're "on" their site (i.e. looking at something on your computer that came from their site), not least of all so they can upsell their userbase on premium features. LinkedIn doesn't care so much that people are inundating other users/orgs that might not appreciate being treated as a "lead", so much as LinkedIn cares that the people doing the inundating are doing it with tools where LinkedIn wasn't able to get a cut.
It is likely in response to scraping. LinkedIn is heavily scraped by scammers who do the BEC scams, so LinkedIn is trying to find ways to link together banned accounts, to handle their ban evasion.
I run a site which attracts a lot of unsavoury people who need to be banned from our services, and tracking them to reban them when they come back is a big part of what makes our product better than others in the industry. I do not care at all about actually tracking good users, and I am not reselling this data, or anything malicious, it's entire purpose is literally to make the website more enjoyable for the good users.
> LinkedIn is heavily scraped by scammers who do the BEC scams.
It's also heavily scraped by businesses for lead generation for sales and recruiting. Either before their API became available or to not pay them or to get around the restrictions of their API.
Font measurement (4): fontFamily, fontSize, getBoundingClientRect, innerText. Creates a hidden div, sets a font, measures rendered text dimensions, removes the element.
Storage (5): storage, quota, estimate, setItem, usage. Also writes the fingerprint to localStorage under key 6f376b6560133c2c for persistence across page loads.
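The font-measurement technique described above works roughly like the following sketch (illustrative, not the actual code from the page; the measuring callback is separated out so the comparison logic is clear):

```javascript
// Font-detection sketch: render a test string in "candidate, fallback" and in
// the fallback alone; a size difference means the candidate font is installed.
function fontIsInstalled(candidate, measure) {
  const baseline = measure("monospace");
  const probe = measure(`'${candidate}', monospace`);
  return probe.width !== baseline.width || probe.height !== baseline.height;
}

// Browser-side measurement, matching the described technique: create a hidden
// div, set a font, read getBoundingClientRect, remove the element.
function makeDomMeasure(doc) {
  return (fontFamily) => {
    const div = doc.createElement("div");
    div.style.cssText =
      "position:absolute;visibility:hidden;font-size:72px;" +
      `font-family:${fontFamily}`;
    div.innerText = "mmmMMMwwwlli0O&1";
    doc.body.appendChild(div);
    const rect = div.getBoundingClientRect();
    div.remove();
    return { width: rect.width, height: rect.height };
  };
}
```

Running `fontIsInstalled(name, makeDomMeasure(document))` over a list of font names yields a bitmap of installed fonts, one of the higher-entropy fingerprinting signals.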
Scanning for 6000 extensions is anti-competitive, surveillant and immoral.
> I’m not deeply familiar with what APIs are available for detecting extensions, but the fact that it scans for specific extensions sounds more like a product of an API limitation (i.e. no available getAllExtensions() or somesuch) vs. something inherently sinister
This seems like a really weird argument to make. The fact that the platform doesn't provide a privacy-violating API is not an extenuating circumstance. LinkedIn needed to work around this limitation, so they knew they were doing something sketchy.
For the record, I don't think they're being evil here, but the explanation is different: they don't seem to be trying to fingerprint users so much as they're trying to detect specific "evil" extensions that do things LinkedIn doesn't want them to do on linkedin.com. I guess that's their prerogative (and it's the prerogative of browsers to take that away).
Judging from the fact that 99% of the list seem like data-mining scam apps or spam tools, I suspect that's the answer in these cases too.
If LinkedIn really wanted to profile your religious beliefs, they would presumably go after the most popular religion-related extensions, not some "real-time AI for Islamic values" thing with 6k users.
Those profiling tools don't really care which features are going to be used for predictions. It's just machine learning, and it's indiscriminate. So if you have an extension that correlates with you being Muslim, it will be used for whatever ML predictions they give to other companies, and the worst case will be another "oh we didn't do this intentionally".
Of course, that's not the first time this ever happened in human history, so even if it's not "something inherently sinister", it's just "criminal negligence".
The page isn't allowed to know what extensions you have; instead, LinkedIn looks for various pieces of evidence that extensions are installed. For example, if an extension creates a specific HTML element, LinkedIn can look for evidence of that element being there.
Since the extensions are running on the same page as LinkedIn (some of them explicitly modify the LinkedIn website), it's impossible to sandbox them so that LinkedIn can't see evidence of them. And yes, this is how a site knows you have an ad blocker installed.
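The ad-blocker case usually works with a "bait" element: the page inserts a node with ad-like class names and checks whether a cosmetic filter hid or removed it. A rough sketch (the class names are typical filter-list bait, not any particular site's code):

```javascript
// Returns true if something (likely an ad blocker's cosmetic filtering) hid
// or removed the bait element. Takes the document as a parameter.
function adBlockBaitHidden(doc) {
  const bait = doc.createElement("div");
  bait.className = "adsbox ad-banner textads banner-ads";
  bait.style.cssText = "position:absolute;height:10px;width:10px;top:-999px;";
  doc.body.appendChild(bait);
  // Cosmetic filters typically apply display:none or remove the node,
  // collapsing its layout box to zero height.
  const blocked = bait.offsetHeight === 0 || !bait.isConnected;
  bait.remove();
  return blocked;
}
```

This only detects extensions that leave visible traces in the page, which is why the `web_accessible_resources` probe above is so much more invasive: it can detect extensions that never touch the DOM at all.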
However, there are also proof-of-concept attacks that bypass this by using timing differences when fetching those resources.
I help maintain uBO's lists and I've seen one real-world case doing this. It's a trash shortener site, and they use the `web_accessible_resources` method as one of their anti-adblock techniques. Since it's a trash site, I didn't look into it much after that.
> The scan probes for thousands of specific extensions by ID, collects the results
Why exactly does Chrome even allow this in the first place!? This is the most surprising takeaway for me here, given browser vendors' focus on hardening against fingerprinting.
This only happens if the extension puts its `moz-extension://` links into the DOM. That's different from the Chrome case, where extensions can be detected regardless of whether they're activated on that site or not.
As I understand it, an extension could also leak its links via its own backend, e.g. to advertisers, who could then detect it even though no user-observable DOM modification is happening.
Much better than static global IDs, but still not ideal.
Yeah, anything happening in the backend depends totally on the extension. Unless I really need something, I rarely use extensions that are closed-source, or open-source but with data-sending features.
For what it's worth - and I'm not saying that LinkedIn is doing this for the right reasons - I can imagine a frontend QA team wanting to do this to understand how prominent certain extensions are for users of various parts of their product, correlating those extensions against frontend bug reports, and using that to guide QA procedures with real-world extension sets.
When you're literally the company that invented Kafka for your clickstreams, "everything looks like a nail."
(More likely, though, this is an anti-scraping initiative, since headless browsers are unlikely to randomize their use of extensions, and they can use this to identify potential scrapers.)
> the fact that it scans for specific extensions sounds more like a product of an API limitation (i.e. no available getAllExtensions() or somesuch) vs. something inherently sinister (e.g. “they’re checking to see if you’re a Muslim”).
Your computer is your private domain. Your house is your private domain. You don't make a "getAllKeysOnPorch()" API, and certainly don't make "getAllBankAccounts()" API. And if you do, you certainly don't make it available to anyone who asks.
> I’m certainly not endorsing it, do think it’s pretty problematic, and I’m glad it’s getting some visibility. But I do take some issue with the alarmist framing of what’s going on.
Speaking as someone who shares the same lack of surprise, perhaps some alarm is warranted. Just because it’s ubiquitous doesn’t mean it’s ok. This feels very much like the frog in boiling water to me.
Why do you think the alarmist framing is unwarranted?
To me, it seems like the authors pulled the fire alarm for a single building when in reality there’s a tornado bearing down.
And by doing so, everyone is scrambling about a fire instead of the response a tornado siren would cause.
They’re both dangerous and worthy of an immediate reaction, but the confusion and misdirection this causes seems deeply problematic.
When people realize the fire wasn’t real, they start to question the validity of the alarm. The tornado is still out there.
I realize this analogy is a bit stretched.
As someone who has spent quite a lot of time steeped in security/privacy research, the stuff described in the article has been happening pervasively across the industry.
People absolutely should be alarmed. Many of us have been alarmed for quite some time. Raising the alarm by saying “LinkedIn is searching your computer” isn’t it.
I think this is a great analogy. I read quite a bit of the site and it's wildly blown out of proportion and severely lacking in context.
How many phone apps do you think are trying to detect what else is installed on your phone? I was part of an acquisition of a company with a very large mobile user base and our new parent was shocked we weren't trying to passively collect device information like this. They for sure were.
And on the flip side, as others have done well to point out, there are a LOT of legitimate reasons to fingerprint users for anti-fraud/abuse and I am 100% convinced that we're all better off for this.
Maybe that's all this story is about, maybe not, but this article leaves out an incredible amount of complexity.
Just because someone lets the electrician (LinkedIn) into their home (browser) doesn't mean they can do whatever the hell they want that isn't expressly prohibited. If the electrician wants to rifle through my desk drawers, they should ask for permission, and I will politely tell them to leave.
I worked for a company that sold B2B contact data, and they had (maybe still have) a LinkedIn extension. It basically enriched the LinkedIn profile. I wonder if LinkedIn is trying to block, or heavily target in some way, these types of users to push folks towards their Sales Navigator.
Your post sounds like "it sounds bad, but it's no different from what others do, so it's not that bad."
I would put it more like: it sounds bad, and it's no different from what others do, so they're all that bad.
The fact that they're working around an API limitation doesn't make this better, it just proves that they're up to no good. The whole reason there isn't an API for this is to prevent exactly this sort of enumeration.
It's clear that companies will do as much bad stuff as they can to make money. The fact that you can do this to work around extension enumeration limits should be treated as a security bug in Chrome, and fixed. And, while it doesn't really make a difference, LinkedIn should be considered to be exploiting a security vulnerability with this code.
The bigger problem I see here is browser security and Javascript as a whole. Browsers should not be allowed to extract and send such vast amounts of information in the first place, especially without the user's consent. At most, they should return a few broad things such as browser type (major version), language perhaps, and device type (mobile/desktop). That's it. Other things, such as exact resolutions, time zones, and other hardware identifiers make it trivially easy to track users across the Internet. Now that it's too late to revise Web standards, browsers should default to return spoofed values for all the rest.
I get the point you're making, but to be clear, "they’re checking to see if you’re a Muslim" vs "they’re checking to see if your fingerprint matches that of known Muslims in our ever-expanding database" are not too far off.
I'm confused, you call this "misleading" then quote the claim, but say it's "what [you'd] expect to find in modern browser fingerprinting code".
So what is it? Misleading, or exactly what you expected to find? It cannot be both.
It sounds more like you object to the negative framing of Microsoft hoovering up as much data as possible for profit, even though this is objectively a crime in the jurisdictions they are being sued in.
I've been avoiding Chrome-based browsers for many years now but have only recently become aware of how catastrophically low the Firefox market share is. I'm kind of shocked that more people aren't choosing to avoid Chrome.
> It also seems like what I’d expect to find in modern browser fingerprinting code.
Time to figure out if I can make Firefox pretend to be Chrome, and return random browser extensions every time I visit any website to screw up browser fingerprinting...
To flip it around, if one of those chrome extensions saved parts of the contents of the page it was on into a database, and I had the chrome extension navigate around on LinkedIn for me, collecting information, LinkedIn would sue me for CFAA violations because I'm scraping them for email addresses and phone numbers. This is not theoretical either, as LinkedIn has sued people in the past for scraping.
> sounds more like a product of an API limitation (i.e. no available getAllExtensions() or somesuch) vs. something inherently sinister (e.g. “they’re checking to see if you’re a Muslim”)
Then why search for PordaAI or Deen Shield? Or more specifically, since getAllExtensions() would return them, why would they be on the "scan list", instead of just ignored?
Well, great, there's no available 'getAllFiles()' or such either, because otherwise they'd be scanning your files for "fingerprinting" as well.
> alarmist framing
Well, they are literally searching your computer for applications/extensions that you have installed? (And to an extent you can infer some of the desktop applications you have based on that too.)
There are clear rules around what you can and can't do to fingerprint users. Whether it's done overtly, covertly, obscurely, or indirectly, through direct or indirect or correlated metadata, it ends up with the same outcome.
My understanding is the rules and laws are there to prevent the outcome, by any means, if it's happening.
> But I do take some issue with the alarmist framing of what’s going on.
> I’ve come to mostly expect this behavior from most websites that run advertising code
We should be alarmed that websites we go to are fingerprinting us and tracking our behavior. This is problematic, full stop. The fact that most websites are doing this doesn't change that.
This blows my mind. What good reason is there for giving javascript such permissions by default? This should at the minimum trigger an explicit permission request from the user.
My guess would be that the internet is run by developers. Apps want this data, so JavaScript provides it, to make decisions about window sizing and user-agent capabilities. Authorization would probably only occur if JavaScript were gated by non-developers, just as SSDP opens and forwards ports on routers without user intervention or knowledge rather than through an API that prompts the user. Just a guess.
> It also seems like what I’d expect to find in modern browser fingerprinting code.
Exactly what I think it is. It's all for tracking and ultimately for advertisement. LinkedIn can figure out exactly who you are and then share that data with ad companies to better target you.
Yes. I was expecting that LinkedIn was connecting to extensions that were using their enhanced privileges to scan your computer, per the "LinkedIn Is Illegally Searching Your Computer" headline.
> But I do take some issue with the alarmist framing of what's going on.
On the contrary, your framing is quite defeatist IMO. The fact that stores get robbed frequently does not mean we should just normalize that and accept it as a fact of life.
It's important to note that this isn't fixed by ad blockers. To avoid this kind of fingerprinting, you need to disable JavaScript or use a browser like Firefox which randomizes extension UUIDs.
How does this scan happen? AFAIK there is no API for a webpage to scan for extensions. The most a page could do is try to figure out indirectly whether an extension exists, if that extension leaks info into the page.
The next step for a forensic investigator is to find out how many of those extensions are actually from a partner or fully owned subsidiary of LinkedIn... When you see a cockroach...
How? What exactly would a reader be "mislead" to believe
The part about "inherently sinister" seems to be a thought from the mind of an HN commenter, not the authors of the submitted web page. The latter only describe LinkedIn's actions as illegal, not "sinister". The laws cited by the authors do not appear to consider any "state of mind", e.g., "sinister", or intent as relevant.
"But I do take some issue with the alarmist framing of what's going on."
AFAICT, the submitted web page does not suggest that anything LinkedIn does is "dangerous", i.e., cause for "alarm". What it suggests is that LinkedIn's actions _violate European privacy laws_. The authors claim LinkedIn's actions present an opportunity to enforce these laws, i.e., "take action"
This could be easily inferred from the depth, breadth, and interconnectedness of data in the website.
By downplaying it, it's allowing it to exist and do the very thing.
The issue here is that this stuff is likely working despite ad blockers.
Fingerprinting technology can do a lot more than just what can be learned from ads.
From the site:
"The scan doesn’t just look for LinkedIn-related tools. It identifies whether you use an Islamic content filter (PordaAI — “Blur Haram objects, real-time AI for Islamic values”), whether you’ve installed an anti-Zionist political tagger (Anti-Zionist Tag), or a tool designed for neurodivergent users (simplify). Under GDPR Article 9, processing data that reveals religious beliefs, political opinions, or health conditions requires explicit consent. LinkedIn obtains none." https://browsergate.eu/extensions/
I do think your comment is bringing up valid questions/points.
One thing that worries me about the current state of things is scale and speed. Modern technology, markets, communication systems and supply chains make it possible for things to go catastrophically wrong very quickly for a massive number of people.
I think this is somewhat unique to the current era.
I still don’t buy into the belief that we’re absolutely witnessing a collapse. Studying history shows that things have been far worse (politically, socially, geopolitically, etc.) and we’ve come out the other side numerous times.
So I think “ups and downs” is probably the right way to look at this. I do worry about the impact of modern technology on this equation.
It’s all cycles within cycles, hopefully. If you study dynamic systems then you realize it’s cycles of good and bad, and within those cycles more smaller cycles of ephemeral good and bad.
The way I read that question was: where can other people see this information about me once I’ve published the app? i.e. say I just published an app, where would you navigate to find this info?
If you sign an app, your legal name gets embedded into the .app bundle. You can read it back in the terminal, e.g. `codesign -d -vv /path/to/App.app` prints an `Authority=` line with the signer's name, or if you publish to the App Store it will be shown in the UI, along with the other details.
I've seen individual developer's names in the App Store, but the parent comment is also claiming that Address and Phone number is published. I've done a bit of digging, and can't seem to confirm this.
I think “king” may be overstating it somewhat. While it’s true that there are some big titles with anticheat that won’t work on Linux, there are quite a few major titles that work fine, and in practice I’ve been able to use Linux as a gaming system for a while now without issue. I primarily play Overwatch, The Finals, ARC Raiders, Rocket League and Age of Empires.
I think the success of the Steam Deck has really helped the situation, and the titles that are broken because of anticheat are not important enough to me to keep a Windows system around.
I personally found out about my aphantasia when reading an article in Scientific American titled “When the Mind’s Eye is Blind”. A whole lifetime of experiences clicked into place.
So it’s not surprising that there would be an outpouring of new discoveries after more people learn of the concept.
Learning about aphantasia is how I learned people experience anything other than nothing visually in their mind’s eye.
Good question, I couldn't quite put it in words, but it's the popularity that bothers me. It could be popular because everybody's having great insights, but it could also be popular because everybody's greatly persuaded by a fashionable media buzz. On the internet, discussions like this always turn into a love-in where everyone reports anecdotal experiences and gets treated with esteem for being part of the community of believers. Back in the 90s I was briefly on a mailing list for people who had done the Myers–Briggs Type Indicator test so we could all report how INTP we were (that's the sensitive nonjudgmental intellectual one). It reminds me of that.
It's popular because most of us had never heard about it until a few years ago, and for a lot of us a whole lifetime of experiences suddenly made sense.
I always wondered why people would talk metaphorically (because I assumed they must do, because clearly you don't see things that aren't there other than while dreaming... or so I thought) about images of people they knew fading, or forgetting what they looked like.
And then suddenly I was told it wasn't metaphorical.
And then a few years later I had my one experience of seeing vivid imagery outside of a dream.
It also keeps coming up because people get all weirded out at the thought that this is a thing, and start insisting the distinction isn't real.
But having experienced both: imagining things without visuals and imagining them with visuals are nothing alike.
And I knew that before the experience I mentioned too, because images while dreaming is also wildly different from how I imagine things while awake.
As someone with aphantasia, all I ever get from people who can visualize is self-report, anecdote, self-assessment, etc.
By definition, this will always be the case until we have a deep enough understanding of the brain to diagnostically assess this.
What I can assure you is that I cannot see/imagine with my mind, and that many other aspects of my life make sense given this limitation, e.g. when people describe their experience of reading books and mental world building, it’s entirely foreign to me. Or when my brother describes his ability to create mind palaces, manipulate visual concepts mentally as if he were using CAD software, etc. it seems preposterous.
But I have to take his word that it’s something he can actually do. Such is the nature of this subject.
Until I discovered the concept of aphantasia in my early 30s, I genuinely thought that people’s descriptions of “visualization” were just a figure of speech. It was mind blowing to learn that people actually see anything more than nothing at all, and a lifetime of experiences and confusion about what other people described about theirs suddenly made sense.
I have similar feelings about those who claim to have an internal monologue or voice etc. It's all so alien to me. Outside of dreams or hypnagogia, my "self" and internal experience is non-verbal, non-visual, and mostly lacking any other sensory qualia.
If "me" is rooted in any perceptual qualia, I think and experience a vague mixture of a spatial awareness, proprioception, topology, and emotion. I can barely summon sound memories like music, and this could include lyrics. This recall is very faintly rooted in auditory qualia. Like the ghost of an echo down a distance corridor. Moreso, I can "feel" such music memory as a hint of proprioception, i.e. the after-thump of bass in my body or the after-tingle of a cymbal in my ear. But it utterly lacks the presence and richness of real listening.
I can think about words and phrases I've either heard or read, or try to arrange some words to write or speak later. But they're fleeting concepts, neither visual nor auditory in quality. They're not like the sound or music memory above. They're also not visuals of typography. In fact, I've more than once had words in my lexicon that I could neither pronounce nor reliably spell. I could readily match them to parsed words when reading, but would be unable to express them.
Finally, I have a relative with schizophrenia. I've witnessed how she behaves when hallucinating and/or having delusions. She often seems to experience her thoughts as if being talked to over her shoulder, or can manifest a fear into seeing dangerous threats. Her experience seems a kind of polar opposite to mine.
I wonder how it is to be somewhere in the middle of this range. It must be different from hers, to be useful but not schizoid. And it also seems like it must be a lot more vivid and accessible than my usual experience.
I’m unfamiliar with German basic law, but considering the lawlessness we’re seeing play out in the US right now, I’m curious how/why modern constitutions are less vulnerable?
By this I mean: it’s not as if the things we see playing out are lawful. Is there a structural difference that somehow prevents the same kind of lawlessness?
Put another way, what stops a movement that decides to ignore Germany’s constitution from ignoring it should they somehow gain power?
> For starters, Germany does not give a single person the right to be king with decrees and military leadership.
Separation between civilian leaders and military leaders is a big one, yeah. When the same person controls both the military directly and the executive branch of the civilian government directly you don't have any way to punish him without his subordinates overthrowing him since he controls all the power.
I think a middle ground version of this is possible, e.g. instead of letting your battery die, reset the phone to defaults and don’t install anything with the exception of critical communication apps.
Run the rest of the experiment as described for other categories of use.
When the agency enforcing those labor laws is also blatantly violating the law while carrying out other highly publicized enforcement actions, they will be scrutinized for everything they do, including actions that were likely legal/necessary. That's part of the problem with the government breaking the law - legitimate actions are no longer seen as legitimate, because they have undermined themselves in the public eye.
I also don't think people are "going to bat for a company abusing labor laws" so much as they are highly suspicious of these enforcement actions given the complete lawlessness displayed elsewhere and imagine the possibility that there were more diplomatic solutions that still address the problem appropriately.
I don’t think this framing quite captures what’s going on.
The AI space is full of BS and grift, which makes reputation and the resulting trust built on that reputation important. I think the popularity of certain authors has as much to do with trust as anything else.
If I see one of Simon’s posts, I know there’s a good chance it’s more signal than noise, and I know how to contextualize what he’s saying based on his past work. This is far more difficult with a random “better” article from someone I don’t know.
People tend to post what they follow, and I don’t think it’s lazy to follow the known voices in the field who have proven not to be grifting hype people.
I do think this has some potential negatives, i.e. sure, there might be “much better” content that doesn’t get highlighted. But if the person writing that better content keeps doing so consistently, chances are they’ll eventually find their audience, and maybe it’ll make its way here.
You're not negating anything they've said, but giving some insight into why that might be the case. However, the cult of personality and brand still exists and as a result heavily distorts what can appear here.
Saying that someone ought to write better consistently for their work to "make its way here" leans completely into the cult of personality.
I think following people would be better served through personal RSS feeds, and letting content rise based on its merit ought to be an HN goal. How that can be achieved, I don't know. What I am saying is that the potential negatives are far more understated than they ought to be.
I think you’re mistaking my comment for an endorsement when it was primarily attempting to reframe and describe the dynamic.
> Saying that someone ought to write better
I did not say someone ought to write better. I described what I believed the dynamic is.
> I think following people would be better served through personal RSS feeds
My point was that this is exactly what people are doing, and that people tend to post content here from the people they follow.
> letting content rise based on its merit ought to be an HN goal
My point was that merit is earned, and people tend to attach weight to certain voices who have already earned it.
Don’t get me wrong. I’m not saying there are no downsides, and I said as much in the original comment.
HN regularly upvotes obscure content from people who are certainly not the center of a cult of personality. I was attempting to explain why I think this is more prevalent with AI and why I think that’s understandable in a landscape filled with slop.