FreeSewing is an awesome project by Joost De Cock! You should totally check it out if you're into sewing. And if you're into digital sovereignty you should totally check out statichost.eu (disclaimer: I'm the founder :)
TL;DR takeaway for HN techies: when executing resource-intensive workloads on Node.js, pay attention to its max heap size. It can be increased with the `--max-old-space-size` option, e.g. via the env var `NODE_OPTIONS="--max-old-space-size=16384"`.
Author here. What kind of security negligence are you referring to? What would be a specific attack vector that I left open?
Regarding the PSL - and I can't believe I'm writing this again: you cannot get on there before your service is big enough and "the request authentically merits such widespread inclusion"[1]. So it's kind of a chicken and egg situation.
Regarding the best practice of hosting user content on a separate domain: this has basically two implications:
1. The cookie scope of my own assets (e.g. the dashboard), which one should limit in any case and which I of course do. So this is not an issue.
2. Blacklisting, which is what all of this has been about. I did pay the price here. This has nothing to do with security, though.
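To make point 1 concrete, here is a minimal sketch of host-only cookie scoping (the function and cookie values are hypothetical, not statichost.eu's actual code): omitting the `Domain` attribute keeps a cookie on the exact host that set it, and the `__Host-` prefix makes browsers enforce that.

```javascript
// Minimal sketch: issue a session cookie that only the dashboard host
// ever receives. No Domain attribute => host-only cookie; the __Host-
// prefix additionally requires Secure and Path=/ and forbids Domain.
function dashboardSessionCookie(sessionId) {
  return `__Host-session=${sessionId}; Secure; HttpOnly; Path=/; SameSite=Lax`;
}

// Example header value (sessionId is a placeholder):
console.log("Set-Cookie: " + dashboardSessionCookie("abc123"));
```

With this shape, no user subdomain can ever receive the dashboard session cookie, regardless of PSL status.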
I'm sorry to be so frank, but you don't know anything about me or my security practices and your claim of negligence is extremely unfounded.
> What kind of security negligence are you referring to?
I am not talking about "security negligence". I am talking about "negligence". The negligence was to not follow standard best practices known for over 20 years which led to disruption in your services.
Eric, I think it appropriate to mention this, and I'd like to point out the lack of any real documentation (at a professional level) on the PSL from the professional working groups touching on these things (i.e. M3AAWG).
There are only two blog posts on M3AAWG, from 2023, by which point it had been used silently (apparently for years) and was calling for support. I would think that if it were an industry-recognized initiative, it would have the appropriate documents/whitepapers published in the industry working group tasked with these things. These people are supposed to be engineers, after all. AFAIK this hasn't happened, aside from a brief after-action with requests for support, which is highly problematic.
When there is no professional outreach (via a working group or trade group), it's really hard to say that this isn't just gross negligence on Google's part. M3AAWG has hundreds if not thousands of whitepapers, each hundreds of pages. A single blog post or two that mention it insufficiently won't rationally negate the claim of gross negligence.
Why do I mention gross negligence? When coupled with loss, it is sufficient in many cases to support a finding of 'malice' without specific intent (i.e. general intent), especially when such an entity has little or no credibility but is overshadowed by power/authority that is undeserved. Deceitful people who reasonably should know the consequences will go bad often purposefully structure towards general intent to avoid legal complications, and the legal system has evolved accordingly. I am not a lawyer; this paraphrase about gross negligence/general intent/malice did come from a lawyer, but it's not meant or intended for use as legal advice in paraphrase form, so the standard IANAL disclaimer applies. If that is needed, consult a qualified professional for a specific distinction on this.
The company is more than technically capable of narrowly defining blacklists and providing due process and appropriate notice requirements.
The situation raises questions of tortious interference, and whether the PSL is being used as an anti-competitive moat to keep competitors out of the market by arbitrarily imposing additional costs on them, costs that are asymmetric to what such companies bear for their own competing services (as oligopoly/monopoly).
Many commenters are implying that there is a security issue here, and that I'm putting everyone in danger. That is quite frankly a pretty absurd claim to just casually make. I'm of course very curious to hear more details on what the security risk here actually would be?
Do you think I'm reading/writing sensitive data to/from subdomain-wide cookies?
Also, yes, the PSL is a great tool to mitigate (in practice eliminate) the problem of cross-domain cookies between mutually untrusting parties. But getting on that list is non-trivial and they (voluntary maintainers) even explicitly state that you can forget getting on there before your service is big enough.
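For context on what inclusion actually buys you: browsers ship a copy of the PSL and refuse to let cookies be scoped to any entry on it. A toy sketch of that check (the suffix set here is a tiny hardcoded stand-in, not real PSL data):

```javascript
// Toy stand-in for the Public Suffix List: a browser rejects a cookie
// whose Domain attribute is itself a public suffix, so sites under
// e.g. "github.io" cannot set cookies readable by all of github.io.
const PUBLIC_SUFFIXES = new Set(["com", "co.uk", "github.io"]);

function mayScopeCookieTo(domain) {
  return !PUBLIC_SUFFIXES.has(domain.toLowerCase());
}

console.log(mayScopeCookieTo("github.io"));   // false: public suffix
console.log(mayScopeCookieTo("example.com")); // true: registrable domain
```

Getting statichost.eu onto that list would make browsers treat every `*.statichost.eu` site as its own cookie island automatically.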
I am not implying you're putting "everyone" in danger. I'm merely implying that you're putting your own service in danger by allowing clients to act like a trusted subdomain, such as controlpanel.statichost.eu, secure.statichost.eu, or Unicode lookalikes of www.
Ok, I see. You mean the possibility of users impersonating statichost.eu itself. That is actually a good point, and the exact reason why user subdomains are required to have a dash in them. Edit: Also, only ASCII is allowed. :)
I guess control-panel.statichost.eu is still possible, of course, but that already seems like a pretty long shot.
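The dash-plus-ASCII rule can be sketched as a simple validator. This is a guess at the shape of the rule, not the actual production code:

```javascript
// Hypothetical sketch of the rule described above: user subdomains are
// lowercase ASCII and must contain a dash, so dash-less system-looking
// names like "www" or "secure", and Unicode lookalikes, can never be
// claimed by a user.
function isValidUserSubdomain(label) {
  return /^[a-z0-9]+(-[a-z0-9]+)+$/.test(label);
}

console.log(isValidUserSubdomain("my-cool-site")); // true
console.log(isValidUserSubdomain("secure"));       // false: no dash
console.log(isValidUserSubdomain("wwŵ"));          // false: non-ASCII
```

Note that "control-panel" still passes, which matches the caveat above: the rule blocks impersonation of dash-less hosts, not every official-sounding name.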
Author here. I understand that my post and what I'm trying to say is unclear. And that there are too many different aspects to all this.
What I'm trying to say in the post specifically about Google is that I personally think that they have too much power. They can and will shut down a whole domain for four billion users. That is too much power no matter the intentions, in my opinion. I can agree that the intentions are good and that the net effect is positive on the whole, though.
On the "different aspects" side of things, I'm not sure I agree with the _works_ claim you make. I guess it depends on your definition of works, but having a blacklist as your tool to fight bad guys is not something that works very well, in my opinion. Yes, my own assets specifically would not have been impacted had I used a separate domain earlier. But the point still stands.
The fact that it took so long to move user content off the main domain is of course on me. I'm taking some heat here for saying this is more important than one (including me) might think. But nonetheless, let it be a lesson for those of you out there who think that moving that forum / upload functionality / wiki / CMS to its own domain (not subdomain) can be done tomorrow instead of today.
Since there's a lot of discussion about the Public Suffix list, let me point out that it's not just a webform where you can add any domain. There's a whole approval process where one very important criterion is that the domain to be added has a large enough user base. When you have a large enough user base, you generally have scammers as well. That's what happened here.
It basically goes: growing user base -> growing amount of malicious content -> ability to submit domain to PSL. In that order, more or less.
In terms of security, for me, there's no issue with being on the same domain as my users. My cookies are scoped to my own subdomain, and HTTPS only. For me, being blocked was the only problem, one that I can honestly admit was way bigger than I thought.
What sort of size would be needed to get on there?
My open source project has some daily users, but not thousands. Plenty to attract malicious content, though I think a lot of people are sending it to themselves (e.g. onto a malware analysis VM that is firewalled off, so they look for a public website to do the transfer), and even then the content is only on the site for a few hours. After more than 10 years of hosting this, someone seems to have fed a page into a virus scanner, and now I'm getting blocks left and right with no end in sight. I'd be happy to give every user a unique subdomain instead of short links on the main domain, and then put the root on the PSL, if that's what would solve this.
Based on what I've seen, there's no way to get that project into the PSL. I would recommend having the content available at projectcontent.com if the main site is project.com, though. :)
The thing is, you cannot just add any domain to the PSL. You need a significant number of users before they will include your domain. Until recently, there really was no point in even submitting, since the domain would have been rejected as too small. An increase in user base, an increase in malicious content, and the ability to add your domain to the PSL all happen more or less simultaneously.
I'm also trusting my users not to expose their cookies for the whole *.statichost.eu domain. And all "production" sites use a custom domain, which avoids all of this anyway.
There are well-documented solutions to this that don't rely on the PSL. Choosing to ignore all of that advice while hosting user content is a very irresponsible choice, at best.
So the problem here is that Alice on alice.statichost.page might set a cookie for the `.statichost.page` domain if she's careless (which is sometimes the case with Alice). This cookie can then be read by Mallory on mallory.statichost.page. Or the other way around, if Mallory wants to trick Alice into reading his cookie. How this can be prevented without the PSL is something I'm very interested to hear more about.
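The mechanics of that leak come down to RFC 6265 domain matching: a cookie carrying a Domain attribute is sent to the domain itself and to every subdomain of it. A minimal sketch of the matching rule:

```javascript
// RFC 6265 domain-match: a cookie with Domain=statichost.page is sent
// to statichost.page and to every host ending in ".statichost.page".
// Listing the domain on the PSL makes browsers reject such a wide
// Domain attribute in the first place.
function domainMatches(host, cookieDomain) {
  return host === cookieDomain || host.endsWith("." + cookieDomain);
}

console.log(domainMatches("mallory.statichost.page", "statichost.page")); // true
console.log(domainMatches("statichost.eu", "statichost.page"));           // false
```

So without PSL inclusion, every sibling subdomain is in scope for a carelessly widened cookie, and there is nothing the site operator can do server-side to stop a client page from setting one.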
You are right, it would still affect all users. Until the pending PSL inclusion is complete, that is. But it now separates my own resources, such as the statichost.eu website and dashboard, from user content.
(Author here) This is all true. The main assumption from my part is that anything remotely important or even sensitive should be and is hosted on a domain that is _not_ companysubdomain.domain.com but instead www.company.com.
That is a great point. When I see these sites I always see a dozen red flags, and maybe the biggest one is that it's showing a "NatWest" banking site or something while hosted on "portal-abc.statichost.eu". But the whole point is of course saving users from coming to harm, and if it did that: great!