But it seems like you only need the stagger and overlap because you’re using circles in the first place. Would it look worse if you just divided the rectangle into 6 squares without any gaps or overlap?
Basically, for this specific structure, they had to develop their own "sub-structures" on the 1D line. These sub-structures are known to create one little thing moving diagonally (and then leave a bunch of debris behind, but that doesn't matter too much for this first step; they called this custom part "the fuse"). Then there is a known technique where taking "diagonally moving objects" created at the same y-coordinate and placing them at the "right x positions" makes them collide in a way that lets you "program" where diagonally moving objects get created, at arbitrary positions on the screen (this is called a "binary construction arm"). And once you can create these anywhere on the screen, you've basically won; there's another technique to turn arbitrary positions into arbitrary shapes (the "extreme compression construction arm", or ECCA), and it's "just" a matter of making the ECCA clean up all of the debris and build a new fuse, moved over.
Of course, the "just" here does the heavy lifting and represents over two years of exploration, writing algorithms for how to clean up everything, and so on.
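For anyone who hasn't seen one, the "diagonally moving object" here is the classic glider: a five-cell pattern that repeats every 4 generations, shifted one cell diagonally. A minimal sparse Life simulator in plain Python shows just that property (this is only the standard glider, not the custom fuse or arm machinery from the article):

```python
from collections import Counter

def step(cells):
    """Advance Conway's Life one generation on a sparse set of live (row, col) cells."""
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation if it has 3 neighbours,
    # or 2 neighbours and was already alive.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

# The standard glider; after 4 generations it reappears shifted by (1, 1).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(r + 1, c + 1) for (r, c) in glider})  # True
```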
I do agree with you on the compilation, and this is the reason I'm still writing the occasional .js or .mjs file. However, the JS I write starts with enabling ts-check and has all of its type information encoded as comments. This way, I get the benefits of TypeScript while writing the code, without needing the whole compilation step.
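For anyone unfamiliar with the pattern, it looks roughly like this (the function itself is just an illustrative example):

```javascript
// @ts-check

/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

// With @ts-check on, editors (and `tsc --checkJs`) flag type errors in the
// plain .js file, e.g. add("1", 2) is rejected at check time, while the file
// itself still runs directly in node with no build step.
console.log(add(2, 3)); // 5
```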
Context: this is for a 2019 data breach in a system that was created in 2012. The GDPR came into force in 2018 (has it really been that long? Wow, feels like yesterday), and Meta failed to properly disclose the 2019 data breach under the GDPR, hence the fine.
Was it reported by a pentester? (ex-)employee? Facebook itself?
How do we know that it goes back to 2012?
I know in the public sector you have to disclose such things to ICO, but does that also apply to private companies? Who is going to hold them accountable?
I was concerned, reading your thing first, that the title (“Meta fined $102M for storing passwords in plain text”) was going to be false—that they were actually only fined for not disclosing the breach. But the article says the decision also claimed a GDPR violation for storing the passwords in plaintext, so that’s good:
> The DPC found that Meta violated several GDPR rules related to the breach. It determined that the company failed to "notify the DPC of a personal data breach concerning storage of user passwords in plaintext" without undue delay and failed to "document personal data breaches concerning the storage of user passwords in plaintext." It also said that Meta violated the GDPR by not using appropriate technical measures to ensure the security of users' passwords against unauthorized processing.
That's the maximum fine (which, as far as I know, has never actually been applied, at least not to a large company). In this case the fine is understandably much smaller, since the privacy incident is not critical and Facebook reported the problem to the authorities on its own.
None of these are true for the MitM threat model that caused this whole investigation:
- If someone manages to MitM the communication between e.g. Digicert and the .com WHOIS server, then they can get a signed certificate from Digicert for the domain they want
- Whether you yourself used LE, Digicert or another provider makes no difference; the attacker can still obtain such a certificate.
This is pretty worrying since as an end user you control none of these things.
Thank you for clarifying. That is indeed much more worrying.
If we were able to guarantee that NO certificate authorities used WHOIS, this vector would be cut off, right?
And is there not a way to, as a website visitor, tell who the certificate is from and reject/distrust ones from certain providers, e.g. Digicert? Edit: not sure if there's an extension for this, but seems to have been done before at browser level by Chrome: https://developers.google.com/search/blog/2018/04/distrust-o...
CAA records may help, depending on how the attacker uses the certificate. A CAA record allows you to instruct the browser that all certs for "*.tetha.example" should be signed by Let's Encrypt. Then - in theory - your browser could throw an alert if it encounters a DigiCert cert for "fun.tetha.example".
However, this depends strongly on how the attacker uses the cert. If they hijack your DNS to ensure "fun.tetha.example" goes to a record they control, they can also drop or modify the CAA record.
And sure, you could try to prevent that with long TTLs for the CAA record, but then the admin part of my head wonders: But what if you have to change cert providers really quickly? That could end up a mess.
CAA records are not addressed to end users, or to browsers or whatever - they are addressed to the Certificate Authority, hence their name.
The CAA record essentially says "I, the owner of this DNS name, hereby instruct you, the Certificate Authorities to only issue certificates for this name if they obey these rules"
It is valid, and perhaps even a good idea in some circumstances, to set the CAA record for a name you control to deny all issuance, and only update it to allow your preferred CA for a few minutes once a month while actively seeking new certificates for any that are close to expiring, then put it back to deny-all once the certificates have been issued.
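In zone-file terms (syntax per RFC 8659; the name here just reuses the placeholder domain from upthread), the deny-all and allow-one states look like:

```
; deny-all: no CA may issue certificates for this name
tetha.example.   3600  IN  CAA  0 issue ";"

; renewal window: allow only Let's Encrypt to issue
tetha.example.   3600  IN  CAA  0 issue "letsencrypt.org"
```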
Using CAA allows Meta, for example, to insist only Digicert may issue for their famous domain name. Meta has a side deal with Digicert, which says when they get an order for whatever.facebook.com they call Meta's IT security regardless of whether the automation says that's all good and it can proceed, because (under the terms of that deal) Meta is specifically paying for this extra step so that there aren't any security "mistakes".
In fact Meta used to have the side deal but not the CAA record, and one day a contractor - not realising they were supposed to seek permission from above - just asked Let's Encrypt for a cert for a test site they were building, and of course Let's Encrypt isn't subject to Digicert's agreement with Meta, so they issued based on the contractor's control over the test site. Cue red faces for the appropriate people at Meta. When they were done being angry and confused, they added the CAA record.
[Edited: Fix a place where I wrote Facebook but meant Meta]
The requirement to ID yourself online was already a thing in China, and using government-provided unique IDs for that isn't the worst way to go about it. The main issue would be mandatory reporting (i.e. if the companies have to constantly send data about what every given ID is doing on their website), but that's a different issue - and I don't feel like it's harder to do this using the phone numbers they already use compared to using a government GUID.
The main issue is that this would make obtaining access to Chinese websites even more difficult for people outside of China. It was kind of possible to get around the phone-number restriction by obtaining a phone number, but getting around the government ID is going to be significantly more difficult.
TS allows you to pass a read-only object to a method taking a read-write value:
type A = { value: number };

function test(a: A) { a.value = 3; }

function main() {
  const a: Readonly<A> = { value: 1 };
  // a.value = 2; // <= this errors out
  test(a); // this doesn't error out
  console.log(a); // shows { value: 3 }
}
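The flip side is that you can opt in at the function boundary: declaring the parameter itself as `Readonly<A>` moves the error to the mutation site (a sketch using the same toy type; note this is purely a compile-time guarantee, nothing is frozen at runtime):

```typescript
type A = { value: number };

function testSafe(a: Readonly<A>): number {
  // a.value = 3; // <= now this errors out at compile time
  return a.value;
}

const a: Readonly<A> = { value: 1 };
console.log(testSafe(a)); // 1
```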
I really don't agree with that. Git is a powerful tool with very few actual downsides, and the unwillingness of some developers to spend an hour learning how it works hurts them in the long-term.
It's like sticking to the text editing feature of your IDE because you can't be bothered to learn how it works. Sure, you _technically_ can do that, but you're losing on everything that makes an IDE useful and probably losing actual days or weeks worth of work because of that.
>the unwillingness of some developers to spend an hour learning how it works hurts them in the long-term
And that's the problem. Every developer has spent an hour learning how it works by themselves, but each of them in a completely different way, from different sources, on different projects and workflows, some more correct than others, because there is no single ground-truth way of using git in every situation. And git offers a million ways of shooting yourself in the foot once you land on the job, even after you think you learned git in that one hour.
And that, IMHO, is git's biggest problem: too powerful, too many features, too many ways of doing the same thing, no sane out-of-the-box defaults that everyone can just stick with and start working, too many config variables to tinker with, etc. Case in point: just look at the endless debates in the comments here about what the correct git workflows and config variables are. Nobody can agree unanimously on the right workflow or configs; everyone has their own diverging opinion.
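To make that concrete, here are a few of the knobs those debates tend to revolve around (these are real `git config` keys; the chosen values are illustrative, not a recommendation):

```shell
# Whether a bare `git pull` merges or rebases
git config --global pull.rebase true
# Name of the branch `git init` creates
git config --global init.defaultBranch main
# What a bare `git push` pushes (simple = current branch to its upstream)
git config --global push.default simple
# Prune remote-tracking branches that no longer exist on the remote
git config --global fetch.prune true
```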
Something being popular doesn't mean it's universally good everywhere and loved by everyone. Windows and Teams are also popular, almost every company uses them, that doesn't make them good. Diesel ICE cars are also highly popular in Europe even though they're much worse for our air quality and health. Do you see the issue with using popularity as an argument?
I've met many devs who hate git with a passion but just have to use it because management said so and because every other workplace now uses it, just like Teams and Windows. Not saying git is bad per se, just pointing out the crater of pitfalls it opens up.
Right, but the world is bigger than corporate, and yet I don't see anyone choosing anything else for their pet projects, large or small, either. If git were such a pain to use, wouldn't a lot of open source projects use something else? I know OpenBSD uses CVS and SQLite uses Fossil... I honestly can't think of anything else non-git that I use right now (I'm sure I'm missing some).
Years ago when private repositories were still a paid feature on GitHub, you could use Bitbucket, which had them for free, and offered Git and Mercurial. A few years later Bitbucket announced they were removing Mercurial support because "Mercurial usage on Bitbucket is steadily declining, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%".
>I don't see anyone choosing anything else for their pet project large or small either.
I also don't see anyone choosing to breathe anything other than oxygen either. It's not like they have many other options when the job market requires git, most coding tutorials feature git, and schools use git too; the entire industry settled on git despite other options existing.
Again, that doesn't mean git is bad, or that it is loved by everyone, or that it's the best. Betamax also lost to VHS despite being technically superior. A lot of victories are won by the lesser product given enough inertia and being in the right place at the right time. Kind of how Windows and SAP got entrenched in the 90s. People and orgs were buying into it because everyone else was also using it, so your only choice was to use it too, no matter your own opinion of it. What were you gonna do? Piss against the wind and torpedo your hiring prospects by pigeonholing yourself into some other "better" tool that nobody else uses?
I don't remember which VCS I used at my first job in the embedded industry, but that one was hands down better, easier and more foolproof than git, with a nice GUI long before git GUI tools were even remotely good. It just didn't survive there long term because it cost a fuck-tonne of money in licensing fees for the org. You can see where this is going, right? When it comes to bean counters, free beats paid every day, regardless of most other arguments.
Not quite; we've known about galaxies outside our own (like the Magellanic Clouds or the Andromeda Galaxy) for a few millennia, and the main reason black holes went undiscovered for so long is that they're black and we needed a theory to know where to look. The current theory of cosmology has overall been pretty stable for a while.
What's interesting here isn't so much the objects themselves, which are bog-standard as far as celestial objects go, but how red-shifted (and therefore how far away/long ago) they are, which is something the model doesn't quite exclude, but which does warrant some tweaking of the "initial parameters" of the universe to make it work this way compared to what we expected.
> we've known about galaxies outside our own (like the Magellanic clouds or the Andromeda galaxy) for a few millenia
Well, we could see them, but we weren't able to distinguish a galaxy from a nebula until after investing multiple centuries into the development of powerful telescopes.
> In 1924, Edwin Hubble established the distance to classical Cepheid variables in the Andromeda Galaxy, until then known as the "Andromeda Nebula" and showed that those variables were not members of the Milky Way. Hubble's finding settled the question raised in the "Great Debate" of whether the Milky Way represented the entire Universe
Weird choice to talk about the placebo effect in this context. The placebo effect is definitely used in combination with chemical and biological effects when administering drugs (or, more accurately, it always happens automatically). It's just that when trying to test the efficacy of drugs, you need to control for the placebo effect; otherwise the noise in the results would drown out the signal of the biological/chemical impact.
This is exactly the framing the author is criticizing. It assumes that the placebo effect is a constant that cannot be improved upon, and thus deserves no consideration when designing the treatment. However, the placebo effect is malleable and can be improved [1]. In scientific studies, this is typically done through suggestion and conditioning [2]. However, this is not standard clinical practice (AFAIK).
Where the author is wrong is in assuming that the people designing drugs aren't thinking about using the placebo effect more optimally. It is fairly well known that the efficacy of drugs correlates with the severity of off-target side effects: say you are taking an analgesic that acts by binding receptor A, but which also induces nausea by binding an unrelated receptor B. During drug development, the structure of the drug is often tweaked to reduce or abolish binding to such off-target receptors, thus limiting side effects. However, these structural changes also often reduce efficacy, even if the affinity of the drug for the intended target isn't altered at all. My colleagues and I (working in pharmacology, but in academia) have often wondered to what degree drug companies try to actively keep non-severe side effects as part of the response profile, given that they may be beneficial for the treatment outcome.