In a similar move (silently changing a feature crucial to some users), in Android 11 Google suddenly removed the ability to use the "special" characters
":<>?|\*
in filenames[0], presumably because they're not allowed on Windows/NTFS and Windows users might struggle to transfer such files to their computers. I don't care about NTFS at all, though. I just want to be able to sync all my files with my Linux machines, and now I no longer can. Makes me want to scream.
From throwing away that thing you will never use, to creating filenames far outside the 8.3 format: the need always turns up soon after the matter seems fully settled.
I have a personal convention that all files I put into my synced folder must consist only of lowercase alphanumeric characters, hyphens, and periods (to be precise, match the regex /\.?([a-z0-9]([-.][a-z0-9])?)+/). It saves a lot of pain.
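A minimal sketch of that check in Python, using the regex above (`conforms` is a hypothetical helper name, not part of any tool):

```python
import re

# The convention from above: an optional leading dot, then lowercase
# alphanumerics joined by single hyphens or periods (no consecutive
# separators, no trailing separator).
NAME_RE = re.compile(r"\.?([a-z0-9]([-.][a-z0-9])?)+")

def conforms(name: str) -> bool:
    # fullmatch: the whole filename must satisfy the convention,
    # not just some substring of it
    return NAME_RE.fullmatch(name) is not None
```

So e.g. `conforms("notes-2021.txt")` and `conforms(".bashrc")` hold, while names containing colons, question marks, uppercase letters, or doubled separators are rejected.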
macOS can handle files with a colon in the name just fine; Finder simply won't let you name them that way. The files themselves work fine if you create them in the Terminal or through sync.
Classic Mac OS used the colon as its path separator, so to support creating files that could still be opened on classic Mac OS, the Finder disallows it.
Personal notes, and tons of academic papers and ebooks, whose titles might contain question marks and colons. Occasionally I also use arrows (->) in travel itineraries / ticket PDFs.
Yes, and who needs Dropbox? For a Linux user, you can already build such a system yourself quite trivially: get an FTP account, mount it locally with curlftpfs, and then use SVN or CVS on the mounted filesystem. From Windows or Mac, the FTP account could be accessed through built-in software.
On the topic of scaling: reversible computations are more energy-efficient than non-reversible ones; see also the OP. Outputting the original inputs might seem silly and wasteful on the surface, but if you discarded them (as "heat"), you'd just be back to building a non-reversible, and likely much less efficient, gate.
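To make the "keep the inputs" point concrete, here's a toy classical sketch (the `toffoli` helper is mine, purely for illustration): the Toffoli (CCNOT) gate is universal for reversible computation and is its own inverse, so the inputs can always be recovered from the outputs.

```python
# Toffoli (CCNOT): flip the target bit c only when both controls a and b are 1.
# The gate is a bijection on 3-bit states and its own inverse, so nothing
# has to be discarded (as "heat") to compute with it.
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    return a, b, c ^ (a & b)

# e.g. toffoli(1, 1, 0) == (1, 1, 1), and applying the gate again undoes it
```

An AND gate, by contrast, maps two input bits to one output bit and destroys information; that information loss is exactly what Landauer's principle ties to unavoidable heat dissipation.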
(This is way beyond my area of expertise, so excuse me if this is a stupid idea.)
I assume the following happens: while a (small) subsystem is in a pure state (in quantum coherence), no information flows out of this subsystem. Then, when measuring, information flows out and other information flows in, which disturbs the pure state. This is the collapse of the wave function (quantum decoherence). For all practical purposes, quantum decoherence looks irreversible, but technically it could still be reversible; it's just that the subsystem that is in coherence got much, much larger. Sure, for all practical purposes it's then irreversible, but to us most of physics looks irreversible anyway (e.g. black holes).
The problem is that the larger subsystem includes an observer in a superposition of states, each observing a different measured value. And we never observe this. The Copenhagen interpretation doesn't deal with this at all; it just states this empirical fact.
So if I understand correctly, you are saying the observer doesn't feel like they are in a superposition (multiple states at once). Sure: I agree that observers never experience being in a superposition.
But I don't think that necessarily means we are in a many-worlds scenario. I rather think we don't have enough knowledge in this area. Assuming we live in a simulation, an alternative explanation would be that unlikely branches are simply not simulated further, to save energy. In that case, superposition is just branch prediction :-)
Yes, I think that's a stance many physicists take these days. Unfortunately, it's not verifiable. And we also don't have any clue how gravity (which does become relevant at our scales) would fit into this picture.
Having tried many other CI systems, all of which ultimately turned out to be subpar, it makes me incredibly sad to discover only now that Cirrus CI is (was?) quite a bit better than them. :( Thanks for the blog post, though!
Shouldn't you always read & double-check the third-party GitHub Actions you use anyway? (Forking or copying their code alone doesn't solve the issue you mention any more than pinning a SHA does.)
Double-checking GitHub Actions does not by itself mitigate threats from supply-chain vulnerabilities. Forking an action moves the trust from a random developer to yourself, but you still have to make sure the action pulls in its dependencies from trusted sources (which can also be yourself, depending on how far you want to go).
You could get in touch with GP by googling for his company (see profile), finding his name through the company website (he's the CEO), and then googling for his LinkedIn/X accounts.
[0]: https://github.com/jordwest/news-feed-eradicator