Could you explain what you mean by "this keyboard puts heavy emphasis to press modifier keys (enter, ctrl, alt, etc) with the thumb?" I'm sitting here with one in front of me and can't figure out how I would possibly hit anything in your list other than the 'enter' key with my thumb.
I'm sorry, my memory failed me and I included too many keys in the list.
Anyway, having to hit the enter and backspace keys with the thumb is what I found problematic; that's why I remapped them to other positions with their online tool.
It's a rather fair statement to make. Memory error rates are significant[1], and ZFS does nothing to ensure that its in-memory data structures are uncorrupted; it was designed to be used with ECC RAM[2].
And it's interesting and useful for scientific computing where you already have an MPI environment and distributed/parallel filesystems. However, it's not really applicable to this workload, as the paper itself says.
There is a provision in most file systems to use links (symlinks,
hardlinks, etc.). Links can cause cycles in the file tree, which
would result in a traversal algorithm going into an infinite loop.
To prevent this from happening, we ignore links in the file tree
during traversal. We note that the algorithms we propose in
the paper will duplicate effort proportional to the number of
hardlinks. However, in real world production systems, such as
in LANL (and others), for simplicity, the parallel filesystems
are generally not POSIX compliant, that is, they do not use
hard links, inodes, and symlinks. So, our assumption holds.
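The link-skipping traversal described in the excerpt can be sketched roughly as follows (my own illustration, not the paper's code): by never following symlinks, a cycle such as a link pointing back up the tree can't trap the walk in an infinite loop.

```python
import os

def walk_no_links(root):
    """Yield regular-file paths under root, ignoring symlinks entirely.

    Skipping links guarantees termination even when the tree contains
    symlink cycles; hardlinked files, as the paper notes, would still
    be visited once per link.
    """
    stack = [root]
    while stack:
        directory = stack.pop()
        with os.scandir(directory) as entries:
            for entry in entries:
                if entry.is_symlink():
                    continue  # ignore links: no cycles possible
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                elif entry.is_file(follow_symlinks=False):
                    yield entry.path
```

With a cycle like `dir/sub/loop -> dir`, this traversal still terminates and visits each regular file exactly once.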
The reason this cp took such a long time was the desire to preserve hardlinks, plus the resizing of the hashtable used to track the device and inode numbers of the source and destination files.
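A minimal sketch of how a cp-style tool preserves hardlinks (my own illustration, not cp's actual implementation): a hashtable keyed by (device, inode) remembers where the first copy of each multiply-linked source file landed, so later links to the same inode become hardlinks at the destination rather than fresh copies. With enough files, that table repeatedly grows and rehashes, which is the resizing cost mentioned above.

```python
import os
import shutil

def copy_preserving_hardlinks(sources, dst_dir):
    """Copy files into dst_dir, re-creating hardlinks among them.

    `seen` maps (st_dev, st_ino) -> destination path of the first copy;
    it plays the role of the hashtable described above.
    """
    seen = {}
    for src in sources:
        st = os.stat(src)
        dst = os.path.join(dst_dir, os.path.basename(src))
        key = (st.st_dev, st.st_ino)
        if st.st_nlink > 1 and key in seen:
            os.link(seen[key], dst)  # same inode seen before: hardlink it
        else:
            shutil.copy2(src, dst)   # first sighting: real copy
            seen[key] = dst
```

(For simplicity this assumes unique basenames; a real tool keys destinations by relative path.)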
Sure, but if you read that article you walk away with a sense of "that's a lot of files to copy." And the GP built a tool for jobs 2-3 orders of magnitude larger?! Clearly there are tradeoffs forced on you at that scale...
Author of the paper here. The file operations are distributed strictly without links; otherwise we could make no guarantees that work wouldn't be duplicated, or even that the algorithm would terminate. We were lucky in that the parallel file system itself wasn't POSIX, so we didn't have to make our tools POSIX either.
In math, at least, these transfer-of-copyright forms are increasingly allowing by default posting of (final submitted—see juretriglav (https://news.ycombinator.com/item?id=8194544)'s link for the importance of this modifier) papers to the arXiv, or even just to one's web page. This seems such an elemental freedom now that it can be hard to remember that it wasn't always the case. I remember, but can no longer find—does anyone know the source?—a post a while back from an academic who asked (probably) Elsevier for permission to post an article on his home page, and was denied it, and who then essentially dared them to sue him for posting it anyway.
Whenever I've been published, I've asked to retain copyright and that has been granted -- I'm not sure it's a grant, though, since it's mine to start with. The usual negotiating starting point is a contract from a publisher that automatically claims copyright. I simply delete that clause (and possibly insert "The author retains copyright", so that there's no doubt).
Clearly, that's not happening here and I don't understand why, unless publishers are refusing to publish without being given copyright. In which case, alarm bells should be ringing very loudly.
It's not about handing over copyright, but it is generally agreed that publishing multiple copies of the same work is a bad thing. Researchers typically do this to boost their paper count (plus a bit of self-citation), and it can make a hypothesis seem better supported than it is.
So only dodgy journals will re-publish work that has been published elsewhere.
There are several versions of VNC that support xrandr resizing. Arch documents this working with TigerVNC [0]. Additionally, you can use xrdp + x11rdp.
I thought that was well addressed by this comment in the post:
"Looking at the top repo for each language also exposes a weakness in the methodology: GitHub’s language identification isn’t perfect and there are number of polyglot projects. The top Java repo is Storm, which uses enough Clojure (20.1% by GitHub’s measure) to make this identification questionable when you take into account Clojure’s conciseness over Java’s."
I've been using the font for years on both openSUSE and Ubuntu and have never seen the problem. Then again, the page says it only shows up at 12pt, which is not a font size I use.