You do not have to censor: you simply refer to yourselves in the third person to make your shared identity plausibly deniable. A work that is worthy of publication should largely be able to stand on its own anyway. This may have the effect of reducing minimum publishable unit (MPU) CV-spam papers as a bonus.
I'm talking about editors censoring papers. Authors could avoid this problem by writing differently, but as a reader I'd prefer they don't; knowing that a body of work is closely related is very useful, especially when it comes to deciding which references to follow up.
The double-blind process only requires that this be done in the review drafts. The final camera-ready version for publication can use the first person without compromising the integrity of the review process. I don't think editors would remove self-referential text from a final draft after the paper has been accepted.
Usually, if you count who has more citations, you get the author of the paper (or the advisor, or the leader of the team). Sometimes it's a lifehack to get more citations, but most of the time it's just natural: one member of the team was working on tool X, another on tool Y, and now you are adjusting all the details to make X and Y work together and get a new result.
The title as written in the article is misleading. It should be "The Uselessness of Phenylephrine as a Decongestant". Phenylephrine is a lifesaving medicine in emergency medicine, used to raise the blood pressure of people with hypotension.
It also has a proprietary PHY protocol, which always struck me as a major downside for something whose adoption is close to making it the next de facto standard. DASH7 [1] is an interesting alternative in this regard, good for urban areas, but not quite as long-range for very sparse nodes in a rural environment. It does not have the same duty cycle limitations that LoRa has, and is actually used to complement LoRa even on the same device in some interesting case studies [2], which come from Semtech themselves (the patent holders on the LoRa PHY).
Just to calm your nerves: place and route algorithms are often classical AI anyway; the thing that is changing is what is and isn't referred to as "AI". The cool thing about this work is that it looks at higher-level parameters of the layout instead of the kind of heuristic algorithms you see in APR. Would be cool to see this in tools one day.
Maybe as opposed to a theoretical treatment? In academia, there is a dichotomy in CS research between systems and theory, which may have been what they were getting at.
One of the interesting comments in that article was how they pinned the limited width of x86 decode implementations on variable instruction length. There are obvious code density benefits to the x86 variable-length approach (especially in immediate encoding), but I guess the need for realignment creates long critical paths on the frontend. I wonder if the more constrained variable instruction length of RV64GC (32- and 16-bit instructions only) will be able to similarly scale up to 8-wide instruction decode, like Apple has been able to do with AArch64.
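To make the realignment point concrete, here's a minimal sketch of my own (not from the article): with RV64GC, instruction length is determined entirely by the low two bits of the first halfword, so every decode slot can find its boundary from local information, whereas an x86 decoder has to resolve the prefix/opcode/ModRM/SIB structure of each earlier instruction before it knows where the next one starts.

    #include <stdint.h>
    #include <stddef.h>

    /* RV64GC: the low two bits of the first halfword decide the length.
       0b11 -> 32-bit instruction, anything else -> 16-bit (compressed). */
    static size_t rv_inst_len(uint16_t first_halfword) {
        return ((first_halfword & 0x3) == 0x3) ? 4 : 2;
    }

    /* Mark instruction start offsets in a fetch window (caller passes a
       zeroed is_start array, one flag per halfword). The loop below is
       serial software, but note that each length decision reads only two
       bits of its own halfword, so hardware can evaluate every halfword
       position in parallel and then select the valid starts -- unlike
       x86, where the length scan itself is the serial dependency. */
    static void mark_starts(const uint16_t *win, size_t n_halfwords,
                            uint8_t *is_start) {
        size_t off = 0;
        while (off < n_halfwords) {
            is_start[off] = 1;
            off += rv_inst_len(win[off]) / 2; /* advance 1 or 2 halfwords */
        }
    }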
64bit x86 squanders its variable-length advantage by being a 40-year-old design that has been extended over and over again.
Much of the opcode space is wasted on old single-byte instructions that are rarely used these days. REX prefix bytes are required everywhere to access all 16 registers. Modern instructions that are used all the time are hidden away behind prefix bytes.
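To illustrate (hand-assembled bytes, my own example, so double-check against a real assembler): the same ADD grows by a prefix byte as soon as you want 64-bit width or any of r8-r15.

    #include <stdint.h>

    /* ADD with legacy registers vs. the REX-extended ones.
       Opcode 0x01 is ADD r/m, r; 0xD8 and 0xC8 are ModRM bytes. */
    static const uint8_t add_eax_ebx[] = { 0x01, 0xD8 };       /* add eax, ebx: 2 bytes */
    static const uint8_t add_rax_rbx[] = { 0x48, 0x01, 0xD8 }; /* add rax, rbx: REX.W   */
    static const uint8_t add_r8_r9[]   = { 0x4D, 0x01, 0xC8 }; /* add r8, r9: REX.WRB   */

Multiply that extra byte across a whole 64-bit binary and the prefixes eat a lot of the density advantage.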
64bit ARM is a complete redesign of the ARM instruction encoding, and they put a lot of thought into using the instruction space optimally. 32bit ARM also wasted a lot of its instruction encoding space, but 64bit ARM is a massive improvement. Despite requiring roughly 10% more instructions on average, the average arm64 binary is around the same size as the average x86 binary.
Immediates aren't a huge problem. All 32bit immediates and many 64bit immediates can be encoded in 1-2 instructions. Anything that can't should probably just use a PC relative load.
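For anyone curious what that looks like, here's a rough sketch (my approximation of what compilers do, not taken from any particular toolchain) of materializing a 64-bit constant on AArch64 with MOVZ plus up to three MOVKs, each carrying one 16-bit chunk:

    #include <stdint.h>
    #include <stdio.h>

    /* Print a MOVZ/MOVK sequence that builds `imm` in x0. MOVZ writes one
       16-bit chunk and zeroes the rest of the register; each MOVK patches
       in another chunk. Zero chunks are skipped, so any 32-bit value takes
       at most 2 instructions. */
    void emit_mov64(uint64_t imm) {
        int emitted = 0;
        for (int shift = 0; shift < 64; shift += 16) {
            uint16_t chunk = (uint16_t)(imm >> shift);
            if (chunk == 0 && imm != 0)
                continue; /* MOVZ already zeroed this chunk */
            printf("%s x0, #0x%x, lsl #%d\n",
                   emitted ? "movk" : "movz", (unsigned)chunk, shift);
            emitted = 1;
            if (imm == 0)
                break; /* a single MOVZ #0 clears the whole register */
        }
    }

Constants with three or four busy chunks are roughly where it stops being worth it and a PC-relative literal-pool load wins instead.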
IMO, the much simpler instruction decoding massively outweighs the need for slightly more load bandwidth.
I've heard that there are other benefits, like the fact that memory RMW instructions are essentially a fairly clean way to address physical registers without using any architectural registers.
I'd love to see what a CISC-V ISA without the 40 years of baggage looks like (like jeeze, the hlt instruction gets a single byte on x86?)
Agreed. AMD didn't throw enough away when they moved x86 to 64 bits. They also didn't add enough registers. Variable length CISC is fine. They should steal the bit manipulation instructions from ARMv8 and the vector extensions from RISC-V.
"Variable length CISC is fine." Well, actually, the Anandtech article points out:
"Other contemporary designs such as AMD’s Zen(1 through 3) and Intel’s µarch’s, x86 CPUs today still only feature a 4-wide decoder designs (Intel is 1+4) that is seemingly limited from going wider at this point in time due to the ISA’s inherent variable instruction length nature, making designing decoders that are able to deal with aspect of the architecture more difficult compared to the ARM ISA’s fixed-length instructions."
Given the complexity of x86, it's amazing that Intel+AMD have gotten 4-wide decoders. But the M1 has 8-wide and if they want to go wider, it's linear rather than quadratic in complexity.
I think they did a fine job considering all the constraints. The ISA was created in 1999, but the first widely used 64-bit OS (Vista) didn't release until seven years later. In truth, I doubt 64-bit systems became more than half of all systems until a couple of years after the release of Windows 7 in 2009 (a full decade later).
Every transistor used for x86_64 that couldn't also be used for x86 was a competitive liability (increasing size, power, R&D, etc without any real payoff). I think their decisions make a lot of sense given all the constraints.
I agree that they did a fine job given the constraints at the time. But by getting rid of x87, ... they would have used less size, power, R&D. SSE was circa 1999 and they could have just supported that.
They being AMD. The counterargument being that if AMD had been too aggressive, Intel could have done something more conservative. However, then there's the cross licensing agreement ... Arggh.
They made SSE and SSE2 extensions part of the core instruction set in AMD64.
But they didn't remove x87. It's not really used; compilers only emit it when code asks for a long double.
Personally, I do think they should have banned x87 from 64bit code. But it wouldn't have allowed them to remove the x87 units from the chip, as every AMD64 chip to this day still supports 32bit compatibility mode, and regularly uses it.
A modern ISA that prioritizes code density is something that I've thought a lot about too. It would be an interesting possibility in the embedded space. I think there are certain tricks that can be used to speed up the realignment problem, too.
Interesting point. I guess as cache sizes go up, instruction density becomes less important. Also, Arm abandoned Thumb, which was relatively easy to decode, for AArch64, and I guess they must have done quite a lot of analysis before doing so.
Yes, and it is also a bug-riddled mess that is prone to synthesis errors. I am hoping the AMD acquisition will encourage them to open more things up, to get more eyes on the synthesis flow and allow more recourse/debugging when issues are encountered.
Random ones, requiring keep attributes for no reason on larger designs to stop logic from being erroneously inferred away. There are also limitations where SV interfaces only support the constructs used by the "IP integration" scripting rather than the full language spec. I am sorry if I came off as overly negative, but I really think the FOSS EDA tools are going to lap them unless they open up somewhat.
(I am also biased because the designs I work on are small enough that ECP and presumably upcoming Lattice FPGAs are plenty. I am excited by the Xilinx reverse engineering efforts, too. But there seems to be less official interest than we see with Lattice in supporting the OSS efforts.)
Yes, apparently some US plastic recycling programs amounted to fraud in that they would simply export waste without due diligence about the overseas sites they were shipping them to. There's a reason why "reduce" and "reuse" come first in the old adage before "recycle".
Fraud is a strong word. This was common knowledge to anyone interested in how recycling works. Wait until you hear about the mountains of glass sitting around never to be recycled.
If by "mountains of glass" you mean "landfilled", then yup.
It's not well-understood that municipal recycling requires private contractors to bid on the waste stream, and if they can't make money on it, it all gets landfilled like the good old days. At least consumers are now trained to separate it.
Those can grifters are likely the real recycling heroes of our time. Make it easy for them to find the cans.