What a great seminar that was. I really appreciated his advice on writing recommendation letters, too: the expectations have shifted wildly toward the effusive. If you are plainly complimentary, it can come off as a secret warning that you don't think the candidate is worth hiring.
But there were also great AI papers, and meta advice on reading them efficiently. (I don't remember any crimes against ferrets, but presumably the reading list changed over time)
I appreciated that class, and it's only grown on me over time. Another line that really stuck with me was something like "forsan et haec olim meminisse iuvabit" (which I remember as "Perhaps we will look back on even this with fondness"). It's so easy to undervalue amazing things while they are happening to you. I was really convinced that I was appreciating it, even more than many around me. But I still look back and think I could have soaked it in even more.
I also took Professor Winston's seminar in college and have similar feelings about it. It was far and away my favorite class and the wisdom in his advice has only become more apparent over time. At its heart, it was really about how to understand and communicate ideas.
One of the things I treasured the most was that Professor Winston overtly subscribed to the "make topics crystal clear and broadly accessible" school of technical communication. He would contrast this against the "make things incomprehensible so everyone thinks you're brilliant" school of thought. I am eternally grateful someone biased me early in life towards the former, not just when I'm speaking but when I'm choosing what to read and who to listen to.
I've also wondered lately what he would think about the current LLM wave. I'm sure he would have had a characteristically clear and profound take. I feel the world is losing out by not having his voice in the current moment.
Absolutely! Thanks for bringing this up. I remember one of his points is that people have a tendency to hide behind obtuse language to try to make their insight seem more impressive. I think about this constantly when I see writing that clearly doesn't subscribe to this philosophy. (Especially Fine Art lectures)
It seems that aphantasia does not bin neatly into two groups globally, since I don't fit into either.
By my rough count of the Figure 2 tests, on a scale where Derek is at 0 and Loren is at 6 (ignoring F), I have about 3.5 atypical responses.
My experience with Figure 2:
A) I can flip between cone and weird triangle, saw the cone first
B) I see it as if someone placed identical cat stickers on the drawing. I can intellectually understand the perspective, how the upper-right one is supposed to be bigger, but don't experience it that way.
C) I see that there is an implicit rectangle (to me it looks slightly wider than tall). But the color doesn't "spread" to the middle, it's just like 2A -- a boundary in the surrounding shapes implicitly extends into the empty space to form a rectangle shape.
D) It takes minimal but non-zero effort to see the vase
E) It's trivial to flip between the two orientations of the cube
F) skipped
G) I don't understand what I'm looking for here. I see clouds, sky, and a silhouette with a tree. Is there a face in it somewhere? I can see the smiley face on https://en.wikipedia.org/wiki/Pareidolia
Fascinating. Thanks for the clue. It's the most complete blank for me of all the tests. I looked up the reference image, but still cannot see it in figure 2G. I can't even guess at where the eyes/nose/mouth are in the clouds.
yeah, I think some of their example stimuli aren't the greatest in that figure. There are definitely some better perspective illusions online. I'm not sure if I really see the Neon Spreading Illusion in C either; maybe it's spreading a bit, haha.
Yeah, though I wonder if Aaronson’s writing tone is somewhat influenced by Alexander’s. At least he reminded me of him by the end.
I’m pretty sure they all know/read each other - related communities/ideas.
PBS also has a (now discontinued) YouTube show called Infinite Series that did a decent overview of the algorithm and showed examples of a lot of the stuff described here.
I don't buy that the reversion to income tax should happen on high-percentage gains if the absolute gains are low. I think the tax could be made even more progressive. Something like:
The first $500k of capital gains get current cap gains treatment. After that, all gains are taxed as regular additional income.
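For concreteness, that schedule could be sketched like this. The specific rates are illustrative assumptions only (and real brackets are graduated, not flat), but the structure matches the proposal:

```python
# Hypothetical sketch of the proposal: the first $500k of gains get the
# long-term capital-gains rate; everything above is taxed as ordinary income.
# Both rates below are assumed flat for illustration.
CAP_GAINS_RATE = 0.20   # assumed long-term capital-gains rate
ORDINARY_RATE = 0.37    # assumed marginal ordinary-income rate
THRESHOLD = 500_000

def tax_on_gains(gains: float) -> float:
    below = min(gains, THRESHOLD)          # portion under the threshold
    above = max(gains - THRESHOLD, 0)      # portion treated as income
    return below * CAP_GAINS_RATE + above * ORDINARY_RATE
```

So under these assumed rates, $2M of realized gains would be $500k taxed at the capital-gains rate plus $1.5M taxed at the ordinary rate.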
This is gross double taxation: they already paid taxes when they earned the money they are investing, and you are suggesting taxing them again at a high rate.
Based on context, you seem to be referring to gas when you say 12 million "ether gwei" (the block limit being 12 million gas, lately). But gas is its own unit, not measured in ether (or in wei, the smallest atomic unit of ether).
The reason the math still works is that 12 million gas priced at 1 gwei/gas would cost 12 million gwei, which is 0.012 ether (since one ether is a billion gwei, or a billion billion wei). ... Maybe this was all what you meant, but it was hard for me to decipher.
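The unit arithmetic can be written out explicitly. This is just a sketch using the example figures above (1 gwei/gas is the assumed gas price, not a claim about actual network conditions):

```python
# Gas is its own unit; the fee is gas used times the gas price.
GWEI_PER_ETHER = 10**9    # 1 ether = 1e9 gwei
WEI_PER_ETHER = 10**18    # 1 ether = 1e18 wei (a billion billion)

gas_used = 12_000_000     # the block gas limit from the example
gas_price_gwei = 1        # assumed price: 1 gwei per unit of gas

fee_gwei = gas_used * gas_price_gwei    # 12,000,000 gwei
fee_ether = fee_gwei / GWEI_PER_ETHER   # 0.012 ether
```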
Yes, gas is its own unit paid in the native token of the blockchain.
The adjective is necessary as other EVMs are proliferating and in heavy use.
But even without that, I don't think there is a distinction big enough to worry about. There are 1 billion gwei in 1 ether; we can go from there and get the same results.
Yeah, digital-only NFTs will likely be a fad. I'll be interested again when the trend moves toward deeds. A title for a physical work could have staying power, even potentially an IOU from an artist for a one-of-a-kind (physical) piece.
> Hypothesis is pretty smart to find edge cases that you usually don't think of.
This is worth reemphasizing.
For certain kinds of hard problems, I've found it really valuable for building confidence in an implementation.
If you happen to have a reference implementation with a small API surface, it can be a very fast way to write tests: tell Hypothesis to input anything it wants and validate that both libraries produce the same result. It's not magic; you will probably still have to give hints about how to build interesting inputs, but it's still a very quick way to get a lot of coverage. (Also, be sure to crank up the number of examples to build more confidence; 100 is often not enough to really explore an input space.)
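A minimal sketch of that differential-testing pattern, assuming Hypothesis is installed. The two sum functions are hypothetical stand-ins for the library under test and the trusted reference:

```python
from hypothesis import given, settings, strategies as st

def reference_sum(xs):          # stand-in for the trusted reference
    total = 0
    for x in xs:
        total += x
    return total

def fast_sum(xs):               # stand-in for the library under test
    return sum(xs)

# Crank max_examples well past the default of 100 to explore
# more of the input space.
@settings(max_examples=2000)
@given(st.lists(st.integers()))
def test_same_result(xs):
    assert fast_sum(xs) == reference_sum(xs)

test_same_result()  # calling the decorated function runs the property test
```

Swapping `st.lists(st.integers())` for a custom strategy is where the "hints about interesting inputs" come in: composite strategies let you steer generation toward the structures your problem actually cares about.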
RIP Pat Winston. He infuriated me the most when he said something so obviously true that I wanted it to be false. One example: "No one will ever read your paper in its entirety, so write for the skimmers." As much as I loved his technical lectures, I especially remember his peripheral aphorisms.
But he said one thing that I immediately agreed with (conceptually) and have recalled at many hard times in my life: "Perhaps we will look back on even this with fondness." (But in Latin, because Pat Winston)