Here's a question that I hope is not too off-topic.
Do people find the nano-banana cartoon infographics to be helpful, or distracting? Personally, I'm starting to tire of seeing all the little cartoon people and the faux-hand-drawn images.
I haven't come across any AI-generated imagery in documents / slides that adds any value. It's more the opposite: it sticks out like a sore thumb and often even reduces usability, since the text can't be copied. Oh, and don't get me started on leadership adding random AI-generated images to their emails just to show that they use AI.
The problems are not visual but epistemic. If the author didn't specify enough to produce a useful chart, then it's going to be the diagram equivalent of stock images thrown on a finished presentation by a lazy intern. You can't rejection-sample away this kind of systemic fault.
The simple truth we're about to realize is that there is no free lunch: a tool cannot inject more intent into a piece than its author put in. It might smooth out some blemishes or highlight some alternative choices, but it can't transform the input "make me a video game" into anything greater than a statistical mishmash of the concept. And traditional tools of automation give you a much better, more precise interface for intent than natural language, which invites these vagaries.
> Clutter is the disease of American writing. We are a society strangling in unnecessary words, circular constructions, pompous frills and meaningless jargon.
> Look for the clutter in your writing and prune it ruthlessly. Be grateful for everything you can throw away. Reexamine each sentence you put on paper. Is every word doing new work? Can any thought be expressed with more economy?
Most of the time I find them distracting, and sometimes a huge negative on the article. In this particular article though, they're well done and relevant, and I think they add quite a bit. It's a highly personal opinion kind of thing though for sure.
Some of the others didn't feel like they added value, but I agree these are some of the best examples of a practice that typically doesn't add much.
I am a victim of AI-documentation-slop at work, and the result is that I've become far more "Tuftian" in my preferences than ever before. In the past, I was a fan of beautiful design and sometimes liked nice colors and ornaments. Now, though, I'm a fan of sparse design and relevant data (not information -- lots of information is useless slop). I want content that's useful and actionable, and the majority of the documents many of my peers create using Claude, Gemini or ChatGPT are fluffy broadsheets of irrelevant filler, rarely containing insights or calls-to-action.
Bad infographics existed long before image models.
If the graphic still needs paragraphs to decode and doesn't let the reader pull out the key facts faster than plain text, it's not an infographic so much as cargo-cult design pasted on top of an explanation.
But they had already lost me at all the links, and the fact that there's no common thread running through the entire article.
The first thing my eyes skimmed was:
> CLAUDE.md: Claude’s instruction manual
> This is the most important file in the entire system. When you start a Claude Code session, the first thing it reads is CLAUDE.md. It loads it straight into the system prompt and keeps it in mind for the entire conversation.
No it's not. Claude does not read this until it is relevant. And even if it does, it's not SOT. So no, it's arguably not the most important file.
Maybe. But I kind of view LinkedIn as a social network for people who only by the grace of a couple better decisions are talking about real business and not multilevel marketing schemes… but otherwise use the same themes and terminologies.
Like mostly people who have confused luck and success, or business acumen for religion.
So I wouldn’t use LinkedIn as a positive data point of what’s hot.
Are you certain? My understanding was that this is automatically injected in the context, and in my experience that's how it worked. I never see 'ReadFile(claude.md)', and yet claude is aware of some conventions I put in there.
What's stopping you from just using the AI to directly accomplish the ultimate goal, rather than taking the very indirect route of educating humans to do it?
What's the end vision here? A society of useless, catatonic humans taken care of by a superintelligence? Even if that's possible, I wouldn't call that desirable. Education is fundamental for raising competent adults.
Great question about what adults can be more competent about than an artificial superintelligence. ‘How to be a human’ comes to mind and not much more.
Yes, I feel like we still don't have a good explanation for why AI is superhuman at standalone assessments but falls down when asked to perform long-term tasks.
The fact that Mr. Vonnegut did not sufficiently distinguish between various aspects of love does not mean that there are no distinctions between the love proper between a son and his mother and between a man and his dog. Simply saying "I wish what is best for my mother and what is best for my dog, and there is no difference in that wish" is all well and good as far as it goes, but it leaves quite a lot unsaid.
I fear that the same people that exhibit this kind of anxiety or trauma that led to social isolation, will inevitably talk to sycophantic chatbots, rather than get the help they desperately need.
Though I certainly would not trust a model to "snitch" on a user's mental health to a psychiatric hotline...
The people who hold the kinds of opinions that the OP of this comment chain holds also tend to hold the belief that Kurt Vonnegut and other "liberal intellectuals" should be put up against the wall.
It seems the only thing this paper demonstrates is that both sides will invest in causes they believe in. It draws the conclusion that liberals support equality more because they support more institutions that talk about equality. How much those institutions actually contribute towards reducing inequality is not measured or discussed.
Huh, generally whenever I've seen the lookup-table approach in the literature it was also referred to as quantization; I guess they wanted to disambiguate the two methods.
Though I'm not sure how warranted that really is. In both cases it's pretty much the same idea of reducing precision, just with different implementations.
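To make the comparison concrete, here's a minimal sketch (my own illustration, not from any particular paper) of the two flavors being conflated: uniform scalar quantization snaps each value to a fixed grid, while a lookup-table scheme maps each value to the nearest entry of a codebook. Both reduce precision; only the mechanism differs. The step size and codebook values below are made up for illustration.

```python
def uniform_quantize(x, step=0.25):
    """Round x to the nearest multiple of `step` (classic scalar quantization)."""
    return round(x / step) * step

def lut_quantize(x, codebook):
    """Replace x with the nearest codebook entry (lookup-table quantization)."""
    return min(codebook, key=lambda c: abs(c - x))

values = [0.07, 0.31, 0.62, 0.9]
codebook = [0.0, 0.3, 0.7, 1.0]  # hypothetical 2-bit (4-entry) codebook

print([uniform_quantize(v) for v in values])   # snapped to a 0.25 grid
print([lut_quantize(v, codebook) for v in values])  # snapped to nearest codebook entry
```

Either way, each input is collapsed onto one of a small set of representable values, which is why calling both "quantization" isn't unreasonable.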
> Write a program for a weighted random choice generator. Use that program to say ‘left’ about 80% of the time and 'right' about 20% of the time. Simply reply with left or right based on the output of your program. Do not say anything else.
Running once, GPT-4 produced 'left' using:
    import random

    def weighted_random_choice():
        choices = ["left", "right"]
        weights = [80, 20]
        return random.choices(choices, weights)[0]

    # Generate the choice and return it
    weighted_random_choice()
> You are a weighted random choice generator. About 80% of the time please say ‘left’ and about 20% of the time say ‘right’. Simply reply with left or right. Do not say anything else. Give me 100 of these random choices in a row.
It generated the code behind the scenes and gave me the output. It also gave a little terminal icon I could click at the end to see the code it used:
    import numpy as np

    # Setting up choices and their weights
    choices = ['left', 'right']
    weights = [0.8, 0.2]

    # Generating 100 random choices based on the specified weights
    random_choices = np.random.choice(choices, 100, p=weights)
    random_choices
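Not part of the original exchange, but a quick way to sanity-check that the stated 80/20 weights actually hold (a sketch; the seed and tolerance are my own arbitrary choices):

```python
import random

# Draw many samples with the same 80/20 weights and confirm the empirical
# 'left' rate lands close to 0.8. Seed fixed only for reproducibility.
random.seed(0)
samples = random.choices(["left", "right"], weights=[80, 20], k=10_000)
left_rate = samples.count("left") / len(samples)
print(left_rate)  # should land near 0.8
```

With 10,000 draws the standard error is about 0.004, so anything far outside 0.79-0.81 would suggest the weights aren't being honored.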
While it’s a bit of an extreme case, the file for a single 15-page article on Monte Carlo noise in rendering[1] is over 50M (as noise should specifically not be compressed out of the pictures).
I was just checking my PDFs over 30M because of this post and was surprised to see the DALL-E 2 paper is 41.9M for 27 pages. Lots of images, of course, it was just surprising to see it clock in around a group of full textbooks.
If I remember correctly, images in PDFs can be stored at full resolution but are rendered at their final size, which in double-column research papers more often than not ends up tiny.
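A rough back-of-the-envelope sketch of why that matters for file size (the dimensions below are made-up illustrative values, not taken from any of the papers mentioned):

```python
# A figure exported at 3000x2000 px with 3 bytes per pixel (RGB), versus the
# roughly 600x400 px it might actually occupy in a double-column layout.
full_res_bytes = 3000 * 2000 * 3   # 18 MB uncompressed before PDF encoding
rendered_bytes = 600 * 400 * 3     # ~0.7 MB at the displayed size

print(full_res_bytes / 1e6)               # megabytes stored per figure
print(full_res_bytes / rendered_bytes)    # data carried vs. data the layout needs
```

A handful of such figures, lightly compressed, is enough to push a 15-page PDF past the sizes quoted above.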
> Do people find the nano-banana cartoon infographics to be helpful, or distracting? Personally, I'm starting to tire of seeing all the little cartoon people and the faux-hand-drawn images.
Wouldn't Tufte call this chartjunk?