Hacker News | MarkusQ's comments

So the story is... a publication that opposes the party currently in power, quoting a few people from the side that's presently out of power, saying that their being out of power is really bad, and we may never recover?

How is this different from the whining we get when the roles are reversed?

I realize you folks hate each other, but it would be nice if either of you could talk about something without turning it into a rant about how great, noble and good your side is and how awful the other side is.


To someone neutral (yeah, humor me), the Trump administration has done far more to demolish the reputation of the US than any other administration in my lifetime (OK, maybe Nixon - I don't remember all that much about him firsthand).

But I would also say that Biden, while not as bad as Trump, was worse than anybody since Nixon.


Which of Biden's policies and actions did you find worse than any since Nixon? And where do you rank the Iraq debacle that Bush started? How about selling arms to Iran to fund the Contras in Nicaragua?

Remember what we're talking about. It's not about their policies per se, it's about what they do to the US's international reputation.

So what did Biden do? The botched withdrawal from Afghanistan was the biggest thing. But his own frailty didn't help (speech fumbling and falling on stairs). Yeah, I know, his personal frailty shouldn't affect the US's reputation. But I think it did.


Trump negotiated the Afghanistan withdrawal. Nearly all blame goes to him. Try again.

But didn't implement it.

I mean, yes, the fact that we were leaving at all is due to Trump. (Either credit or blame, depending on whether you think we should have stayed there.) But the absolute debacle of how we left is on Biden. And it's that debacle that tarnished the reputation of the US.


> And it's that debacle that tarnished the reputation of the US.

Worse than any since Nixon?


I love fake and nonsensical “neutrality”.

There isn't enough training data though, is there? The "secret sauce" of LLMs is the vast amount of training data available + the compute to process it all.

I think you could probably feed a copy of a toki pona grammar book to a big model, and have it produce ‘infinite’ training data

This is essentially distillation of the bigger model; you'd wind up surfacing a lot of artifacts from the host model, amplifying them the same way repeated photocopying introduces errors.

https://dailyai.com/2025/05/create-a-replica-of-this-image-d...


There are not enough samples in that book to generate new "infinite" data.

> Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.

We had the same problem in the early days of calculators. Using a slide rule, you had to track the order of magnitude in your head; this habit let you spot a large class of errors (things that weren't even close to correct).

When calculators came on the scene, people who had never used a slide rule would confidently accept answers that were wildly incorrect. Example: a mole of ideal gas at STP occupies 22.4 liters. Typo that as 2204 and your answer comes out roughly two orders of magnitude off, say 0.0454 when it should be 4.46. Easy to spot if you know roughly what the answer should look like, easy to miss if you don't.
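The slide-rule habit can be written down as code. A minimal sketch (my numbers, not from the thread: 100 L of ideal gas at STP, molar volume 22.4 L/mol, and the hypothetical typo 2204):

```python
import math

def moles_at_stp(volume_l, molar_volume_l=22.4):
    """Moles of ideal gas at STP for a given volume in liters."""
    return volume_l / molar_volume_l

def same_order(x, expected, tol=1.0):
    """The slide-rule check: is x within ~tol orders of magnitude of expected?"""
    return abs(math.log10(x) - math.log10(expected)) <= tol

correct = moles_at_stp(100)        # ~4.46 mol
typo    = moles_at_stp(100, 2204)  # ~0.0454 mol -- molar volume mistyped

print(correct, same_order(correct, 4.5))  # passes the magnitude check
print(typo, same_order(typo, 4.5))        # fails: off by ~2 orders of magnitude
```

The point isn't the helper function, it's the mental habit it encodes: carry a rough expected magnitude alongside the computed answer, and distrust anything that misses it by an order of magnitude or more.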


We do know. There have always been ways that people could avoid the painful process of learning, and...they don't learn.

Here's a competing thought experiment:

Jorge's Gym has a top-notch bodybuilding program: an extensive series of exercises that would-be bodybuilders work through over multiple years. You enroll, and cleverly use a block-and-tackle system to complete all the exercises in weeks instead of years.

Did you get the intended results?


Playing devil's advocate here, but in theory, you could claim that setting up harnesses, targets, verification and incentives for different tasks might be the learning that you are doing. I think that there can be a fair argument made that we are just moving the abstraction a layer up. The learning is then not in the specifics of the field knowledge, but knowing the hacks, monkey patches, incentives and goals that the models should perform.

It doesn't contradict the logic of the essay.

There are flowers that look & smell like female wasps well enough to fool male wasps into "mating" with them. But they don't fly off and lay wasp eggs afterwards.


But there is a distinction we can make between flowers and wasps. If there is no distinction we can make between Schwartz and non-Schwartz, then we are susceptible to the sample problem with or without AI. And if there is a distinction then we can use that distinction to test Bob, and make him learn from his test failures.

Sure.

But the whole point is that there is a significant difference between Schwartz and non-Schwartz, that only turns up after they start working for real, producing new work rather than rehashing established material, and it takes years to detect. By that time, Bob's forty.

It isn't a "sample problem"; it's a process problem. By perpetually raising the stakes and focusing on metrics (e.g. grades, number of publications for students, graduation rates for schools), we've created and fallen into a Poe's law trap. Adding a new metric isn't likely to help.

What might help? Making the metrics harder to game (e.g. oral exams, early and often), making them more discerning (grade deflation), moving the wrong-track consequences earlier (start holding people back in grade school, make it easier to fail out of high school, make getting into college harder, etc.), and changing the cash-cow funding models to remove the perverse incentives.

We aren't likely to do any of these things.


Just answered my own question to my satisfaction; they are stars.

The same specks, which match star charts, show up in two images taken a few moments apart at different exposures (links were given down-thread).


How do you know that they're stars? I believe they probably are stars as well (by visual comparison with a star chart, suitably rotated), but I've found no source for either claim.

I did find multiple sources, including TFA, for the brightest being Venus.


They're much brighter than the noise floor. Photographic noise doesn't really have such outliers.
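The claim that well-behaved sensor noise doesn't produce such outliers is easy to check with a toy simulation (entirely synthetic numbers, not the actual NASA frame): draw a large field of Gaussian noise pixels and compare the brightest one against a pixel tens of sigma above background, the kind a real source would produce.

```python
import random
import statistics

random.seed(0)

# 100k pixels of pure Gaussian sensor noise (synthetic stand-in for the image).
background = [random.gauss(0, 1) for _ in range(100_000)]
sigma = statistics.stdev(background)

noise_max = max(background)  # brightest pixel produced by noise alone
star_pixel = 50 * sigma      # a hypothetical real source, far above the floor

print(f"brightest noise pixel: {noise_max / sigma:.1f} sigma")
print(f"star-like pixel:       {star_pixel / sigma:.1f} sigma")
```

With this many samples the noise maximum lands around 4-5 sigma; a speck at tens of sigma that also repeats in a second exposure is very hard to explain as noise.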

Why would you think they are not stars? I'm not really sure what the confusion is. Are we leaning towards this being shot from a soundstage?

Well one of them is obviously Venus. How did you determine the others weren't stars?

I’m talking about the grainy noise over all the black parts (actually over the Earth disk as well), including beyond the window edge. The window edge itself looks like a denser and brighter stripe of stars.

Zoom into this higher-resolution version: https://www.nasa.gov/wp-content/uploads/2026/04/art002e00019...


Yep, that's definitely noise.

Sped through that, couldn't stomach the whole thing. Is there more to it than "argument by sneering dismissal"? (Basically, so far as I can tell, her point seems to be "this was intended as a joke to see if you're stupid, so if you believe it, you are, neener-neener!")

Some days I wonder if the most effective way to hide a message at this point in history is to simply write it out, as clearly as possible, in plain English. For some reason, many people have trouble reading (or even detecting) this.

Damn. Should have written this up and posted it two days ago; it would have made a great April Fools gag.

