Hacker News

This is an absurd argument that you are just making for the sake of argument, and you know it. There's a difference between someone getting things wrong and an LLM shamelessly making something up out of whole cloth.


No; I'm much more confident in ChatGPT's accuracy than I am in a random podcast's accuracy.

Podcasts tend to be relatively terrible, accuracy-wise. If the alternative is learning from ChatGPT, your odds of getting correct information are substantially higher.

Podcasts are entertaining! But not where I would go to learn anything.


The problem with ChatGPT isn't that it gets facts wrong, but that it's exactly what the categorical name suggests: a large language model.

At one point I came across a series of "are CJK languages related" questions on Quora with cached ChatGPT responses[1], all grammatically correct and very natural, largely turboencabulators, sometimes contradicting themselves even within a single response.

Podcasters? They're not _this_ inconsistent.

1: https://imgur.com/a/7tlNDll


Be aware that you're talking about Quora's implementation of ChatGPT here. As far as I know, the cached answers were generated with an incredibly outdated version, which is definitely not indicative of its current quality.

Even worse, I think they actually prime it with answers already posted on the thread, or even just related threads. For example, one of the answers to the first question mentions the same Altaic root as ChatGPT's answer, and I've found multiple people who are seeing their own rephrased answers in the response.

If you preprompt ChatGPT with questionable data, then the answer quality will be massively degraded. I've noticed many times now that Bing will rephrase incorrect information or construct a very shallow summary out of unrelated articles when internet searches are allowed, but is able to generate a cohesive and detailed summary when they're disabled.

Throwing random answers - some contradicting each other and some talking about subtly different aspects of the topic - into a session without further guidance just isn't a great idea.


What's the issue with the answer for "Are Korean and Chinese mutually intelligible"? The highlighted part is definitely true, at least.


That part is correct, but not consistent with the others.

My problem is not that GPTs are too often wrong; it's that they always prioritize syntax over facts, since they are _language_ models.

To an LLM, the sentence "Colorless green ideas sleep furiously" can read as more natural than a clumsy but true "water is wet", because the model scores linguistic form, not truth. That's problematic for many use cases, including podcast replacement. Sometimes we humans want the AI to say "water is definitively wet"; that has been attempted by training LLMs to treat factoids as the more "correct" output, but that isn't a real solution, and it remains an architectural problem for these pseudo-AGI apps.


Listen to better podcasts. :P


It's a mistake to listen to a podcast to learn important information.


That's just like, your opinion, man.

There are podcasts for almost every topic where experts are present. Journalists, scientists, activists, researchers, etc. can be heard in podcasts; I don't really see why it's generally a mistake to listen to a podcast to learn important information.

During the pandemic my partner was attending university from home, listening to their professors via MS Teams. These classes were also recorded so that they could be listened to at a later point. In some ways that's just a professional podcast.


I think you know that isn’t a podcast in the sense that we’re discussing.

And depending on the class and the institution, I may still trust ChatGPT more than what gets taught.


Of course the part about university classes is different, but you seem to ignore everything else I've said.

There are tons of podcasts involving experts talking about their field of expertise, how can it be a mistake to listen to such podcasts to gather information?


It’s a mistake because what drives podcasts is their popularity, not their accuracy.

There is no “sort by accuracy” button in any podcasting app, nor are they peer reviewed.

Furthermore, podcasts are not a review of the body of knowledge on a subject; they’re often a complete layperson interviewing a single member of a given field, at best. Almost never do the views of any individual actually represent any field as a whole.

So once we’ve thrown out the concept of accuracy and completeness, ChatGPT fares exceedingly well in comparison. You’d do much worse than ChatGPT for idle conversation level accuracy.


What you just wrote makes more sense applied to LLM output than to podcasts! You could just as easily argue that "radio" or "news" is all bad if you don't want to differentiate between different forms of expression and communication within a medium. (Which, obviously, would be silly.)


Sorry, what? Nothing I wrote applies to LLMs; they are not optimized for popularity, they’ve been meticulously designed and built to be as accurate as possible.


No, they're not. If they were, they would default to a temperature of 0 and have no top-p sampling or frequency/presence penalties, and frankly wouldn't represent knowledge as a function of language to begin with. They're designed to be convincing as a "presence" and to output reasonable-sounding language in context, with accuracy as an afterthought.
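To make the point concrete, here's a generic sketch of what those decoding knobs do (this is an illustration of standard temperature and nucleus sampling, not any vendor's actual implementation; all names are hypothetical):

```python
# Illustrative sketch: how temperature and top-p trade determinism for
# variety when sampling a token from an LLM's output distribution.
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, rng=random):
    """Pick a token index from raw logits.

    temperature == 0 collapses to greedy (argmax) decoding;
    top_p < 1 restricts sampling to the smallest set of tokens whose
    cumulative probability reaches top_p (nucleus sampling).
    """
    if temperature == 0:
        # Fully deterministic: always the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])

    # Softmax with temperature scaling (numerically stabilized).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Nucleus (top-p) filtering: keep the most probable tokens until
    # their cumulative mass reaches top_p, then sample from that set.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

logits = [2.0, 1.0, 0.1]
print(sample_token(logits, temperature=0))  # always index 0
```

With temperature 0 the output is pinned to the most likely token; raising the temperature and loosening top-p deliberately mixes in plausible-sounding alternatives, which is a choice about fluency and variety, not factual accuracy.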


They absolutely have not been meticulously designed to be as factually accurate as possible.



That link doesn't say anything about the fundamental design goals of the network architecture or training process. It doesn't even mention factual correctness, except in the sense that it may broadly fall under "producing a desired output".


>Almost never do the views of any individual actually represent any field as a whole.

Setting aside the many individuals who currently lead their fields, history is filled with groundbreaking heretics.

https://informationisbeautiful.net/visualizations/mavericks-...


History is filled with many more plain ol’ heretics however, and we’re exceptionally bad at telling the difference in the meantime.

You, most importantly, are complete garbage at telling the difference between a crackpot and an innovator in a field you know nothing about.

Trying to drink unpasteurized knowledge will infect you a lot more quickly than it will enlighten you.


I'd say that I regret engaging with you, but that last line is golden.

It's not censorship, propaganda and official disinformation, it's "Pasteurized Knowledge®"


But that's the problem; you falsely believe you're capable of differentiating between propaganda and quality. You're not, on topics of which you are not an expert.

That's what flat-eartherism is, that's what Jewish space lasers are. The argument you're giving is a tacit endorsement of that kind of "inquiry", which for reasonable people is unconscionable.


This is such a weird take. People don't listen to podcasts the way they read a Wikipedia page, to learn "facts". Why are people even comparing ChatGPT to podcasts?


Slightly off-topic/meta but this debate reminds me of one people had 20 years ago: Should you trust stuff you read on Wikipedia or not?

In the beginning people were skeptical but over time, as Wikipedia matured, the answer has become (I think): Don't blindly trust what you read on Wikipedia but in most cases it's sufficiently accurate as a starting point for further investigation. In fact, I would argue people do trust Wikipedia to a rather high degree these days, sometimes without questioning. Or at least I know I do, whether I want to or not, because I'm so used to Wikipedia being correct.

I'm wondering what this means for the future of LLMs: Will we also start trusting them more and more?


Why are you forced to listen to "random" podcasts vs. podcasts by people you trust?

This argument is so dishonest it's infuriating.


Trust is not a rational behavior, nor is it indicative of reliability of fact.


Sure, but that doesn't make it "random". I don't get why the top commenter was listening to podcasts knowing full well that they'd be given possibly inaccurate facts. I have a hard time accepting that podcasts are audible Wikipedia; it's strange that such a take is the top-voted comment.


I listen to podcasts; they’re fun! But I am comfortable operating with incomplete information, and thus know how to treat a low quality source. I also chat with LLMs to better understand topics, and am better off than the folks here who can’t do that as a result.

The main thing I’m getting from this discussion is that a lot more very smart people seem to have deluded themselves into thinking knowledge is objective or “locked in” than I had initially realized. The desire for certainty is an extremely human thing, but it’s a dead end, intellectually.


Yeah, exactly. "ChatGPT replaced my podcasts" is kind of silly. People don't listen to the Joe Rogan Experience to learn facts about how a particular jiu-jitsu choke works. I don't know what kind of podcasts OP replaced with ChatGPT, but I call BS.


Is there a similar difference between an LLM getting things wrong and someone shamelessly making something up out of whole cloth? That's closer to what actually happens.


I'm pretty sure that in the podcasts I listen to, no one is shamelessly making something up completely.


The kinds of mistakes an LLM makes aren’t nearly as bad as the kind of nonsense you can find humans saying, particularly in podcasts.


Podcasts make stuff up from thin air all the time. Sometimes even maliciously.



