
The take that it's a sophisticated grammar parser is fine. Could be, lol. But when it's better than humans, the definitions can just get tossed as usage changes. You can't deny its impact (or you can, but it's a bit intellectually dishonest to call it old tech with money behind it and nothing special, from impact alone). But that's your experience, so it's fine.

For the stuff about it being a hard problem: now I know you aren't deliberately making a false equivalence, right? But I did say simple, not easy. You are saying hard, not complex.

I think there's too much digression here. You're clearly smart and knowledgeable but think LLMs are overrated; fine.

And yes, I know it's always the best time to say it; that's the point of a glass half full, some sugar in the tea, or anything else nice.



(It's not just a grammar parser, for the record: that was imprecise of me. The best description of the thing is the thing itself. But, when considering those properties, that's sufficient.)

> But when it's better than humans, the definitions can just get tossed as usage changes.

I'm not sure what this means. We have the habit of formally specifying a problem, solving that specification, then realising that we haven't actually solved the original problem. Remember Deep Blue? (We could usually figure this out in advance – and usually, somebody does, but they're not listened to.) ChatGPT is just the latest in a long line.

> You are saying hard not complex.

Because reasoning is simple. Mathematical reasoning can be described in, what, two-dozen axioms? And scientists are making pretty good progress at describing large chunks of reality mathematically. Heck, we even have languages like (formal dialects of) Lojban, and algorithms to translate many natural languages into it (woo! transformers!).

… Except our current, simple reasoning algorithms are computationally intractable. Reasoning becomes a hard problem with a complex solution if you want it to run fast: you have to start considering special cases individually. We haven't got algorithms for all the special cases, and those special cases can look quite different. (Look at some heuristic algorithms for NP-hard problems if you want to see what I mean.)
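To make the NP-hard-heuristics point concrete, here's a minimal sketch (my own illustrative example, not anything specific the thread refers to): the classic greedy heuristic for set cover. Exact set cover is NP-hard, but this simple special-purpose strategy runs in polynomial time and stays within a ln(n) factor of optimal.

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for set cover: repeatedly pick the
    subset that covers the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered by these subsets")
        chosen.append(best)
        uncovered -= set(best)
    return chosen

cover = greedy_set_cover(
    range(1, 6),
    [[1, 2, 3], [2, 4], [3, 4], [4, 5]],
)
print(cover)  # [[1, 2, 3], [4, 5]]
```

The heuristic is fast and often good enough, but it's one hand-crafted answer for one special case; a different NP-hard problem (say, TSP) needs a completely different-looking heuristic, which is the "complex solution" part.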

> but think LLMs are overrated;

I think they're not rated. People look at the marketing copy and the hype, have a cursory play with OpenAI's ChatGPT or GPT-4, and go "hey, it does what they say it can!" (even though it can't). Most discussion seems to be about that idea, rather than the thing that actually exists (transformer models, but BIG). … But others in this thread seem to be actually discussing transformers, so I'll stop yelling at clouds.



