Hacker News

It depends on your tolerance for error.

When you have a machine that can only infer rules for reasoning from its inputs (which are, more often than not, encoded in a roundabout way in an ambiguous language like English), you have necessarily created something without "ground."

That's obviously useful in certain situations (especially if you don't know the rules in some domain!), but it's categorically not capable of the same correctness guarantees as a machine that actually embodies a certain set of rules and is necessarily constrained by them.
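To make the distinction concrete, here is a minimal sketch (not from the thread; the domain and function names are illustrative) contrasting a machine that embodies a rule with one that merely approximates it, using balanced-parentheses checking as a stand-in:

```python
# A machine that embodies the rule: a string is balanced iff a depth-tracking
# scan never goes negative and ends at zero. The scan *is* the rule, so its
# verdict is constrained to be correct by construction.
def balanced(s: str) -> bool:
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a close with no matching open: reject immediately
                return False
    return depth == 0

# A machine that only approximates the rule: a crude count-based heuristic
# standing in for an inferred/learned model. It often agrees with the rule,
# but nothing in its construction constrains it to.
def balanced_guess(s: str) -> bool:
    return s.count("(") == s.count(")")

print(balanced(")("))        # → False: the rule-bound checker rejects it
print(balanced_guess(")("))  # → True: the heuristic is fooled
```

The heuristic gets most inputs right, which is exactly the point: usefulness without a correctness guarantee.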



Are you contending that every human derives their reasoning from first principles rather than being taught rules in a natural language?


I'm contending that, like any good tool, there is a context where it is useful, and a context where it is not (and that we are at a stage where everything looks suspiciously like a nail).



