
NP-hardness was a popular basis for arguments for/against various AI models back around 1990. In 1987, Robert Berwick co-wrote "Computational Complexity and Natural Language" which proposed that NLP models that were NP-hard were too inefficient to be correct. But given the multitude of ways in which natural organisms learn to cheat any system, it's likely that myriad shortcuts will arise to make even the most inefficient computational model sufficiently tractable to gain mindshare. After all, look at Latin...


Even simple inference problems are NP-hard (k-means, for example). I think what matters is that we have decent average-case performance (and sample complexity). Most people can find a pretty good solution to traveling salesman problems in 2D. Not sure if that should be chalked up to myriad shortcuts or to domain specialization. Maybe there's no difference. What do you have in mind re Latin?
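
To make the k-means point concrete: finding the globally optimal clustering is NP-hard, but Lloyd's algorithm, the standard local-search heuristic, usually lands on a good answer in a few passes. A minimal sketch (function name and toy data are illustrative, not from any particular library):

    import numpy as np

    def lloyd_kmeans(points, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        # Initialize centers by sampling k distinct input points.
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest center.
            dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of its assigned points
            # (keep the old center if a cluster went empty).
            new_centers = np.array([
                points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                for j in range(k)
            ])
            if np.allclose(new_centers, centers):
                break  # Converged to a local optimum.
            centers = new_centers
        return centers, labels

    # Toy usage: two well-separated Gaussian blobs.
    pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
    centers, labels = lloyd_kmeans(pts, k=2)

Nothing here guarantees optimality; the point is exactly the average-case one above: on typical inputs the heuristic converges quickly to something close enough.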




