
The framing of the question admits only one reasonable answer: There is no such threshold. Fooling people into believing something doesn't make it so.


Most people's interactions are transactional. When I call a company and talk to an agent, and that agent solves my problem, where did the fooling occur, regardless of whether the agent is a person or an AI? The ability to solve problems based on context is intelligence.


What criteria do you suggest, then?

As others have suggested, the models will improve at a faster rate than humans will get smarter.



