
Yes, that's basically what I'm saying, just less bluntly. It's slightly more nuanced than "LLMs cannot reason," because lines of reasoning are often present in the training data and the model can sometimes apply them. It's just that the model can't be relied on to pick the correct line of reasoning for a given situation.

