
>That they can be randomly, nondeterministically and confidently wrong, and there is no way to know without manually reviewing every output.

I think I can confidently assert that this applies to you and me as well.



I choose a computer to do a task because I expect it to be much more accurate, precise, and deterministic than a human.


That’s one set of reasons. But you might also choose to use a computer because you need something done faster, cheaper, or at a larger scale than humans could manage—but where human-level accuracy is acceptable.


Honestly, I am surprised by your opinion on this matter (something echoed a few times in other comments too). Let's switch the context for a bit: human drivers kill a few thousand people, so why impose so many regulations on self-driving cars? Why not kick out pilots entirely, since autopilot can do smooth (though tire-damaging) landings and takeoffs? How about we lay off all government workers and regulatory auditors, since LLMs are better at recall and most of those paper pushers do subpar work anyway?

My analogies may sound like an apples-to-gorillas comparison, but the point of automation is that it performs 100x better than humans with the highest safety. Just because I can DUI and merely get a fine does not mean a self-driving car should drive without fully operational sensors; both bear the same risk of killing people, but one faces far stricter regulatory restrictions.


There's an added distinction: if you make a mistake, you are liable for it, including jail time, community service, being sued by the other party, etc.

If an LLM makes a mistake? Companies get off scot-free (they already do), unless there's a sufficient loophole for a class-action suit.



