Hacker News

But it's not an argument from the paper. The idea is that a superintelligence would have to understand the consequences of any program in order to check it for harm, and that this is undecidable because of the halting problem. The argument is correct, but trivial and not worth a paper, in my opinion.


The harming algorithm is just used in the paper's proof as a simple wrapper around the halting algorithm.
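To make the wrapper construction concrete, here is a minimal sketch of the reduction (my own illustration, not code from the paper; `is_harmful` and `do_harm` are hypothetical names). If a total harm decider existed, the wrapper would let it decide the halting problem, which Turing showed is impossible.

```python
# Sketch of the halting-problem reduction behind the argument.
# `is_harmful` is the hypothetical harm-checking decider; `do_harm`
# is a stand-in for some unambiguously harmful action.

def make_wrapper(program_source: str) -> str:
    """Build source for a program that first simulates an arbitrary
    program and only afterwards performs a harmful action.

    The wrapper is harmful if and only if the embedded program halts.
    """
    return (
        f"exec({program_source!r})\n"  # simulate the arbitrary program
        "do_harm()\n"                  # reached only if the simulation halts
    )

def decides_halting(program_source: str, is_harmful) -> bool:
    """Given a (hypothetical) total harm decider, decide halting:
    the wrapper is harmful exactly when `program_source` halts."""
    return is_harmful(make_wrapper(program_source))
```

Since no `decides_halting` can exist, no total `is_harmful` can exist either; the harm check inherits the undecidability of halting.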



