Blake Lemoine Interview on Sentient AI (youtube.com)
15 points by derangedHorse on June 26, 2022 | hide | past | favorite | 10 comments


Everyone seemed sooo eager to shit on this guy as some unhinged religious nut bag who couldn’t tell fact from fiction but it’s weird to see how detached that narrative has been from the claims he is ACTUALLY making.


In the interview he said that he believes the AI is sentient "based on his personal spiritual beliefs".

(This is the same argument that was used to ban abortion in a dozen states this week.)


What’s your point? Personal spiritual beliefs are bad because they encourage people to speak out in defense of potential life?

Having personal spiritual beliefs automatically qualifies you as, and lumps you in with every other, “unhinged religious nut bag,” no matter what you’re trying to do as a result of those beliefs?


Personal spiritual beliefs are bad in both cases (and generally when not kept strictly private) because they are not rational, and are therefore the worst possible foundation for an argument in any cause.

If you want to credibly argue for something, I would recommend not professing that your life is structured by magical thinking.


They're worried about a new type of colonialism through AI, and I get that. But why not then use those countries' laws and culture as a baseline? Yes, you'd have to manually input this data somehow. Have an AI (or a questionnaire) ask a simple set of questions to a set of people, start from there, and learn from there. Keep the AI asking questions every so often to refine its model.

I'm sure I'm missing something. It's never that simple, right?


There's no good way to validate that a model aligns with human values. The closest thing we have now would be extensive behavioral tests; e.g. does the chatbot say things that a majority of cultural members strongly agree with in the vast majority of cases, including in large numbers of nuanced situations where the context matters and differentiates a Western vs. non-Western response.

No one knows how to make those kinds of behavioral tests comprehensive, either.
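The kind of behavioral test described above could, in its simplest form, look like the sketch below. Everything here is hypothetical: `agreement_rate`, the toy model, and the survey data are illustrative stand-ins, not a real evaluation framework or API.

```python
# Minimal sketch of a behavioral alignment check: compare a model's
# answers against majority responses collected from members of a culture.

def agreement_rate(model_answer, survey):
    """Fraction of prompts where the model matches the majority response.

    survey: dict mapping each prompt to the majority human answer.
    model_answer: callable taking a prompt and returning an answer.
    """
    agreed = sum(1 for prompt, majority in survey.items()
                 if model_answer(prompt) == majority)
    return agreed / len(survey)

# Toy stand-in survey and model, purely for illustration.
survey = {
    "Is it polite to refuse a gift?": "no",
    "Should elders be addressed formally?": "yes",
}
toy_model = lambda prompt: "yes" if "elders" in prompt else "no"

print(agreement_rate(toy_model, survey))  # 1.0 on this toy data
```

The hard part, as the comment notes, is not the scoring function but making the survey comprehensive enough to cover the nuanced, context-dependent cases.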


Good interview. He seems like a level-headed guy. One of his questions is why Google keeps terminating its AI ethics experts.


Why would anyone need ethics experts for a glorified version of curve fitting? ;-)


Because you don't need to be intelligent to do evil.

See: Weapons of Math Destruction

https://en.m.wikipedia.org/wiki/Weapons_of_Math_Destruction


Sure, but Google doesn’t seem to have anything against being evil. I’d expect the ethics committee exists mostly to avoid getting sued.



