You have no idea whether you're talking to an LLM right now, and neither do I. That's good enough for me.


I dunno, I am rather certain your comment was not made by an LLM. Moreover, I am certain you knew mine wasn't either.

And that's before the interrogation, which is the entire point of the test.

IMO, the Turing test stands, but the experience you are referring to is basically a sub-human form of AGI.


It's crystal-clear that a model that was trained specifically to fool expert interrogators in a Turing test would, in fact, be able to do so. You'd have to sandbag the model just to keep it from tipping its hand by being too good.

We don't have any such models right now, AFAIK, so we can't run such a test. They wouldn't be much good for anything else, and would likely spark ethical concerns due to potential for misuse. But I have no doubt that it's possible to train for the Turing test.


I mean, is it though? The top reasoning models suggest walking to a car wash.

The top reasoning models suggest taking a car to the car wash.

Not 100% of the time, according to the comments.

SotA doesn't matter, though. Only the first couple of time derivatives matter: how fast capabilities are improving, and whether that improvement is accelerating. Looking good for the clankers, not so much for us...


