
I sorta want to agree, but really no: it's a rubber ducky that doesn't give you the chance to come to your own conclusions, and even when it does, it leaves you second-guessing them.



I find it's the opposite: LLMs can be made to agree with anything, largely because that agreeability is baked into their system prompt.
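
For what it's worth, you can see that steerability directly through an API, where you control the system prompt yourself. A minimal sketch (Python, openai>=1.0); the model name and both prompts are my own illustrations, not any vendor's actual system prompt:

  # Same model, same question, agreeableness steered either way
  # purely via the system prompt.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def ask(system_prompt: str, question: str) -> str:
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # assumed model; swap for whatever you use
          messages=[
              {"role": "system", "content": system_prompt},
              {"role": "user", "content": question},
          ],
      )
      return resp.choices[0].message.content

  q = "I think goto is fine in modern C code. Agree?"
  print(ask("You are a supportive assistant. Validate the user's views.", q))
  print(ask("You are a blunt reviewer. Challenge weak claims directly.", q))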

Yeah, this. Every conversation inevitably ends with "you're absolutely right!" The number of "you're absolutely right"s per session is roughly how I measure model performance (inverse correlation).
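
If you actually wanted to compute it, the metric is a one-liner over a saved transcript. A toy sketch; the file name is made up:

  import re

  # Count sycophancy markers per session, case-insensitive.
  with open("session.log") as f:
      transcript = f.read()

  hits = len(re.findall(r"you'?re absolutely right", transcript, re.IGNORECASE))
  print(f"sycophancy score: {hits} (higher = worse)")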

Ha, touché!


