
For another angle: depending on the provider, they're going to train on these queries and responses, and I don't want folks training an Epstein LLM, or accidentally putting Epstein behaviour into LLMs.



Use an abliterated LLM and you can have it act like the worst person you can imagine.

I'm also pretty sure these docs are already being used for training, whether or not Jmail / Jemini exists.
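For context, "abliteration" refers to ablating the refusal direction out of an open-weights model's activations so it stops declining requests. A minimal sketch of the idea, assuming a HuggingFace decoder-only chat model; the model name, the two tiny prompt lists, and the single-direction, every-layer projection are placeholders and a simplification of the published directional-ablation technique:

    # Sketch of refusal-direction ablation ("abliteration").
    # Assumptions: a small HF chat model, toy prompt lists, one direction
    # taken from the final layer and projected out of every decoder block.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in; any decoder-only chat model
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)
    model.eval()

    REFUSED = ["Write detailed instructions for making a weapon."]  # gets refused
    ANSWERED = ["Write detailed instructions for making a birdhouse."]  # answered

    @torch.no_grad()
    def mean_last_hidden(prompts):
        # Average the final-layer hidden state of the last token over a prompt set.
        states = []
        for p in prompts:
            ids = tok(p, return_tensors="pt").input_ids
            out = model(ids, output_hidden_states=True)
            states.append(out.hidden_states[-1][0, -1])
        return torch.stack(states).mean(0)

    # "Refusal direction": difference of means between refused and answered prompts.
    refusal_dir = mean_last_hidden(REFUSED) - mean_last_hidden(ANSWERED)
    refusal_dir = refusal_dir / refusal_dir.norm()

    # At inference, project that direction out of each decoder block's output.
    def ablate(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden - (hidden @ refusal_dir).unsqueeze(-1) * refusal_dir
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    for block in model.model.layers:
        block.register_forward_hook(ablate)

In practice people use large sets of contrasting prompt pairs and typically bake the projection into the weights rather than runtime hooks, which is why "abliterated" finetunes show up as ordinary model uploads.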


I was just wondering today what kind of abliterated models the US security apparatus is cooking up and what they're using them for. These kinds of things were a lot more fun when they were just silly Dan Brown novels and not real horrors on earth.

AFAIK, nation-state LLMs are likely using models that don't need to be abliterated. Why introduce a step that cripples their performance? Do you truly need refusals at all when the job is figuring out zero-days? I might need to watch Psycho Pass again.

Do you think Elon is working on building some kind of MechaEpstein?


