Hacker News

The "write like the people who wrote the info you want" pattern absolutely translates across models.


Yes and no. I've found that the order in which you give instructions also matters for some models. With LLMs you really have to treat them as black boxes: you can't assume one prompt will work for all of them. In my experience it's honestly a lot of trial and error.
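Since instruction order is model-dependent, the trial-and-error loop usually means generating every ordering of the same instructions (with or without a persona line like the "write like the people who wrote the info" framing) and seeing which variant works for a given model. A minimal sketch of that variant generation, assuming a hypothetical `build_prompts` helper and illustrative instruction/persona text, with the actual model call left out:

```python
from itertools import permutations

def build_prompts(instructions, persona=None):
    """Generate prompt variants: one per ordering of the instructions,
    optionally prefixed with a persona line. Feed each variant to the
    target model and compare outputs by hand or with a scoring step."""
    variants = []
    for order in permutations(instructions):
        body = "\n".join(order)
        if persona:
            body = persona + "\n" + body
        variants.append(body)
    return variants

# Illustrative instructions and persona; not from any specific model's docs.
instructions = [
    "Summarize the article in three bullet points.",
    "Use formal, academic language.",
]
persona = "You are a senior research editor."

for prompt in build_prompts(instructions, persona):
    print(prompt)
    print("---")
```

For two instructions this yields two variants; the point is that each model may prefer a different ordering, so the loop over variants replaces any assumption that one prompt is portable.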




