But don't they have clauses like: "Our models sometimes hallucinate and give wrong answers"? Basically, use at your own risk, and you can't sue us if it goes wrong.
Exceptions exist; that's why we have courts: to figure it out.
You can't put up a sign on a rollercoaster that says it's going to kill customers if they ride and then get away scot-free when someone dies. Though it took a court to say that, I'm sure.
Same thing here. OpenAI is saying things, and a court is going to have to decide whether that's acceptable.
Anything an LLM generates should be verified before it's used in a high-stakes decision. Not knowing how LLMs work won't be a good defence. You can't blame ugly handwriting on the pen.