> What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?
In a forum, it is the actual people who post who are responsible for sharing the recommendation.
In a chatbot, it is the owner (e.g. OpenAI).
But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.
Nah, OpenAI can’t have it both ways. If they’re going to assert that their model is intelligent and is capable of replacing human work and authority they can’t also claim that it (and they) don’t have to take the same responsibility a human would for giving dangerous advice and incitement.
Imagine a subreddit full of people giving bad drug advice. They're at least partially full of people who are intelligent and capable of performing human work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.
It'd be different if one were signing up for an OpenAI Drug Advice Product that advertised itself as an authority on drug advice. As it is, I think the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.
> I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.
If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who sells himself short but is actually great at math. For many people, LLMs are smarter than any friend they know, especially at the K-12 level.
You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.
This effect may force companies to simply ban chatbots from certain conversations entirely.
The "at math" is the important part here - I've met more than a few people who are super smart about math but significantly less smart about drugs.
I don't think that it's a good policy to forcibly muzzle their drug opinions just because of their good arithmetic skills. Absent professional licensing standards, the burden is on the listener to decide where a resource is strong and where it is weak.
Alternately, Google claimed Gmail was in public beta for years. People did not treat it like a public beta that could die without warning, despite being explicitly told to by a company that, in recent years, has developed a reputation for doing that exact thing.