> Many of those people would be adequately served with a pair of glasses.
These people are being served by a preview of the service _right now_.
> Even if it could help people, it's an open question whether it would be safe to, for example, use this to scan medication when it is only a probabilistic model that may hallucinate something that isn't actually there.
Any OCR solution could also make a mistake, like misrecognizing a dosage on a prescription label.
> What you're talking about is a speculative use of a service that might one day exist based on this technology.
> What I am talking about is this actual service.
GPT-4 is six months old. ChatGPT is less than a year old. Why would you benchmark a service by the initial public preview? Of course it's _speculative use_, the damn thing has had its tires kicked for like a day.