That's a separate implementation detail. If you were to break the system into microservices, the model is a binary blob behind a microservice wrapper, and web search is another microservice entirely. You really don't want the entire web to be constantly compressed and re-released as a new model iteration; that's hugely inefficient.
Technically you’re correct, but from a product point of view one should be able to get answers beyond the cut-off date. The current product fails to realise that some queries, like “who is the current president of the USA”, are time-based and may need a search rather than an excuse.
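To make the point concrete, here's a minimal sketch of that routing idea: time-sensitive queries go to a search backend, everything else goes straight to the model. The regex and the backend names are purely illustrative placeholders, not any real product's logic.

```python
# Hypothetical router: time-sensitive queries hit a live search service,
# everything else is answered from the model's frozen training data.
import re

# Toy heuristic for "freshness" cues; a real system would use a classifier.
TIME_SENSITIVE = re.compile(
    r"\b(current|latest|today|right now|this (week|month|year))\b",
    re.IGNORECASE,
)

def route(query: str) -> str:
    """Pick a backend based on whether the query depends on fresh facts."""
    if TIME_SENSITIVE.search(query):
        return "search"  # fetch live results, then let the model summarise
    return "model"       # answer from the model's cut-off-dated knowledge

# route("Who is the current president of the USA?") → "search"
# route("Explain how transformers work") → "model"
```

The real work is in the classifier, of course, but even a crude freshness check beats replying with the cut-off-date excuse.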
This only holds water if they can retrain frequently, which they haven't demonstrated yet. But if they're as efficient as they seem, then maybe.