Is it actually established that they serve the models at cost? Amodei has said that Anthropic's models earn back their training cost; the reason they're so deep in the red is that they're investing substantially more in subsequent training runs, and R&D spend dwarfs inference cost [1]. If the tech plateaus, I'd expect much of that R&D spend to shift into simply powering inference.
[1] https://epoch.ai/data-insights/openai-compute-spend