Hacker News

OpenAI spends less on training than inference, so the worst case scenario is less than double the cost after factoring in training. Inference is still cheap.


Inference is cheap. Training is cheaper. Then where's all the money going? OpenAI is reporting heavy losses, but you're saying the unit economics of inference are all good. What are they spending money on?


Their spending is not a problem. It's quite low for a top-tier hard tech company that's also running a consumer service with 500M active users. They are making a loss because 95% of their users are on free accounts, and for now they're choosing not to monetize those users in any way (e.g. ads).


A few months ago, sama tweeted that the $200 tier was priced too low to cover its costs.


At that price level you run into serious adverse selection.


Meaning someone paying $200 monthly is going to use it as much as possible to get their money's worth.

I slightly disagree.

My hypothesis would be that the distribution for $200 users would be bimodal.

That is, there would be one concentration of super-heavy power users.

The second concentration would be people who want the "best AI" but are not power users, and who assume that most expensive means best.

Their actual usage would look just like that of a normal free-tier ChatGPT user.


How credible are his PR statements, though?


You think "we're losing money on this subscription tier" is good PR for their investors?


"This car is priced so low we're practically giving it away!"


But do you actually own anything with a subscription, in a market that has normalized hiking prices?

There used to be fixed-term contracts with service providers, and those (IIRC) usually shielded consumers from exorbitant increases.


Salary, mostly. It's useful to separate out the GPU cost of training from the salary cost of the people who design the training systems. They are expensive.

That does not mean, however, that inference is unprofitable. The unit economics of inference can be profitable even while the personnel costs of training next-generation models are extraordinary.
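That separation can be made concrete with a toy model. Every number below is hypothetical, chosen only to illustrate how positive per-query margins can coexist with an overall loss:

```python
# Toy model: inference can have positive unit economics while the
# company still loses money overall on fixed costs such as salaries
# and training-run GPU time. All numbers here are invented.

def annual_profit(paid_queries, revenue_per_query, cost_per_query,
                  salaries, training_gpu_cost):
    """Inference contribution margin minus fixed costs."""
    margin = paid_queries * (revenue_per_query - cost_per_query)
    return margin - (salaries + training_gpu_cost)

# 10B paid queries at $0.002 revenue / $0.001 cost each: inference
# itself earns a positive margin...
per_query_margin = 0.002 - 0.001
# ...but $5B of assumed salaries and training GPU spend swamps it.
profit = annual_profit(10e9, 0.002, 0.001,
                       salaries=2e9, training_gpu_cost=3e9)
print(per_query_margin > 0, profit < 0)  # True True
```

The point is structural, not numerical: the loss comes from the fixed-cost terms, not from the per-query term.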


> Then where's all the money going?

They are giving vast amounts of inference away as part of their free tier to gain market share. I said inference is cheap, not that it is free. Giving away a large amount of a cheap product costs money.
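A back-of-the-envelope sketch shows the scale. The per-query cost and usage rate below are assumptions for illustration, not figures OpenAI has published; only the user counts come from upthread:

```python
# Giving away a cheap product at huge scale is still expensive.
# Per-query cost and usage rate are assumed, not published figures.

free_users = 475e6             # ~95% of 500M active users (from upthread)
queries_per_user_per_day = 5   # assumed
cost_per_query = 0.002         # assumed average cost in dollars

daily_cost = free_users * queries_per_user_per_day * cost_per_query
annual_cost = daily_cost * 365
print(f"${daily_cost / 1e6:.1f}M/day, ${annual_cost / 1e9:.2f}B/year")
```

Even at a fifth of a cent per query, those assumptions put the free tier in the billions of dollars per year.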

> you're saying the unit economics of inference are all good

Free tiers do not contradict positive unit economics.


Salaries?


This is not generally true. Inference costs have only just begun to spike with the 'test-time scaling' trend[1]. I imagine most OpenAI users are on the free tier, and the mini models available to them cost only a few cents per task[2]. The chart from The Information featured in this Reddit thread seems more reasonable[3].

That was posted in October, though, so the reasoning-model costs have had little time to show up. It's also worth noting that their revenue is on track to more than double this year[4], and one can't form a complete picture without knowing the revenue earned on the inference these reasoning models provide.
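To see why test-time scaling moves the needle so much: reasoning models emit long chains of thought, multiplying output tokens per task, often at a higher per-token price. The token counts and prices below are invented for illustration, not actual OpenAI pricing:

```python
# Hypothetical comparison of cost per task: a small model giving a
# short answer vs. a reasoning model emitting a long hidden trace.
# All token counts and per-token prices are made up.

def task_cost(output_tokens, price_per_1k_tokens):
    return output_tokens / 1000 * price_per_1k_tokens

mini = task_cost(500, 0.0006)         # small model, short direct answer
reasoning = task_cost(20_000, 0.06)   # long reasoning trace, pricier tokens
print(mini, reasoning, reasoning / mini)
```

With these made-up figures the reasoning task costs thousands of times more, which is why cost charts from before the reasoning-model era can look so different.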

[1] https://techcrunch.com/2024/12/23/openais-o3-suggests-ai-mod...

[2] https://techcrunch.com/2024/12/23/openais-o3-suggests-ai-mod...

[3] https://www.reddit.com/r/singularity/comments/1g0acku/someho...

[4] https://techcrunch.com/2025/06/09/openai-claims-to-have-hit-...



