Hacker News
kurthr | 3 months ago | on: Olmo 3: Charting a path through the model flow to ...
Are there quantized (e.g. 4-bit) models available yet? I assume the training was done in BF16, but it seems like most inference models are distributed in BF16 until they're quantized.
edit: ahh, I see it on huggingface: https://huggingface.co/mlx-community/Olmo-3-1125-32B-4bit
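(For anyone unsure what "4bit" means here in practice: the BF16 weights are compressed to 4-bit integers with a per-group scale. This is a toy numpy sketch of group-wise symmetric 4-bit quantization, illustrative only; the actual scheme used by MLX/other quantizers has its own group sizes, packing, and often zero-points.)

```python
import numpy as np

# Toy group-wise symmetric 4-bit quantization (illustrative sketch only;
# real 4-bit checkpoints use their own group sizes, packing, zero-points).
def quantize_4bit(w, group_size=32):
    w = w.astype(np.float32).reshape(-1, group_size)
    # One scale per group; signed int4 usable range here is -8..7.
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid divide-by-zero for all-zero groups
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_4bit(w)
# Max per-element reconstruction error is bounded by half a scale step.
err = np.abs(dequantize_4bit(q, s).ravel() - w).max()
```

The point of the per-group scale is that outlier weights only degrade precision within their own 32-element group rather than the whole tensor, which is why these 4-bit checkpoints stay usable for inference.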