Hacker News
bigyabai | 8 months ago | on: Apple's MLX adding CUDA support
The inference side is fine nowadays. llama.cpp has had a GPU-agnostic Vulkan backend for a while; it's the training side that tends to be a sticking point for consumer GPUs.
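
As a rough sketch of the inference path the comment is referring to: with a llama.cpp build that has the Vulkan backend enabled (for example the llama-cpp-python bindings compiled with the Vulkan option, whose exact flag name has varied across versions), offloading layers to any Vulkan-capable GPU is just a constructor argument. The model path, prompt, and generation settings below are placeholders, not anything from the thread.

    # Minimal sketch, assuming llama-cpp-python was installed with the
    # Vulkan backend enabled, e.g. something like:
    #   CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
    from llama_cpp import Llama

    # n_gpu_layers=-1 asks the runtime to offload all layers to the GPU;
    # with the Vulkan backend this works on any Vulkan-capable card,
    # not just CUDA hardware.
    llm = Llama(model_path="model.gguf", n_gpu_layers=-1)

    out = llm("Q: Why use a GPU-agnostic backend? A:", max_tokens=64)
    print(out["choices"][0]["text"])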