Hacker News

You can run a bunch of LLMs on a phone (such as llama), depending on which phone you have.


Apple would need to stick an M4 in the next iPhone to even hope to run something like this, and I bet GPT-4o would run either slowly, poorly, or not at all on a top-spec M4.


Of course GPT-4, or even GPT-3, is impossible to run on any consumer product. As far as I know it's an ensemble of several models which are huge by themselves, with enormous hardware requirements.

But there are a lot of smaller LLMs, and my point is that these models can already run on mobile phones.


You can run an SLM (small language model) on an iPhone, and not much else: it would use up all of the device's resources. The iPhone only has 6 GB of RAM.
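As a back-of-the-envelope sketch (my own assumed numbers, not from the thread above): the weights of a model take roughly `parameters × bits-per-weight / 8` bytes, which shows why 6 GB of RAM caps what an iPhone can hold.

```python
# Estimate the memory footprint of a model's weights at different
# quantization levels. This ignores KV cache, activations, and runtime
# overhead, so real requirements are somewhat higher.

def weight_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB (10^9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params in (1.0, 3.0, 7.0):
    for bits in (16, 8, 4):
        gb = weight_footprint_gb(params, bits)
        print(f"{params:>4.0f}B params @ {bits:>2}-bit: ~{gb:.1f} GB")
```

By this estimate a 7B model at 16-bit needs ~14 GB, far beyond a 6 GB phone, but shrinks to ~3.5 GB at 4-bit, which is roughly why quantized 7B-class models are about the ceiling for phones like this.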


Where do you draw the line? GPT-2 was introduced as an LLM, and you can easily run it on devices more limited than a recent iPhone. Did it stop being an LLM when bigger models were released? Is Llama 7B an LLM or an "SLM"?


Relatively speaking. It's like how a machine that met the definition of a supercomputer 30 years ago is a cheap Android phone in your pocket today.

You can certainly run a transformer model, or any other neural-network-based model, on an iPhone. Siri is probably some kind of neural network. But obviously a model running on device is nowhere near comparable to the current state-of-the-art LLMs. Can't fit a $40k GPU in your pocket (yet).

A transformer running on an iPhone would be roughly 2 orders of magnitude smaller than the state-of-the-art LLM (GPT-4, with a trillion parameters).
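The "2 orders of magnitude" gap can be checked with quick arithmetic. The figures below are assumptions for illustration: GPT-4 is rumored (not confirmed) to have ~1e12 parameters, and ~1e10 (10B) is near the top of what a quantized on-device model can be.

```python
import math

gpt4_params = 1e12    # ~1 trillion (rumored, not confirmed)
phone_params = 1e10   # ~10 billion, roughly the on-device ceiling

# Size ratio expressed in orders of magnitude
print(math.log10(gpt4_params / phone_params))  # 2.0

# Even at aggressive 4-bit quantization, a trillion-parameter model's
# weights alone would need ~500 GB, versus a phone's 6 GB of RAM.
print(gpt4_params * 4 / 8 / 1e9, "GB")  # 500.0 GB
```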


> Did it stop being an LLM when bigger models were released?

Yes, it does make sense that what counts as "large" for an LLM is a moving target.



