This isn’t the right thread for this, but you can look at the model and the difference in parameter counts. A MacBook Pro will be cheaper but have slow inference, whereas using GPU cards will be more expensive but faster for inference (usage) and fine-tuning. If you have the money, go for the MacBook Pro, since it seems that networks under 100B parameters that are trained for a very long time have improved performance, and that many parameters would fit in the MacBook Pro’s memory.
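To get a feel for whether a model fits, you can do the napkin math yourself. Here's a rough sketch (my own illustrative numbers, not from anything above) of weight memory at different precisions, ignoring KV cache and runtime overhead:

```python
# Rough memory estimate for running a model locally: weights only,
# ignoring KV cache and framework overhead (illustrative, not exact).
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB for a given parameter count."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Example: a 70B-parameter model at two common precisions.
fp16 = model_memory_gb(70, 2.0)   # 16-bit weights
q4 = model_memory_gb(70, 0.5)     # 4-bit quantized weights

print(f"70B @ fp16:  ~{fp16:.0f} GB")  # ~140 GB
print(f"70B @ 4-bit: ~{q4:.0f} GB")    # ~35 GB
```

So a quantized 70B model fits comfortably in a high-memory MacBook Pro's unified memory, while at fp16 you'd need multiple GPU cards just to load it.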