
This is more or less what I'm talking about. I wonder what possibilities lie in applying the huge numerical throughput of a GPU to the predictive parts of a CPU, such as memory prefetch prediction, branch prediction, etc.

Not totally dissimilar to the thinking behind NetBurst, which seemed to be all about having a deep pipeline and keeping it fed with quality predictions.
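
For what it's worth, "numerical" branch prediction already exists in embryo: perceptron predictors compute a dot product of the global branch history with a per-branch weight vector. A minimal sketch of the idea (table size, history length, etc. are illustrative, not any shipping design):

  HIST_LEN = 16                          # global-history bits fed to the predictor
  THRESH = int(1.93 * HIST_LEN + 14)     # training threshold from the literature

  class PerceptronPredictor:
      def __init__(self, table_size=1024):
          # one weight vector (plus bias) per table entry, indexed by branch PC
          self.weights = [[0] * (HIST_LEN + 1) for _ in range(table_size)]
          self.history = [1] * HIST_LEN  # +1 = taken, -1 = not taken
          self.table_size = table_size

      def predict(self, pc):
          w = self.weights[pc % self.table_size]
          # dot product of history with weights; the bias term uses input 1
          y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
          return y, y >= 0               # predict taken if the sum is non-negative

      def update(self, pc, y, taken):
          w = self.weights[pc % self.table_size]
          t = 1 if taken else -1
          # train only on a misprediction or a low-confidence output
          if (y >= 0) != taken or abs(y) <= THRESH:
              w[0] += t
              for i, hi in enumerate(self.history):
                  w[i + 1] += t * hi
          self.history = self.history[1:] + [t]

  # usage: y, guess = p.predict(pc); ...; p.update(pc, y, actually_taken)

The dot products are exactly the kind of embarrassingly parallel arithmetic a GPU is good at; the catch is latency, since a prediction that arrives after the fetch stage needed it is worthless.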



I'm not sure if your idea in particular is possible, but who knows. There may be fundamental limits to speeding up computation via speculative look-ahead, no matter how many parallel tracks you have, and it may run into memory-throughput issues.
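
Rough arithmetic on why that wall shows up fast, with made-up numbers: each unresolved branch doubles the number of speculative paths, and every path still competes for the same memory bandwidth.

  # Illustrative only: covering d branches of look-ahead needs 2**d parallel
  # tracks, and every track still wants fetch bandwidth from the same memory.
  MEM_BW_GBS = 1000      # assumed total memory bandwidth (GPU-class), GB/s
  BYTES_PER_TRACK = 8    # assumed bytes fetched per track per cycle
  FREQ_GHZ = 3           # assumed clock

  for depth in range(1, 11):
      tracks = 2 ** depth
      demand = tracks * BYTES_PER_TRACK * FREQ_GHZ   # GB/s demanded
      print(f"depth {depth:2d}: {tracks:5d} tracks, {demand:7d} GB/s needed"
            + ("  <-- over budget" if demand > MEM_BW_GBS else ""))

Even GPU-class bandwidth is exhausted after a handful of doublings, which is the sense in which the limit looks fundamental rather than an engineering problem.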

But take a look at the MOG code and see what you can do.

Check out H. Dietz' stuff. Links above.
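
For anyone who hasn't seen it: MOG is Dietz's MIMD On GPU work, i.e. running many independent instruction streams on SIMD hardware by interpreting them. A toy sketch of the core trick, with per-lane program counters and opcode masking (the ISA and scheduling heuristic here are invented for illustration, not MOG's actual design):

  import numpy as np

  # Toy ISA, one (opcode, operand) pair per instruction:
  #   0 = ADDI  r += operand
  #   1 = BNZ   if r != 0: pc = operand
  #   2 = HALT
  # Each lane has its own pc and register, so lanes can diverge; the
  # interpreter picks one opcode per step and advances only the lanes
  # whose current instruction matches it (the MIMD-on-SIMD trick).
  code = np.array([(0, 3),   # 0: r += 3
                   (0, -1),  # 1: r -= 1
                   (1, 1),   # 2: if r != 0 goto 1
                   (2, 0)],  # 3: halt
                  dtype=[('op', 'i4'), ('arg', 'i4')])

  pc = np.array([0, 1, 1, 3])          # lanes start at different points
  r = np.array([0, 5, 2, 0])
  halted = code['op'][pc] == 2

  while not halted.all():
      ops = code['op'][pc]
      # pick the most common pending opcode so the widest mask runs each step
      op = np.bincount(ops[~halted]).argmax()
      m = (ops == op) & ~halted        # active-lane mask for this opcode
      if op == 0:                      # ADDI
          r[m] += code['arg'][pc[m]]
          pc[m] += 1
      elif op == 1:                    # BNZ
          taken = m & (r != 0)
          pc[taken] = code['arg'][pc[taken]]
          pc[m & ~taken] += 1
      halted |= code['op'][pc] == 2

The interesting engineering in a real system is the scheduling policy, i.e. which opcode to run each step so that divergent lanes reconverge quickly; the bincount heuristic above is just the simplest greedy choice.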



