
Nvidia has done great work with CUDA, but I wonder if AMD/ARM's HSA will make life much easier in the future for the people using CUDA now: compute in a high-level language you already know. That would be a real paradigm shift. I just hope they end up supporting Rust, too.


Compute on the GPU is unlikely to be unified with the CPU in any meaningful way in the near future. The GPU is a massive SIMD machine, in contrast to the CPU.

Even if you use the same language for both, the style of code and the algorithms are very different.
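To illustrate the point: even in one language, CPU-style code happily carries state from one iteration to the next, while GPU-style (SIMT) code is written as an independent per-element kernel. A minimal sketch in plain Python (the function names and the SAXPY example are my own, purely illustrative):

```python
# CPU-style: one thread, sequential; a loop-carried dependency is fine.
def prefix_sum_cpu(xs):
    out, acc = [], 0
    for x in xs:
        acc += x          # each step depends on the previous one
        out.append(acc)
    return out

# GPU-style (SIMT) thinking: express the work as a per-element "kernel".
# Each hypothetical thread i computes one output, with no dependence on
# any other thread's result.
def saxpy_kernel(i, a, x, y):
    return a * x[i] + y[i]

def saxpy_gpu_style(a, x, y):
    # On a real GPU every i would run in parallel; here we just emulate
    # the thread grid with a map over the index space.
    return [saxpy_kernel(i, a, x, y) for i in range(len(x))]
```

The prefix sum as written above doesn't map to the GPU at all without being restructured into a parallel scan, which is exactly the kind of algorithmic rewrite the comment is talking about.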


Check out the AMD APU. We've had good experiences with recent Intel integrated GPUs as well, thanks to their embedded RAM. You're right, though, that we write our code with data-parallel, and sometimes manycore SIMT, patterns in mind.


I worked on the Intel integrated GPU's embedded RAM a couple of years ago (before release)! It was tough getting the hardware bugs out. Are you using it on Windows or Linux?


Frustratingly, only when we do the client-only demos on our MacBooks (Iris). Our 'real' version runs on AWS or dedicated Nvidia boxes, where we don't get it. On the other hand, this sort of thing makes me excited that we can expect significant hardware-driven speedups for our approach for years to come.



