What do you mean, real-time GPUs? And: what interconnect are you running on? How does your scaling look? Is this just for embarrassingly parallel stuff?
Just curious; I'm running multi-GPUs myself for molecular dynamics.
Imagine moving a slider in your viz to change a filter, physics setting, etc., and having the cluster immediately start feeding back new results.
We started by building fast-start, multitenant access to single GPUs and are approaching peak throughput on those (full-GPU Barnes-Hut, 10X over Keshav's work). We're now focused on distributing, and since we're more interested in running on many GPUs for scale-out, we're concentrating on communication-avoiding algorithms. This opens a path to giving companies time on 1,000 GPUs (think Pixar levels of compute) rather than shipping small 8-GPU boxes with InfiniBand. Via elasticity and time sharing, the per-analyst-hour pricing is unprecedented.
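To give a flavor of the Barnes-Hut idea mentioned above (this is a hypothetical minimal CPU sketch for illustration, not Graphistry's GPU code): a quadtree cell far enough from a query point is treated as a single point mass, so the O(n^2) all-pairs force sum drops to roughly O(n log n). The `theta` opening criterion is the standard one; the point set, class names, and force law are made up for the example.

```python
import math

class Node:
    """Quadtree node over 2D points; stores mass and center of mass."""
    def __init__(self, x, y, size, points):
        self.size = size
        self.mass = len(points)
        self.cx = sum(p[0] for p in points) / self.mass
        self.cy = sum(p[1] for p in points) / self.mass
        self.children = []
        if self.mass > 1 and size > 1e-6:
            half = size / 2
            for dx in (0, half):
                for dy in (0, half):
                    sub = [p for p in points
                           if x + dx <= p[0] < x + dx + half
                           and y + dy <= p[1] < y + dy + half]
                    if sub:
                        self.children.append(Node(x + dx, y + dy, half, sub))

def force(node, p, theta=0.5):
    """Approximate repulsive force magnitude on p from all points in node.
    If the cell looks small from p (size/distance < theta), treat the
    whole cell as one body at its center of mass instead of recursing."""
    dx, dy = node.cx - p[0], node.cy - p[1]
    d = math.hypot(dx, dy) or 1e-9
    if not node.children or node.size / d < theta:
        return node.mass / (d * d)
    return sum(force(c, p, theta) for c in node.children)

# Usage: theta=0 forces exact all-pairs; theta=0.5 approximates.
pts = [(0.1, 0.1), (0.2, 0.8), (0.8, 0.2), (0.7, 0.7), (0.15, 0.12)]
root = Node(0.0, 0.0, 1.0, pts)
q = (2.0, 2.0)  # query point outside the tree
exact = force(root, q, theta=0.0)
approx = force(root, q, theta=0.5)
```

On a GPU the same traversal is done for all n points in parallel, which is where the single-GPU speedups come from; distributing it across machines is what makes the communication-avoiding part matter.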
The Titan folks run similar astronomy codes on 20,000 GPUs, so it's doable. We're making it more accessible, big-team, and analyst-focused: e.g., load, interactively analyze with smart defaults and streamlined common paths, export/report, and share.
Ok, I see, so your approach makes sense for problems where throwing some GPUs at it gets you a solution in O(10) seconds? Sounds nice if you know you have problems that fit into that category.
I found the Graphistry webpage lacking in answering the question "which specific problem does this solve?" Infoviz is too broad/vague.
Yep, except think orders of magnitude bigger & faster. Right now, we're applying this to visual graph analytics problems in a few key industries. If you do infoviz, email us and I'm happy to share more!