I think 2 things need to happen for depth sensing to go mainstream (and I do believe they will happen sooner or later):

- Games. Look at the tennis ball example in the original article. Remember Google's IRL photo-based game (Ingress), now just imagine that with fine-grained 3D (Google will use it to crowd-source a centimeter-scale model of the earth). Games like this remain a bit of a niche, but just imagine if someone makes a massive social game out of it on FB (a cross between Farmville, Sims, and Minecraft, projected onto your real world). Of course, someone could also create a shocking IRL FPS game (imagine your kids pointing this out the window in traffic and "shooting" at people and cars to watch them blow up). Finally something to use the processing power in these little phones and tablets.

- 3D photography. I think this is the future of photography. Take a picture of something, extract the spatial data from the image, modify it/change the p.o.v. Recall the recent image/object manipulation video (the SIGGRAPH one that used the PatchMatch algorithm to fill in the background). Each photo becomes a mini-scene that you can navigate around (kinda like the "frozen" 360-degree pans in the Matrix). Next step is the time dimension, in other words 3D immersive movies where the viewer can move around almost anywhere while the movie unfolds. You can guess the first industry to adopt this...

Both of these can currently be done with flat images, processing power, and some human guidance. With depth sensing they can be faster, automated, and more accurate.
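To make that concrete: once you have a per-pixel depth map, turning a photo into a navigable mini-scene is basically just unprojecting pixels into 3D with the pinhole camera model. A minimal sketch (the intrinsics fx, fy, cx, cy here are made-up values, not from any real sensor):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a depth map (meters) into camera-space 3D points
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# Toy example: a flat wall 2 m from the camera, hypothetical intrinsics.
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

With the points in hand, re-rendering from a shifted point of view is just a rigid transform plus reprojection, which is what makes the "move around inside the photo" idea cheap once the depth data is free.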
