
I'm guessing it's a regular camera lens, surrounded by a wide angle lens which is donut-shaped.

Both images end up superimposed on the sensor, and there is probably a lot of distortion too, but for AI that might not be an issue.
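A minimal numpy sketch of that superposition, assuming the two optical paths simply add at the sensor and the donut element only contributes outside a central disc (the resolution, mask radius, and additive model are all illustrative assumptions, not the actual design):

    import numpy as np

    H, W = 480, 640  # hypothetical sensor resolution

    def donut_mask(h, w, inner_frac):
        """1.0 outside a central disc of radius inner_frac * min(h, w) / 2."""
        ys, xs = np.mgrid[0:h, 0:w]
        r = np.hypot(ys - (h - 1) / 2, xs - (w - 1) / 2)
        return (r > inner_frac * min(h, w) / 2).astype(np.float32)

    narrow = np.random.rand(H, W).astype(np.float32)  # stand-in tele image
    wide = np.random.rand(H, W).astype(np.float32)    # stand-in wide image

    # Irradiances add on the sensor; a network would train on this mix.
    composite = narrow + donut_mask(H, W, inner_frac=0.4) * wide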



Needlessly complex, and machine vision camera users don't like the ambiguity that comes with ML processing on the frontend of their own stuff.


True, but if you're going for frontend ML, which is effectively a black box anyway, you might as well have some non-human-understandable bits in the optics and hardware too.

Various designs for microlens arrays do similar things - thousands of 0.001 megapixel images from slightly different angles are fairly useless for most human uses, but to an AI they could be a very powerful way to get depth info, cut the camera thickness by 10x, and have infinite depth of focus.
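A toy sketch of why those tiny sub-images carry depth: adjacent microlenses see the scene from slightly shifted viewpoints, so per-pixel disparity between neighbouring sub-images encodes distance. The brute-force matcher, shift range, and wrap-around np.roll are illustrative simplifications:

    import numpy as np

    def disparity(left, right, max_shift=4):
        """Integer disparity between two adjacent sub-images via per-pixel
        absolute-difference matching (real pipelines aggregate over windows)."""
        best = np.zeros(left.shape, dtype=np.int32)
        best_cost = np.full(left.shape, np.inf)
        for d in range(max_shift + 1):
            cost = np.abs(left - np.roll(right, d, axis=1))
            better = cost < best_cost
            best[better] = d
            best_cost[better] = cost[better]
        return best  # larger disparity = closer; depth ~ baseline / disparity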


Not sure how you took the idea "we want wide and narrow views of the same perspective" and concluded that building a light field camera might be a practical approach.


While it is possible to build a consumer light field camera (Lytro was one example), they aren't as magical as you might think: to get appreciable zoom range you need much larger lens sizes than people are going to tolerate.

I did a bunch of manual creation of light field photos over the years.[1] To get interesting compositions, you need an effective lens diameter of about 30 cm or more. To get super-resolution good enough for zoom, you're probably going to need something that size too.

[1] https://www.flickr.com/photos/---mike---/albums/721777202979...
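For a sense of the compositing behind photos like these, here is a hedged shift-and-add sketch of synthetic-aperture refocusing: each captured view is shifted in proportion to its baseline from the array centre and the stack is averaged, which brings the depth matching that shift into focus (the offset layout and the alpha focus parameter are assumptions for illustration):

    import numpy as np

    def refocus(views, offsets, alpha):
        """views: list of HxW arrays; offsets: per-view (dx, dy) baselines;
        alpha: slope converting baseline to pixel shift, i.e. the focus depth."""
        acc = np.zeros_like(views[0], dtype=np.float64)
        for img, (dx, dy) in zip(views, offsets):
            acc += np.roll(img, (round(alpha * dy), round(alpha * dx)), axis=(0, 1))
        return acc / len(views)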


Not disputing the feasibility of light field imaging. That approach just doesn't do anything for the use case Nikon/Mitsubishi are showcasing. Light field cameras have low resolution for their sensor size, lower optical efficiency, and are expensive to manufacture; they require processing that makes them a bad fit for the near-realtime ADAS functions automotive machine vision needs, and they have no advantage when it comes to favoring one part of the image in terms of angular resolution.

Like, why even mention them?


And I would have no idea how to calibrate it.

If it produces an EXR with clearly separate images from the different lenses, fine. Like a 3D EXR with left and right.
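A minimal sketch of that layout, assuming the OpenEXR Python bindings: EXR layers are just channel-name prefixes, so both lenses can share one file while staying cleanly separated (the wide/narrow layer names are made up for illustration):

    import numpy as np
    import OpenEXR, Imath

    H, W = 480, 640
    FLOAT = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))

    header = OpenEXR.Header(W, H)
    header['channels'] = {f'{layer}.{c}': FLOAT
                          for layer in ('wide', 'narrow') for c in 'RGB'}

    wide = np.random.rand(H, W, 3).astype(np.float32)    # stand-in images
    narrow = np.random.rand(H, W, 3).astype(np.float32)

    out = OpenEXR.OutputFile('dual_lens.exr', header)
    out.writePixels({f'{layer}.{c}': img[:, :, i].tobytes()
                     for layer, img in (('wide', wide), ('narrow', narrow))
                     for i, c in enumerate('RGB')})
    out.close()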


Depending on the lens production process, the relationship between the wide-angle and regular views might be fully defined (i.e. you don't need to calibrate it; you can just read the transformation matrices off the datasheet and they'll be correct to within 0.1 pixels).
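A sketch of that read-it-off-the-datasheet idea: if the wide-to-narrow mapping is a fixed projective transform, one 3x3 matrix remaps coordinates between the two views with no per-unit calibration (the matrix values here are made up, not from any real lens):

    import numpy as np

    # Hypothetical datasheet homography, wide-angle pixels -> narrow pixels.
    H_wide_to_narrow = np.array([[2.0, 0.0, -320.0],
                                 [0.0, 2.0, -240.0],
                                 [0.0, 0.0,    1.0]])

    def remap(Hmat, x, y):
        """Apply a homography to one pixel coordinate."""
        u, v, s = Hmat @ np.array([x, y, 1.0])
        return u / s, v / s

    print(remap(H_wide_to_narrow, 400.0, 300.0))  # -> (480.0, 360.0)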



