I have a calibrated camera and know its intrinsic parameters. I also have the extrinsic parameters relative to a point (the world origin) on a planar surface in the real world. I have set this point as the world origin, [0, 0, 0], and the plane's normal is [0, 0, 1].
From these extrinsic parameters I can work out the camera's position and rotation in world coordinates, following http://en.wikipedia.org/wiki/Camera_resectioning
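For concreteness, here is a minimal numpy sketch of that step, assuming the extrinsics are given as a rotation R and translation t that map world coordinates into camera coordinates (X_cam = R·X_world + t); the values of R and t below are just placeholders, not my real calibration:

```python
import numpy as np

# Placeholder extrinsics (assumption: they map world -> camera coordinates,
# i.e. X_cam = R @ X_world + t). Substitute the real calibration values here.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

# Camera centre in world coordinates: C = -R^T t
C = -R.T @ t

# The camera's orientation expressed in world coordinates is R^T
# (its columns are the camera axes written in world coordinates).
R_cam_to_world = R.T
```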
Now I have a second point for which I have extracted the image coordinates [x, y]. How do I get the 3D position of this point in the world coordinate system?
I think the intuition here is that I have to trace a ray from the optical center of the camera (whose 3D position I now have, as described above), through the image plane at [x, y], and then through the real-world plane I defined at the top.
Now, I can intersect a world-coordinate 3D ray with that plane, since I know a point on the plane and its normal. What I don't get is how to find the ray's 3D origin and direction as it leaves the image plane through a pixel. It's the transformation between the different coordinate systems that is confusing me.
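To make the question concrete, here is a sketch of the pipeline I have in mind, assuming a pinhole model with intrinsic matrix K and the same world-to-camera extrinsics R, t as above: back-project the pixel with K⁻¹ to get a direction in camera coordinates, rotate it into world coordinates with R^T, and intersect the ray (origin C, direction d) with my plane z = 0. The values of K, R, t and the pixel are placeholders; is this the right transformation?

```python
import numpy as np

# Placeholder intrinsics and extrinsics (assumptions, not my real calibration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # world -> camera rotation
t = np.array([0.0, 0.0, 5.0])      # world -> camera translation

C = -R.T @ t                       # camera centre in world coordinates

def pixel_to_world_on_plane(u, v, plane_point, plane_normal):
    """Back-project pixel (u, v) and intersect the viewing ray with the plane."""
    # Direction of the viewing ray in camera coordinates.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the direction into world coordinates.
    d_world = R.T @ d_cam
    # Ray: X(s) = C + s * d_world.  Solve n . (X(s) - p) = 0 for s.
    n = np.asarray(plane_normal, dtype=float)
    p = np.asarray(plane_point, dtype=float)
    s = n.dot(p - C) / n.dot(d_world)
    return C + s * d_world

# The plane I defined at the top: point [0, 0, 0], normal [0, 0, 1].
X = pixel_to_world_on_plane(400.0, 300.0, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
print(X)   # should land on the z = 0 plane
```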