In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pages 99-106, June 2001 (inproceedings)
We introduce a novel outlook on the self-calibration task by considering images taken by a camera in motion, allowing for zooming and focusing. Despite the complex relationship between the lens control settings and the intrinsic camera parameters, a prior offline calibration allows us to neglect the focus setting and to fix the principal point and aspect ratio across distinct views. The calibration matrix thus depends only on the zoom position. Given a fully calibrated reference view, only one parameter has to be estimated for any other view of the same scene in order to calibrate it and to perform metric reconstructions. We provide a closed-form solution and validate the reliability of the algorithm with experiments on real images. An important advantage of our method is that the number of critical camera configurations associated with it is reduced to one. Moreover, we propose a method for computing the epipolar geometry of two views taken from different positions and with different (spatial) resolutions; the idea is to take an appropriate third view that is "easy" to match with the other two.
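The following is a minimal sketch, not the paper's closed-form solution, of the parameterization the abstract describes: once the offline calibration fixes the principal point and aspect ratio, the intrinsic matrix has a single free parameter, the zoom-dependent focal length. All numeric values and names below are illustrative assumptions.

```python
import numpy as np

def intrinsic_matrix(f, a=1.0, u0=320.0, v0=240.0):
    """Pinhole intrinsics with one free parameter f (the zoom-dependent
    focal length); aspect ratio a and principal point (u0, v0) are fixed
    by a prior offline calibration. Values here are hypothetical."""
    return np.array([
        [f,   0.0,   u0],
        [0.0, a * f, v0],
        [0.0, 0.0,   1.0],
    ])

# Fully calibrated reference view.
K_ref = intrinsic_matrix(f=800.0)
# Any other view of the same scene differs only in f, so calibrating it
# amounts to estimating this single scalar.
K_new = intrinsic_matrix(f=1150.0)
print(K_ref, K_new, sep="\n")
```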
Biologische Kybernetik, INP Grenoble, Warsaw University of Technology, September 2000 (diplomathesis)
For a planar scene, we propose an algorithm to estimate its 3D structure. Homographies between corresponding planes are employed to recover the camera motion parameters between the positions from which the images of the scene were taken. The cases of one and of multiple corresponding planes present in the scene are distinguished, and solutions are proposed for both.
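As a hedged illustration of the general idea, and not the thesis's own algorithm, the sketch below recovers candidate camera motions from a plane-induced homography using OpenCV's standard decomposition. The intrinsics K and the point correspondences are assumed, hypothetical inputs.

```python
import numpy as np
import cv2

# Assumed camera intrinsics (illustrative values).
K = np.array([[800.0,   0.0, 320.0],
              [0.0,   800.0, 240.0],
              [0.0,     0.0,   1.0]])

# Hypothetical correspondences between two images of the planar scene.
pts1 = np.array([[100, 100], [400, 120], [380, 360], [90, 340]], dtype=np.float64)
pts2 = np.array([[120, 110], [420, 140], [390, 380], [110, 350]], dtype=np.float64)

# Homography induced by the scene plane between the two views.
H, _ = cv2.findHomography(pts1, pts2)

# The decomposition yields up to four (R, t, n) candidates; in practice
# the physically valid one is selected by checking that reconstructed
# points lie in front of both cameras.
num_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
for R, t, n in zip(rotations, translations, normals):
    print("R =\n", R, "\nt =", t.ravel(), "\nn =", n.ravel())
```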
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.