Vision-Based Localization and Data Fusion in a System of Cooperating Mobile Robots
by R. Hanek and T. Schmitt
Abstract:
The approach presented in this paper allows a team of mobile robots to cooperatively estimate their poses, i.e., positions and orientations, and the poses of other observed objects from images. The images are obtained by calibrated color cameras mounted on the robots. Model knowledge of the robots' environment, the geometry of observed objects, and the characteristics of the cameras are represented in curve functions which describe the relation between model curves in the image and the sought pose parameters. The pose parameters are estimated by minimizing the distance between model curves and actual image curves. Observations from possibly different viewpoints, obtained at different times, are fused by a method similar to the extended Kalman filter. In contrast to the extended Kalman filter, which is based on a linear approximation of the measurement equations, we use an iterative optimization technique which takes non-linearities into account. The approach has been successfully used in robot soccer, where it reliably maintained a joint pose estimate for the players and the ball.
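The core idea sketched in the abstract is an iterative, Gauss-Newton-style minimization of the distance between predicted model curves and observed image curves, combined with a Gaussian prior that plays the role of the Kalman-filter state. A minimal sketch of such a fusion step is given below; the 3-DOF pose parameterization (x, y, theta), the numerical Jacobian, and the names `fuse_pose_with_curve_points` and `curve_point_fn` are illustrative assumptions and do not reproduce the paper's actual curve functions or implementation.

```python
import numpy as np

def fuse_pose_with_curve_points(prior_mean, prior_cov, image_points,
                                curve_point_fn, n_iters=10, eps=1e-5):
    """Illustrative Gauss-Newton fusion of a Gaussian pose prior with curve
    observations (a sketch of an iterated-EKF-style update, not the paper's code).

    prior_mean     : (3,) prior pose estimate (x, y, theta) -- assumed parameterization
    prior_cov      : (3, 3) covariance of the prior
    image_points   : (N, 2) observed curve points in the image
    curve_point_fn : pose -> (N, 2) predicted model-curve points (stands in for
                     the paper's curve functions; assumed interface)
    """
    x = np.asarray(prior_mean, dtype=float).copy()
    prior_info = np.linalg.inv(prior_cov)        # information matrix of the prior
    H = prior_info
    for _ in range(n_iters):
        pred = curve_point_fn(x)                 # model curve projected for pose x
        r = (np.asarray(image_points) - pred).reshape(-1)   # image-to-model residuals
        # Numerical Jacobian of the residual vector w.r.t. the pose parameters
        J = np.zeros((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = -(curve_point_fn(x + dx) - pred).reshape(-1) / eps
        # Gauss-Newton step on 0.5*||r||^2 + 0.5*(x - prior)^T * prior_info * (x - prior):
        # the measurement model is re-linearized every iteration instead of once,
        # which is how the iterative scheme accounts for non-linearities.
        grad = J.T @ r + prior_info @ (x - np.asarray(prior_mean, dtype=float))
        H = J.T @ J + prior_info
        x = x - np.linalg.solve(H, grad)
    return x, np.linalg.inv(H)                   # fused pose and covariance
```

In the paper's setting, `curve_point_fn` would encode the calibrated camera model and the known environment or object geometry; observations from several robots, taken from different viewpoints and at different times, could then be fused sequentially by feeding each returned estimate back in as the prior for the next update.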
Reference:
Vision-Based Localization and Data Fusion in a System of Cooperating Mobile Robots (R. Hanek and T. Schmitt), in Proc. of the IEEE Intl. Conf. on Intelligent Robots and Systems, IEEE/RSJ, 2000, pp. 1199–1204.
Bibtex Entry:
@inproceedings{hanek_vision-based_2000,
  author    = {R. Hanek and T. Schmitt},
  title     = {Vision-Based Localization and Data Fusion in a System of Cooperating Mobile Robots},
  booktitle = {Proc. of the {IEEE} Intl. Conf. on Intelligent Robots and Systems},
  year      = {2000},
  pages     = {1199--1204},
  publisher = {{IEEE/RSJ}},
  abstract  = {The approach presented in this paper allows a team of mobile robots to cooperatively estimate their poses, i.e., positions and orientations, and the poses of other observed objects from images. The images are obtained by calibrated color cameras mounted on the robots. Model knowledge of the robots' environment, the geometry of observed objects, and the characteristics of the cameras are represented in curve functions which describe the relation between model curves in the image and the sought pose parameters. The pose parameters are estimated by minimizing the distance between model curves and actual image curves. Observations from possibly different viewpoints, obtained at different times, are fused by a method similar to the extended Kalman filter. In contrast to the extended Kalman filter, which is based on a linear approximation of the measurement equations, we use an iterative optimization technique which takes non-linearities into account. The approach has been successfully used in robot soccer, where it reliably maintained a joint pose estimate for the players and the ball.},
}