Local visual feature based localisation and mapping by mobile robots

This is a doctoral thesis from Örebro University (Örebro universitet).

Abstract: This thesis addresses the problems of registration, localisation and simultaneous localisation and mapping (SLAM), relying particularly on local visual features extracted from camera images. These fundamental problems in mobile robot navigation are tightly coupled. Localisation requires a representation of the environment (a map) and registration methods to estimate the pose of the robot relative to the map given the robot's sensory readings. To create a map, sensor data must be accumulated into a consistent representation, and therefore the pose of the robot needs to be estimated, which is again the problem of localisation.

The major contributions of this thesis are new methods proposed to address the registration, localisation and SLAM problems, considering two different sensor configurations. The first part of the thesis concerns a sensor configuration consisting of an omni-directional camera and odometry, while the second part assumes a standard camera together with a 3D laser range scanner. The main difference is that the former configuration allows for a very inexpensive set-up and (considering the possibility to include visual odometry) the realisation of purely visual navigation approaches. By contrast, the second configuration was chosen to study the usefulness of colour or intensity information in connection with 3D point clouds ("coloured point clouds"), both for improved 3D resolution ("super resolution") and for approaches to the fundamental problems of navigation that exploit the complementary strengths of visual and range information.

Considering the omni-directional camera/odometry set-up, the first part introduces a new registration method based on a measure of image similarity. This registration method is then used to develop a localisation method, which is robust to changes in dynamic environments, and a visual approach to metric SLAM, which does not require position estimation of local image features and thus provides a very efficient approach.

The second part, which considers a standard camera together with a 3D laser range scanner, starts with the proposal and evaluation of non-iterative interpolation methods. These methods use colour information from the camera to obtain range information at the resolution of the camera image, or even with sub-pixel accuracy, from the low-resolution range information provided by the range scanner. Based on the ability to determine depth values for local visual features, a new registration method is then introduced, which combines the depth of local image features with variance estimates obtained from the 3D laser range scanner to realise a vision-aided 6D registration method that does not require an initial pose estimate. This is possible because of the discriminative power of the local image features used to determine point correspondences (data association). The vision-aided registration method is further developed into a 6D SLAM approach where the optimisation constraints are based on the distances between paired local visual features. Finally, the methods introduced in the second part are combined with a novel adaptive normal distributions transform (NDT) representation of coloured 3D point clouds into a robotic difference detection system.
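To illustrate the appearance-based idea behind the first part, the sketch below scores the similarity between the current camera image and each stored map image by the fraction of matched local features and localises at the best-scoring map image. The feature detector (ORB), the matching scheme and the score itself are illustrative assumptions; the thesis defines its own similarity measure for omni-directional images.

```python
import cv2
import numpy as np

def feature_similarity(img_a, img_b, detector=None, matcher=None):
    """Similarity score: fraction of local features matched between two images.

    Generic stand-in for an image-similarity measure used in appearance-based
    registration/localisation; the measure actually used in the thesis differs.
    """
    detector = detector or cv2.ORB_create(nfeatures=1000)
    matcher = matcher or cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_a, des_a = detector.detectAndCompute(img_a, None)
    kp_b, des_b = detector.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matches = matcher.match(des_a, des_b)
    return len(matches) / min(len(kp_a), len(kp_b))

def localise(query_img, map_images):
    """Return the index and score of the most similar map image."""
    scores = [feature_similarity(query_img, m) for m in map_images]
    best = int(np.argmax(scores))
    return best, scores[best]
```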
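The colour-guided interpolation of low-resolution range data can be illustrated by a non-iterative, joint-bilateral-style weighting: each projected laser return spreads its depth to nearby camera pixels, with weights combining spatial proximity and colour similarity. The specific weighting and parameters below are assumptions for illustration only; the thesis proposes and evaluates several such interpolation methods.

```python
import numpy as np

def colour_guided_depth(image, sparse_uv, sparse_depth,
                        radius=8, sigma_s=4.0, sigma_c=10.0):
    """Non-iterative colour-guided interpolation of sparse range data (sketch).

    image        : (H, W, 3) colour image
    sparse_uv    : (N, 2) integer pixel coordinates of projected laser points
    sparse_depth : (N,) range values measured at those pixels
    """
    h, w, _ = image.shape
    img = image.astype(np.float64)
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for (u0, v0), d in zip(sparse_uv, sparse_depth):
        u_lo, u_hi = max(0, u0 - radius), min(w, u0 + radius + 1)
        v_lo, v_hi = max(0, v0 - radius), min(h, v0 + radius + 1)
        uu, vv = np.meshgrid(np.arange(u_lo, u_hi), np.arange(v_lo, v_hi))
        # Spatial weight: Gaussian in pixel distance from the laser return.
        w_s = np.exp(-((uu - u0) ** 2 + (vv - v0) ** 2) / (2 * sigma_s ** 2))
        # Colour weight: Gaussian in colour difference to the laser pixel.
        dc = img[v_lo:v_hi, u_lo:u_hi] - img[v0, u0]
        w_c = np.exp(-np.sum(dc ** 2, axis=2) / (2 * sigma_c ** 2))
        num[v_lo:v_hi, u_lo:u_hi] += w_s * w_c * d
        den[v_lo:v_hi, u_lo:u_hi] += w_s * w_c
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```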
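The vision-aided 6D registration relies on depth-augmented local visual features: once descriptor matching has paired features across two scans, a rigid transform can be computed in closed form, which is why no initial pose estimate is needed. The sketch below shows a standard weighted Kabsch/Umeyama solution; the optional weights stand in for the variance estimates mentioned above, and the exact weighting used in the thesis is an assumption here.

```python
import numpy as np

def rigid_transform_from_correspondences(p, q, weights=None):
    """Closed-form rigid 6D alignment of paired 3D points (Kabsch/Umeyama style).

    p, q    : (N, 3) corresponding 3D feature positions in two scans, paired
              by matching their visual descriptors (data association).
    weights : optional (N,) per-pair weights, e.g. derived from range-variance
              estimates (the actual weighting in the thesis may differ).

    Returns (R, t) such that q ~ R @ p + t.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    w = np.ones(len(p)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mu_p = (w[:, None] * p).sum(axis=0)
    mu_q = (w[:, None] * q).sum(axis=0)
    P, Q = p - mu_p, q - mu_q
    H = (w[:, None] * P).T @ Q                  # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # enforce a proper rotation
    t = mu_q - R @ mu_p
    return R, t
```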
