
Range and intensity sensor fusion is a special case of sensor fusion in which range sensor data are combined with intensity sensor data in order to yield better results. This case of sensor fusion is mostly applied in robotics. Sensor fusion is an important aspect of any autonomous robotic system because it provides the robot with data of better quality and robustness. Methods of sensor fusion used to tackle various problems in robotics are described below.

Outline


Modern sensor-based systems have many different types of sensors, each with different capabilities. The idea behind sensor fusion is to collect data from each sensor and combine them, so as to gain better knowledge of the environment. The range sensor data can be acquired using various types of sensors (e.g. laser sensors or sonar sensors), while the intensity sensor data are acquired from a camera.

Range and intensity sensor fusion in particular proves useful in robotics, because the better knowledge provided by the fusion process enables robots to make better decisions and accomplish complicated tasks.

Applications


As mentioned above, sensor fusion has many applications in robotic systems; some of them are presented below.

Sensor fusion method for object detection


As proposed in [1], the sensor data fusion is achieved by first using the sonar data, which are much more computationally expensive to acquire, to find areas of interest in the images acquired from the camera; these areas are then fully segmented using the region growing method. The data acquisition can be seen in Fig. 1.

Fig. 1: Data acquired from a sub-aquatic vehicle using sonar (yellow) and image (red) sensors.

The results of region growing and the detected object can be seen in Fig. 2.
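
The region growing step can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration of intensity-based region growing seeded from an area of interest, not the exact procedure of [1]; the seed coordinates and the tolerance tol are assumptions made only for the example.

    # Minimal sketch of sonar-seeded region growing; not the exact procedure of [1].
    # The seed pixel is assumed to lie inside an area of interest found in the sonar
    # data, and the intensity tolerance `tol` is a hypothetical tuning parameter.
    from collections import deque

    import numpy as np


    def region_grow(image, seed, tol=12.0):
        """Grow a region from `seed` over 4-connected pixels whose intensity stays
        within `tol` of the running region mean. Returns a boolean mask."""
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        region_sum, region_count = float(image[seed]), 1

        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                    if abs(float(image[nr, nc]) - region_sum / region_count) <= tol:
                        mask[nr, nc] = True
                        region_sum += float(image[nr, nc])
                        region_count += 1
                        queue.append((nr, nc))
        return mask

In this setting a seed would be any pixel lying inside a sonar-detected area of interest, and the union of the regions grown from all seeds would form the detected object mask.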

Sensor fusion method for robot localization


Another sensor fusion method, used for robot localization, is described in [2]. The robot achieves localization by fusing data from a 2D laser range scanner and a camera.

This method uses a specialized version of the extended Kalman filter in order to fuse the data acquired from the laser range sensor with the camera images. The data acquired through the laser are used to create a 2D representation of what the robot sees. The intensity images are then used together with this 2D representation to extract more information about the environment. This information usually consists of edges corresponding to corners of wall sections or to doors, which help in the localization process. In order to use the data of the different sensors, this method relies on the probabilistic models described in the SPModel [3] for data representation. These data are then compared with a precomputed map stored in the robot and, if they match, they are fused by the SPFilter and used for robot localization.
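
The correction step underlying this kind of fusion can be sketched with a generic extended Kalman filter update. The code below is not the SPModel/SPFilter formulation of [2]; the measurement function h, its Jacobian H and the noise covariance R are assumed to come from a map feature that has been matched to an observed edge or corner.

    # Minimal sketch of an extended Kalman filter correction step; a generic update,
    # not the SPModel/SPFilter formulation of [2].
    import numpy as np


    def ekf_update(x, P, z, h, H, R):
        """Fuse measurement `z` (e.g. an observed edge matched against the stored map)
        into the pose estimate `x` with covariance `P`."""
        y = z - h(x)                           # innovation
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + K @ y                      # corrected pose
        P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
        return x_new, P_new


    # Hypothetical example: a matched map feature constrains the robot's x coordinate.
    x0 = np.array([1.0, 2.0, 0.1])             # pose estimate (x, y, theta)
    P0 = np.diag([0.5, 0.5, 0.2])              # pose covariance
    H0 = np.array([[1.0, 0.0, 0.0]])           # the measurement observes x only
    x1, P1 = ekf_update(x0, P0, np.array([1.2]), lambda s: s[:1], H0, np.array([[0.05]]))

In the localization loop one such update would be performed for every matched edge or corner, progressively tightening the pose covariance.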

Sensor fusion method for robot path planning


A third method, used for robot path planning, is described in [4]. The robot achieves path planning as well as collision avoidance by fusing data from a 2D laser range scanner and two cameras used for stereo vision.

This method uses the laser sensor to create a 2D representation of the robot's world. This 2D representation is built with a line segmentation algorithm, namely the iterative end point fit algorithm described in [5] and sketched below. Using this 2D representation, a 3D model of the world is then created by erecting a plane, perpendicular to the floor, for each line segment.

In the next step the robot acquires a stereo image pair using its cameras and evaluates the 3D model created from the laser sensor data. This is done by connecting each point of the image taken by the first camera to the 3D model and then projecting these points onto the image captured by the second camera, which is straightforward since the relative position of the two cameras is known. If the corresponding points have the same attributes, then the 3D world model created before is correct; here the sensor fusion is used to ensure the correctness of the calculations.

Moreover, the fusion of the two sensor systems ensures that information not captured by one sensor will be captured by the other. For example, since the laser is a 2D sensor it can only scan a single plane of the area, so an obstacle lying above or below the scan plane will not be mapped; because multiple sensors are used, the mapping process yields better results. The stereoscopic images can also be used to extract depth and distance information that were not well captured by the laser sensor.
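
The line segmentation step can be illustrated with a short sketch of the iterative end point fit (split) procedure. The code below is a minimal, generic formulation in the spirit of [5], not the implementation used in [4]; the deviation threshold eps is a hypothetical tuning parameter and the usual merge step for collinear neighbouring segments is omitted for brevity.

    # Minimal sketch of the iterative end point fit (split) step for an ordered 2D
    # laser scan, in the spirit of [5]; `eps` is a hypothetical threshold and the
    # merge step for collinear neighbouring segments is omitted.
    import numpy as np


    def point_line_distance(p, a, b):
        """Perpendicular distance from 2D point p to the line through a and b."""
        d = b - a
        if np.allclose(d, 0.0):
            return float(np.linalg.norm(p - a))
        # |2D cross product| gives the parallelogram area; divide by the chord length.
        return float(abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / np.linalg.norm(d))


    def iterative_end_point_fit(points, eps=0.05):
        """Split an ordered (N, 2) array of scan points into (start, end) index pairs
        such that no point deviates more than `eps` from its segment's chord."""
        start, end = 0, len(points) - 1
        # Find the point farthest from the chord joining the two end points.
        dists = [point_line_distance(points[i], points[start], points[end])
                 for i in range(start + 1, end)]
        if not dists or max(dists) <= eps:
            return [(start, end)]
        split = start + 1 + int(np.argmax(dists))
        # Recurse on both halves; both keep the split point as an end point.
        left = iterative_end_point_fit(points[:split + 1], eps)
        right = iterative_end_point_fit(points[split:], eps)
        return left + [(s + split, e + split) for s, e in right]

Each resulting (start, end) pair describes a chord of the scan; in the method above each such chord would then be extruded into a vertical plane of the 3D world model.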

All the above techniques enable the autonomous robot to successfully avoid collisions with objects in its environment and navigate to its target.



References

  1. Mike Chantler and Ceri Reid, "Sensor Fusion of Range and Intensity Data for Subsea Robotics", Fifth International Conference on Advanced Robotics ('Robots in Unstructured Environments', ICAR '91), 1991.
  2. Jose Neira, Juan D. Tardos, Joachim Horn, and Gunther Schmidt, "Fusing Range and Intensity Images for Mobile Robot Localization", IEEE Transactions on Robotics and Automation, vol. 15, no. 1, February 1999.
  3. J. D. Tardos, "Representing partial and uncertain sensorial information using the theory of symmetries", in Proc. IEEE Int. Conf. on Robotics and Automation, Nice, France, 1992.
  4. Haris Baltzakis, Antonis Argyros, and Panos Trahanias, "Fusion of laser and visual data for robot motion planning and collision avoidance", Machine Vision and Applications, 2003.
  5. Duda, R. and Hart, P. (1973), Pattern Classification and Scene Analysis, Wiley-Interscience, New York.

Category:Image processing Category:Artificial intelligence