RGB-D terrain perception and dense mapping for legged robots
Dominik Belter ; Przemysław Łabecki ; Péter Fankhauser ; Roland Siegwart
International Journal of Applied Mathematics and Computer Science, Volume 26 (2016), pp. 81-97 / Harvested from The Polish Digital Mathematics Library

This paper addresses the problem of unstructured terrain modeling for navigation with legged robots. We present an improved elevation grid concept adapted to the specific requirements of a small legged robot with limited perceptual capabilities. We propose an extension of the elevation grid update mechanism that incorporates a formal treatment of spatial uncertainty. Moreover, the paper presents uncertainty models for a structured-light RGB-D sensor and a stereo vision camera, both used to produce dense depth maps. The uncertainty model of the stereo vision camera is based on uncertainty propagation from calibration, through the undistortion and rectification algorithms, which allows the uncertainty of the measured 3D point coordinates to be computed. The proposed uncertainty models were used to construct terrain elevation maps with the Videre Design STOC stereo vision camera and Kinect-like range sensors. We provide an experimental verification of the proposed mapping method and a comparison with another recently published terrain mapping method for walking robots.
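
The abstract combines two standard ingredients: first-order propagation of disparity noise to depth for a stereo camera (cf. Matthies and Shafer, 1987) and a variance-weighted update of individual elevation grid cells. The Python sketch below illustrates only that general idea; the function names, sensor parameters, and noise values are assumptions chosen for the example, not the implementation described in the paper.

# Minimal sketch (not the authors' code): stereo depth uncertainty
# feeding a variance-weighted update of one elevation grid cell.

def stereo_depth_variance(depth, focal_px, baseline_m, sigma_disp_px=0.5):
    """First-order propagation of disparity noise to depth for Z = f*b/d:
    sigma_Z ~= (Z^2 / (f*b)) * sigma_d; returns the variance sigma_Z^2."""
    sigma_z = (depth ** 2) / (focal_px * baseline_m) * sigma_disp_px
    return sigma_z ** 2

def update_cell(cell_height, cell_var, meas_height, meas_var):
    """Fuse one height measurement into one elevation cell (1D Kalman update)."""
    gain = cell_var / (cell_var + meas_var)
    fused_height = cell_height + gain * (meas_height - cell_height)
    fused_var = (1.0 - gain) * cell_var
    return fused_height, fused_var

# Example: a point observed 1.2 m away by a stereo camera with an assumed
# 590 px focal length and a 9 cm baseline (illustrative values only).
meas_var = stereo_depth_variance(depth=1.2, focal_px=590.0, baseline_m=0.09)
height, var = update_cell(cell_height=0.10, cell_var=0.01,
                          meas_height=0.12, meas_var=meas_var)
print(height, var)

With such an update rule, measurements taken from farther away (and hence with larger depth variance) contribute less to the cell estimate, which is the behavior the abstract refers to as a formal treatment of spatial uncertainty.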

Published: 2016-01-01
EUDML-ID : urn:eudml:doc:276699
@article{bwmeta1.element.bwnjournal-article-amcv26i1p81bwm,
     author = {Dominik Belter and Przemys\l aw \L abecki and P\'eter Fankhauser and Roland Siegwart},
     title = {RGB-D terrain perception and dense mapping for legged robots},
     journal = {International Journal of Applied Mathematics and Computer Science},
     volume = {26},
     year = {2016},
     pages = {81-97},
     zbl = {1336.93111},
     language = {en},
     url = {http://dml.mathdoc.fr/item/bwmeta1.element.bwnjournal-article-amcv26i1p81bwm}
}
Dominik Belter; Przemysław Łabecki; Péter Fankhauser; Roland Siegwart. RGB-D terrain perception and dense mapping for legged robots. International Journal of Applied Mathematics and Computer Science, Volume 26 (2016), pp. 81-97. http://gdmltest.u-ga.fr/item/bwmeta1.element.bwnjournal-article-amcv26i1p81bwm/

[000] Belter, D., Łabecki, P. and Skrzypczyński, P. (2012). Estimating terrain elevation maps from sparse and uncertain multi-sensor data, IEEE 2012 International Conference on Robotics and Biomimetics, Guangzhou, China, pp. 715-722.

[001] Belter, D., Łabecki, P. and Skrzypczyński, P. (n.d.). Adaptive motion planning for autonomous rough terrain traversal with a walking robot, Journal of Field Robotics, (in press). | Zbl 1243.68284

[002] Belter, D., Nowicki, M., Skrzypczyński, P., Walas, K. and Wietrzykowski, J. (2015). Lightweight RGB-D SLAM system for search and rescue robots, in R. Szewczyk, C. Zieliński and M. Kaliczyńska (Eds.), Recent Advances in Automation, Robotics and Measuring Techniques, Advances in Intelligent Systems and Computing, Vol. 351, Springer, Cham, pp. 11-21.

[003] Belter, D. and Skrzypczyński, P. (2011a). Integrated motion planning for a hexapod robot walking on rough terrain, 18th IFAC World Congress, Milan, Italy, pp. 6918-6923. | Zbl 1243.68284

[004] Belter, D. and Skrzypczyński, P. (2011b). Rough terrain mapping and classification for foothold selection in a walking robot, Journal of Field Robotics 28(4): 497-528. | Zbl 1243.68284

[005] Belter, D. and Skrzypczyński, P. (2013). Precise self-localization of a walking robot on rough terrain using parallel tracking and mapping, Industrial Robot: An International Journal 40(3): 229-237.

[006] Belter, D. and Walas, K. (2014). A compact walking robot - flexible research and development platform, in R. Szewczyk, C. Zieliński and M. Kaliczyńska (Eds.), Recent Advances in Automation, Robotics and Measuring Techniques, Advances in Intelligent Systems and Computing, Vol. 267, Springer, Cham, pp. 343-352.

[007] Berger, M., Tagliasacchi, A., Seversky, L., Alliez, P., Levine, J., Sharf, A. and Silva, C. (2014). State of the art in surface reconstruction from point clouds, in S. Lefebvre and M. Spagnuolo (Eds.), Eurographics 2014 - State of the Art Reports, The Eurographics Association, Geneva.

[008] Bloesch, M., Gehring, C., Fankhauser, P., Hutter, M., Hoepflinger, M.A. and Siegwart, R. (2013). State estimation for legged robots on unstable and slippery terrain, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, pp. 6058-6064.

[009] Dey, T.K., Ge, X., Que, Q., Safa, I., Wang, L. and Wang, Y. (2012). Feature-preserving reconstruction of singular surfaces, Computer Graphics Forum 31(5): 1787-1796.

[010] Dryanovski, I., Morris, W. and Xiao, J. (2010). Multi-volume occupancy grids: An efficient probabilistic 3D mapping model for micro aerial vehicles, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, pp. 1553-1559.

[011] Fankhauser, P., Bloesch, M., Gehring, C., Hutter, M. and Siegwart, R. (2014). Robot-centric elevation mapping with uncertainty estimates, International Conference on Climbing and Walking Robots (CLAWAR), Poznań, Poland, pp. 433-440.

[012] Handa, A., Whelan, T., McDonald, J. and Davison, A. (2014). A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM, IEEE International Conference on Robotics and Automation, ICRA, Hong Kong, China, pp. 1524-1531.

[013] Hebert, M., Caillas, C., Krotkov, E. and Kweon, I. (1989). Terrain mapping for a roving planetary explorer, Proceedings of the IEEE International Conference on Robotics and Automation, Scottsdale, AZ, USA, pp. 997-1002.

[014] Hornung, A., Wurm, K., Bennewitz, M., Stachniss, C. and Burgard, W. (2013). OctoMap: An efficient probabilistic 3D mapping framework based on octrees, Autonomous Robots 34(3): 189-206.

[015] Hutter, M., Gehring, C., Bloesch, M., Hoepflinger, M.A., Remy, C.D. and Siegwart, R. (2012). StarlETH: A compliant quadrupedal robot for fast, efficient, and versatile locomotion, International Conference on Climbing and Walking Robots (CLAWAR), Baltimore, MD, USA, pp. 483-490.

[016] Kleiner, A. and Dornhege, C. (2007). Real-time localization and elevation mapping within urban search and rescue scenarios, Journal of Field Robotics 24(8-9): 723-745.

[017] Khoshelham, K. and Elberink, S. (2012). Accuracy and resolution of Kinect depth data for indoor mapping applications, Sensors 12(2): 1437-1454.

[018] Kolter, J., Kim, Y. and Ng, A. (2009). Stereo vision and terrain modeling for quadruped robots, Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, pp. 1557-1564.

[019] Konolige, K. (1997). Small vision systems: Hardware and implementation, 8th International Symposium on Robotics Research, Monterey, CA, USA, pp. 111-116.

[020] Kweon, I. and Kanade, T. (1992). High-resolution terrain map from multiple sensor data, IEEE Transactions on Pattern Analysis and Machine Intelligence 14(2): 278-292.

[021] Łabecki, P. and Belter, D. (2014). RGB-D based mapping method for a legged robot, in K. Tchoń and C. Zieliński (Eds.), Zeszyty Naukowe Politechniki Warszawskiej, Warsaw University of Technology Press, Warsaw, pp. 297-306, (in Polish).

[022] Łabecki, P. and Skrzypczyński, P. (2013). Spatial uncertainty assessment in visual terrain perception for a mobile robot, in J. Korbicz and M. Kowal (Eds.), Intelligent Systems in Technical and Medical Diagnostics, Advances in Intelligent Systems and Computing, Vol. 230, Springer-Verlag, Berlin, pp. 357-368.

[023] Matthies, L. and Shafer, S. (1987). Error modeling in stereo navigation, IEEE Journal of Robotics and Automation 3(3): 239-248.

[024] Nowicki, M. and Skrzypczyński, P. (2013). Combining photometric and depth data for lightweight and robust visual odometry, European Conference on Mobile Robots, Barcelona, Spain, pp. 125-130.

[025] Park, J.-H., Shin, Y.-D., Bae, J.-H. and Baeg, M.-H. (2012). Spatial uncertainty model for visual features using a Kinect sensor, Sensors 12(7): 8640-8662.

[026] Pfaff, P., Triebel, R. and Burgard, W. (2007). An efficient extension to elevation maps for outdoor terrain mapping and loop closing, International Journal of Robotics Research 26(2): 217-230.

[027] Plagemann, C., Mischke, S., Prentice, S., Kersting, K., Roy, N. and Burgard, W. (2008). Learning predictive terrain models for legged robot locomotion, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, pp. 3545-3552. | Zbl 1243.68295

[028] Poppinga, J., Birk, A. and Pathak, K. (2010). A characterization of 3D sensors for response robots, in J. Baltes et al. (Eds.), RoboCup 2009, Lecture Notes in Artificial Intelligence, Vol. 5949, Springer, Berlin, pp. 264-275.

[029] Rusu, R., Sundaresan, A., Morisset, B., Hauser, K., Agrawal, M., Latombe, J.-C. and Beetz, M. (2009). Leaving flatland: Efficient real-time three-dimensional perception and motion planning, Journal of Field Robotics 26(10): 841-862.

[030] Saarinen, J., Andreasson, H., Stoyanov, T. and Lilienthal, A.J. (2013). 3D normal distributions transform occupancy maps: An efficient representation for mapping in dynamic environments, International Journal of Robotics Research 32(14): 1627-1644.

[031] Sahabi, H. and Basu, A. (1996). Analysis of error in depth perception with vergence and spatially varying sensing, Computer Vision and Image Understanding 63(3): 447-461.

[032] Sharf, A., Lewiner, T., Shklarski, G., Toledo, S. and Cohen-Or, D. (2007). Interactive topology-aware surface reconstruction, ACM Transactions on Graphics 26(3), Article No. 43.

[033] Skrzypczyński, P. (2007). Spatial uncertainty management for simultaneous localization and mapping, Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, pp. 4050-4055.

[034] Skrzypczyński, P. (2009). Simultaneous localization and mapping: A feature-based probabilistic approach, International Journal of Applied Mathematics and Computer Science 19(4): 575-588, DOI: 10.2478/v10006-009-0045-z. | Zbl 1300.93157

[035] Stelzer, A., Hirschmüller, H. and Görner, M. (2012). Stereo-vision-based navigation of a six-legged walking robot in unknown rough terrain, International Journal of Robotics Research 31(4): 381-402.

[036] Szeliski, R. (2011). Computer Vision: Algorithms and Applications, Springer, London. | Zbl 1219.68009

[037] Thrun, S., Burgard, W. and Fox, D. (2005). Probabilistic Robotics (Intelligent Robotics and Autonomous Agents), The MIT Press, Cambridge, MA. | Zbl 1081.68703

[038] Walas, K. and Belter, D. (2011). Supporting locomotive functions of a six-legged walking robot, International Journal of Applied Mathematics and Computer Science 21(2): 363-377, DOI: 10.2478/v10006-011-0027-9. | Zbl 1282.93191

[039] Walas, K. and Nowicki, M. (2014). Terrain classification using Laser Range Finder, 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, pp. 5003-5009.

[040] Ye, C. and Borenstein, J. (2004). A novel filter for terrain mapping with laser rangefinders, IEEE Transactions on Robotics and Automation 20(5): 913-921.

[041] Yoon, S., Hyung, S., Lee, M., Roh, K., Ahn, S., Gee, A., Bunnun, P., Calway, A. and Mayol-Cuevas, W. (2013). Real-time 3D simultaneous localization and map-building for a dynamic walking humanoid robot, Advanced Robotics 27(10): 759-772.

[042] Zucker, M., Bagnell, J., Atkeson, C. and Kuffner, J. (2010). An optimization approach to rough terrain locomotion, IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, pp. 3589-3595.