Title: Autonomous Robot Navigation Using Multi-Modal Sensor Fusion
Speaker: Prof. Farshad Khorrami
Host: Associate Professor 魏同权
Time: Monday, July 16, 2018, 11:00
Farshad Khorrami received his Bachelor's degrees in Mathematics and Electrical Engineering in 1982 and 1984, respectively, from The Ohio State University. He also received his Master's degree in Mathematics and his Ph.D. in Electrical Engineering in 1984 and 1988, respectively, from The Ohio State University. Dr. Khorrami is currently a professor in the Electrical & Computer Engineering Department at NYU, which he joined as an assistant professor in September 1988. His research interests include adaptive and nonlinear controls; robotics and automation; unmanned vehicles (fixed-wing and rotary-wing aircraft as well as underwater vehicles and surface ships); smart structures; large-scale systems and decentralized control; cyber security for cyber-physical systems; and smart grids. Prof. Khorrami has published more than 250 refereed journal and conference papers in these areas. His book "Modeling and Adaptive Nonlinear Control of Electric Motors" was published by Springer-Verlag in 2003. He also holds fourteen U.S. patents on novel smart micro-positioners and actuators, control systems, and wireless sensors and actuators. He developed and directs the Control/Robotics Research Laboratory at Polytechnic University (now NYU). His research has been supported by the Army Research Office, the National Science Foundation, the Office of Naval Research, DARPA, Sandia National Laboratories, the Army Research Laboratory, the Air Force Research Laboratory, NASA, and several corporations. Prof. Khorrami has served as general chair and organizing-committee member of several international conferences. He has also commercialized UAVs and developed autopilots for various unmanned vehicles.
Environment perception and autonomous navigation using real-time sensor data in uncertain environments are crucial capabilities for robotic vehicles. To this end, this talk will focus on machine-learning-based approaches for autonomous navigation of ground vehicles in unknown environments. Specifically, an end-to-end learning framework for real-time fusion of raw camera and LIDAR data will be presented. Experimental studies on small unmanned vehicles (ground and aerial platforms) will be presented, including analyses of the robustness of the proposed methodology to various types of sensor noise and non-idealities, sensor failures, occlusions, and environment variations. While the proposed end-to-end learning approach provides these strong robustness properties, it will then be shown that specifically crafted perturbations (adversarial perturbations) in both camera and LIDAR data can still induce undesirable behaviors. Lastly, methods based on generative adversarial learning techniques to alleviate such fragility of learning-based systems to adversarial perturbations will be presented.