Publication Type : Conference Paper
Source : IEEE MASCON, 2021
Url : https://ieeexplore.ieee.org/document/9563527
Campus : Bengaluru
School : School of Computing
Department : Computer Science
Year : 2021
Abstract : An autonomous vehicle requires a reliable system for precise perception and relative estimation of its state to ensure the safety of humans while the vehicle moves autonomously in an environment dominated by human drivers. Such systems operate in a complex environment and rely on multiple sensors (e.g., vision modules, Global Navigation Satellite System (GNSS), LiDAR, RADAR). This paper proposes an environment perception stack for self-driving cars to improve the intelligence of decision making and strengthen safety measures. Semantic image segmentation based on a Fully Convolutional Network (FCN) architecture is implemented, and the model's output is then used for 3D space estimation and lane estimation. Considering the real-time cooperation required between the autonomous vehicle and the other vehicles in the frame, a 2D object detector is added to the stack to detect different classes of objects, and their relative distances are calculated. The proposed system is implemented in the CARLA simulator, and the generated outcomes are discussed further in the paper.
Cite this Research Publication : Manju Khanna, Tarun Tiwari, Satyam Agrawal, Aakarsh Etar, "Visual Perception Stack for Autonomous Vehicle using Semantic Segmentation and Object Detection", IEEE MASCON, 2021
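Illustrative sketch (not the authors' code): a minimal Python example of the kind of per-frame perception pipeline the abstract describes, using pretrained torchvision models as stand-ins for the paper's FCN segmentation network and 2D object detector, and a pinhole-camera approximation for relative distance. The focal length, assumed object height, score threshold, and the file name frame.png are assumptions for illustration, not values from the paper.

# Minimal sketch: semantic segmentation + 2D detection + approximate relative
# distance on a single RGB frame. Model choices and camera parameters below
# are illustrative assumptions, not the paper's configuration.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Assumed camera parameters: focal length in pixels, typical vehicle height in metres.
FOCAL_LENGTH_PX = 800.0
ASSUMED_OBJECT_HEIGHT_M = 1.5

def load_models():
    # Pretrained models stand in for the paper's FCN segmentation network
    # and its 2D object detector (requires torchvision >= 0.13 for `weights=`).
    seg_model = torchvision.models.segmentation.fcn_resnet50(weights="DEFAULT").eval()
    det_model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    return seg_model, det_model

def perceive(frame_path: str):
    image = Image.open(frame_path).convert("RGB")
    x = to_tensor(image)

    seg_model, det_model = load_models()
    with torch.no_grad():
        # Per-pixel class map from the segmentation head.
        seg_logits = seg_model(x.unsqueeze(0))["out"]
        class_map = seg_logits.argmax(dim=1).squeeze(0)

        # 2D bounding boxes with class labels and confidence scores.
        detections = det_model([x])[0]

    results = []
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score < 0.5:  # illustrative confidence threshold
            continue
        # Pinhole-camera approximation: distance ~ f * H_real / h_pixels.
        pixel_height = float(box[3] - box[1])
        distance_m = FOCAL_LENGTH_PX * ASSUMED_OBJECT_HEIGHT_M / max(pixel_height, 1.0)
        results.append({"label": int(label),
                        "score": float(score),
                        "approx_distance_m": distance_m})
    return class_map, results

if __name__ == "__main__":
    segmentation, objects = perceive("frame.png")
    print(objects)

In the paper's setting, the frame would come from a camera sensor attached to the ego vehicle in the CARLA simulator rather than from a file on disk, and the segmentation output would additionally feed the 3D space and lane estimation stages.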