TuC3 Deep Learning-Based Perception for Autonomous Vehicles
Time : 16:40-18:10
Room : Room 3 (Burano 2)
Chair : Prof. Youngbae Hwang (Chungbuk National University, Korea)
16:40-16:55        TuC3-1
Comparison of Neural Network Architectures for the Detection of Clutter in Automotive Radar Data

Johannes Kopp(Ulm University, Germany), Dominik Kellner(BMW AG, Germany), Aldi Piroli(Ulm University, Germany), Vinzenz Dallabetta(BMW AG, Germany), Klaus Dietmayer(Ulm University, Germany)

The unique properties of radar sensors make them an important part of the environment perception system of autonomous vehicles. However, the detection point clouds generated by automotive radar sensors contain a large amount of clutter, i.e., erroneous detections that do not correspond to any real object. To address this issue, we present and compare three different neural network architectures for identifying such clutter detections. In particular, a state-of-the-art PointNet++, a newly designed Transformer architecture, and an optimized convolutional neural network (CNN) setup are examined.
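
As a rough illustration of the point-wise clutter classification task described in this abstract (not the architectures actually evaluated in the paper), a minimal PointNet-style per-detection binary classifier could look like the sketch below; the 4-D input features, layer widths, and the name RadarClutterNet are assumptions.

```python
# Minimal sketch of a PointNet-style per-detection clutter classifier.
# Layer sizes and the 4-D input features (x, y, Doppler velocity, RCS)
# are illustrative assumptions, not the models compared in the paper.
import torch
import torch.nn as nn

class RadarClutterNet(nn.Module):
    def __init__(self, in_dim=4, hidden=64):
        super().__init__()
        # Shared per-point MLP (1-D convolutions over the point axis)
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_dim, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 1), nn.ReLU(),
        )
        # Per-point head combining local and global (max-pooled) features
        self.head = nn.Sequential(
            nn.Conv1d(2 * hidden, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, 1, 1),  # logit: clutter vs. real detection
        )

    def forward(self, pts):                              # pts: (B, in_dim, N)
        local = self.point_mlp(pts)                      # (B, H, N)
        global_feat = local.max(dim=2, keepdim=True)[0]  # (B, H, 1)
        global_feat = global_feat.expand_as(local)       # broadcast to all points
        return self.head(torch.cat([local, global_feat], dim=1)).squeeze(1)

logits = RadarClutterNet()(torch.randn(2, 4, 128))  # (2, 128) per-point clutter logits
```
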
16:55-17:10        TuC3-2
Performance Analysis of NIA Artificial Intelligence Training Data for 2D Object Detection and 2D Semantic Segmentation

Youn-Ho Choi, Seok-Cheol Kee(Chungbuk National University, Korea)

This paper analyzes the datasets used to validate 2D object detection (the 35th dataset of passenger autonomous driving car driving data) and 2D semantic segmentation (the 36th dataset of passenger autonomous driving car driving data) within the 2022 NIA Artificial Intelligence Training Data Construction Project. The goal of the project is to build datasets for 2D object detection and 2D semantic segmentation that are optimized for the domestic road environment, covering scenarios such as bad weather, night, and daytime driving, by collecting data suited to the domestic environment, and to verify the validity of the data.
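
To give a concrete sense of the kind of validity check such dataset analysis involves (the project's actual validation protocol is not reproduced here), a per-class IoU computation over semantic segmentation labels might look as follows; the class count and array shapes are assumptions.

```python
# Illustrative per-class IoU check for semantic segmentation labels
# (assumed shapes and class count; not the NIA project's official tooling).
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """pred, gt: (H, W) integer class maps. Returns IoU per class (NaN if class absent)."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        ious.append(np.logical_and(p, g).sum() / union if union else np.nan)
    return np.array(ious)

pred = np.random.randint(0, 5, (480, 640))
gt = np.random.randint(0, 5, (480, 640))
print(np.nanmean(per_class_iou(pred, gt, num_classes=5)))  # mean IoU
```
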
17:10-17:25        TuC3-3
Safe Teleoperation of the Vehicle through Delay Compensation Combined Speed and Separation Monitoring with Potential Field Approach

Teressa Thalluri, Eugene Kim, Hyunrok Cha(Korea Institute of Industrial Technology, Korea)

This study applies Speed and Separation Monitoring combined with a potential field approach to meet the safety requirements of vehicle teleoperation in collision-free scenarios. To improve the effectiveness of this combination, a continuous adaptation of the vehicle's relative velocity to obstacles is proposed. An artificial potential field approach is used to explore the impact of Speed and Separation Monitoring on the mobility of teleoperated unmanned ground vehicles and to evaluate a new way of addressing the associated challenges.
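
A simplified sketch of how a repulsive potential field term and a separation-dependent speed limit could be combined is shown below; the gains, distance thresholds, and function names are assumptions for illustration, not the paper's implementation.

```python
# Toy combination of Speed and Separation Monitoring with a repulsive
# potential field; gains, thresholds, and names are illustrative assumptions.
import numpy as np

def repulsive_velocity(vehicle_pos, obstacle_pos, gain=1.0, influence_radius=5.0):
    """Repulsive velocity pushing the vehicle away from a nearby obstacle."""
    diff = vehicle_pos - obstacle_pos
    dist = np.linalg.norm(diff)
    if dist >= influence_radius or dist == 0.0:
        return np.zeros(2)
    # Classic APF form: magnitude grows as the obstacle gets closer.
    magnitude = gain * (1.0 / dist - 1.0 / influence_radius) / dist**2
    return magnitude * diff / dist

def speed_limit(separation, stop_dist=2.0, full_speed_dist=10.0, v_max=5.0):
    """Separation monitoring: scale the allowed speed with obstacle distance."""
    scale = np.clip((separation - stop_dist) / (full_speed_dist - stop_dist), 0.0, 1.0)
    return v_max * scale

vehicle, obstacle = np.array([0.0, 0.0]), np.array([3.0, 1.0])
v_cmd = np.array([4.0, 0.0]) + repulsive_velocity(vehicle, obstacle)
sep = np.linalg.norm(vehicle - obstacle)
v_cmd = v_cmd / np.linalg.norm(v_cmd) * min(np.linalg.norm(v_cmd), speed_limit(sep))
print(v_cmd)  # commanded velocity after repulsion and speed capping
```
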
17:25-17:40        TuC3-4
Robust Lane Detection in Various Environments for Lane Following Assist

Sumin Kim(Chungbuk National University, Korea), Jiwon Heo, Yeongbae Hwang(Chungbuk National University, Korea)

This paper proposes a simple system that detects binary lane images in real time across various environments and enables autonomous driving along the center of the detected lanes. A lightweight ENet model is used to perform real-time lane segmentation; it is trained on CamVid with extracted lane regions and ensures accurate segmentation regardless of lighting conditions. The modified model achieves approximately twice the performance of the original model on a newly acquired test set while maintaining the inference speed. In addition, we present an algorithm for extracting the steering angle from the segmented lanes generated by the modified model.
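
As a rough illustration of extracting a steering angle from a binary lane mask (not the paper's algorithm; the look-ahead row and geometry are assumptions), one simple scheme scans a fixed row of the mask and steers toward the lane centre:

```python
# Toy steering-angle extraction from a binary lane segmentation mask.
# The look-ahead row and geometry constants are illustrative assumptions.
import numpy as np

def steering_angle_from_mask(mask, lookahead_frac=0.7):
    """mask: (H, W) binary lane mask. Returns a steering angle in radians
    from the lateral offset between the lane centre and the image centre."""
    h, w = mask.shape
    row = int(h * lookahead_frac)               # look-ahead scan line
    cols = np.flatnonzero(mask[row])
    if cols.size == 0:
        return 0.0                              # no lane pixels: keep straight
    lane_center = cols.mean()
    offset = lane_center - w / 2.0              # pixels; >0 means lane is to the right
    return float(np.arctan2(offset, h - row))   # angle toward the lane centre

mask = np.zeros((480, 640), dtype=np.uint8)
mask[:, 300:340] = 1                            # synthetic lane slightly left of centre
print(np.degrees(steering_angle_from_mask(mask)))
```
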
17:40-17:55        TuC3-5
Visual Localization of Intersections on Autonomous Vehicles Based on HD Map

Bizza Shafwah Utsula, Yul Yunazwin Nazaruddin, Nadana Ayzah Azis, Muhammad Dhany Ashedananta, Vebi Nadhira(Institut Teknologi Bandung, Indonesia)

The increasing number of autonomous vehicles raises safety concerns, particularly at intersections, where most road accidents occur each year. This research aims to improve the localization process by implementing a High Definition Map (HD Map), which stores a detailed representation of an area in the autonomous vehicle's onboard memory. The proposed system processes monocular camera images into a pixel-level semantic segmentation feature map, then uses a point cloud to reconstruct a 3D intersection model, which is likewise semantically segmented so that position data can be obtained.
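
A highly simplified sketch of the final map-matching idea, in which reconstructed intersection points are aligned to stored HD-map points to recover the vehicle position, is given below; it fits only a 2-D translation with known correspondences and synthetic data, and does not reproduce the paper's segmentation and 3-D reconstruction pipeline.

```python
# Toy 2-D translation fit between reconstructed intersection points and
# HD-map points; correspondences and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
map_pts = rng.uniform(-10, 10, (50, 2))           # intersection landmarks stored in the HD map
true_offset = np.array([2.0, -1.5])               # unknown vehicle displacement
obs_pts = map_pts - true_offset + rng.normal(0, 0.05, (50, 2))  # points seen from the vehicle

# With known correspondences, the least-squares translation is the mean difference.
est_offset = (map_pts - obs_pts).mean(axis=0)
print("estimated vehicle offset:", est_offset)    # close to [2.0, -1.5]
```
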
