FA1 AI Applications 1
Time : 09:00-10:30
Room : Room 1 (Convention Center)
Chair : Dr. Se Yoon Oh (Agency for Defense Development, Korea)
09:00-09:15        FA1-1
Simulation of Physical Adversarial Attacks on Vehicle Detection Models

Se Yoon Oh, Hunmin Yang(Agency for Defense Development, Korea)

Physical adversarial attacks aim to fool deep-learning-based object detectors by modifying the appearance of real-world objects or scenes. CG-based simulation techniques use computer graphics to generate realistic adversarial examples that can be printed or projected onto physical objects or scenes. This technical research paper investigates physical adversarial attacks using synthetic image data and computer-graphics-based simulation, exploring two application areas: object detection and physical adversarial attacks.
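The attack family summarized above can be illustrated with a minimal gradient-sign (FGSM-style) sketch. The linear "detector" and its weights below are toy stand-ins, not the detector, data, or attack pipeline from the paper:

```python
import numpy as np

# Toy sketch of a signed-gradient adversarial perturbation against a
# linear "detector". All quantities here are illustrative placeholders.

rng = np.random.default_rng(0)
w = rng.normal(size=64)          # toy detector weights
x = rng.normal(size=64)          # pixel features of the target object

def detect_score(x, w):
    """Toy detection confidence: higher means 'object detected'."""
    return float(w @ x)

def fgsm_perturb(x, w, eps=0.1):
    """One signed-gradient step that lowers the detection score.
    For score = w.x, the gradient w.r.t. x is w itself."""
    return x - eps * np.sign(w)

x_adv = fgsm_perturb(x, w)
assert detect_score(x_adv, w) < detect_score(x, w)
```

A physical attack additionally has to survive printing or projection, which is where the CG simulation of viewpoint and lighting comes in.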
09:15-09:30        FA1-2
Fuzzy Inference System-applied Spacecraft Control for Final Approach of Rendezvous Process

Daegyun Choi(University of Cincinnati, United States), Anirudh Chhabra(University of Cincinnati, United States), Donghoon Kim(University of Cincinnati, United States)

This work proposes an intelligent spacecraft control strategy using a fuzzy inference system (FIS) for a safe final approach of a servicing spacecraft, referred to as the chaser, toward a target spacecraft in need of servicing. The proposed FIS models are trained by a genetic algorithm using representative initial conditions, without considering disturbances, with the objective of minimizing the chaser's energy consumption. To validate the performance and effectiveness of the proposed model, the trained FIS-based control strategy is applied to various testing scenarios that consider random initial relative positions of the chaser, even in the presence of external disturbances.
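The core FIS idea can be sketched with a two-rule Sugeno-style controller mapping relative distance to approach speed. The membership breakpoints and rule consequents below are illustrative placeholders, not the GA-trained parameters from the paper:

```python
# Minimal Sugeno-style fuzzy inference sketch: relative distance -> speed.
# Breakpoints and consequents are toy values; in the paper's approach a
# genetic algorithm would tune such parameters to minimize energy use.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def approach_speed(dist):
    """Two rules: 'near -> slow', 'far -> fast'; weighted-average defuzzification."""
    mu_near = tri(dist, -1.0, 0.0, 10.0)
    mu_far = tri(dist, 0.0, 10.0, 21.0)
    slow, fast = 0.05, 1.0          # rule consequents (toy m/s values)
    den = mu_near + mu_far
    return (mu_near * slow + mu_far * fast) / den if den else 0.0
```

The chaser slows as it nears the target, which is the qualitative behavior a safe final approach requires.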
09:30-09:45        FA1-3
Development of Mobile Robot with Autonomous Mobile Robot Weeding and Weed Recognition by Using Computer Vision

Azamat Nurlanovich Yeshmukhametov(Nazarbayev University, Kazakhstan), Daniyar Dauletiya(Astana IT University, Kazakhstan), Mukhtar Zhassuzak, Zholdas Buribayev(Kazakh National University, Kazakhstan)

09:45-10:00        FA1-4
Multi-Source Tasks in Transfer Learning for Deep Reinforcement Learning: Application to Robotics Simulations

Younjae Go(Hanyang University, Korea), Jun Moon(Hanyang University, Korea)

In deep reinforcement learning, applying transfer learning can lead to incorrect learning biases and result in unstable performance. To solve this limitation, we propose a more efficient approach called multi-source tasks, which utilizes multiple source tasks instead of the traditional single-source task approach. Moreover, we present the optimal way to learn multi-source tasks. We demonstrate that in a robotic manipulation environment, the method that uses multi-source tasks outperforms the method that uses a single-source task in transfer learning.
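The multi-source idea can be sketched as initializing the target-task policy from an aggregate of several source-task policies rather than copying a single one. Averaging is an illustrative aggregation rule, not necessarily the combination the paper proposes:

```python
import numpy as np

# Toy sketch of multi-source vs. single-source policy initialization
# for transfer learning. Policy "parameters" are plain vectors here.

rng = np.random.default_rng(1)
source_policies = [rng.normal(size=8) for _ in range(3)]  # 3 source tasks

def init_from_single(sources):
    """Classic single-source transfer: copy one source policy."""
    return sources[0].copy()

def init_from_multi(sources):
    """Multi-source transfer: aggregate (here, average) all sources."""
    return np.mean(sources, axis=0)

theta = init_from_multi(source_policies)
```

Starting from an aggregate can dilute the bias any one source task would otherwise impose on the target task.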
10:00-10:15        FA1-5
Enhancing Autonomous Robot Navigation based on Deep Reinforcement Learning: Comparative Analysis of Reward Functions in Diverse Environments

Nabih Pico(Sungkyunkwan University, Korea), Junsang Lee, Estrella Montero, Eugene Auh, Meseret Tadese, Jeongmin Jeon(Sungkyunkwan University, Korea), Manuel Alvarez(Escuela Politecnica del Litoral, Ecuador), Hyungpil Moon(Sungkyunkwan University, Korea)

Autonomous robot navigation in complex environments presents a significant challenge due to the need for efficient decision-making when reaching goals and avoiding obstacles. This paper addresses this issue through the use of deep reinforcement learning techniques and a comprehensive analysis of reward functions and their impact on autonomous navigation. The study emphasizes the importance of selecting the most effective reward functions to achieve maximum robot performance in a variety of scenarios. Moreover, we propose a new reward mechanism that enables the robot to avoid collisions when objects move faster than the robot, resulting in the robot halting its motion to allow the object to pass before resuming.
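The kind of reward-function comparison discussed above can be illustrated with two toy navigation rewards, a sparse goal reward and a shaped progress reward. The weights and terms are placeholders, not the reward definitions evaluated in the paper:

```python
# Illustrative navigation rewards for a DRL agent; all constants are
# toy values, not the paper's reward design.

def sparse_reward(dist_to_goal, collided, goal_radius=0.2):
    """Reward only at the goal; penalize collisions."""
    if collided:
        return -1.0
    return 1.0 if dist_to_goal < goal_radius else 0.0

def shaped_reward(prev_dist, dist_to_goal, collided, k=1.0):
    """Dense reward proportional to progress toward the goal."""
    if collided:
        return -1.0
    return k * (prev_dist - dist_to_goal)
```

Shaped rewards give a learning signal on every step, while sparse rewards avoid biasing the path; comparing such variants across environments is the kind of analysis the abstract describes.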
