WB8 Adaptive and Robust Control
Time : 13:00-14:30
Room : Room 8 (Ocean Bay)
Chair : Prof. Kyunghwan Choi (GIST, Korea)
13:00-13:15        WB8-1
Quadrotor Attitude Control Using Adaptive Novel Super-Twisting Algorithm

Hyunchang Kim, Hyeongki Ahn, Kwanho You(Sungkyunkwan University, Korea)

In this study, we introduce an adaptive novel super-twisting algorithm (ANSTA) controller, which incorporates the STA to overcome the chattering phenomenon in sliding mode control (SMC) and enables more stable flight. We establish a mathematical stability proof for the ANSTA and validate its performance through simulations. Detailed simulations demonstrate that the ANSTA outperforms the traditional STA and SMC in attitude stabilization. The ANSTA offers valuable insights for advancing quadrotor flight control, enhancing stability, and paving the way for future research.
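The standard super-twisting law underlying such controllers can be sketched as follows (an illustrative implementation, not the authors' ANSTA; the toy dynamics, gains k1 and k2, and disturbance are assumptions):

```python
import math

def super_twisting_step(s, v, k1, k2, dt):
    """One Euler step of the standard super-twisting algorithm (STA).
    s: sliding variable, v: integral state of the discontinuous term,
    k1/k2: positive gains (an adaptive variant such as ANSTA would
    update these online). Returns (control output, next integral state)."""
    sign_s = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sign_s + v
    v_next = v + dt * (-k2 * sign_s)
    return u, v_next

# Toy closed loop: s_dot = u + bounded disturbance
s, v, t, dt = 1.0, 0.0, 0.0, 0.001
for _ in range(5000):
    u, v = super_twisting_step(s, v, k1=3.0, k2=4.0, dt=dt)
    s += dt * (u + 0.1 * math.sin(5.0 * t))
    t += dt
```

Because the discontinuous term acts only inside the integrator, the control signal itself is continuous, which is how the STA mitigates the chattering of first-order SMC.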
13:15-13:30        WB8-2
DDPG-Based PID Optimal Controller for Position and Sway Angle of RTGC

Steven Bandong, Yul Yunazwin Nazaruddin(Institut Teknologi Bandung, Indonesia)

The Deep Deterministic Policy Gradient (DDPG), a recently developed reinforcement-learning approach, provides continuous actions that can be applied to controller optimization problems. DDPG learns from its interactions with the environment, allowing it to identify the optimal parameter region. This paper proposes DDPG as a PID controller optimizer for the position and sway angle of an RTGC, tested on six episode variations and a reference trajectory. The results reveal that the obtained optimal PID parameters perform well across a spectrum of reference trajectories, as well as under varying system parameters.
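The gain-tuning setup can be illustrated as follows (a hedged sketch: the plant is a toy damped double integrator standing in for the RTGC model, and a seeded random search stands in for the DDPG actor, whose continuous action would be the gain triple):

```python
import random

def pid_rollout(kp, ki, kd, steps=2000, dt=0.01):
    """Cumulative |tracking error| of a PID loop on a toy damped double
    integrator (an assumed stand-in for the RTGC position/sway dynamics)."""
    x = x_dot = integ = prev_e = 0.0
    cost, ref = 0.0, 1.0              # unit step reference
    for _ in range(steps):
        e = ref - x
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        x_dot += dt * (u - 0.5 * x_dot)
        x += dt * x_dot
        if abs(x) > 1e6:              # diverged: penalize unstable gains
            return float("inf")
        cost += abs(e) * dt
    return cost

# A DDPG actor would output (kp, ki, kd) as its continuous action and learn
# from the negative cost as reward; random search stands in for the policy.
random.seed(0)
best = min(((random.uniform(0, 10), random.uniform(0, 2), random.uniform(0, 5))
            for _ in range(200)),
           key=lambda g: pid_rollout(*g))
```

The key point matching the abstract is that the action space is continuous, so gain triples can be searched directly rather than discretized.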
13:30-13:45        WB8-3
Trajectory Tracking Control of Reconfigurable Space Manipulator

Ying Tian, Ying-Hong Jia(Beihang University, China)

The reconfigurable space manipulator can adjust its configuration according to task requirements, which makes it of great research value for complex and diverse space missions. In this paper, the trajectory tracking control problem during system reconfiguration is considered when there is a small uncertainty in the arm lengths. First, a linearized relative-error control model is derived under the assumption that second- and higher-order small quantities can be neglected. On this basis, an adaptive control and parameter update law are designed, and the stability of the controller is proved by Lyapunov theory. The effectiveness of the proposed method is verified by numerical simulation.
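A generic certainty-equivalence adaptive scheme of the kind described (a standard textbook template with assumed notation $e$, $W$, $\Gamma$, $P$; not the paper's specific design) takes the form:

```latex
\dot{e} = A e + W(q,\dot{q})\,\tilde{\theta}, \qquad
\dot{\hat{\theta}} = -\,\Gamma\, W^{\top}(q,\dot{q})\, P e, \qquad
\tilde{\theta} = \hat{\theta} - \theta,
```

where $\theta$ collects the uncertain parameters (here, the arm lengths). With $V = \tfrac12 e^{\top} P e + \tfrac12 \tilde{\theta}^{\top} \Gamma^{-1} \tilde{\theta}$ and $PA + A^{\top}P = -Q$, the cross terms cancel and $\dot V = -\tfrac12 e^{\top} Q e \le 0$, which is the shape of the Lyapunov stability argument the abstract refers to.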
13:45-14:00        WB8-4
Adaptive Neural Nonlinear Dynamic Inversion Control of Aircraft in Icing Conditions

Amin Rabiei Beheshti, Yoonsoo Kim, Rho Shin Myong(Gyeongsang National University, Korea)

This study addresses the challenges posed by aircraft icing, including performance degradation and control issues. Ice accumulation changes the aerodynamic coefficients, highlighting the need for an autopilot system. The research analyzes aircraft response in normal and icing conditions, introducing an adaptive neural nonlinear dynamic inversion flight control system. To handle icing uncertainties, radial basis function (RBF) networks are employed. The control design is carried out with a Lyapunov function, considering the desired, actual, and estimated models and defining the corresponding errors. Simulation results show that this method effectively manages the aircraft under icing conditions.
14:00-14:15        WB8-5
The Koopman-based Reinforcement Learning Environment for Quadrotor Control

YunA Oh, Jun Moon(Hanyang University, Korea)

In this paper, we propose a Koopman-based linear system identification method for the soft actor-critic environment. We identify a linear state-space model of the quadrotor and use this model as the update rule of the soft actor-critic environment. By using the Koopman-based state space, the soft actor-critic can learn a policy that remains applicable even in noisy environments. We demonstrate the proposed approach through an experiment with a quadrotor in a windy environment.
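The linear identification step can be illustrated with a least-squares (DMDc-style) fit over lifted states; this sketch assumes numpy, a scalar toy system, and a simple polynomial dictionary of observables — not the paper's quadrotor model:

```python
import numpy as np

def lift(x):
    """Lifting (observable) functions: an assumed polynomial dictionary."""
    return np.array([x, x ** 2, x ** 3])

# Collect data from a toy nonlinear system: x+ = 0.9*x - 0.1*x**3 + 0.05*u
rng = np.random.default_rng(0)
xs, us = [0.5], []
for _ in range(300):
    u = rng.uniform(-1, 1)
    us.append(u)
    xs.append(0.9 * xs[-1] - 0.1 * xs[-1] ** 3 + 0.05 * u)

Z = np.column_stack([lift(x) for x in xs[:-1]])    # lifted states
Zp = np.column_stack([lift(x) for x in xs[1:]])    # lifted next states
U = np.array(us)[None, :]
# Solve Zp ~ A Z + B U in least squares: [A B] = Zp @ pinv([Z; U])
AB = Zp @ np.linalg.pinv(np.vstack([Z, U]))
A, B = AB[:, :3], AB[:, 3:]
```

The fitted pair (A, B) then serves as the linear model that an RL environment (here, for the soft actor-critic) can step forward cheaply, instead of the original nonlinear dynamics.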
14:15-14:30        WB8-6
EID Observer based Reinforcement Learning Control for a Rotary Inverted Pendulum

Jiwon Seo(Chung-Ang University, Korea), Sesun You, Kwankyun Byeon, Wonhee Kim(Chung-Ang University, Korea)

In this paper, an equivalent input disturbance (EID) observer based reinforcement learning (RL) control for a rotary inverted pendulum is proposed. An RL-based controller overcomes the model complexity of the rotary inverted pendulum to achieve accurate control. The RL-based controller is trained using the twin delayed deep deterministic policy gradient (TD3) algorithm, which is based on the deep deterministic policy gradient (DDPG).
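The equivalent-input-disturbance idea can be illustrated on a scalar plant (a hedged sketch with assumed dynamics and filter gain, not the paper's observer design): the one-step prediction residual is mapped back through the input channel and low-pass filtered to estimate the disturbance as if it entered at the input.

```python
def eid_step(x_meas, x_pred, b, d_hat, alpha=0.2):
    """Equivalent-input-disturbance estimate: the prediction residual
    mapped through the input gain b, low-pass filtered with assumed
    filter coefficient alpha."""
    d_raw = (x_meas - x_pred) / b
    return (1.0 - alpha) * d_hat + alpha * d_raw

# Toy loop: x+ = a*x + b*(u + d); the controller cancels d_hat
a, b, d_true = 0.9, 0.5, 0.3
x, d_hat = 0.0, 0.0
for _ in range(200):
    u = -d_hat                  # feedforward cancellation (no feedback, for clarity)
    x_next = a * x + b * (u + d_true)
    x_pred = a * x + b * u      # model prediction without disturbance
    d_hat = eid_step(x_next, x_pred, b, d_hat)
    x = x_next
```

In the paper's setting the RL controller provides the stabilizing action while the EID estimate compensates the lumped disturbance; here the cancellation term alone shows the estimate converging to the true input disturbance.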
