FB2 Human Robot Interaction
Time : 14:10-15:40
Room : Room 2 (Burano 1)
Chair : Dr. Kazuhisa Nakasho (Yamaguchi University, Japan)
14:10-14:25        FB2-1
Persona-based Dialogue Generation with Sentence Embedding for Prolonged Human-Robot Communication

Chen Yue, Armagan Elibol, Nak-Young Chong (JAIST, Japan)

Maintaining the consistency of dialogue logic throughout a conversation is one of the most fundamental and challenging problems for current dialogue generation systems. We propose a vertical-structure model based on BERT with a sentence embedding method. The model generates a raw response from the sentence embeddings of the context and persona, and then revises the raw response according to the persona. Moreover, an understanding task is designed so that the BERT decoder acquires better revision ability. Considering the differences between the generation and understanding models, three input methods are designed, one for each part of the model.
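The persona-based revision step described above could, in a greatly simplified form, look like the following sketch. This is not the authors' model: the toy 3-dimensional "embeddings", the candidate responses, and the cosine-similarity selection criterion are all illustrative assumptions (a real system would use 768-dimensional BERT sentence embeddings and a learned decoder).

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def revise(raw_candidates, persona_emb):
    # Pick the candidate response whose embedding best matches the
    # persona, mimicking the "revise according to the persona" step.
    return max(raw_candidates, key=lambda c: cosine(c["emb"], persona_emb))

# Toy 3-dim "sentence embeddings" (purely hypothetical values).
persona = [0.9, 0.1, 0.0]
candidates = [
    {"text": "I love hiking on weekends.", "emb": [0.8, 0.2, 0.1]},
    {"text": "I never go outdoors.",       "emb": [0.1, 0.9, 0.3]},
]
best = revise(candidates, persona)
```

Here the candidate closest to the persona embedding is kept, which is the intuition behind persona-consistent revision even though the paper's actual mechanism is a trained BERT decoder.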
14:25-14:40        FB2-2
Tracking of Needle Grip for the Augmented Reality Simulation of Vascular Injection

Qiaodi Yuan, Doo Yong Lee (KAIST, Korea)

This paper presents a method to track the motion of the needle grip during an AR simulation of vascular injection. The needle grip is occluded by the user's holding hand, so the method tracks the holding hand instead of tracking the needle grip directly. The displacement of the needle grip along the needle direction is estimated by tracking a selected joint of the holding hand, combined with the readings of the sensor embedded in the haptic interface. A method to mitigate the visual artifacts caused by detection errors is also developed. The methods are evaluated by nurses carrying out the simulation.
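The geometric core of estimating displacement "along the needle direction" from a tracked hand joint can be sketched as a projection onto the needle axis. This is only an illustration of that one step, under assumed joint positions and needle direction; the paper additionally fuses readings from the haptic interface's embedded sensor, which is not modeled here.

```python
import math

def displacement_along_needle(joint_prev, joint_curr, needle_dir):
    # Project the 3D displacement of a tracked hand joint onto the
    # needle axis to estimate how far the occluded grip moved along it.
    disp = [c - p for p, c in zip(joint_prev, joint_curr)]
    norm = math.sqrt(sum(d * d for d in needle_dir))
    unit = [d / norm for d in needle_dir]
    return sum(a * b for a, b in zip(disp, unit))

# Hypothetical joint positions (metres) and needle direction vector.
d = displacement_along_needle([0.10, 0.05, 0.30],
                              [0.12, 0.05, 0.34],
                              [1.0, 0.0, 2.0])
```

Because only the component along the needle axis matters for the injection depth, lateral hand jitter perpendicular to the needle is discarded by the projection.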
14:40-14:55        FB2-3
Posture Estimation for Bed Monitoring System Using RFID

Kazuhisa Nakasho, Chiaki Kohama, Kenta Sawada (Yamaguchi University, Japan), Katsumi Wasaki (Shinshu University, Japan), Nobuhiro Shimoi (Akita Prefectural University, Japan)

In this study, we introduce an RFID-based bed monitoring system tailored for the elderly and delineate a methodology for classifying in-bed postures using machine learning. Our previous research indicated a decline in posture recognition accuracy when the individual in the training data differed from the one in the testing data. In this study, we examine inter-individual differences by projecting subject data onto a two-dimensional space, and explore the reasons behind the diminished recognition accuracy observed in prior research. Furthermore, we discuss future directions for enhancing the posture recognition rate.
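Projecting subject data onto a two-dimensional space, as the abstract describes, is commonly done with principal component analysis. The sketch below is a generic, self-contained PCA via power iteration, not the authors' pipeline; the toy "subject feature vectors" are invented, and a real analysis would use a library implementation on RFID-derived features.

```python
import math
import random

def pca_2d(samples, iters=200, seed=0):
    # Project feature vectors onto their first two principal components
    # using power iteration on the covariance matrix (illustrative only).
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    X = [[s[j] - mean[j] for j in range(d)] for s in samples]

    def cov_mul(v):
        # Compute (X^T X / n) v without forming the covariance matrix.
        Xv = [sum(x[j] * v[j] for j in range(d)) for x in X]
        return [sum(X[i][j] * Xv[i] for i in range(n)) / n for j in range(d)]

    rng = random.Random(seed)
    comps = []
    for _ in range(2):
        v = [rng.random() for _ in range(d)]
        for _ in range(iters):
            w = cov_mul(v)
            # Deflate: remove directions of already-found components.
            for c in comps:
                proj = sum(a * b for a, b in zip(w, c))
                w = [a - proj * b for a, b in zip(w, c)]
            norm = math.sqrt(sum(a * a for a in w))
            v = [a / norm for a in w]
        comps.append(v)
    return [[sum(x[j] * c[j] for j in range(d)) for c in comps] for x in X]

# Hypothetical per-subject feature vectors; variance lies mostly on axis 0.
subjects = [[0.0, 0.00], [1.0, 0.10], [2.0, -0.10], [3.0, 0.00], [4.0, 0.05]]
Y = pca_2d(subjects)
```

In such a projection, subjects whose in-bed sensor signatures differ systematically would appear as separated clusters, which is one way inter-individual differences become visible.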
14:55-15:10        FB2-4
Virtual Reality based Intuitive Spatial Visual Interface for Avatar Robot System

Jaeyong Shin, Junewhee Ahn, Suhan Park, Beomyeong Park, Junhyeok Cha, Jaeheung Park (Seoul National University, Korea)

This paper explores the development and evaluation of an avatar robot system, prioritizing the operator's visual experience. The aim is to create an intuitive, immersive system using virtual reality (VR) technology. It details the hardware and software setup, emphasizing the VR equipment and control methods. The user-centric interface design is also discussed, and the system's impact on operator performance and satisfaction is evaluated. The results of this study contribute to the field of avatar robot systems by providing insight into the use of VR technology for an intuitive and immersive operator experience.
15:10-15:25        FB2-5
Design of a Dog-type Social Robot to Support Children's Reading Activities and Development of a Touch Sensor Module for Users' Touch Interaction

YongSeop Kwon, Seungbin Jeong, Haeun Park, Hui Sung Lee (UNIST, Korea)

This study aims to develop a dog-type robot that can be operated in libraries, based on research on "Read to a Dog" programs. The purpose of this study is to stimulate children's interest in reading through a reading activity support robot, while giving them an emotional experience of artificial intelligence through robots. Through an analysis of the robot's use environment and its users, the appearance of the robot is designed and the service requirements are defined. In addition, for touch interaction enabling emotional communication between children and the dog-type robot, this paper proposes a touch sensor module.
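The abstract gives no implementation detail for the touch sensor module, but touch interaction of this kind is often reduced to classifying a time series of sensor readings. The following sketch is entirely hypothetical: the sampling period, threshold, and the "pat" vs "stroke" duration rule are invented for illustration and are not from the paper.

```python
def classify_touch(samples, period=0.02, threshold=0.5):
    # Classify a touch event from a time series of sensor readings:
    # brief contact -> "pat", sustained contact -> "stroke".
    # All thresholds are hypothetical; a real module would be calibrated.
    contact = [s > threshold for s in samples]
    duration = sum(contact) * period  # seconds spent in contact
    if duration == 0:
        return "none"
    return "pat" if duration < 0.3 else "stroke"

brief = classify_touch([0.1, 0.8, 0.9, 0.2])  # ~0.04 s of contact
long_ = classify_touch([0.8] * 40)            # ~0.8 s of contact
```

A real emotional-interaction module would likely distinguish more gesture classes and use touch location as well, but duration-based rules are a common baseline.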
15:25-15:40        FB2-6
Measurement of Gait Information in Wheeled Assistive Walker using RGBD Camera

Seonghee Jeong (Osaka Electro-Communication University, Japan)

In this study, we proposed a method to measure various gait parameters of a walker user using only an RGBD camera integrated into a commercially available walking aid. The proposed method involves attaching markers to the person's feet and tracking the 3D positions of the markers during walking with the RGBD camera. By employing this method, we were able to estimate gait data such as stride length, step width, and walking angle (distance factors), as well as gait phases (stance and swing phases), foot height, and walking distance (time factors). We confirmed through walking experiments with four different walking patterns that the proposed method successfully measured each gait parameter.
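Two of the distance factors named above, stride length and step width, follow directly from the tracked 3D marker positions at ground contact. The sketch below illustrates that geometry only; the contact positions, the camera-frame axis convention (x lateral, y up, z forward), and the contact-detection step are assumptions, not the paper's method.

```python
import math

def stride_length(placements):
    # Distance between consecutive ground contacts of the SAME foot,
    # measured in the horizontal (x, z) plane.
    return [math.hypot(b[0] - a[0], b[2] - a[2])
            for a, b in zip(placements, placements[1:])]

def step_width(left, right):
    # Lateral (x-axis) separation between paired left/right contacts.
    return [abs(l[0] - r[0]) for l, r in zip(left, right)]

# Hypothetical marker positions (metres) at ground contact.
left_contacts  = [(0.00, 0.0, 0.0), (0.00, 0.0, 1.2), (0.02, 0.0, 2.4)]
right_contacts = [(0.15, 0.0, 0.6), (0.16, 0.0, 1.8), (0.15, 0.0, 3.0)]
strides = stride_length(left_contacts)                # ~ [1.2, 1.2]
widths  = step_width(left_contacts, right_contacts)   # ~ [0.15, 0.16, 0.13]
```

Gait phases would come from the temporal segmentation of the same marker trajectories (foot stationary during stance, moving during swing), which is omitted here.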
