Human Body Key Point Estimation Based on FMCW Radar Images

Booth Id:
ROBO001

Category:
Robotics and Intelligent Machines

Year:
2024

Finalist Names:
Vajda, Adam (School: ELTE's Radnoti Miklos Affiliated Secondary School)

Abstract:
This research addresses the challenge of estimating human body key points from radar images, a crucial capability in computer vision with applications in surveillance, gesture recognition, and virtual reality. The study introduces the "See-Thru Model," a deep learning convolutional neural network (CNN) designed for key point detection in radar data, with a particular focus on enhancing rescue operations during and after natural disasters. The investigation entailed the development of a radar-camera synchronization system for data acquisition. Subsequently, several deep learning models were designed and trained to identify and predict human body key points from radar imagery. This process demonstrates how CNNs can be adapted to interpret radar image data and highlights training approaches that improve accuracy. The See-Thru Model exhibited promising accuracy in identifying human body key points within radar images, addressing the inherent complexities of radar image interpretation. These findings highlight the model's potential to augment situational awareness and support critical decision-making in emergency scenarios, such as rescue missions and close combat situations, where visual occlusion is present. This research demonstrates the viability and effectiveness of deep learning for human body key point estimation from radar images. The See-Thru Model's performance advances the field of computer vision and opens new possibilities for applications in emergency response and interactive technologies, marking a significant step toward integrating AI with radar imaging for real-world problem-solving.
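
The abstract does not disclose the See-Thru Model's architecture, radar image format, or training procedure. As a purely illustrative sketch of the general approach it describes (a CNN trained on synchronized radar-camera data to regress body key points), the following PyTorch example shows one minimal way such a pipeline could look; the class name RadarKeypointCNN, the 17-key-point skeleton, the layer sizes, and the mean-squared-error loss are all assumptions, not the project's actual implementation.

# Illustrative sketch only: every name and value below is a placeholder,
# since the abstract does not detail the See-Thru Model's design.
import torch
import torch.nn as nn

class RadarKeypointCNN(nn.Module):
    """Minimal CNN mapping a single-channel radar image to 2D body key points."""

    def __init__(self, num_keypoints: int = 17):  # 17 = COCO-style skeleton (assumption)
        super().__init__()
        self.num_keypoints = num_keypoints
        # Convolutional feature extractor over the radar image
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head producing (x, y) for each key point
        self.head = nn.Linear(128, num_keypoints * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.head(feats).view(-1, self.num_keypoints, 2)

# Supervision sketch: camera-derived key points act as training targets,
# mirroring the radar-camera synchronization described in the abstract.
model = RadarKeypointCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

radar_batch = torch.randn(8, 1, 64, 64)    # placeholder radar images
camera_keypoints = torch.rand(8, 17, 2)    # placeholder camera-labeled targets
optimizer.zero_grad()
loss = criterion(model(radar_batch), camera_keypoints)
loss.backward()
optimizer.step()

In practice, key point estimators often predict per-joint heatmaps rather than regressing coordinates directly; the abstract does not indicate which formulation the See-Thru Model uses.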