ISEF | Projects Database | Finalist Abstract

A New Approach to Image-Based 3D Virtual Try-On Using Deep Learning

Category:

Systems Software


Finalist Names:
Huh, ChaeWon (School: Kyeongbuk High School)
Huh, Chae Young (School: Daegu NamSan High School)
Chang, Eun Woo (School: Gyeongil Girl’s High School)

Newly ordered clothes from an online fashion mall often do not look good on us when we try them on after delivery. We look at the photo of the clothed model and imagine that the clothes will look as good on us as they do on the model, but imagination and reality differ. If we could try on clothes in a virtual space before placing an order, we could buy clothes more successfully. Current virtual fitting dresses virtual avatars in digitized 3D clothes; converting real garments into 3D data takes time and money, which is the main reason current virtual fitting is not used in practice. Our new virtual try-on uses clothed model photos and user photos instead of 3D digitized clothes and avatars. The result is a 3D shape of the user wearing the model's clothes. We use a deep learning U-Net model to remove the background from the model picture and the user picture, and use Dlib, BiSeNet, and OpenCV warpAffine to create a new picture in which the model's face is replaced with the user's face. We then perform 3D human shape reconstruction from the new picture and the user's picture using PIFuHD. Next, we modify the new 3D body shape, which has the user's head and the model's body, by moving the surface along the normal vector at each vertex according to the body shape ratio of the user to a standard body. Finally, we merge in a selected background. By working with pre-existing model photos instead of digitizing clothes into 3D data, the new virtual try-on is compatible with any outfit from an online fashion mall. We conclude that this new approach shows the potential of practical virtual fitting for successful online clothing purchases.