ISEF | Projects Database | Finalist Abstract

Cross-Age Face Recognition Based on Deep Neural Network with Multi-Stage Feature Decomposition

Booth Id:
ROBO017

Category:
Robotics and Intelligent Machines

Year:
2021

Finalist Names:
Jiang, Xinyun (School: Hangzhou Foreign Languages School)

Abstract:
Automated face recognition has extensive real-life applications, but in scenarios such as searching for lost or kidnapped children or matching ID photos, the image of a person stored in the face database and the image to be recognized may have been taken at very different ages and therefore differ substantially, which makes cross-age face recognition a challenging problem. The key technique in cross-age face recognition is to decompose facial features and separate identity-related features from the mixed representation; in particular, it is vital to thoroughly disentangle age-related features from identity-related features so as to reduce the interference of age. Existing cross-age face recognition models suffer from weaknesses such as incomplete feature decomposition, overlap between the decomposed features, and information loss caused by the decomposition. To address these problems, this study proposes a multi-task deep neural network model (SWNet) based on multi-stage feature decomposition. Specifically, residual decomposition or orthogonal decomposition is applied at each feature-extraction layer. At each feature-decomposition step in the deep neural network, the identity-related and age-related features are connected directly to the final loss function for supervised learning, so that the features separated at every feature-extraction layer directly influence identity recognition. The features of each layer are then integrated with those of the preceding layers to compensate for the information lost in earlier decompositions. Extensive experiments on popular cross-age datasets show that the proposed approach outperforms state-of-the-art methods.
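The abstract does not give implementation details, but the core idea of orthogonally decomposing a layer's mixed feature into an age-related projection and an identity-related residual can be illustrated with a minimal NumPy sketch. This is not the authors' SWNet implementation; the `age_basis` below stands in for a hypothetical learned age subspace, and the random vectors are placeholders for real layer activations:

```python
import numpy as np

def orthogonal_decompose(x, age_basis):
    """Split a mixed feature vector x into an age-related component
    (its projection onto the span of age_basis) and an identity-related
    residual orthogonal to that subspace."""
    # Orthonormalize the age-subspace basis vectors (columns) via QR.
    q, _ = np.linalg.qr(age_basis)
    age_part = q @ (q.T @ x)        # projection onto the age subspace
    identity_part = x - age_part    # residual, orthogonal to the subspace
    return identity_part, age_part

rng = np.random.default_rng(0)
x = rng.normal(size=64)                # stand-in for one layer's feature
age_basis = rng.normal(size=(64, 8))   # hypothetical learned age subspace
id_feat, age_feat = orthogonal_decompose(x, age_basis)

# The two components sum back to x (no information loss at this step)
# and are mutually orthogonal (no overlap between them).
assert np.allclose(id_feat + age_feat, x)
assert abs(id_feat @ age_feat) < 1e-8
```

In the model described above, such a separation would be performed at every feature-extraction layer, with both components supervised by the final loss; residual decomposition differs in that the age component is predicted by a sub-network rather than obtained by orthogonal projection.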