Demonstrating the impact of transferring the face aging effect from one cohort to another. Top row: original images from the cohorts. Bottom row: aged variants. (Image: Zhenhua Feng et al.)

A facial recognition system that recognizes racial differences will yield more accurate results than a “one-size-fits-all” model, researchers from the University of Surrey report in the journal Pattern Recognition. The team found that a 3-D morphable face model trained specifically to recognize black, white and Asian faces in 2-D images performed better at identifying faces at different angles and in different lighting scenarios than previous models.

“The main target of our paper is to deal with extreme pose variations in face recognition. We found that multi-modal 3D face models constructed using face attributes (e.g. race, gender, age or expression) can recover the 3D face from a single 2D image better than a unified model, hence improve the performance in pose-invariant face recognition,” explained lead author Zhenhua Feng, from the University of Surrey’s Centre for Vision, Speech and Signal Processing, in an email to Forensic Magazine.

The team worked with a dataset of 942 3-D-scanned faces of people from different racial backgrounds to develop their model, the Gaussian Mixture 3D Morphable Face Model (GM-3DMM), which accounts for the fact that the average measurements of certain facial features differ across racial populations. Working with these distinct averages, GM-3DMM can make better predictions when attempting to match a 2-D face image to a 3-D subject.
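The core idea, as described above, is that each cohort contributes its own average face while variation around those averages is modeled jointly. A minimal sketch of that structure is below; all names, dimensions, and the equal-weight, nearest-mean assignment are illustrative assumptions for this toy example, not the paper's actual fitting procedure (which works from 2-D images).

```python
import numpy as np

# Toy sketch of a Gaussian-mixture morphable model: each cohort gets its own
# mean face; a shared low-dimensional PCA-style basis captures variation.
# All shapes and names here are hypothetical, for illustration only.

rng = np.random.default_rng(0)
n_vertices = 50                     # toy face mesh: 50 3-D points
dim = 3 * n_vertices                # flattened shape vector length

# Hypothetical cohort-specific mean faces and a shared orthonormal basis
cohort_means = {c: rng.normal(size=dim) for c in ("black", "white", "asian")}
basis = np.linalg.qr(rng.normal(size=(dim, 10)))[0]   # dim x 10 components

def reconstruct(cohort, coeffs):
    """Shape = cohort-specific mean + shared basis * coefficients."""
    return cohort_means[cohort] + basis @ coeffs

def best_cohort(observed_shape):
    """Assign the observation to the mixture component whose mean explains
    it best (equal weights, isotropic noise assumed for simplicity)."""
    return min(cohort_means,
               key=lambda c: np.linalg.norm(observed_shape - cohort_means[c]))

# A face generated near the 'asian' mean is assigned to that component
face = reconstruct("asian", rng.normal(scale=0.1, size=10))
print(best_cohort(face))            # → asian
```

The design choice this illustrates is why distinct per-cohort averages help: a single pooled mean sits between the cohorts, so reconstructions from it start further from any individual face than a cohort-specific mean does.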

The researchers tested their model using 249 subjects from the Multi-PIE face dataset, a set containing thousands of images of subjects posed at seven different angles and in varying states of illumination, and found that, on average, their multi-racial model had a 94.1 percent accuracy rate. Previous 3-D morphable models developed by University of Surrey researchers—the Unified 3DMM and the Efficient Stepwise Optimisation 3DMM—had average accuracy rates of 85 percent and 91 percent, respectively. The new model also performed significantly better than five other state-of-the-art recognition methods.

The researchers also report increased precision in predicting how a subject will age over time. This could allow recognition to remain accurate even after significant time has passed between when a subject’s image is first captured and when the system later tries to recognize them. This could be helpful in cases such as that of a fugitive who was captured in Nevada earlier this year, when a trip to renew his driver’s license led to the DMV’s facial recognition system matching him to his 1993 state ID card, acquired under a different name one year after he fled from a federal prison.

Feng noted that collecting more face data can help further train and improve such systems.

“The main challenge of the work is data collection,” he said. “We travelled to China to collect more than 700 3D face scans for the experiments. However, I think this is still not enough for building a good 3D face model, or training a learning-based 3D face reconstruction algorithm.”

While this study focused mostly on lighting and poses, future research may be needed to tackle additional challenges to the advanced technology.

“For a frontal face with good image resolution, a deep neural network can perform much better than a human. The key obstacles to face recognition include strong/extreme variations in pose, expression, lighting, occlusion, makeup, image blurring and low resolution,” Feng said.