The method consists of three major steps. First, an algorithm automatically landmarks each facial scan, labeling the tip of the nose and other key points. Next, a second algorithm aligns all of the scans according to their landmarks and combines them into a single model. Finally, a third algorithm identifies and removes any poor-quality scans.
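The article does not include any code, but the three steps map naturally onto a landmark, align-and-combine, prune workflow. The sketch below runs that workflow on synthetic data; the function names, the fixed landmark indices, the Procrustes alignment and the PCA-based model construction are illustrative assumptions, not the team's published implementation.

```python
import numpy as np

# Minimal sketch of the three-step pipeline described above, on synthetic data.
# Names, landmark indices and model choices are placeholders, not the authors' code.

LANDMARK_IDS = [0, 10, 20]        # hypothetical vertex indices (nose tip, eye corners, ...)
N_VERTICES, N_SCANS = 50, 40      # toy sizes for the example


def detect_landmarks(scan):
    """Step 1: automatically label key points on a scan (here: fixed vertex ids)."""
    return scan[LANDMARK_IDS]


def procrustes_align(scan, landmarks, template_landmarks):
    """Step 2a: rigidly align one scan to a common template via its landmarks."""
    mu_s, mu_t = landmarks.mean(0), template_landmarks.mean(0)
    A, B = landmarks - mu_s, template_landmarks - mu_t
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                    # best-fit rotation (reflection case ignored for brevity)
    return (scan - mu_s) @ R + mu_t


def build_model(scans, n_components=5):
    """Step 2b: combine the aligned scans into a PCA-based morphable model."""
    X = np.stack([s.ravel() for s in scans])        # (n_scans, 3 * n_vertices)
    mean = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]                  # mean face + principal components


def prune_poor_scans(scans, mean, components, tol=2.0):
    """Step 3: drop scans whose reconstruction error is far above the median."""
    X = np.stack([s.ravel() for s in scans]) - mean
    recon = (X @ components.T) @ components
    errors = np.linalg.norm(X - recon, axis=1)
    keep = errors < tol * np.median(errors)
    return [s for s, k in zip(scans, keep) if k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.normal(size=(N_VERTICES, 3))
    raw_scans = [template + 0.05 * rng.normal(size=(N_VERTICES, 3)) for _ in range(N_SCANS)]

    template_lms = detect_landmarks(template)
    aligned = [procrustes_align(s, detect_landmarks(s), template_lms) for s in raw_scans]
    mean, components = build_model(aligned)
    clean = prune_poor_scans(aligned, mean, components)
    print(f"kept {len(clean)} of {len(aligned)} scans")
```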
Booth and colleagues also applied their method to a set of almost 10,000 demographically diverse facial scans, captured at London's renowned Science Museum by plastic surgeons working to improve reconstructive surgery. Running the pipeline on those scans produced a "large-scale facial model" (LSFM).
The LSFM paper (PDF): https://t.co/kCzecrVanE
- 3DBrainz #VR #AR #AI (@3dbrainz) May 1, 2017
Tests showed that the team's LSFM represented faces far more accurately than existing alternatives. In one comparison, models of a child's face were generated from a photograph: every other popular morphable model struggled to capture the child's features, while the LSFM recreated them almost perfectly.
Booth's software was also able to build specific morphable models for different races and ages, and to automatically classify individuals into particular demographic groups.
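The article does not say how that classification is done. One common approach with morphable models is to fit each demographic-specific model to a new scan and assign the group whose model reconstructs it best; the hypothetical sketch below illustrates that idea, reusing the (mean, components) format from the first example.

```python
import numpy as np

def classify_demographic(scan, group_models):
    """Assign a 3D scan to the demographic group whose model reconstructs it best.

    group_models maps a group name to a (mean, components) pair, in the same
    format produced by build_model() in the earlier sketch. Reconstruction-error
    classification is a common choice, not necessarily the authors' method.
    """
    x = scan.ravel()
    best_group, best_err = None, float("inf")
    for group, (mean, components) in group_models.items():
        centered = x - mean
        recon = (centered @ components.T) @ components   # project onto the group's subspace
        err = float(np.linalg.norm(centered - recon))
        if err < best_err:
            best_group, best_err = group, err
    return best_group
```

The per-group models themselves would be built exactly as in the first sketch, just restricted to the aligned scans from that group.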
The team has already put the new model to work. In another paper, the researchers describe using 100,000 faces synthesized by the LSFM to turn two-dimensional snapshots into accurate 3D models. This technology would be hugely beneficial for identification purposes, whether by law enforcement or other interested parties: for instance, creating a detailed picture of an individual from photographs or CCTV footage. Alternatively, individuals in portraits and photographs could effectively be brought to 3D life.
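The article does not detail how those 100,000 faces were generated, but with a PCA-based morphable model new faces can be sampled by drawing random coefficients for the principal components. The sketch below shows only that synthesis step; the function name, the (mean, components) format and the per-component standard deviations are assumptions for illustration.

```python
import numpy as np

def synthesize_faces(mean, components, stddevs, n_faces, rng=None):
    """Sample synthetic faces from a PCA-based morphable model.

    mean       : (3 * n_vertices,) mean face vector
    components : (n_components, 3 * n_vertices) principal components
    stddevs    : (n_components,) standard deviation of each component
    Returns an array of shape (n_faces, n_vertices, 3).
    Illustrative only; the authors' synthesis procedure may differ.
    """
    rng = rng or np.random.default_rng()
    coeffs = rng.normal(size=(n_faces, len(stddevs))) * stddevs  # random model coefficients
    faces = mean + coeffs @ components                           # mean face plus deviations
    return faces.reshape(n_faces, -1, 3)
```

How such synthetic faces are then used to recover 3D shape from a single 2D image is described in the follow-up paper itself.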
The LSFM could even have medical applications, such as helping to optimize the design of bespoke prosthetic organs, limbs and appendages. Moreover, as facial scans often assist in the diagnosis of particular diseases and syndromes, enhanced modeling could sharpen such tests. The models could also aid machine learning, allowing artificial intelligence applications to identify emotions by monitoring facial expressions and movements.