Just like art, facial recognition technology is often misunderstood. It’s not an all-knowing Big Brother that keeps track of your weekly—or daily—trips to the pizzeria. More accurately, it’s a lead generator for law enforcement—akin to a more reliable eyewitness.
For example, an arrest can never be made based solely on facial recognition results. But any results achieved can be used as a lead for investigators. The onus then falls on law enforcement to establish probable cause before an arrest can be made. Beyond that, it’s important to note facial recognition software is only as good as the image gallery it’s working with, which is more often than not overwhelmingly populated with publicly available mugshots.
Moreover, facial recognition is not a machine-dominated technology. There are two parts to the technology—facial recognition, which is software-based; and facial identification, which is human-based.
“As is the case with most forms of newer technology, facial recognition is not necessarily helping us do something different. It’s just enabling us to do something better,” Roger Rodriguez, retired NYPD detective and facial recognition expert, told Forensic Magazine. “What was once a manual, drawn-out process of viewing mugshot images to determine someone’s identity, or conducting neighborhood canvasses, facial recognition has streamlined by returning investigative results quicker and more efficiently.”
After 9/11, Rodriguez shifted his career path toward police technology, where he was recruited to the intelligence division of the NYPD. There, he designed and built the nation’s first dedicated facial recognition unit that triaged lower-quality photos for post-investigation analysis.
In a four-year window, his facial recognition unit generated 2,700 arrests from the 8,500 cases presented to the team.
The unit focused its efforts on taking low-quality photos—those not in good pose, or with poor lighting—and turning them into higher-quality images that would meet the minimum requirements of facial recognition software. In the past, a poor image, like one taken from an off-axis CCTV camera or social media, meant an investigator could not perform a facial recognition search. Now, however, that is not always the case.
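The "minimum requirements" gate described above can be sketched in code. A common way to express such criteria is a minimum eye-to-eye (interocular) pixel distance plus a minimum resolution; the specific thresholds below are illustrative assumptions, not any vendor's actual criteria.

```python
# Sketch of a pre-search quality gate for probe images.
# MIN_INTEROCULAR_PX and MIN_HEIGHT_PX are assumed values for
# illustration; real systems vary by vendor and algorithm.

MIN_INTEROCULAR_PX = 60   # assumed minimum eye-to-eye distance
MIN_HEIGHT_PX = 240       # assumed minimum image height

def passes_quality_gate(width, height, left_eye, right_eye):
    """Return True if a probe image meets the minimum search criteria."""
    dx = left_eye[0] - right_eye[0]
    dy = left_eye[1] - right_eye[1]
    interocular = (dx ** 2 + dy ** 2) ** 0.5
    return height >= MIN_HEIGHT_PX and interocular >= MIN_INTEROCULAR_PX

# A well-framed probe passes; a distant CCTV crop does not.
print(passes_quality_gate(640, 480, (200, 180), (280, 182)))  # True
print(passes_quality_gate(320, 240, (150, 100), (165, 101)))  # False
```

An image that fails a gate like this is exactly the kind of probe the NYPD unit would route to enhancement rather than reject outright.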
Rodriguez says image editing and enhancement is the breakthrough facial recognition technology was waiting for. With the NYPD, Rodriguez primarily used Photoshop to enhance images that suffered from heavy pixelation, overexposure, poor lighting or subject pose, fisheye effects and more. Now, as the manager of image analytics with Vigilant Solutions, Rodriguez has created a platform that utilizes the company’s own suite of enhancement tools, based on his previous experiences at the NYPD.
“Any end-user, even a novice, can go in and not be afraid to tackle images,” Rodriguez explained. “We built the software tools to be very easy to use.”
Best Practices For Facial Recognition Investigation Workflow
1) Identify the image
The human facial identifier must first look at the probe image to determine whether it meets the minimum criteria for a facial recognition search. Does it have to be rejected due to low quality, a bad pose, bad lighting, distance, etc.? If it’s a good-quality image, or a controlled image, it can be imported into the software for a result. If it’s not, can the photo be enhanced to give it a second life? Can pose and lighting be corrected using image enhancement tools and other graphic design principles?
2) Run the search
Once an analyst has identified the image, he or she can run the search and apply filters. The most important question here is what kind of image gallery is being used—one with 140 million images, or one with 1,000? A smaller gallery generally yields higher matching accuracy. Rodriguez always recommends letting the software return anywhere from 250 to 500 possible candidate matches. “If you can thumb through 500 candidates and locate your candidate at 300 or 400, then you are ahead of the game,” he says.
3) Facial identification
Once the software returns candidates, it’s time for the human element. An analyst must look at both the probe image and the possible candidates and match unique identifiers, such as scars, markers, moles, tattoos, hair texture, etc. “The ears—they are just as unique as the fingerprint,” says Rodriguez. “We ruled out a ton of doppelgangers by simply looking at the profile of the ear.”
4) Verify choice
Once it is determined, through a subjective analysis, that the candidate physically matches all the traits, it must be validated that he/she could have been at the scene of the crime. Does he live in the area? Has she committed the same crime before? What is the arrest history? A secondary phase of validation is peer review: the analyst should show three to five peers how he/she matched up the physical attributes from the probe image to the candidate.
5) Possible match report
Place the probe image in the gallery, verify who it is and then create a possible match report. But the report is just that—a report. No law enforcement agency can go out and make an arrest based on a facial recognition match. It must establish probable cause by other means. This is no different from a called-in tip from someone on the street. The law enforcement agency still has to go out, bring that person in, and establish probable cause.
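The five steps above can be sketched as a single pipeline. Everything here is illustrative: the field names, scores, and flags are hypothetical stand-ins for human judgments, not any vendor's actual API.

```python
# Illustrative sketch of the five-step workflow; all names, thresholds,
# and data are hypothetical, and the human steps (3 and 4) are modeled
# as simple flags set by the analyst and peer reviewers.

def run_workflow(probe, gallery, max_candidates=500):
    # 1) Identify the image: reject probes below minimum quality.
    if probe["quality"] < 0.5:                      # assumed threshold
        return None
    # 2) Run the search: rank gallery entries by algorithm score and
    #    keep the top 250-500 candidates, per Rodriguez's guidance.
    ranked = sorted(gallery, key=lambda g: g["score"], reverse=True)
    candidates = ranked[:max_candidates]
    # 3) Facial identification: a human matches unique identifiers
    #    (scars, moles, tattoos, ear shape).
    confirmed = [c for c in candidates if c["human_confirmed"]]
    # 4) Verify choice: peer review and background validation.
    verified = [c for c in confirmed if c["peer_reviewed"]]
    # 5) Possible match report: a lead, never grounds for arrest.
    return {"possible_matches": [c["name"] for c in verified]}

gallery = [
    {"name": "A", "score": 0.91, "human_confirmed": True,  "peer_reviewed": True},
    {"name": "B", "score": 0.88, "human_confirmed": True,  "peer_reviewed": False},
    {"name": "C", "score": 0.40, "human_confirmed": False, "peer_reviewed": False},
]
print(run_workflow({"quality": 0.8}, gallery))
# {'possible_matches': ['A']}
```

Note that the algorithm's score only orders candidates in step 2; steps 3 and 4 are where the human analyst, not the software, decides what becomes a possible match.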
Art Meets Science
One such technique—graphically replacing closed eyes with a set of open eyes in a probe image—has yielded hundreds if not thousands of returns, according to Rodriguez. This simple but important enhancement changes the entire dynamic of the facial recognition search; without it, results would include only candidates with closed eyes. The color of the eyes doesn’t even need to match. Just manually editing open eyes into the probe image allows the facial recognition algorithm to make proper measurements of the face.
Facial recognition systems rely heavily on eye locations to properly orient the probe image before search. Eyes are like the cog to the facial recognition wheel—they’re a centralized focal point that the recognition process works around.
This is where the value of a human analyst comes into play. He/she needs to identify the low-quality images in need of manual eye enhancement. If an analyst were to rely only on the software, the matching ability of the algorithm would be severely compromised at the start of the search. For example, there are instances when a person’s nostrils can be mistaken for eyes. This happens when the head is tilted slightly upward in a probe image (also known as bad pose) and the nostrils appear more prominent.
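The role eye locations play in orienting a probe image can be made concrete with a little geometry: the two eye points fix the in-plane rotation and the scale used to normalize a face before matching. The coordinates and target geometry below are illustrative assumptions, and the normalization shown is a simplified sketch of what alignment stages in recognition systems typically compute.

```python
# Sketch of eye-based face normalization: the eye pair determines the
# roll angle to rotate away and the factor to rescale the face to a
# canonical interocular distance. Values are illustrative only.
import math

def eye_alignment(left_eye, right_eye, target_interocular=60.0):
    """Return (rotation in degrees, scale factor) to normalize a face."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))   # in-plane roll to remove
    dist = math.hypot(dx, dy)                  # current eye distance
    scale = target_interocular / dist          # resize factor
    return angle, scale

# A slightly tilted face: roughly 9.5 degrees of roll, scaled so the
# eyes end up 60 pixels apart.
angle, scale = eye_alignment((100, 120), (219, 140))
print(round(angle, 1), round(scale, 2))
```

This is also why the nostrils-for-eyes mistake is so damaging: feed the wrong two points into a function like this and every downstream measurement of the face inherits the wrong rotation and scale.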
“When you [edit an image], an analyst is giving that photo a second opportunity to return a match,” said Rodriguez. “That is where the NYPD paved the way in their thinking. We focused on enhancing tools and giving photos a second opportunity, so that’s where the art comes in. Art and biometric science combined with graphic design elements increase the odds of finding potential matches.”
A Separate Biometric
All facial recognition matches are classified as “possible matches.” When a search returns a candidate, analysts must validate the match with a visual validation first and foremost. If this is confirmed, they can move on to determine if the match is strong enough using further intelligence-driven assessments, including peer review of the candidate and the initial probe image.
But facial recognition is not a science. It is not regulated; there are no restrictions in place, no standards.
“In my opinion, it is a separate biometric,” Rodriguez said. “Biometric sciences are absolute. You have blood, DNA, fingerprints. That requires contact from a person. To me, facial recognition is a technology tool that has a lot of scientific principles, but it’s not absolute. There are a lot of variables that go into facial recognition. Lighting, pose, if someone is wearing glasses or a hat—it throws off the algorithms, they won’t work. It has problems. Anytime you are dealing with images and video, which are reprints and not the actual person, it should be treated as a separate biometric.”
Although facial recognition has been around for years, it’s still considered an up-and-coming technology based on its growing capabilities and perception—be that public or law enforcement.
Last summer, the watchdog Government Accountability Office released a report criticizing the FBI, saying it had not properly tested its facial recognition system or balanced civil liberties and privacy. The agency has amassed more than 411 million photos in its vast facial recognition database, including millions of driver’s license photos, photos of foreigners applying for visas, and criminal mugshots.
Rodriguez says the GAO’s fears are based on misinformation and a misunderstanding of what happens when law enforcement agencies use this technology every day in the real world. The misunderstanding comes from a multitude of sources, including Hollywood’s inaccurate representation of facial recognition technology, which can also cloud the opinions of law enforcement. Even officers can be unsure of where the technology fits in the field, and may hold unrealistic expectations.
“[To move the technology forward] you need the vendor community and the biometric science community to get together and educate the masses,” Rodriguez says. “There needs to be a push for education in this field. You need the biometric science community to accept that there is a certain level of art in facial recognition, you need them to separate it from the biometric sciences.”