Can Humans Distinguish Between Reality and CGI?
In 1996, the United States Congress passed the Child Pornography Prevention Act (CPPA), which, at the time, defined child pornography as “any visual depiction including any photograph, film, video, picture or computer-generated image that is, or appears to be, of a minor engaging in sexually explicit conduct.” But the 2002 court case Ashcroft v. Free Speech Coalition challenged that definition, arguing that the CPPA was too broad and infringed on protected speech with regard to computer-generated imagery. The U.S. Supreme Court agreed.
In an attempt to remedy the situation, Congress passed the PROTECT Act, which classified computer-generated child pornography as obscene. But according to researchers from Dartmouth College and the University of California, Berkeley, defense attorneys can still use the “virtual defense” and claim that a client’s images are computer-generated. If that happens, it falls on the prosecution to prove that the images are photographic rather than computer-generated.
In a world where computer-generated imagery increasingly rivals photography, it’s becoming more difficult for people, and potentially for a jury, to distinguish between the two. For Prof. Hany Farid, of Dartmouth, this has given rise to a number of complex forensic and legal issues.
“Most of my work is focused on developing computer software to determine if a photo is real or fake,” Farid told R&D Magazine. “Over the past few years we have invested considerable time in measuring and understanding the strengths and weaknesses of the human observer in performing the same task.”
In a new study published in ACM Transactions on Applied Perception, Farid and colleagues showed that humans are finding it increasingly difficult to distinguish between computer-generated and photographic images.
The study consisted of two experiments. In the first, 250 participants were shown a collection of 60 images, half of which were photographic while the other half were computer-generated replicas. Participants classified photographic images with 92% accuracy; computer-generated images, however, were classified correctly only 60% of the time.
“It is important to understand what a human analyst can and cannot visually verify in a photo so as to focus our efforts on developing software for those areas where the human analyst is weak,” Farid said, “and…understanding human weakness also informs us as to where a forger may make a mistake.”
In the follow-up experiment, Farid and colleagues trained the participants to distinguish between photographic and computer-generated images. “The training consisted of showing observers 10 (computer-generated imagery) and 10 photographic images and telling them which was which,” Farid said.
Following the training, participants’ accuracy in identifying computer-generated images jumped to 76%, while their accuracy in identifying photographic images fell to 85%.
“Although performance on photographic images fell a bit, their performance on (computer-generated imagery) improved quite a bit and this ‘overall’ improvement is what is most critical,” Farid said.
In 1970, roboticist Masahiro Mori coined the term “uncanny valley” to describe the unease or revulsion people feel when they view an object that is humanlike but somehow off in appearance. The term can apply to computer-generated imagery as well.
“It is possible to render extremely high quality images of inanimate objects that are very hard or impossible to distinguish from the real,” Farid said. “When it comes to images of humans, however, we are still quite good at distinguishing. This is almost certainly because our visual system is highly tuned at recognizing people.”
As technology advances, though, it’s easy to imagine the valley narrowing. In the meantime, Farid and colleagues are working to develop computational methods that are more accurate than the human visual system.