
Neural network identifies criminals by facial features

Thu 24 Nov 2016

A study from Shanghai Jiao Tong University found that a machine could be trained to identify criminals based on their facial features, raising ethical concerns regarding uses of AI.

The researchers used 1,856 photographs of real people, half of whom were criminals. Using supervised machine learning, they found that the system could correctly separate criminals from non-criminals based on the photographs alone.

They divided the sample, using 90% of the photographs to train the machine and the remaining 10% for testing. The test results showed very high accuracy (above 80%) across all of the classifiers tested.
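
To make the setup concrete, here is a minimal sketch of a 90/10 split and a supervised classifier trained on pre-extracted facial feature vectors. The data, feature dimensionality, and choice of classifier below are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: 90/10 train/test split and a supervised classifier on
# pre-extracted facial feature vectors. All data here is synthetic and the
# classifier choice is illustrative; this is not the study's actual pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1856, 128))    # 1,856 samples, 128-dim face features (synthetic)
y = rng.integers(0, 2, size=1856)   # 0 = non-criminal, 1 = criminal (synthetic labels)

# 90% of the photographs for training, the remaining 10% held out for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```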

'Figure 10. (a) and (b) are "average" faces for criminals and non-criminals generated by averaging of eigenface representations; (c) and (d) are "average" faces for criminals and non-criminals generated by averaging of landmark points and image warping.'

The system identified three structural differences between the criminal and non-criminal populations: the curvature of the upper lip, which was 23% more pronounced in criminals; the distance between the inner corners of the eyes, which was 6% smaller in criminals; and the angle between the tip of the nose and the corners of the mouth, which was 19% smaller in criminals.
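
As an illustration of how such measurements can be taken, the sketch below computes the inner-eye distance and the nose-to-mouth-corner angle from 2-D facial landmark coordinates. The landmark positions and function definitions are hypothetical; the paper's exact landmark set and measurement definitions are not reproduced here.

```python
# Sketch: two of the reported geometric measurements computed from 2-D facial
# landmark coordinates. The landmark positions below are hypothetical and the
# definitions are illustrative, not taken from the paper.
import numpy as np

def inner_eye_distance(left_inner_eye, right_inner_eye):
    """Euclidean distance between the inner corners of the two eyes."""
    return float(np.linalg.norm(np.asarray(left_inner_eye) - np.asarray(right_inner_eye)))

def nose_mouth_angle(nose_tip, left_mouth_corner, right_mouth_corner):
    """Angle (in degrees) at the nose tip between the two mouth corners."""
    v1 = np.asarray(left_mouth_corner, dtype=float) - np.asarray(nose_tip, dtype=float)
    v2 = np.asarray(right_mouth_corner, dtype=float) - np.asarray(nose_tip, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmark coordinates in pixels
print(inner_eye_distance((120, 150), (160, 150)))            # eye-corner distance
print(nose_mouth_angle((140, 190), (120, 220), (160, 220)))  # nose-mouth angle
```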

The study also reported a separation between criminal and non-criminal feature distributions: the two sets of faces formed roughly concentric clusters, implying that non-criminal faces bear a higher degree of resemblance to one another, while criminal faces show a higher degree of dissimilarity from one another.
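
One simple way to picture that claim is to compare how spread out each class is around its own centroid in feature space. The sketch below does exactly that on synthetic data; the feature vectors and scales are made up for illustration and are not the study's data.

```python
# Sketch: compare intra-class spread (mean distance to the class centroid) for
# two groups of feature vectors. All data here is synthetic and illustrative.
import numpy as np

def intra_class_spread(features):
    """Average Euclidean distance from each sample to its class centroid."""
    centroid = features.mean(axis=0)
    return float(np.linalg.norm(features - centroid, axis=1).mean())

rng = np.random.default_rng(1)
tight_cluster = rng.normal(scale=1.0, size=(500, 64))   # stands in for non-criminal faces
loose_cluster = rng.normal(scale=2.0, size=(500, 64))   # stands in for criminal faces

print("non-criminal-like spread:", intra_class_spread(tight_cluster))
print("criminal-like spread:", intra_class_spread(loose_cluster))
```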

Perhaps most worryingly, the researchers believe that their system has identified discriminating structural features for predicting criminality. In their study, the researchers note that the machine makes automated inference as to the criminality of the subjects, “free of any biases of subjective judgments of human observers.”

This claim is cause for concern because it discounts the possibility of machine bias: the idea that an algorithm itself can be biased against a subset of the population. Machine bias can stem from unrecognized preconceptions in the design of the algorithm or in the data set used to train it, or from the way human interpreters extrapolate and act on its results.

For example, a study by ProPublica found that a software program used to predict recidivism was unfairly biased against African-Americans, assigning black defendants a much higher risk of committing future crimes than white defendants. When the results were examined, the racial disparity ran along two distinct lines: the software was more likely to falsely flag black defendants as future criminals, wrongly labeling them at almost twice the rate of white defendants, and it was more likely to mislabel white defendants as low-risk.
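
The two error rates behind that finding can be made explicit: the false positive rate (defendants who did not reoffend but were flagged high-risk) and the false negative rate (defendants who did reoffend but were labeled low-risk), computed separately for each group. The sketch below shows the arithmetic on placeholder data; the numbers are not ProPublica's.

```python
# Sketch: false positive and false negative rates for one group of defendants,
# from predicted risk flags and actual outcomes. Inputs are placeholders.
def error_rates(flagged_high_risk, reoffended):
    """Return (false positive rate, false negative rate) for one group."""
    fp = sum(p and not r for p, r in zip(flagged_high_risk, reoffended))
    fn = sum((not p) and r for p, r in zip(flagged_high_risk, reoffended))
    negatives = sum(not r for r in reoffended)   # did not reoffend
    positives = sum(reoffended)                  # did reoffend
    return fp / negatives, fn / positives

# Placeholder predictions and outcomes for a single group of defendants
flags = [True, True, False, False, True, False]
outcomes = [False, True, False, False, False, True]
print(error_rates(flags, outcomes))  # -> (0.5, 0.5) on this toy data
```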

Software similar to the program in the ProPublica study is used in several states to provide information to judges during sentencing, and a bill pending in Congress would mandate the use of such risk assessments in federal prisons.
