
MIT researchers propose model for debiasing AI algorithms

Mon 28 Jan 2019

Researchers demonstrate increased overall performance and decreased categorical bias with novel approach

Researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) have described an AI model that can evaluate and “debias” facial recognition software.

In a paper scheduled to be presented at the Association for the Advancement of Artificial Intelligence’s conference on AI, Ethics and Society in Honolulu, MIT CSAIL researchers propose an algorithm that identifies “under-represented” parts of training data and resamples them so that they are more likely to be drawn during training. They claim the debiasing model can reduce categorical bias by up to 60 percent without affecting precision.

AI for all

Facial recognition systems have been shown to exhibit strong bias against certain demographics, such as women and minorities. In 2012, researchers showed that face detection systems used by US law enforcement were significantly less accurate at identifying 18-30-year-old women with darker skin.

Only last week, Amazon’s facial recognition software Rekognition – which is used by US police and by Immigration and Customs Enforcement – was criticised for favouring white men: it was found to struggle to identify light-skinned women and both men and women with darker skin.

The criticism stems from the findings of another research paper, due to be presented this week by researchers from MIT and the University of Toronto, which shows that Rekognition incorrectly labelled women with dark skin as men 31 percent of the time. Amazon disputes the findings and says it has since updated the software to reduce bias.

While MIT CSAIL’s paper focussed on the issue of racial and gender bias in facial detection systems, it says its debiasing model is “generalizable across various data modalities and learning tasks”.

“By learning the underlying latent variables in an entirely unsupervised manner, we can scale our approach to large datasets and debias for latent features without ever hand labelling them in our training set,” the paper reads.
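The mechanics of that resampling step can be illustrated with a short sketch. The code below is not the paper’s implementation: it assumes the latent variables have already been learned (for example, by the encoder of a variational autoencoder), estimates how densely populated each region of latent space is, and weights each training example inversely to that density so under-represented examples are drawn more often.

```python
# Illustrative sketch only: adaptive resampling from learned latent variables.
# "latents" stands in for encoder outputs; all names here are hypothetical.
import numpy as np

def debiasing_sample_weights(latents, num_bins=10, alpha=0.01):
    """Weight each example inversely to its estimated density in latent
    space, so examples from sparsely populated (under-represented)
    regions are more likely to be sampled during training."""
    n, d = latents.shape
    density = np.ones(n)
    for j in range(d):
        # Per-dimension histogram density (treats dimensions as independent).
        hist, edges = np.histogram(latents[:, j], bins=num_bins, density=True)
        idx = np.clip(np.digitize(latents[:, j], edges[1:-1]), 0, num_bins - 1)
        density *= hist[idx] + alpha  # alpha smooths zero-density bins
    weights = 1.0 / density
    return weights / weights.sum()

# Usage: draw a debiased training batch.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 8))  # stand-in for learned latent codes
weights = debiasing_sample_weights(latents)
batch_indices = rng.choice(len(latents), size=64, p=weights)
```

Treating the latent dimensions as independent is a simplifying assumption, but it keeps the density estimate cheap on large datasets, which is the kind of scalability the researchers highlight.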

Last week, researchers at MIT CSAIL also announced that they had developed a cryptocurrency that uses 99 percent less data than popular cryptocurrencies such as Bitcoin.

MIT CSAIL researchers also recently described a model, developed with Microsoft, that identifies “blindspots” in autonomous systems, such as those used in driverless cars, which may cause “dangerous errors in the real world”.

In June, Microsoft expanded the datasets it uses to train its Azure-based recognition software Face API to encompass a wider range of skin tones, genders and ages. It claims the move has reduced error rates for men and women with darker skin by up to 20 times.

Accenture, Facebook and Google have also all made efforts to mitigate racial or gender bias in facial recognition software.


Tags: AI, artificial intelligence, bias