AI may be sexist because of internet content, study finds

The momentum of artificial intelligence in recent years, in addition to bringing us interesting, useful, and curious advances, has also sparked new debates.

One of these debates revolves around the biases these technologies can reflect and replicate, responding not only to conditions predefined by hand but also to the content of the web itself, including that of some social networks.

It is these conditions that motivated an American research team to analyze two algorithms that, when automatically completing photos, tend to give men suits and ties while giving women bikinis or low-cut tops.

These biases stem from the content used to train the algorithms. On certain web portals, as well as on social networks such as Reddit or Twitter, content that can rightly be classified as sexist, offensive, or misinformative circulates without any filter and, unfortunately, ends up normalized by the algorithms. This dynamic also occurs in AI systems that work with images.

Ryan Steed and Aylin Caliskan, from Carnegie Mellon University and George Washington University, respectively, found that when a close-up photo of a person (only the face) is given to an algorithm to complete, there is a 43% chance it will add a body wearing a suit if the subject is a man, while there is a 53% chance it will autocomplete with a low-cut garment or a bikini if the photo corresponds to a woman.

Two popular algorithms under the microscope

Steed and Caliskan’s recent research focused on two models: iGPT from OpenAI, a version of GPT-2 that works with pixels instead of words, and SimCLR, from Google.

Both algorithms are widely used in AI solutions that have emerged over the last year and, beyond their popularity, share a common trait: they rely on unsupervised learning, which means they dispense with human help to classify images.

With supervised systems, training is based on classifications predefined by humans. Under this model, an AI learns to recognize as tree photos, for example, only those that match the criteria of the samples of the concept initially provided to the algorithm.
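To make the distinction concrete, here is a minimal sketch of supervised training, assuming a generic classifier (scikit-learn's LogisticRegression) and toy feature vectors standing in for images; none of the data or model choices here come from the study itself.

```python
# A minimal sketch of supervised training with human-provided labels.
# The feature vectors and labels are toy placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical image features extracted beforehand, each paired with a
# human-assigned label: 1 = "tree", 0 = "not a tree".
features = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
labels = np.array([1, 1, 0, 0])  # predefined by human annotators

classifier = LogisticRegression()
classifier.fit(features, labels)  # the model only learns the sampled concept

# A new image is recognized as a "tree" only if its features resemble
# the examples originally provided to the algorithm.
print(classifier.predict([[0.85, 0.15]]))  # -> [1]
```

Because the labels come from people, whatever assumptions those people make are baked into the model, which is exactly the weakness described next.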

The main flaw of these supervised systems is that they propagate the biases of the people who build their training databases. The most common biases have been sexism toward women and discrimination against various minorities.

Different formulas, same result

The analysis of the two aforementioned algorithms unfortunately does not reveal a more encouraging outlook, since in the absence of predefined guidelines, the point of reference for unsupervised systems is internet content. From that material, the algorithm begins to make associations between words or images that usually appear together.

Under the same principle, iGPT groups or separates pixels according to how frequently they appear together in its training images. The results it produces reveal the relationships the algorithm has established. SimCLR, for its part, despite using a different methodology, carries out similar processes and produces comparable results.

Despite their different origins, both algorithms produced similar results. Photos of men tended to appear close to ties and suits, while photos of women appeared farther from these elements and closer to sexualized photographs.
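As an illustration of how such proximity can be probed, here is a minimal sketch assuming a model that exposes one embedding vector per image; the vectors below are invented placeholders, not actual outputs of iGPT or SimCLR.

```python
# A minimal sketch of probing learned associations via embedding distance.
import numpy as np

def cosine_similarity(a, b):
    """Higher values mean the model places the two images closer together."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings produced by a self-supervised image model.
embedding_man = np.array([0.9, 0.2, 0.1])
embedding_suit = np.array([0.8, 0.3, 0.2])
embedding_woman = np.array([0.1, 0.9, 0.3])

# If photos of men and suits often co-occur in the training data, their
# embeddings end up close, revealing the association the model learned.
print(cosine_similarity(embedding_man, embedding_suit))    # relatively high
print(cosine_similarity(embedding_woman, embedding_suit))  # relatively low
```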

Challenges of artificial intelligence in the face of sexism

Video-based candidate screening, facial recognition technologies, and modern surveillance systems are all under development, and all of them are built on AI algorithms.

Considering that potential scope, which extends far beyond the cited examples, the findings of this research raise a red flag about the direction this technology is taking.

Describing this panorama to MIT Technology Review, Aylin Caliskan said: “We need to be very careful about how we use it [AI], but, at the same time, now that we have these methods, we can try to use them for social good.”

The full report with the details of this research is available as a paper for consultation.
