In the context of AI applications such as ChatGPT, a redistribution of work is taking place. ChatGPT's avoidance of racist or sexist language, for example, rests on the labor of poorly paid workers in Kenya who screen and flag content that is illegal or deemed offensive. Against this backdrop, guests and ECDF members discussed gender aspects and the discrimination potential of AI applications at the first network meeting of the ECDF Gender & Diversity Network on May 31st, 2023. The focus was on the problem of content moderation. In particular, the question "Who should advocate for diversity?" sparked controversy.
Guests on the panel were researchers Maris Männiste (Södertörn University, Sweden), Leah Nann (LMU Munich) and Corinna Canali (UdK Berlin/Weizenbaum Institute). From ECDF, Florian Conradi (TU Berlin), Helena Mihaljević (HTW Berlin/moderation) and Michelle Christensen (TU Berlin/moderation) were present.
At the beginning of the event, the invited experts, whose work addresses AI and discrimination as well as AI and participation, presented aspects of their research related to the topic of the evening:
Leah Nann is currently researching anti-immigrant discourse on social media as well as online misogyny toward women politicians with a migration background in the German context. She works as a PhD student in the AI4Dignity project at LMU Munich. The project explores concepts for the collaborative coding of data used to train content moderation models. "Online speech is situated in different cultural contexts, and thus the annotation of data needs to reflect this diversity," Leah Nann emphasizes – but currently this is not happening.
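To make the idea of culturally situated annotation concrete, here is a minimal sketch of how labels from annotators in different cultural contexts could be kept separate and cross-context disagreements surfaced, rather than being averaged away by a single majority vote. It is purely illustrative and not the AI4Dignity project's actual methodology; all data, context codes and labels are invented.

```python
from collections import defaultdict

# Illustrative toy data: each comment is labeled by annotators situated
# in different cultural contexts (everything here is hypothetical).
annotations = [
    {"text": "comment A", "context": "DE", "label": "offensive"},
    {"text": "comment A", "context": "KE", "label": "neutral"},
    {"text": "comment B", "context": "DE", "label": "neutral"},
    {"text": "comment B", "context": "KE", "label": "neutral"},
]

def aggregate_by_context(annotations):
    """Group labels per item and per cultural context instead of
    collapsing everything into one global majority vote."""
    grouped = defaultdict(lambda: defaultdict(list))
    for a in annotations:
        grouped[a["text"]][a["context"]].append(a["label"])
    return grouped

def flag_cross_context_disagreement(grouped):
    """Items whose labels differ between contexts are flagged for
    expert review rather than silently resolved."""
    flagged = []
    for text, by_context in grouped.items():
        distinct = {tuple(sorted(labels)) for labels in by_context.values()}
        if len(distinct) > 1:
            flagged.append(text)
    return flagged

grouped = aggregate_by_context(annotations)
print(flag_cross_context_disagreement(grouped))  # -> ['comment A']
```

The point of such a design is that disagreement between cultural contexts is treated as a signal worth examining, not as noise to be voted away.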
Corinna Canali is a doctoral researcher at the Berlin University of the Arts (UdK Berlin) and the Weizenbaum Institute. She examines the digital moderation of nudity and sexual activity on (and beyond) Western-based social networks, analyzing the digital systems and power structures that perpetuate the perception of femininity as obscenity. She showed that digital technologies currently heavily sexualize female bodies. Generative AI models, for example, are unable to generate images of female nipples: because of censorship, such images are simply not available as training material for AI.
Canali also illustrated the unequal digital treatment of female and male bodies with Apple's rules for personal engravings on the "AirTag" tracking device: while "fat" can be engraved without any problems, "boob" is not allowed. She likewise pointed to a corresponding gender bias in YouTube's rules for which content can be monetized and which cannot.
Maris Männiste is a postdoctoral researcher at Södertörn University in Sweden. Her digitization research focuses on government welfare systems that try to leverage recent technological advances such as natural language processing (NLP) to build service-oriented chatbots for citizen services. Using examples from Sweden and Estonia, she showed that even with precise and straightforward input, these applications are far from making government services more accessible or visible to citizens. One chatbot she tested, for example, could not provide information on renewing children's passports. "If you have to articulate exactly what you need and then the service itself doesn't help, it's built against the interests of most citizens," Männiste said. What matters, she said, is whether those who will use these tools are involved in their development. And it makes a qualitative difference whether these technologies can be implemented in such a way that they also suggest services that citizens are not yet aware of but that would benefit them.
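The failure mode Männiste describes can be illustrated with a toy example. The sketch below is a hypothetical keyword-based intent matcher, not the implementation of any of the Swedish or Estonian systems she studied; all intents and keywords are invented. A citizen's precise question either gets routed to the closest configured service, which cannot actually answer it, or falls through entirely.

```python
# A hypothetical keyword-based intent matcher (purely illustrative;
# all intents and keywords are invented for this sketch).
INTENTS = {
    "adult_passport_renewal": {"renew", "passport"},
    "parking_permit": {"parking", "permit"},
}

def match_intent(query: str) -> str:
    """Return the configured intent with the largest keyword overlap,
    or a fallback if nothing matches."""
    tokens = set(query.lower().split())
    best, best_overlap = "sorry_no_answer", 0
    for intent, keywords in INTENTS.items():
        overlap = len(tokens & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

# A precise query about a child's passport is routed to the closest
# configured service, which cannot answer the actual question ...
print(match_intent("renew passport for my child"))  # adult_passport_renewal
# ... and a differently phrased query falls through entirely.
print(match_intent("new pass for my kid"))          # sorry_no_answer
```

In both cases the burden of articulation stays with the citizen, which is precisely Männiste's criticism: a more useful design would also surface relevant services the citizen did not know to ask about.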
Discussion:
At the beginning, examples from Corinna Canali's presentation were taken up. Where learning from the vast world of data is not done by machine but coded manually, who makes the rules and takes the decisions? The relationships between data annotators and commercial platforms were highlighted as particularly critical.
"In my perspective, a crucial issue with digital technology is the belief that human behavior can be reduced to simplistic binary categories of right and wrong. Consequently, there is an attempt to address complexity – of images of bodies for example – through automated systems that apply localized logics to a global audience, masking it as a universally applicable truth for everyone to follow," Canali states.
Maris Männiste addressed a particular problem in the moderation of hate speech: automatic recognition primarily covers the English language. Problematic content in "small languages" can thus spread very quickly before it can even be removed.
Since content moderation is in the hands of large companies – the platform operators – the discussants and the audience raised the question of how one can critically intervene at all. One suggestion from the audience was: "Participation! Something can only change if from now on everyone participates in writing, speaking and shaping." Against this, Gesche Joost critically interjected that she does not think change can be brought about through participation alone, since we are dealing with entrenched power structures. Corinna Canali confirmed the objection: "Exactly, we are dealing with structural relationships. This is clearly shown by the example of the female body being read as obscene and censored. White males have established this over centuries." The renewed question of diversity-positive agency from the audience sparked controversy: "Who should advocate for diversity?"
Christine Kurmeyer formulated an answer: "A systematic and institutional approach must be taken; otherwise the same people will always come forward and a diversity of opinions cannot be represented. This must be actively constructed. Digitalization in particular offers opportunities to finally analyze these structures more precisely and thus to identify better options for action at the structural level. For this, one must recognize how artificially – and not 'naturally' – these conditions are shaped. In this way, concrete options for active intervention become visible."