Co-liberative Computing


WASP-HS Workshop in conjunction with the conference AI for Humanity and Society 2024

This workshop is dedicated to exploring the critical importance of identifying, acknowledging, and addressing biases – such as those related to gender, race, and culture – embedded within generative models like large language models (LLMs). These models are often trained on vast datasets from diverse sources and may inadvertently encode biases. As a result, the content they generate – ranging from text to images and audio – can perpetuate stereotypes, reinforce social inequities, and marginalize certain societal groups. Our workshop aims to raise awareness of and discuss the multifaceted nature of bias in generative models, extending beyond textual content to include imagery and audio outputs in various contexts, such as education and healthcare. This diversity in generated content underscores the imperative to tackle biases effectively in order to ensure ethical development.

Participants in this workshop will engage in a series of activities:

  • Keynote talk: A session discussing the latest research on biases in generative content, focusing on recent advancements and remaining challenges.

  • Short talks: Accepted abstracts will be presented as short talks, in which speakers share their experiences and methodologies for addressing bias in generative models.

  • Hands-on group exercises: Participants engage in practical exercises to practice identifying and mitigating biases directly within generative models.

We aim to facilitate discussions that raise awareness, pinpoint specific biases, and foster an understanding of comprehensive mitigation strategies to address them effectively, ultimately contributing to human flourishing. Interactive discussions will enhance knowledge sharing among participants, enabling them to gain valuable insights into the ethical considerations and challenges inherent in developing and deploying generative models in diverse societal contexts.


Call for Participation

We are delighted to invite you to engage in our forthcoming workshop, available in two distinct roles: as a speaker or as an attendee.

  • For speakers: Please submit a 1-page extended abstract detailing your research or practical effort in identifying and/or addressing bias in generative models in a selected context. Accepted abstracts will be presented in a 10-minute talk during the workshop.

  • For attendees: Ordinary participation does not require an abstract, but we ask that you submit a brief statement of motivation describing how your interests or expertise align with the workshop's theme. Additionally, you are welcome to submit a case study (150–200 words) that you would like to discuss during the workshop. We look forward to your contributions and insights at this interactive and enriching workshop.


Registration information

  • Speakers: please submit your abstract as a 1-page PDF file to payberah@kth.se and oviberg@kth.se

  • Workshop registration form: link

  • Abstract submission deadline (speakers only): October 10, 2024

  • Workshop registration deadline: October 25, 2024

  • Workshop date: November 19, 2024, 09.00–12.00

Please note that to participate in this workshop, you must also register for the conference via the event page.


Organizers

  • Amir H. Payberah, payberah@kth.se

  • Olga Viberg, oviberg@kth.se

  • Shirin Tahmasebi, shirint@kth.se

  • Alexandra Farazouli, alexandra.farazouli@edu.su.se