Reading and Implementation on Intersectionality
For this assignment, please read the following book chapters and articles:
1. Demarginalizing the Intersection of Race and Sex, Kimberlé Crenshaw, in Feminist Legal Theories, Routledge, 2013
2. Are "Intersectionally Fair" AI Algorithms Really Fair to Women of Color? A Philosophical Analysis, Youjin Kong, ACM FAccT, 2022
3. Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness, Anaelia Ovalle et al., AAAI/ACM AIES, 2023
4. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness, Michael Kearns et al., PMLR, 2018
5. Multicalibration: Calibration for the (Computationally-Identifiable) Masses, Ursula Hébert-Johnson et al., PMLR, 2018
6. Differential Fairness: An Intersectional Framework for Fair AI, Rashidul Islam et al., Entropy, 2023
After completing these readings, please critically answer the following questions.
Based on paper [1], answer the following questions:
1. The author critiques single-axis approaches to anti-discrimination law, suggesting they are insufficient to address the complex experiences of individuals with intersecting identities. What specific critiques does she offer, and how does she argue that single-axis approaches may inadvertently reinforce discrimination? What alternative strategies does she propose?
2. The author introduces three forms of intersectionality: structural, political, and representational. How does she define each form, and what examples does she use to illustrate them? How do these forms of intersectionality interact to shape the lived experiences of Black women?
3. The author critiques both mainstream feminist and anti-racist movements for failing to adequately address the experiences of Black women. How does she argue that these movements unintentionally marginalize Black women, and what solutions does she propose? How might her critique inform the development of more inclusive social justice movements?
Based on paper [2], answer the following questions:
1. How does the paper define "intersectional fairness" in AI, and what specific challenges does it highlight for ensuring fairness for women of color? What limitations does the paper identify in using statistical parity for intersectional identities, particularly for women of color? How might this approach unintentionally reinforce existing inequalities?
2. The paper suggests that conventional fairness metrics may be inadequate for addressing intersectional bias. What alternative or additional approaches does it propose or imply? How might these approaches better serve women of color in algorithmic decision-making?
3. The author advocates for an intersectional approach to fairness, emphasizing group-based identities and their overlapping oppressions. Could an emphasis on intersectional fairness unintentionally obscure the unique needs of individuals within these intersections? How might an algorithm balance the needs of an intersectional group with the specific experiences of individuals, particularly those who may belong to unique, less-recognized intersections?
Based on paper [3], answer the following questions:
1. The authors advocate for using the matrix of domination to guide AI fairness practices. What challenges might arise when attempting to operationalize this framework in AI systems? How could these challenges affect the effectiveness of AI fairness efforts, and what solutions, if any, do the authors propose to address these challenges?
2. How do the authors incorporate the concept of power into their vision of intersectional fairness? What new responsibilities does this power-centered focus place on AI developers, researchers, and policymakers, and what ethical tensions might emerge from these responsibilities?
3. The authors critique traditional AI fairness metrics (e.g., demographic parity, equal opportunity) through the lens of the matrix of domination. What specific limitations do they identify in these metrics, and how does the matrix of domination framework offer a deeper understanding of fairness that extends beyond these metrics?
Based on papers [4-6], complete the following implementation task.
Using the AIF360 library, evaluate the COMPAS model for intersectional bias based on gender and race under four fairness criteria:
Statistical Parity Subgroup Fairness [4]: Measure the proportion of favorable outcomes across intersectional subgroups (e.g., gender × race).
False Positive Rate Subgroup Fairness [4]: Analyze false positive rates for each subgroup to uncover disparities, especially critical in the criminal justice context.
Multicalibration Fairness [5]: Apply multicalibration to examine whether the model is calibrated within each intersectional subgroup.
Differential Fairness [6]: Use differential fairness to measure fairness across intersectional groups, considering multiple overlapping identities.
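Several of these audits reduce to per-subgroup statistics. Below is a minimal sketch (not the required AIF360 workflow) that, on synthetic records with hypothetical names, computes per-subgroup selection rates (for statistical parity), per-subgroup false positive rates, and the differential-fairness epsilon of [6], i.e. the largest absolute log-ratio of selection rates between any two subgroups. Multicalibration would follow the same pattern, binning predicted scores within each subgroup and comparing bin-wise outcome rates.

```python
import math
from itertools import product

# Synthetic records: (sex, race, true_label, predicted_label) -- illustrative only;
# the real audit would use COMPAS labels and model predictions.
records = [
    ("F", "Black", 0, 1), ("F", "Black", 1, 1), ("F", "White", 0, 0),
    ("M", "Black", 0, 1), ("M", "White", 1, 1), ("M", "White", 0, 0),
    ("F", "White", 1, 0), ("M", "Black", 1, 1),
]

def subgroup_stats(records):
    """Selection rate and false-positive rate per (sex x race) subgroup."""
    stats = {}
    for sex, race in product({r[0] for r in records}, {r[1] for r in records}):
        grp = [r for r in records if r[0] == sex and r[1] == race]
        if not grp:
            continue  # empty intersectional cell
        sel = sum(r[3] for r in grp) / len(grp)          # P(yhat=1 | subgroup)
        neg = [r for r in grp if r[2] == 0]              # truly-negative members
        fpr = sum(r[3] for r in neg) / len(neg) if neg else None
        stats[(sex, race)] = {"selection_rate": sel, "fpr": fpr}
    return stats

def df_epsilon(stats, smoothing=1e-6):
    """Differential-fairness epsilon: max |log ratio| of selection rates
    over all pairs of subgroups (smaller means fairer)."""
    rates = [max(s["selection_rate"], smoothing) for s in stats.values()]
    return max(abs(math.log(a / b)) for a in rates for b in rates)

stats = subgroup_stats(records)
eps = df_epsilon(stats)
```

The smoothing term guards against zero selection rates, which would make the log-ratio undefined; [6] handles this with Dirichlet smoothing, which you may want to replicate more faithfully.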
Note that AIF360 does not directly support Multicalibration Fairness or Differential Fairness, so you will need to approximate these concepts yourself. After identifying biases, select and apply two bias mitigation techniques from the following list to reduce the observed biases while maintaining model performance:
Pre-processing techniques (e.g., Reweighing, Optimized Preprocessing).
In-processing techniques (e.g., Adversarial Debiasing, Prejudice Remover).
Post-processing techniques (e.g., Equalized Odds Postprocessing, Reject Option Classification).
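Of the pre-processing options, Reweighing is the easiest to reason about: each (group, label) cell receives weight P(group) * P(label) / P(group, label), so under-selected cells are up-weighted before training. AIF360 provides this as `aif360.algorithms.preprocessing.Reweighing`; the sketch below reproduces only the weight formula on synthetic (subgroup, label) pairs with hypothetical names, where the subgroup would in practice be the intersectional sex x race attribute.

```python
from collections import Counter

# Synthetic (subgroup, label) pairs -- illustrative only.
data = [("FB", 1), ("FB", 1), ("FB", 0), ("MW", 1), ("MW", 0), ("MW", 0)]

def reweighing_weights(data):
    """Kamiran-Calders reweighing: w(g, y) = P(g) * P(y) / P(g, y).

    Weight 1.0 means the cell already matches the independent rate;
    > 1.0 up-weights under-represented (group, label) combinations.
    """
    n = len(data)
    p_group = Counter(g for g, _ in data)   # marginal counts per subgroup
    p_label = Counter(y for _, y in data)   # marginal counts per label
    p_joint = Counter(data)                 # joint counts per (subgroup, label)
    return {
        (g, y): (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for (g, y) in p_joint
    }

weights = reweighing_weights(data)
```

Training the classifier with these instance weights (most scikit-learn estimators accept a `sample_weight` argument) is what makes the mitigation take effect; re-running the subgroup audit afterwards shows whether the disparities shrank.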