Research article · Open Access
DOI: 10.1145/3531146.3533157

The Forgotten Margins of AI Ethics

Published: 20 June 2022
    Abstract

    How has recent AI Ethics literature addressed topics such as fairness and justice in the context of continued social and structural power asymmetries? We trace both the historical roots and current landmark work that have been shaping the field and categorize these works under three broad umbrellas: (i) those grounded in Western canonical philosophy, (ii) mathematical and statistical methods, and (iii) those emerging from critical data/algorithm/information studies. We also survey the field and explore emerging trends by examining the rapidly growing body of literature that falls under the broad umbrella of AI Ethics. To that end, we read and annotated peer-reviewed papers published over the past four years in two premier conferences: FAccT and AIES. We organize the literature based on an annotation scheme we developed according to three main dimensions: whether the paper deals with concrete applications, use-cases, and/or people’s lived experience; to what extent it addresses harmed, threatened, or otherwise marginalized groups; and if so, whether it explicitly names such groups. We note that although the goals of the majority of FAccT and AIES papers were often commendable, their consideration of the negative impacts of AI on traditionally marginalized groups remained shallow. Taken together, our conceptual analysis and the data from annotated papers indicate that the field would benefit from an increased focus on ethical analysis grounded in concrete use-cases, people’s experiences, and applications as well as from approaches that are sensitive to structural and historical power asymmetries.
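
    To make the annotation scheme concrete, the sketch below shows one plausible way to represent a per-paper annotation record along the three dimensions described above, together with a simple chance-corrected inter-annotator agreement check (Cohen's kappa). The field names, label values, and the use of kappa here are illustrative assumptions, not the authors' exact protocol.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class PaperAnnotation:
        # One annotated paper; fields mirror the three dimensions named in the abstract.
        title: str
        venue: str                  # "FAccT" or "AIES"
        year: int
        concrete_context: bool      # engages applications, use-cases, or lived experience?
        marginalized_focus: str     # "none", "shallow", or "substantive"
        names_groups: bool          # explicitly names harmed or marginalized groups?

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two annotators (Cohen, 1960)."""
        assert labels_a and len(labels_a) == len(labels_b)
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
        return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

    # Example: two annotators labelling the "marginalized_focus" dimension for five papers.
    annotator_1 = ["none", "shallow", "substantive", "shallow", "none"]
    annotator_2 = ["none", "shallow", "shallow", "shallow", "none"]
    print(f"kappa = {cohens_kappa(annotator_1, annotator_2):.2f}")  # ~0.67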


    Information

    Published In

    FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
    June 2022
    2351 pages
    ISBN: 9781450393522
    DOI: 10.1145/3531146
    This work is licensed under a Creative Commons Attribution 4.0 International License.


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 20 June 2022


    Author Tags

    1. AI Ethics
    2. AIES
    3. FAccT
    4. Justice
    5. Trends

    Qualifiers

    • Research-article
    • Research
    • Refereed limited


    Conference

    FAccT '22

    Article Metrics

    • Downloads (Last 12 months): 2,012
    • Downloads (Last 6 weeks): 197

    Cited By
    • (2024) Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1295-1304. https://doi.org/10.1145/3630106.3658973. Online publication date: 3-Jun-2024.
    • (2024) Lazy Data Practices Harm Fairness Research. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 642-659. https://doi.org/10.1145/3630106.3658931. Online publication date: 3-Jun-2024.
    • (2024) Diversity of What? On the Different Conceptualizations of Diversity in Recommender Systems. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 573-584. https://doi.org/10.1145/3630106.3658926. Online publication date: 3-Jun-2024.
    • (2024) Data Feminism for AI. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 100-112. https://doi.org/10.1145/3630106.3658543. Online publication date: 3-Jun-2024.
    • (2024) Generative AI in Creative Practice: ML-Artist Folk Theories of T2I Use, Harm, and Harm-Reduction. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-17. https://doi.org/10.1145/3613904.3642461. Online publication date: 11-May-2024.
    • (2024) Data Ethics Emergency Drill: A Toolbox for Discussing Responsible AI for Industry Teams. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-17. https://doi.org/10.1145/3613904.3642402. Online publication date: 11-May-2024.
    • (2024) Defining acceptable data collection and reuse standards for queer artificial intelligence research in mental health: protocol for the online PARQAIR-MH Delphi study. BMJ Open 14(3), e079105. https://doi.org/10.1136/bmjopen-2023-079105. Online publication date: 15-Mar-2024.
    • (2024) Artificial intelligence and learning environment: Human considerations. Journal of Computer Assisted Learning. https://doi.org/10.1111/jcal.13011. Online publication date: 21-May-2024.
    • (2024) AI auditing: The Broken Bus on the Road to AI Accountability. 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 612-643. https://doi.org/10.1109/SaTML59370.2024.00037. Online publication date: 9-Apr-2024.
    • (2024) Medical artificial intelligence should do no harm. Nature Reviews Electrical Engineering 1(5), 280-281. https://doi.org/10.1038/s44287-024-00049-2. Online publication date: 12-Apr-2024.
