A Visual Tour of Bias Mitigation Techniques for Word Representations

Part of AAAI 2021 Tutorial Forum, MH9

A virtual tutorial, 8:30 AM – 11:45 AM PST, Wednesday, February 3rd, 2021.

Visual Demo Download: https://github.com/tdavislab/visualizing-bias

Organizers


Archit Rathore
Ph.D. Student, School of Computing
Scientific Computing and Imaging (SCI) Institute
University of Utah
architrathore1 AT gmail.com

Archit Rathore is a Ph.D. student at the School of Computing at the University of Utah. His current research focuses on probing machine learning models through visualization techniques to improve interpretability.

Sunipa Dev
Postdoctoral researcher
University of California, Los Angeles (UCLA)
sunipa AT cs.ucla.edu

Sunipa Dev received her Ph.D. from the School of Computing at the University of Utah in Fall 2020 and is a CI Postdoctoral Fellow at UCLA. Her research focuses on understanding the structure of language representations and leveraging that understanding to isolate and decouple associations and concept subspaces within them.

Jeff Phillips
Associate Professor, School of Computing
University of Utah
jeffp AT cs.utah.edu

Jeff M. Phillips is an Associate Professor at the School of Computing, and Director of the Utah Center for Data Science, at the University of Utah. He is an expert in the geometry of data, and actively publishes in top venues in machine learning & data mining, algorithms & geometry, and databases.

Vivek Srikumar
Associate Professor, School of Computing
University of Utah
svivek AT cs.utah.edu

Vivek Srikumar is an Associate Professor at the School of Computing at the University of Utah. His research focuses on machine learning in the context of natural language processing and has primarily been driven by questions arising from the need to reason about textual data with limited explicit supervision and to scale NLP to large problems.

Bei Wang
Assistant Professor, School of Computing
Scientific Computing and Imaging (SCI) Institute
University of Utah
beiwang AT sci.utah.edu

Bei Wang is an Assistant Professor at the School of Computing, a faculty member in the Scientific Computing and Imaging (SCI) Institute, University of Utah. Her research interests include data visualization, topological data analysis, computational topology, computational geometry, machine learning, and data mining.

Overview

Motivation

Word vector embeddings have been shown to contain and amplify biases present in the data they are trained on. Consequently, many techniques have been proposed to identify, mitigate, and attenuate these biases in word representations. In this tutorial, we will review a collection of state-of-the-art debiasing techniques. To aid this, we provide an open-source, web-based visualization tool and offer hands-on experience in exploring the effects of these debiasing techniques on the geometry of high-dimensional word vectors. To help understand how various debiasing techniques change the underlying geometry, we decompose each technique into an interpretable sequence of primitive operations and study their effect on the word vectors using dimensionality reduction and interactive visual exploration.
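As a purely illustrative example of the kind of primitive operation the tutorial decomposes and visualizes, the short Python sketch below applies linear projection debiasing (removing the component of each word vector along an estimated bias direction) and then uses PCA to obtain a 2D view of the vectors before and after. It is a minimal, self-contained sketch with randomly generated placeholder vectors; the word list, dimensionality, and helper functions are illustrative assumptions and are not part of the tutorial's visualization tool.

    # Minimal sketch (illustrative only): linear projection debiasing followed
    # by a 2D PCA view. The vectors below are random placeholders, not trained
    # word embeddings.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    words = ["he", "she", "doctor", "nurse", "engineer", "teacher"]
    embeddings = {w: rng.normal(size=50) for w in words}  # toy 50-d vectors

    def bias_direction(emb, word_a, word_b):
        # Estimate a 1D bias subspace from a seed pair (e.g., he/she).
        v = emb[word_a] - emb[word_b]
        return v / np.linalg.norm(v)

    def remove_component(vec, direction):
        # Primitive operation: project out the bias direction from a vector.
        return vec - np.dot(vec, direction) * direction

    g = bias_direction(embeddings, "he", "she")
    debiased = {w: remove_component(v, g) for w, v in embeddings.items()}

    # Dimensionality reduction (PCA to 2D) to compare the geometry before/after.
    X_before = np.stack([embeddings[w] for w in words])
    X_after = np.stack([debiased[w] for w in words])
    coords_before = PCA(n_components=2).fit_transform(X_before)
    coords_after = PCA(n_components=2).fit_transform(X_after)

    for w, b, a in zip(words, coords_before, coords_after):
        print(f"{w:>9s}  before ({b[0]:+.2f}, {b[1]:+.2f})  after ({a[0]:+.2f}, {a[1]:+.2f})")

In the tutorial, analogous operations are applied to real pretrained embeddings, and their geometric effects are explored interactively in the accompanying visualization tool.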

Prerequisite Knowledge

Attendees are expected to understand the basics of linear algebra and dimensionality reduction. Familiarity with basic NLP would be helpful but is not required. No prior knowledge of bias in NLP is assumed; however, the tutorial will still be relevant to those who have it.

Format

The tutorial will be a half-day event, 8:30 AM – 11:45 AM PST, Wednesday, February 3rd, 2021.

Tutorial Materials

Visualization software

Tutorial Slides

Videos

Schedule

Tutorial time: 8:30 AM – 11:45 AM.

All times are in Pacific Standard Time (PST), Vancouver local time.

References

Please email Bei Wang (beiwang AT sci.utah.edu) if a certain paper should be added to the list.

  1. OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings.
    S. Dev and T. Li and J. M. Phillips and V. Srikumar
    arXiv preprint arXiv:2007.00049  (2020)

  2. What are the biases in my word embedding?
    N. Swinger and M. De-Arteaga and N. T. Heffernan IV and M. D. M. Leiserson and A. T. Kalai
    arXiv preprint arXiv:1812.08769  (2018)
    http://arxiv.org/abs/1812.08769

  3. The Trouble with Bias
    K. Crawford
    Conference on Neural Information Processing Systems, Keynote      (2017)

  4. Social bias in Elicited Natural Language Inferences
    R. Rudinger and C. May and B. Van Durme
    Proceedings of the 1st ACL Workshop on Ethics in Natural Language Processing    74-79  (2017)

  5. On Measuring Social Biases in Sentence Encoders
    C. May and A. Wang and S. Bordia and S. R. Bowman and R. Rudinger
    arXiv preprint arXiv:1903.10561  (2019)
    http://arxiv.org/abs/1903.10561

  6. On Measuring and Mitigating Biased Inferences of Word Embeddings
    S. Dev and T. Li and J. M. Phillips and V. Srikumar
    AAAI      (2020)

  7. Offline bilingual word vectors, orthogonal transformations and the inverted softmax
    S. L. Smith and D. H. P. Turban and S. Hamblin and N. Y. Hammerla
    International Conference on Learning Representations      (2017)

  8. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
    S. Ravfogel and Y. Elazar and H. Gonen and M. Twiton and Y. Goldberg
    arXiv preprint arXiv:2004.07667      (2020)

  9. Mitigating Gender Bias in Natural Language Processing: Literature Review
    T. Sun and A. Gaut and S. Tang and Y. Huang and M. ElSherief and J. Zhao and D. Mirza and E. M. Belding and K.-W. Chang and W. Y. Wang
    arXiv preprint arXiv:1906.08976      (2019)
    http://arxiv.org/abs/1906.08976

  10. Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
    K. Webster and M. Recasens and V. Axelrod and J. Baldridge
    Transactions of the Association for Computational Linguistics  6  605-617  (2018)

  11. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
    H. Gonen and Y. Goldberg
    Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)     609-614  (2019)

  12. Learning Gender-Neutral Word Embeddings
    J. Zhao and Y. Zhou and Z. Li and W. Wang and K.-W. Chang
    Proceedings of the Conference on Empirical Methods in Natural Language Processing  (2018)
    https://www.aclweb.org/anthology/D18-1521
    https://doi.org/10.18653/v1/D18-1521

  13. Language Technology is Power: A Critical Survey of "Bias" in NLP
    S. L. Blodgett and S. Barocas and H. Daumé III and H. Wallach
    Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics      (2020)
    https://www.aclweb.org/anthology/2020.acl-main.485
    https://doi.org/10.18653/v1/2020.acl-main.485

  14. Gender Bias in Coreference Resolution
    R. Rudinger and J. Naradowsky and B. Leonard and B. Van Durme
    Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)    8-14  (2018)

  15. Gender Bias in Contextualized Word Embeddings
    J. Zhao and T. Wang and M. Yatskar and R. Cotterell and V. Ordonez and K.-W. Chang
    Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)    629-634  (2019)
    https://www.aclweb.org/anthology/N19-1064
    https://doi.org/10.18653/v1/N19-1064

  16. Gender as a Variable in Natural-Language Processing: Ethical Considerations
    B. Larson
    Proceedings of the 1st ACL Workshop on Ethics in Natural Language Processing      (2017)
    https://www.aclweb.org/anthology/W17-1601
    https://doi.org/10.18653/v1/W17-1601

  17. Fairness in Representation: Quantifying Stereotyping as a Representational Harm
    M. Abbasi and S. A. Friedler and C. Scheidegger and S. Venkatasubramanian
    Proceedings of the 2019 SIAM International Conference on Data Mining      (2019)
    https://epubs.siam.org/doi/pdf/10.1137/1.9781611975673.90
    https://doi.org/10.1137/1.9781611975673.90

  18. Consumer credit-risk models via machine-learning algorithms
    A. E. Khandani and A. J. Kim and A. Lo
    Journal of Banking & Finance  34  2767-2787  (2010)
    https://EconPapers.repec.org/RePEc:eee:jbfina:v:34:y:2010:i:11:p:2767-2787

  19. Big data's disparate impact
    S. Barocas and A. D. Selbst
    California Law Review  104  671  (2016)

  20. Attenuating Bias in Word vectors
    S. Dev and J. M. Phillips
    Proceedings of Machine Learning Research, PMLR    879-887  (2019)
    http://proceedings.mlr.press/v89/dev19a.html

  21. Assessing Social and Intersectional Biases in Contextualized Word Representations
    Y. C. Tan and L. E. Celis
    arXiv preprint arXiv:1911.01485      (2019)

  22. A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces
    A. Lauscher and G. Glavas and S. P. Ponzetto and I. Vulic
    AAAI      (2020)

  23. A Decomposable Attention Model for Natural Language Inference
    A. Parikh and O. Täckström and D. Das and J. Uszkoreit
    Conference on Empirical Methods in Natural Language Processing    2249-2255  (2016)

  24. The Technique of Semantics
    J. R. Firth
    Transactions of the Philological Society  24  36-73  (1935)
    https://doi.org/10.1111/j.1467-968X.1935.tb01254.x

  25. A Synopsis of Linguistic Theory, 1930-1955
    J. R. Firth
    Studies in linguistic analysis      (1957)

  26. Distributed Representations of Words and Phrases and Their Compositionality
    T. Mikolov and I. Sutskever and K. Chen and G. S. Corrado and J. Dean
    Advances in Neural Information Processing Systems    3111-3119  (2013)

  27. Efficient Estimation of Word Representations in Vector Space
    T. Mikolov and K. Chen and G. Corrado and J. Dean
    arXiv preprint arXiv:1301.3781      (2013)

  28. Linguistic Regularities in Continuous Space Word Representations
    T. Mikolov and W.-t. Yih and G. Zweig
    Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)  13  746-751  (2013)

  29. Glove: Global Vectors for Word Representation
    J. Pennington and R. Socher and C. D. Manning
    Proceedings of the Empirical Methods in Natural Language Processing (EMNLP)    1532-1543  (2014)

  30. FastText.zip: Compressing Text Classification Models
    A. Joulin and E. Grave and P. Bojanowski and M. Douze and H. Jégou and T. Mikolov
    arXiv preprint arXiv:1612.03651      (2016)

  31. Deep Contextualized Word Representations
    M. Peters and M. Neumann and M. Iyyer and M. Gardner and C. Clark and K. Lee and L. Zettlemoyer
    Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies    2227-2237  (2018)
    https://doi.org/10/gft5gf

  32. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding
    J. Devlin and M.-W. Chang and K. Lee and K. Toutanova
    Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies    4171-4186  (2019)

  33. RoBERTa: A Robustly Optimized BERT Pretraining Approach
    Y. Liu and M. Ott and N. Goyal and J. Du and M. Joshi and D. Chen and O. Levy and M. Lewis and L. Zettlemoyer and V. Stoyanov
    arXiv preprint arXiv:1907.11692      (2019)

  34. Large Image Datasets: A Pyrrhic Win for Computer Vision?
    A. Birhane and V. U. Prabhu
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision    1537-1547  (2021)