Bio
Hila is a PhD student at Bar Ilan University in the field of Natural Language Processing and Deep Learning, under the supervision of Prof. Yoav Goldberg.
Prior to that, she obtained her M.Sc. in Computer Science from the Hebrew University, under the supervision of Prof. Orna Kupferman. She is fascinated by languages and is interested in the relations between different languages and the ways multilingual signals can be used for various tasks.
Abstract
Word embeddings are widely used in the NLP community for a vast range of tasks. We will show that these models, which are derived from text corpora, reflect gender biases in society – a phenomenon that is pervasive and consistent across different word embedding models, and one that raises serious concern.
Several recent works tackle this problem and propose methods for significantly reducing gender bias in word embeddings, demonstrating convincing results. We will review these methods and inspect the resulting debiased embeddings.