Measuring and Mitigating Gender Bias in Legal Contextualized Language Models

As the popularity and importance of large language models (LLMs) soar by the day, I am happy to announce that our recent work on gender bias and fairness in contextualized language models for natural legal language processing is now out at ACM Transactions on Knowledge Discovery from Data!

Computational law, mostly based on natural language processing (NLP) and machine learning, has recently gained significant interest due to technological improvements, the abundance of legal texts, and increasing demand from legal professionals for technology. Since law is probably one of the most influential areas touching people's lives, fairness and bias-free algorithm development become even more critical when we talk about artificial intelligence in legal practice.

In our article titled “Measuring and Mitigating Gender Bias in Legal Contextualized Language Models”, we introduce the first study on detecting and eliminating gender and societal bias in contextualized large language models for legal NLP. This work brings the important issue of bias to the attention of the computational law community, where the effect of law on people’s lives makes the need to study bias all the more pressing.

Transformer-based contextualized language models constitute the state-of-the-art in several natural language processing (NLP) tasks and applications. Despite their utility, contextualized models can contain human-like social biases, as their training corpora generally consist of human-generated text. Evaluating and removing social biases in NLP models has been a major research endeavor. In parallel, NLP approaches in the legal domain, namely legal NLP or computational law, have also been on the rise. Eliminating unwanted bias in legal NLP is crucial since the law has a profound effect on people's lives. In this work, we focus on the gender bias encoded in BERT-based models. We propose a new template-based bias measurement method with a new bias evaluation corpus using crime words from the FBI database. This method quantifies the gender bias present in BERT-based models for legal applications. Furthermore, we propose a new fine-tuning-based debiasing method using the European Court of Human Rights (ECtHR) corpus to debias legal pre-trained models. We test the debiased models’ language understanding performance on the LexGLUE benchmark to confirm that the underlying semantic vector space is not perturbed during the debiasing process. Finally, we propose a bias penalty for the performance scores to emphasize the effect of gender bias on model performance.
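To give a rough feel for what a template-based measurement looks like, here is a minimal, hypothetical sketch. It assumes templates such as "[MASK] was charged with &lt;crime&gt;." are filled by a masked language model, and it scores bias as the average log-ratio of the probabilities the model assigns to "he" versus "she" at the masked position. The template wording, the probability values, and the exact scoring function below are all illustrative assumptions, not the metric from the paper.

```python
import math

def gender_bias_score(fill_probs):
    """Average log-ratio of P(he) to P(she) across crime templates.

    fill_probs: list of (p_he, p_she) pairs, one per filled template.
    A score of 0 means the model is balanced; a positive score means
    it associates the crime contexts more strongly with "he".
    """
    return sum(math.log(p_he / p_she) for p_he, p_she in fill_probs) / len(fill_probs)

# Made-up masked-LM probabilities for "[MASK] was charged with <crime>."
probs = [
    (0.42, 0.07),  # e.g. "... charged with robbery."
    (0.35, 0.10),  # e.g. "... charged with assault."
    (0.20, 0.20),  # e.g. "... charged with fraud."
]
score = gender_bias_score(probs)
```

In practice the probabilities would come from a BERT-based model's fill-mask predictions over a full list of crime words; aggregating per-template ratios into one score is one simple way to make bias comparable across models.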