When does pretraining help? Assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings

Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, Daniel E. Ho

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

111 Scopus citations

Abstract

While self-supervised learning has made rapid advances in natural language processing, it remains unclear when researchers should engage in resource-intensive domain-specific pretraining (domain pretraining). The law, puzzlingly, has yielded few documented instances of substantial gains from domain pretraining, despite the fact that legal language is widely seen to be unique. We hypothesize that these results stem from the fact that existing legal NLP tasks are too easy and fail to meet the conditions under which domain pretraining can help. To address this, we first present CaseHOLD (Case Holdings On Legal Decisions), a new dataset comprising over 53,000 multiple choice questions to identify the relevant holding of a cited case. This dataset presents a fundamental task to lawyers and is both legally meaningful and difficult from an NLP perspective (F1 of 0.4 with a BiLSTM baseline). Second, we assess performance gains on CaseHOLD and existing legal NLP datasets. While a Transformer architecture (BERT) pretrained on a general corpus (Google Books and Wikipedia) improves performance, domain pretraining (on a corpus of 3.5M decisions across all courts in the U.S. that is larger than BERT's) with a custom legal vocabulary exhibits the most substantial performance gains on CaseHOLD (gain of 7.2% on F1, representing a 12% improvement over BERT) and consistent performance gains across two other legal tasks. Third, we show that domain pretraining may be warranted when the task exhibits sufficient similarity to the pretraining corpus: the level of performance increase in three legal tasks was directly tied to the domain specificity of the task. Our findings inform when researchers should engage in resource-intensive pretraining and show that Transformer-based architectures, too, learn embeddings suggestive of distinct legal language.
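The abstract describes CaseHOLD as a multiple-choice task: given a citing context from a judicial decision, select the correct holding statement from a small set of candidates. Below is a minimal, hedged sketch of how such a task could be scored with a Transformer encoder via Hugging Face's `AutoModelForMultipleChoice`. The `bert-base-uncased` checkpoint stands in for the paper's general-domain BERT baseline (the custom legal-vocabulary model is not assumed here), the multiple-choice head is untrained in this sketch, and the prompt and candidate holdings are invented for illustration rather than drawn from the dataset.

```python
# Sketch of a CaseHOLD-style multiple-choice setup (illustrative only).
# Assumptions: bert-base-uncased stands in for the BERT baseline; the prompt
# and holdings below are fabricated examples, not actual CaseHOLD items, and
# the multiple-choice head is randomly initialized until fine-tuned.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
model.eval()

# A citing context followed by five candidate holdings (one correct).
prompt = "The district court dismissed the claim for lack of standing, citing <CITATION> ..."
holdings = [
    "holding that a plaintiff must allege an injury in fact that is concrete and particularized",
    "holding that diversity jurisdiction requires complete diversity of citizenship",
    "holding that the statute of limitations is tolled during the pendency of a class action",
    "holding that attorney's fees are not recoverable absent statutory authorization",
    "holding that a contract requires mutual assent and consideration",
]

# Pair the prompt with each candidate; reshape to (batch, num_choices, seq_len).
enc = tokenizer(
    [prompt] * len(holdings),
    holdings,
    truncation=True,
    padding=True,
    return_tensors="pt",
)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
predicted = logits.argmax(dim=-1).item()
print(f"Predicted holding index: {predicted}")
```

As a sanity check on the reported numbers, a 7.2-point F1 gain described as a 12% relative improvement implies a general-domain BERT baseline of roughly 7.2 / 0.12 ≈ 60 F1 on CaseHOLD, well above the 0.4 F1 BiLSTM baseline the abstract cites.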

Original language: English (US)
Title of host publication: Proceedings of the 18th International Conference on Artificial Intelligence and Law, ICAIL 2021
Publisher: Association for Computing Machinery, Inc
Pages: 159-168
Number of pages: 10
ISBN (Electronic): 9781450385268
DOIs
State: Published - Jun 21 2021
Externally published: Yes
Event: 18th International Conference on Artificial Intelligence and Law, ICAIL 2021 - Virtual, Online, Brazil
Duration: Jun 21 2021 - Jun 25 2021

Publication series

Name: Proceedings of the 18th International Conference on Artificial Intelligence and Law, ICAIL 2021

Conference

Conference: 18th International Conference on Artificial Intelligence and Law, ICAIL 2021
Country/Territory: Brazil
City: Virtual, Online
Period: 6/21/21 - 6/25/21

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Law

Keywords

  • benchmark dataset
  • law
  • natural language processing
  • pretraining
