GAN-BERT: Generative adversarial learning for robust text classification with a bunch of labeled examples

Croce D.; Castellucci G.; Basili R.
2020-01-01

Abstract

Recent Transformer-based architectures, e.g., BERT, achieve impressive results on many Natural Language Processing tasks. However, most of the adopted benchmarks comprise (sometimes hundreds of) thousands of examples. In many real scenarios, obtaining high-quality annotated data is expensive and time-consuming, whereas unlabeled examples characterizing the target task can generally be collected easily. One promising method for enabling semi-supervised learning has been proposed in image processing, based on Semi-Supervised Generative Adversarial Networks. In this paper, we propose GAN-BERT, which extends the fine-tuning of BERT-like architectures with unlabeled data in a generative adversarial setting. Experimental results show that the requirement for annotated examples can be drastically reduced (to as few as 50-100 annotated examples) while still obtaining good performance on several sentence classification tasks.
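
The abstract describes the adversarial fine-tuning only at a high level. The sketch below illustrates the core SS-GAN idea in PyTorch: a generator produces fake sentence representations, while a discriminator built over BERT's [CLS] vector classifies inputs into the k task labels plus one extra "fake" class, so unlabeled real examples contribute to training simply by not being classified as fake. All module names, layer sizes, and loss weightings here are illustrative assumptions, not the authors' released implementation.

# A minimal sketch of the GAN-BERT training signal, assuming PyTorch and the
# HuggingFace transformers library. Class count, layer sizes, and the noise
# dimensionality are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_REAL_CLASSES = 2   # k task labels; index k is reserved for the "fake" class
HIDDEN = 768           # BERT-base hidden size
NOISE_DIM = 100        # assumed noise dimensionality

class Generator(nn.Module):
    # Maps random noise to a fake sentence representation.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, HIDDEN), nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, HIDDEN))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    # Classifies a representation into k real classes + 1 "fake" class.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.LeakyReLU(0.2))
        self.head = nn.Linear(HIDDEN, NUM_REAL_CLASSES + 1)
    def forward(self, rep):
        feats = self.body(rep)
        return self.head(feats), feats  # logits + features (for feature matching)

bert = AutoModel.from_pretrained("bert-base-cased")
tok = AutoTokenizer.from_pretrained("bert-base-cased")
G, D = Generator(), Discriminator()

def discriminator_loss(texts, labels):
    # labels is a LongTensor; labels[i] == -1 marks an unlabeled example.
    enc = tok(texts, return_tensors="pt", padding=True, truncation=True)
    cls = bert(**enc).last_hidden_state[:, 0]       # [CLS] vectors
    real_logits, _ = D(cls)
    fake_rep = G(torch.randn(len(texts), NOISE_DIM))
    fake_logits, _ = D(fake_rep.detach())           # detach: discriminator step only

    # Supervised term: cross-entropy over the k real classes for labeled data.
    log_p = torch.log_softmax(real_logits, dim=-1)
    labeled = labels >= 0
    sup = -log_p[labeled, labels[labeled]].mean() if labeled.any() else 0.0

    # Unsupervised terms: real inputs (labeled or not) should not be called
    # fake; generated inputs should be.
    p_fake_real = torch.softmax(real_logits, dim=-1)[:, -1]
    p_fake_fake = torch.softmax(fake_logits, dim=-1)[:, -1]
    unsup = -torch.log(1.0 - p_fake_real + 1e-8).mean() \
            - torch.log(p_fake_fake + 1e-8).mean()
    return sup + unsup

# Example step: one labeled example (class 1) and one unlabeled example (-1).
loss = discriminator_loss(["a great movie!", "watched it yesterday"],
                          torch.tensor([1, -1]))

# The generator is trained with the complementary objective: its fakes should
# not be recognized as fake, typically plus a feature-matching term on the
# discriminator's intermediate features, as in Semi-Supervised GANs.

In this sketch the BERT encoder is updated through the discriminator loss, which is what lets the unlabeled examples shape the fine-tuned representations; the exact loss balancing and generator objective in the paper may differ.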
Conference: 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020
Conference location: USA
Conference year: 2020
Edition number: 58
Sponsor: Amazon
Relevance: International
Publication year: 2020
Scientific disciplinary sectors: Settore INF/01; Settore ING-INF/05
Language: English
Type: Conference paper
Croce, D., Castellucci, G., Basili, R. (2020). GAN-BERT: Generative adversarial learning for robust text classification with a bunch of labeled examples. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 2114-2119). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.191
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/359264
Citations
  • Scopus 117