Can we explain natural language inference decisions taken with neural networks? Inference rules in distributed representations

Zanzotto, Fabio Massimo; Ferrone, Lorenzo
2017-01-01

Abstract

Natural Language Inference (NLI) is a key, complex task in which machine learning (ML) plays an important role. However, ML has progressively obscured the role of linguistically motivated inference rules, which should be the core of NLI systems. In this paper, we introduce distributed inference rules as a novel way to encode linguistically motivated inference rules for learning interpretable NLI classifiers. We propose two encoders: the Distributed Partial Tree Encoder and the Distributed Smoothed Partial Tree Encoder. These encoders model syntactic and syntactic-semantic inference rules as distributed representations that are ready to be used in ML models over large datasets. Although still far from the state of the art set by end-to-end deep learning systems on large datasets, our shallow networks successfully exploit inference rules for NLI and improve over baseline systems. This is a first positive step towards interpretable and explainable end-to-end deep learning systems.
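
The abstract names the two encoders without spelling out their mechanics. The general idea behind distributed tree encoders is that each tree fragment is mapped to a nearly unique high-dimensional vector, and a whole parse tree is represented by the sum of its fragments' vectors, so that dot products between tree representations approximate tree-kernel values. The following is a minimal illustrative sketch of that idea, not the paper's actual method: it encodes complete subtrees rather than the partial tree fragments the paper uses, and the dimensionality, the composition operator, and all function names are our own assumptions.

```python
import numpy as np

DIM = 8192                       # embedding size; larger means lower approximation error
rng = np.random.default_rng(42)

_symbols = {}
def sym(label):
    """Nearly orthogonal random vector for a node label."""
    if label not in _symbols:
        _symbols[label] = rng.standard_normal(DIM) / np.sqrt(DIM)
    return _symbols[label]

_perm = rng.permutation(DIM)     # a fixed shuffle makes composition order-sensitive
def compose(u, v):
    """Shuffled circular convolution: binds two vectors into one of the same size."""
    return np.real(np.fft.ifft(np.fft.fft(u[_perm]) * np.fft.fft(v)))

def encode_subtree(tree):
    """Code of the complete subtree rooted at tree = (label, child, child, ...)."""
    label, *children = tree
    code = sym(label)
    for child in children:
        code = compose(code, encode_subtree(child))
    return code

def distributed_tree(tree):
    """Sum of the codes of all node-rooted subtrees. The dot product of two such
    sums approximates the number of subtrees the trees share (a tree-kernel value)."""
    _, *children = tree
    vec = encode_subtree(tree)
    for child in children:
        vec = vec + distributed_tree(child)
    return vec

# Toy parse trees for "a dog barks" and "a cat barks".
t1 = ("S", ("NP", ("D", ("a",)), ("N", ("dog",))), ("VP", ("V", ("barks",))))
t2 = ("S", ("NP", ("D", ("a",)), ("N", ("cat",))), ("VP", ("V", ("barks",))))
d1, d2 = distributed_tree(t1), distributed_tree(t2)
print(float(d1 @ d2))            # approximately 5, up to random noise
```

In this toy example the two parses share five subtrees (the leaves 'a' and 'barks' plus the D, V, and VP subtrees), so the dot product comes out close to 5. A linear classifier trained over such vectors implicitly assigns a weight to every tree fragment, which is what makes the resulting NLI decisions inspectable in terms of individual inference rules.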
2017 International Joint Conference on Neural Networks, IJCNN 2017
USA
2017
Brain-Mind Institute (BMI)
International relevance
Contribution
2017
Academic discipline INF/01 - Computer Science
Academic discipline ING-INF/05 - Information Processing Systems
English
Software; Artificial Intelligence
Conference paper
Zanzotto, F.M., Ferrone, L. (2017). Can we explain natural language inference decisions taken with neural networks? Inference rules in distributed representations. In Proceedings of the International Joint Conference on Neural Networks (IJCNN) (pp. 3680-3687). IEEE. doi:10.1109/IJCNN.2017.7966319.
Zanzotto, F.M.; Ferrone, L.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/190405
Citations
  • PMC: not available
  • Scopus: 5
  • Web of Science (ISI): 2