Investigating the Impact of Data Contamination of Large Language Models in Text-to-SQL Translation

Ranaldi F.; Ruzzetti E. S.; Onorati D.; Ranaldi L.; Zanzotto F. M.
2024-01-01

Abstract

Understanding textual descriptions to generate code appears to be an established capability of instruction-following Large Language Models (LLMs) in zero-shot scenarios. However, there is a serious possibility that this translation ability is influenced by the models having already seen the target textual descriptions and the related code, an effect known as Data Contamination. In this study, we investigate the impact of Data Contamination on the performance of GPT-3.5 in Text-to-SQL code-generation tasks. To this end, we introduce a novel method to detect Data Contamination in GPTs and examine GPT-3.5's Text-to-SQL performance on the well-known Spider dataset and on Termite, our new, unfamiliar dataset. Furthermore, we analyze GPT-3.5's efficacy on databases with modified information via an adversarial table disconnection (ATD) approach, which complicates Text-to-SQL tasks by removing structural pieces of information from the database. Our results indicate a significant performance drop for GPT-3.5 on the unfamiliar Termite dataset, even with ATD modifications, highlighting the effect of Data Contamination on LLMs in Text-to-SQL translation tasks.
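The record itself carries no code, but the adversarial table disconnection (ATD) idea described in the abstract, complicating Text-to-SQL by stripping structural information such as foreign-key links from the database schema, can be illustrated with a short sketch. The snippet below is a hypothetical reconstruction rather than the authors' implementation: the function name strip_foreign_keys, the assumption that the schema is provided as SQLite-style DDL, and the regex-based clean-up are illustrative choices only.

```python
import re

def strip_foreign_keys(schema_ddl: str) -> str:
    """Return a 'disconnected' copy of a CREATE TABLE schema by dropping
    table-level FOREIGN KEY constraints (an ATD-like transformation;
    illustrative sketch, not the paper's actual method)."""
    kept = []
    for line in schema_ddl.splitlines():
        # Skip constraint lines such as:
        #   FOREIGN KEY (stadium_id) REFERENCES stadium(stadium_id),
        if re.match(r"\s*FOREIGN\s+KEY\b", line, flags=re.IGNORECASE):
            continue
        kept.append(line)
    disconnected = "\n".join(kept)
    # Remove any dangling comma left just before a closing parenthesis.
    disconnected = re.sub(r",\s*\)", "\n)", disconnected)
    return disconnected

if __name__ == "__main__":
    ddl = """CREATE TABLE concert (
    concert_id INTEGER PRIMARY KEY,
    stadium_id INTEGER,
    FOREIGN KEY (stadium_id) REFERENCES stadium(stadium_id)
);"""
    print(strip_foreign_keys(ddl))
```

On a Spider-style schema this kind of transformation leaves column definitions intact while erasing the explicit links between tables, which is the sort of structural information the abstract says ATD removes before prompting the model.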
62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)
Bangkok, Thailand
2024
International relevance
2024
Sector IINF-05/A - Information processing systems
English
Conference paper
Ranaldi, F., Ruzzetti, E. S., Onorati, D., Ranaldi, L., Giannone, C., Favalli, A., et al. (2024). Investigating the Impact of Data Contamination of Large Language Models in Text-to-SQL Translation. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 13909-13920). Association for Computational Linguistics [10.18653/v1/2024.findings-acl.827].
Ranaldi, F.; Ruzzetti, E. S.; Onorati, D.; Ranaldi, L.; Giannone, C.; Favalli, A.; Romagnoli, R.; Zanzotto, F. M.
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/389003