Sapuppo, F., Patané, L., Comi, A., Elnour, E., Napoli, G., Xibilia, M.G. (2025). Explainability analysis of V2G aggregated capacity prediction. In 2025 International Conference on Control, Automation and Diagnosis, ICCAD 2025 (pp. 1-6). New York: IEEE [10.1109/iccad64771.2025.11099476].

Explainability analysis of V2G aggregated capacity prediction

Comi, Antonio; Elnour, Elsiddig;
2025-01-01

Abstract

The integration of Vehicle-to-Grid (V2G) systems in smart grids requires accurate and interpretable predictive models to estimate the Aggregated Available Capacity (AAC) and optimize energy flows. This study presents a proof of concept for explainable AI (XAI) methodologies applied to AAC prediction, leveraging a dataset of vehicle mobility patterns, meteorological conditions, and calendar effects in the metropolitan area of Rome. Both linear and nonlinear data-driven models were evaluated: finite impulse response (FIR) and nonlinear FIR structures driven by exogenous inputs, and autoregressive exogenous (ARX) and nonlinear ARX structures combining autoregressive and exogenous input selections. Black-box nonlinear models were explained using SHapley Additive exPlanations (SHAP). The results highlight the importance of exogenous variables in improving AAC prediction across the different intervals of the day. The findings emphasize the need for interpretable models in decision support systems for energy providers and V2G infrastructure managers, ensuring reliable and transparent decision-making.
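The abstract describes explaining black-box nonlinear models with SHAP. As an illustration only (not the authors' code), the sketch below computes exact Shapley attributions for a hypothetical nonlinear AAC model with three exogenous inputs; the toy model, its inputs, and the background point are all assumptions, and absent features are marginalised against a single background reference, as in interventional SHAP.

```python
from itertools import combinations
from math import factorial

# Hypothetical black-box nonlinear model of AAC from three exogenous
# inputs (temperature, weekday flag, hour of day) -- illustrative only,
# not the model identified in the paper.
def model(temp, weekday, hour):
    return 50.0 + 0.8 * temp * weekday + 0.2 * (hour - 12) ** 2

def shapley_values(f, x, background):
    """Exact Shapley attribution for black-box f, replacing features
    absent from a coalition with a single background reference point."""
    n = len(x)

    def eval_subset(present):
        z = [x[i] if i in present else background[i] for i in range(n)]
        return f(*z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        val = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                val += w * (eval_subset(set(S) | {i}) - eval_subset(set(S)))
        phi.append(val)
    return phi

x = (25.0, 1.0, 18.0)   # instance to explain (assumed values)
bg = (15.0, 0.0, 12.0)  # background reference input (assumed values)
phi = shapley_values(model, x, bg)

# Efficiency property: attributions sum to f(x) - f(background)
assert abs(sum(phi) - (model(*x) - model(*bg))) < 1e-9
```

The exact enumeration above is exponential in the number of features; the SHAP library approximates the same quantities efficiently for realistic models, which is presumably what the study relies on.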
2025 International Conference on Control, Automation and Diagnosis (ICCAD 2025)
Barcelona, Spain
2025
International relevance
2025
Sector ICAR/05
Sector CEAR-03/B - Transport
English
Data-driven predictive model; Explainable artificial intelligence; Interpretability; Machine learning; System identification; Vehicle-to-Grid
Conference paper
Sapuppo, F; Patané, L; Comi, A; Elnour, E; Napoli, G; Xibilia, Mg
Files in this item:
Explainability_Analysis_of_V2G_Aggregated_Capacity_Prediction.pdf
Description: full text
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 2.43 MB
Format: Adobe PDF
Access: authorized users only

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2108/431023
Citations
  • Scopus 0