Macci, C., Piccioni, M. (2023). An inverse Sanov theorem for exponential families. Journal of Statistical Planning and Inference, 224, 54-68 [10.1016/j.jspi.2022.10.003].
An inverse Sanov theorem for exponential families
Claudio Macci
2023-01-01
Abstract
We prove the large deviation principle (LDP) for posterior distributions arising from subfamilies of full exponential families, allowing misspecification of the model. Moreover, motivated by the so-called inverse Sanov Theorem (see e.g. Ganesh and O'Connell (1999, 2000)), we prove the LDP for the corresponding maximum likelihood estimator, and we study the relationship between the rate functions. In our setting, even in the non-misspecified case, it is not true in general that the rate functions for posterior distributions and for maximum likelihood estimators are Kullback–Leibler divergences with exchanged arguments.