Filice, S., Castellucci, G., Croce, D., & Basili, R. (2013). Robust language learning via efficient budgeted online algorithms. In Proceedings of the IEEE 13th International Conference on Data Mining Workshops (ICDMW 2013), pp. 913-920. IEEE Computer Society. doi: 10.1109/ICDMW.2013.87
Robust language learning via efficient budgeted online algorithms
Croce, Danilo; Basili, Roberto
2013-12-01
Abstract
In many Natural Language Processing tasks, kernel learning makes it possible to define robust and effective systems. At the same time, Online Learning Algorithms are appealing for their incremental and continuous learning capability: they can follow a target problem, constantly adapting to a dynamic environment. The drawback of using kernels in online settings is the continuous growth in complexity, in terms of time and memory usage, experienced in both the learning and classification phases. In this paper, we extend a state-of-the-art Budgeted Online Learning Algorithm that efficiently constrains the overall complexity. We introduce the principles of Fairness and Weight Adjustment: the former mitigates the effect of unbalanced datasets, while the latter improves the stability of the resulting models. The use of robust semantic kernel functions for Sentiment Analysis in Twitter improves the results with respect to the standard budgeted formulation. Performance is comparable with one of the most efficient Support Vector Machine implementations, while preserving all the advantages of online methods. These results are noteworthy considering that the task is tackled without manually coded resources (e.g., WordNet or a polarity lexicon), mainly exploiting distributional analysis of unlabeled corpora. © 2013 IEEE.
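The paper's algorithm is not reproduced here, but the following minimal sketch (plain Python with NumPy, written for this record and not taken from the paper) illustrates the two ideas named in the abstract: a hard budget on the number of stored support vectors, so that kernel-based online learning keeps bounded time and memory costs, and a per-class budget ("fairness"), so that an unbalanced stream cannot let one class monopolize the model. The class name FairBudgetedKernelPerceptron, the eviction policy (drop the oldest support vector of the over-budget class) and the RBF kernel are illustrative assumptions; the authors extend a state-of-the-art budgeted formulation with richer semantic kernels, and their Weight Adjustment step is omitted from this sketch.

import numpy as np


def rbf_kernel(x, z, gamma=0.1):
    # Example kernel; the paper relies on richer semantic/distributional kernels.
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return float(np.exp(-gamma * np.sum((x - z) ** 2)))


class FairBudgetedKernelPerceptron:
    """Kernel Perceptron with a hard, per-class budget of support vectors."""

    def __init__(self, budget_per_class=50, kernel=rbf_kernel):
        self.budget_per_class = budget_per_class  # hard cap per class ("fairness")
        self.kernel = kernel
        self.sv = []        # stored support vectors
        self.alpha = []     # signed weights of the support vectors
        self.sv_label = []  # class (+1 or -1) of each support vector

    def score(self, x):
        return sum(a * self.kernel(s, x) for a, s in zip(self.alpha, self.sv))

    def partial_fit(self, x, y):
        """One online step on example x with label y in {-1, +1}."""
        if y * self.score(x) <= 0:  # mistake-driven update
            if self.sv_label.count(y) >= self.budget_per_class:
                # Budget reached for this class: evict the oldest support vector
                # of the same class, so model size (and per-prediction cost)
                # stays bounded and neither class can monopolize the budget.
                idx = self.sv_label.index(y)
                del self.sv[idx], self.alpha[idx], self.sv_label[idx]
            self.sv.append(np.asarray(x, dtype=float))
            self.alpha.append(float(y))
            self.sv_label.append(y)

    def predict(self, x):
        return 1 if self.score(x) >= 0 else -1

In use, examples from the stream are fed one at a time to partial_fit, so the model can keep adapting to a dynamic environment. Simply discarding an evicted vector is the crudest budget-maintenance choice; according to the abstract, the paper's Weight Adjustment instead improves the stability of the resulting models (budgeted learners typically achieve this by redistributing the removed weight onto the remaining support vectors), and the exact formulation should be taken from the paper itself.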