Basili, R., & Croce, D. (2017). Structured knowledge and kernel-based learning: The case of grounded spoken language learning in interactive robotics. In CEUR Workshop Proceedings (pp. 63–68). CEUR-WS.
Structured knowledge and kernel-based learning: The case of grounded spoken language learning in interactive robotics
Basili, Roberto; Croce, Danilo
2017-01-01
Abstract
Recent results achieved by statistical approaches based on Deep Learning architectures suggest that semantic inference tasks can be solved by adopting complex neural architectures and advanced optimization techniques, even while simplifying the representation of the targeted phenomena. The idea that the representation of structured knowledge is essential to reliable and accurate semantic inference seems to be implicitly denied. However, the Neural Networks (NNs) underlying such methods rely on complex and beneficial representational choices for the input to the network (e.g., in the so-called pre-training stages), and sophisticated design choices regarding the NNs' inner structure are still required. While optimization provides powerful mathematical tools that are crucially useful, in this work we question the role of the representation of information and knowledge. In particular, we claim that representation is still a major issue, and we discuss it in light of the spoken language capabilities required by a robotic system in the domain of service robotics. We conclude that adequate knowledge representation is central for learning machines in real applications. Moreover, learning mechanisms able to properly characterize it through expressive mathematical abstractions (e.g., trees, graphs, or sets) constitute a core research direction towards robust, adaptive, and increasingly autonomous AI systems.