This week I attended ECAI (European Conference on Artificial Intelligence) 2016 and I will summarize some of my impressions and further thoughts here.
Machine learning and automated reasoning
Various talks and discussions at ECAI (with Simon Bin, among others) have further spurred my interest in the connection between automated reasoning (AR, also referred to as symbolic reasoning or knowledge representation and reasoning) and machine learning (ML).
While these fields are usually rather separate, there is some work on the connection. For instance, Léon Bottou wrote a paper “From machine learning to machine reasoning” proposing a framework that subsumes both (although, from a very superficial view, it seems to contain the tautology “the essence of computational induction and computational deduction is computation”). Another paper I stumbled upon is “Learning Structured Embeddings of Knowledge Bases” by Bordes et al.
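To give a flavour of what such embedding models look like, here is a minimal sketch (assuming a translation-style scoring function in the spirit of Bordes et al.'s later work; the toy knowledge base, entities and dimensions are made up, and the 2011 structured-embeddings paper itself uses relation-specific matrices rather than translation vectors): entities and relations are mapped to vectors, and a triple (head, relation, tail) is scored by how well the vectors fit together.

```python
import numpy as np

# Made-up toy knowledge base of (head, relation, tail) triples.
triples = [("Paris", "capital_of", "France"),
           ("Berlin", "capital_of", "Germany")]

entities = {"Paris": 0, "France": 1, "Berlin": 2, "Germany": 3}
relations = {"capital_of": 0}

rng = np.random.default_rng(0)
dim = 8
E = rng.normal(size=(len(entities), dim))   # one embedding vector per entity
R = rng.normal(size=(len(relations), dim))  # one embedding vector per relation

def score(head, relation, tail):
    # Translation-style energy: small when head + relation ≈ tail.
    return np.linalg.norm(E[entities[head]] + R[relations[relation]]
                          - E[entities[tail]])
```

Training would then adjust the embeddings so that true triples get a lower energy than corrupted ones; the sketch only shows the scoring side.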
I guess the great advantage of ML over AR is that the former allows for a higher degree of automation, which makes it cheaper: it can in principle be applied directly to low-level data, in particular sensory data, which is itself often delivered by machines. In contrast, the knowledge bases that AR relies on so far usually have to be hand-crafted. Note, though, that the input of ML algorithms often has to live in the mathematical structure R^n, which is also a restriction.
Nonetheless, there seem to be various reasons why combining ML with AR could help:
- Vast knowledge bases now exist, containing a lot of relevant knowledge about the world, e.g. Wikidata.
- While ML heavily relies on low-level data as input, clearly it always also relies on appropriate prior assumptions (e.g. on the function class), which are by definition something conceptual and law-like rather than low-level measurements. AR or related approaches could help to reason more explicitly and formally about such priors.
- Various steps of inference that are often treated as external to ML, such as variable/feature selection and definition, could also profit from AR.
- Furthermore, as we argue in our paper on integrating agents, the ML approach to intelligent agents - reinforcement learning - often focuses on a fixed way in which the software is wired (via sensors and actuators) to the world. However, when there are many agents, knowledge will likely have to be transferred between different types of agents, e.g., agents with different hardware. Moreover, from an engineering and economic perspective, contracts are necessary about the semantics of what one agent delivers to another - again something law-like/conceptual which may profit from AR.
- More generally, AR seems to provide tools for “plugging together” or “linking” data sets, while ML usually assumes the data set to be homogeneous (e.g. i.i.d.). Most generally, AR can be considered as reasoning with certainty and without restrictions, while ML (statistics) is about reasoning under uncertainty and model assumptions.
Also note the following connection between ML and symbolic reasoning: consider an axiomatic theory, i.e., a set of statements. In the sense of supervised learning, this could be seen as a one-label training set: all points in the set get the label “true”. The question of what the axioms imply then boils down to extrapolation from the training set. But while in classical supervised learning the points have no further meaning than being elements of (usually) the structure R^n, the “axiom” points can have symbolic meaning in the sense that they express statements about other points.
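To make the analogy slightly more concrete, here is a toy sketch (the propositional theory {p -> q, p} is made up for illustration): the axioms mark certain truth assignments as “true”, and asking for implications amounts to checking which further statements hold on every assignment the axioms accept - a crude, brute-force form of the extrapolation.

```python
from itertools import product

# Made-up toy theory over two propositional variables: {p -> q, p}.
# Each axiom is a function from a truth assignment to a boolean.
variables = ["p", "q"]
axioms = [lambda v: (not v["p"]) or v["q"],  # p -> q
          lambda v: v["p"]]                  # p

def entails(statement):
    """Check by brute force whether the axioms entail `statement`:
    it must hold in every truth assignment that satisfies all axioms."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(ax(v) for ax in axioms) and not statement(v):
            return False
    return True

print(entails(lambda v: v["q"]))        # True: the axioms imply q
print(entails(lambda v: not v["p"]))    # False: not-p does not follow
```

Of course, unlike statistical supervised learning, this “extrapolation” is exact deduction rather than generalization under uncertainty - which is precisely the contrast drawn above.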
Argumentation in AI
Another line of research presented at ECAI which I found interesting is argumentation in AI. Researchers in this area formalize arguments (disputes) as games: at each step one of the players can put forth an argument (e.g. “attack” a previous argument by the opponent), and the rules of the game decide who wins in the end. A survey paper I found on the web is “Argumentation in artificial intelligence”. Alternatively, one can have a look at one of the well-presented conference papers.
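As a minimal sketch of the formal backbone of such frameworks (assuming Dung-style abstract argumentation; the arguments and attack graph are made up): arguments attack one another, and one standard notion of “extension” is the grounded extension - the least set of arguments that jointly defend each other against all attacks.

```python
# Made-up abstract argumentation framework: c attacks b, b attacks a.
arguments = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}

def defended(arg, accepted):
    # `arg` is defended by `accepted` if every attacker of `arg`
    # is itself attacked by some accepted argument.
    return all(any((d, attacker) in attacks for d in accepted)
               for (attacker, target) in attacks if target == arg)

def grounded_extension():
    # Iterate the characteristic function from the empty set until
    # a fixed point is reached (finite frameworks assumed).
    accepted = set()
    while True:
        new = {a for a in arguments if defended(a, accepted)}
        if new == accepted:
            return accepted
        accepted = new

print(sorted(grounded_extension()))  # ['a', 'c']: c defeats b, rescuing a
```

In the associated dialogue games, roughly, the proponent of an argument has a winning strategy exactly when the argument belongs to such an extension, which is how the game view and the fixed-point view fit together.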
What I didn’t really get is the connection to classical logical reasoning in the sense of, say, predicate logic. There, an argument (a sequence of statements) either follows the rules or it doesn’t, and I don’t see how one could make a game out of that. But they talked a lot about extensions, so maybe it is about extending an incomplete set of premises (an incomplete structure).