KonWAx (Konzeptwandel mit Axiomen / Concept Drift with Axioms)

In most applications of machine learning, the aim is to learn from a given training set a target function f: Dom -> Range that predicts the correct f-value with high probability (and low error) on a test set. A simple example is predicting the type Y of a car (family car, sports car) from input feature vectors X of the form (Power, NumberOfDoors). The domain of the function is a set of data presented as feature vectors (in the example above: (Power, NumberOfDoors)). Similarly, the range of f consists of vectors over possibly different features (in the example: vectors of the form (TypeOfCar)). Due to uncertainty regarding the completeness of the data and the features, the mapping by f is not uniquely determined. Hence, one represents the features by random variables (RVs), so that it is possible to talk about the a priori distribution P(Power, NumberOfDoors) or the conditional distribution P(TypeOfCar | Power, NumberOfDoors). The RVs Y of the range are called target variables; the others, X, are called data variables. If the target variable is Boolean, the function f is also called a concept. In the important class of supervised learning methods, the training sets also contain the correct values for the target variable.
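As a minimal sketch of this setting, the conditional distribution P(TypeOfCar | Power, NumberOfDoors) can be estimated from a labeled training set by relative frequencies. The training data below are hypothetical and only illustrate the shape of the problem:

```python
from collections import Counter, defaultdict

# Hypothetical toy training set: feature vectors (Power, NumberOfDoors)
# paired with the correct target value TypeOfCar (supervised learning).
training_set = [
    ((200, 2), "sports car"),
    ((220, 2), "sports car"),
    ((210, 2), "sports car"),
    ((100, 4), "family car"),
    ((110, 4), "family car"),
]

# Count how often each target value occurs for each feature vector.
counts = defaultdict(Counter)
for x, y in training_set:
    counts[x][y] += 1

def conditional(x):
    """Empirical estimate of P(TypeOfCar | Power, NumberOfDoors) for datum x."""
    total = sum(counts[x].values())
    return {y: c / total for y, c in counts[x].items()}

print(conditional((200, 2)))  # {'sports car': 1.0}
```

In realistic applications the feature vectors are continuous and sparse, so one would fit a parametric model instead of counting; the counting estimator only makes the probabilistic reading of f concrete.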

In this project we investigate supervised machine learning methods for which the data are not given in advance but arrive in data streams. In this setting one has to deal with a frequent phenomenon called "concept drift": the probability distributions P(X) or P(Y | X) are not stationary; they may change over time. Machine learning techniques that handle this phenomenon are also called adaptive online learning methods.
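One standard way to adapt to such drift is exponential forgetting: the current estimate is moved toward each new observation, so older data lose influence when P(Y | X) changes. The sketch below (hypothetical names, a single Boolean target for one fixed feature value) shows the idea:

```python
def make_online_estimator(alpha=0.1):
    """Return an update function for an online estimate of P(Y = 1 | x).
    alpha controls how quickly old observations are forgotten."""
    state = {"p": 0.5}  # start from an uninformative estimate

    def update(y):
        # Exponentially weighted average: move toward the new label (0 or 1).
        state["p"] = (1 - alpha) * state["p"] + alpha * y
        return state["p"]

    return update

update = make_online_estimator(alpha=0.5)

# A stream whose labels drift from 1 to 0: the estimate follows the drift.
for y in [1, 1, 1, 1]:
    p_before_drift = update(y)
for y in [0, 0, 0, 0]:
    p_after_drift = update(y)

print(p_before_drift > 0.9, p_after_drift < 0.1)  # True True
```

The forgetting rate alpha trades off stability against reaction speed; adaptive online learners typically tune this trade-off, e.g. with drift detectors or sliding windows, rather than fixing it in advance.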

The notion of "concept" used in "concept drift" is the one mentioned above and hence has a probabilistic flavor. The aim of the project is to bridge the gap between this probabilistic notion of a concept and the qualitative, logical notion of a concept as used, say, in ontologies. This approach profits from the advantages of both worlds, namely the data-driven modeling of machine learning and the axiomatic, model-elimination, and deductive methodology of logic.

In the project, an adaptive online-learning method is designed in which the statistical models (conditional probabilities) are controlled by axioms. A simple but illustrative example is adding disjointness axioms stating, in the car scenario above, that no sports car is a family car. This axiom forbids that, for a given input datum x, both probabilities P(TypeOfCar = family car | x) and P(TypeOfCar = sports car | x) are greater than 0.5. For the axiomatization we consider expressive logics such as full first-order logic (with natural domain semantics).
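A minimal sketch of such axiomatic control, under the assumption that the disjointness axiom is enforced by a threshold check and a simple renormalization repair (both hypothetical choices, not the project's actual mechanism):

```python
def violates_disjointness(p_family, p_sports, threshold=0.5):
    """True iff the model output contradicts the axiom
    "no sports car is a family car" for one input datum x:
    both conditional probabilities exceed the threshold."""
    return p_family > threshold and p_sports > threshold

def repair(p_family, p_sports):
    """One simple repair: renormalize the two probabilities so they
    sum to 1, which makes a simultaneous violation impossible."""
    total = p_family + p_sports
    return p_family / total, p_sports / total

# A model output that violates the axiom for some datum x:
p_f, p_s = 0.7, 0.6
assert violates_disjointness(p_f, p_s)

p_f, p_s = repair(p_f, p_s)  # now roughly (0.538, 0.462)
assert not violates_disjointness(p_f, p_s)
```

In the full first-order setting envisaged by the project, the axioms would not be hard-coded checks like this but logical formulas whose entailments constrain the admissible statistical models.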