The course covers about 75% of the following topics, depending on the year:

- mathematical foundations of machine learning (random variables and probabilities, probability distributions, high-dimensional spaces)
- overview of machine learning (supervised, semi-supervised, unsupervised learning, inductive and transductive frameworks)
- classification algorithms: linear and non-linear algorithms (logistic regression, naive Bayes, decision trees, neural networks, support vector machines)
- regression algorithms (least squares linear regression, neural networks, relevance vector machines, regression trees)
- density estimation (expectation-maximization algorithm, kernel-based density estimation)
- kernel methods (dual representations, RBF networks)
- graphical models (Bayesian networks, Markov random fields, inference)
- ensemble methods (bagging, boosting, random forests)
- practical aspects in machine learning (data preprocessing, overfitting, accuracy estimation, parameter and model selection)
- special topics (introduction to PAC learning, sample selection bias, learning from graph data, learning from sequential data)
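As a small illustration of one of the listed topics, least squares linear regression can be reduced to a single linear algebra call. The snippet below is a minimal NumPy sketch (not course material); the toy data and variable names are invented for the example.

```python
import numpy as np

# Toy data generated from y = 2x + 1, so the exact fit is recoverable.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Prepend a column of ones so the model learns an intercept term.
X_b = np.hstack([np.ones((X.shape[0], 1)), X])

# Solve min_w ||X_b w - y||^2 (the least squares problem).
w, *_ = np.linalg.lstsq(X_b, y, rcond=None)

print(w)  # approximately [1.0, 2.0]: intercept 1, slope 2
```

The same closed-form idea (solving the normal equations) underlies many of the regression methods listed above, with neural networks and regression trees replacing the linear model by more flexible function classes.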