Foundations of Machine Learning
Seminar
Basic Information
Lectures: | Wednesday, 16:00 to 18:00, Room 024, E1 4 (the MPI-INF building) |
---|---|
Coordinators: | Yonggang Jiang, Kurt Mehlhorn, Adam Polak, Roohani Sharma, Hans Simon, Shreyas Srinivas |
First lecture: | April 12 |
Type: | Seminar |
Credits: | 7 |
Exam: | No exam |
Registration: | Please join the mailing list: https://lists.mpi-inf.mpg.de/listinfo/ml-foundations. The LSF registration deadline will be announced on April 26. |
Prerequisites: | Basic mathematical fluency is important. You must be comfortable writing proofs. Fluency in basic calculus, linear algebra, and probability is very useful. The book's appendix covers some of the essential background. |
Description
Course Mechanics: This course is organised as a reading group that introduces students to the theoretical foundations of machine learning through reading and discussion of the book "Foundations of Machine Learning" by Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. The course is divided into two parts: lectures and student-led presentations.
During the first three weeks, the coordinators of the course will give lectures based on the first four chapters of the book. The lectures will cover the PAC learning model, notions such as VC-dimension and Rademacher complexity, and regularisation (Chapters 1-4).
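As a flavour of the guarantees studied in these opening chapters, here is the standard PAC sample-complexity bound for a finite hypothesis class in the realizable setting (our paraphrase for illustration, not necessarily the book's exact statement):

```latex
% With probability at least 1 - \delta over m i.i.d. samples, every
% hypothesis h in a finite class H that is consistent with the sample
% has true error at most \epsilon, provided
m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right).
```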
In the following weeks, the course will focus on student-led presentations. Each week, we will try to cover one chapter from Chapters 5, 6, 7, 8, 9, 10, 11, and 15. Two students, selected at random, will present each chapter, with some chapters potentially presented by the coordinators depending on the number of enrolled students. Prior to the lecture, students are required to read the assigned chapter. If you have difficulty understanding the reading material prescribed for the upcoming session, please contact one or more of the organisers as soon as possible. Asking questions on the course mailing list is also encouraged.
What You Should Do: Please read the material and be prepared to give a whiteboard presentation of up to 45 minutes on the topics for the session. A typical presentation will motivate some definitions, describe the key results we seek to prove, explain how to prove them, and then summarise the key takeaways. Providing a conceptual overview is encouraged. From the first week onwards, the chapters to be read for the next session will be announced in class. We do not expect very polished presentations. Further, you may assume that your audience is familiar with the contents and will offer you help if you get stuck somewhere. We are, after all, learning the topic together. The key thing is that you put in your best effort to understand the content, and either discuss it amongst yourselves or ask one of the coordinators if something is unclear.
What this course is not: This is not a course where we learn how to implement machine learning algorithms. In this sense, it differs from the introductory courses on machine learning. Here, we focus on mathematical models of learning algorithms and on obtaining provable guarantees on their predictive power and complexity measures, typically sample complexity. We do not cover more recent advances in the theoretical understanding of neural nets and deep learning.
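To make this focus concrete, the learners we analyse are abstract maps from samples to hypotheses. Below is a minimal, purely illustrative Python sketch of empirical risk minimisation (ERM) over a made-up finite class of threshold classifiers; the names and data are our own assumptions, not code from the book:

```python
import random

# Hypothetical finite hypothesis class: threshold classifiers on [0, 1].
# (Illustrative only; the thresholds and data below are made up.)
hypotheses = [lambda x, t=t: int(x >= t) for t in [i / 10 for i in range(11)]]

def empirical_risk(h, sample):
    """Fraction of labelled examples (x, y) that h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def erm(sample):
    """Empirical risk minimisation: pick a hypothesis minimising empirical risk."""
    return min(hypotheses, key=lambda h: empirical_risk(h, sample))

# Toy data labelled by a "true" threshold of 0.35.
data = [(x, int(x >= 0.35)) for x in (random.random() for _ in range(100))]
best = erm(data)
print("empirical risk of the ERM hypothesis:", empirical_risk(best, data))
```

The kind of question this course studies is, for example, how many samples such a learner needs before its low empirical risk provably translates into low true error.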
Other related courses on campus: ML is a huge topic, and we cannot possibly hope to be comprehensive and cover every aspect of it. Our focus is limited to the computational and, in particular, the theoretical side. If you would like to learn about other topics and aspects, a number of other courses are being offered on campus this semester. The following is an almost certainly incomplete list:
- Core Course: Machine Learning
- Course: High Dimensional Analysis: Random Matrices and Machine Learning
- Advanced Course: Optimization for Machine Learning
- Advanced Course: Attacks against Machine Learning Models
- Proseminar: Optimization Methods for Large-Scale Machine Learning
- Seminar: Do We Really Need Overparametrization in Deep Learning
- Seminar: Topics in Adversarial Machine Learning
- Block Seminar : Machine Learning for NLP
- Seminar: AI for Programming Education
- Seminar: AI for Social Good
- ...
Schedule
Lecture | Lecturer | Topic | Reference (see below) | Location |
---|---|---|---|---|
April 12 | Adam Polak | Introduction and PAC Learning | Chapters 1 and 2 (mostly 2) | E1 4, 024 |
April 19 | Adam Polak, Hans Simon | PAC Learning for Finite and Infinite Hypothesis Classes | Chapters 2 and 3 | E1 4, 024 |
April 26 | Hans Simon | PAC Learning for Infinite Hypothesis Classes, Intro to Hypothesis Selection, and ERM | Sections 3.3, 3.4, 4.1, and 4.2 | E1 4, 024 |
May 3 | Shreyas Srinivas | Model Selection | Chapter 4 | E1 4, 024 |
May 10 | Georgi Vitanov, Yonggang Jiang | SVM | Chapter 5 | E1 5, 005 |
May 17 | Georgi Vitanov, Hans Simon | Kernel Methods | Chapter 6 | E1 4, 024 |
May 24 | Kurt Mehlhorn | Boosting | Chapter 7 | E1 4, 024 |
May 31 | Adam Polak | Online Learning | Chapter 8 | E1 4, 024 |
June 7 | Adam Polak, Yonggang Jiang | Online Learning, Ranking | Chapters 8 and 10 | E1 4, 024 |
June 14 | Georgi Vitanov | Regression | Chapter 11 | E1 4, |
June 21 | Hans Simon | Algorithmic Stability | Chapter 14 | E1 4, 024 |
June 28 | Georgi Vitanov | Dimensionality Reduction | Chapter 15 | E1 4, 024 |
July 5 | Shreyas Srinivas | Learning Automata and Languages | Chapter 16 | E1 4, 024 |
July 19 | Yonggang Jiang | Reinforcement Learning | Chapter 17 | E1 4, 024 |
References
- The book we are going to read: Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar, Foundations of Machine Learning: https://cs.nyu.edu/~mohri/mlbook/
- A reference course: https://cs.nyu.edu/~mohri/ml18/