



This is a second course in statistical learning. It complements 271A, covering the topic of discriminant methods for statistical learning. Since discriminant methods are quite different from the generative methods covered in 271A, the latter is not a prerequisite for 271B. Topics covered include: linear discriminants; the Perceptron; the margin and large margin classifiers; learning theory; empirical vs. structural risk minimization; the VC dimension; kernel functions; reproducing kernel Hilbert spaces; regularization theory; Lagrangian optimization; duality theory; the support vector machine; boosting; Gaussian processes; and applications. There are no exams; course evaluation is based on homework and a project to be jointly determined by the student and instructor. 
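As a small preview of the first topics (linear discriminants and the Perceptron), a minimal sketch of the classic perceptron update is shown below. This is illustrative only and not part of the course materials; all names and the toy data are our own choices. The algorithm learns a linear discriminant by adding y·x to the weight vector whenever a training point (x, y), with y in {-1, +1}, is misclassified.

```python
# Minimal perceptron sketch (illustrative; not course code).
# Updates w <- w + y*x on every misclassified point until an epoch
# passes with no mistakes (guaranteed for linearly separable data).

def perceptron(points, labels, epochs=100):
    dim = len(points[0])
    w = [0.0] * (dim + 1)          # last entry acts as the bias term
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(points, labels):
            xb = list(x) + [1.0]   # append 1 to fold the bias into w
            score = sum(wi * xi for wi, xi in zip(w, xb))
            if y * score <= 0:     # misclassified (or on the boundary)
                w = [wi + y * xi for wi, xi in zip(w, xb)]
                mistakes += 1
        if mistakes == 0:          # converged
            break
    return w

# Toy linearly separable data (logical AND with +/-1 labels)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w = perceptron(X, y)
predictions = [1 if sum(wi * xi for wi, xi in zip(w, list(x) + [1.0])) > 0 else -1
               for x in X]
```

The course develops why this simple update converges on separable data, and how the margin maximized by large margin classifiers refines it.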

Lectures:  TuTh, 5:00–6:20 PM, Warren Lecture Hall, Room 2113  
Instructor:  Nuno Vasconcelos  
nuno@ece.ucsd.edu, EBU1-5602  
office hours: TBA  
Teaching Assistant:  Hamed Masnadi-Shirazi (hmasnadi@ucsd.edu)  
Office hours:  Wed 5–6pm, room 5512, EBU1  
Syllabus:  [ps, pdf]  
Homework:  Problem set 1 [ps, pdf]  
Problem set 2 [ps, pdf]  
Problem set 3 [ps, pdf]  
Readings:  Lecture 1: Introduction  
Lecture 2: Linear discriminants [slides]  
Lecture 3: The perceptron and margin [slides]  
Lecture 4: Neural networks [slides]  
Lecture 5: Kernels [slides]  
Lecture 6: Dot product kernels [slides]  
Lecture 7: Reproducing kernel Hilbert spaces [slides]  
Lecture 8: Regularization and the representer theorem [slides]  
Lecture 9: Optimization [slides]  
Lecture 10: Project meetings  
Lecture 11: The KKT conditions and duality theory [slides]  
Lecture 12: Duality theory [slides]  
Lecture 13: Support vector machines [slides]  
Lecture 14: Soft-margin support vector machines [slides]  
Lecture 15: Boosting [slides]  
Lecture 16: VC dimension [slides]  
Lecture 17: Structural risk minimization [slides]  
Lecture 18: Applications  
Lecture 19: Project presentations  
Lecture 20: Project presentations  