My research is concerned with discrete optimization problems in machine learning. Most recently, I have been working on submodular functions and graph cuts.
I have also worked on theoretical aspects of clustering and density estimation, from the perspectives of approximation algorithms and learning theory.
Below is a summary of projects and links to code. I have also co-organized the NIPS workshop on Discrete Optimization in Machine Learning.
Structured problems with submodular cost
The generic setting here is as follows: take a combinatorial optimization problem and replace the usual sum-of-weights cost by a nonlinear, submodular function. This makes the problem much harder, but it has appealing applications. In particular, we studied this setting for graph cuts, which has applications in inference and computer vision.
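To make the contrast concrete, here is a minimal sketch (not taken from the papers): a modular cut cost simply sums edge weights, whereas a submodular cost such as the square root of the summed weights exhibits "economies of scale", so adding an edge to a larger set costs no more than adding it to a smaller one. The edge names and weights are made up.

```python
import math

# illustrative edge weights
weights = {"e1": 4.0, "e2": 9.0, "e3": 1.0}

def modular_cost(edges):
    # usual sum-of-weights cost of an edge set
    return sum(weights[e] for e in edges)

def submodular_cost(edges):
    # concave function of a modular function => submodular
    return math.sqrt(sum(weights[e] for e in edges))

A = {"e1"}
B = {"e1", "e2"}          # A is a subset of B
e = "e3"
gain_A = submodular_cost(A | {e}) - submodular_cost(A)
gain_B = submodular_cost(B | {e}) - submodular_cost(B)
print(modular_cost(B), submodular_cost(B))
print(gain_A >= gain_B)   # True: diminishing marginal costs, the hallmark of submodularity
```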
In addition, we derived online algorithms for structured decision problems (e.g., online spanning tree or s-t cut) with submodular costs; in contrast, almost all previous results are for linear costs.
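For readers unfamiliar with the online structured setting, the sketch below shows the classical linear-cost baseline, Follow-the-Perturbed-Leader, for online minimum spanning tree; it is not the submodular-cost algorithm from our work, and the graph, horizon, and perturbation scale are illustrative choices.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.complete_graph(6)
edges = list(G.edges())
T = 50                      # number of rounds
eps = 1.0                   # perturbation scale
cum_cost = {e: 0.0 for e in edges}

total = 0.0
for t in range(T):
    # play the spanning tree that is minimal w.r.t. perturbed cumulative costs
    for u, v in edges:
        G[u][v]["w"] = cum_cost[(u, v)] + rng.exponential(eps)
    tree = nx.minimum_spanning_tree(G, weight="w")
    # the adversary then reveals this round's (linear) edge costs
    cost_t = {e: rng.uniform(0, 1) for e in edges}
    total += sum(cost_t[tuple(sorted(e))] for e in tree.edges())
    for e in edges:
        cum_cost[e] += cost_t[e]
print("cumulative cost of FPL:", round(total, 2))
```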
Along the way, we came across several functions that are not actually submodular (though one might hope they were), so I am collecting them in a bag of submodular non-examples.
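Whether a function belongs in that bag can be checked directly from the diminishing-returns definition: f is submodular iff f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B) for all A ⊆ B and e ∉ B. The brute-force checker below illustrates this on a tiny ground set; the non-example used here (the square of a modular function) is illustrative and not necessarily from the actual collection.

```python
from itertools import chain, combinations

def subsets(ground):
    return chain.from_iterable(combinations(ground, r) for r in range(len(ground) + 1))

def is_submodular(f, ground):
    # exhaustively test the diminishing-returns inequality
    for B in map(set, subsets(ground)):
        for A in map(set, subsets(B)):
            for e in set(ground) - B:
                if f(A | {e}) - f(A) < f(B | {e}) - f(B) - 1e-12:
                    return False, (A, B, e)
    return True, None

ground = {1, 2, 3}
f_bad = lambda S: sum(S) ** 2        # supermodular, not submodular
f_good = lambda S: len(S) ** 0.5     # concave function of cardinality: submodular
print(is_submodular(f_bad, ground))  # (False, counterexample)
print(is_submodular(f_good, ground)) # (True, None)
```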
Clustering & Graph Cuts
Nearest Neighbor Clustering: We developed a generic algorithm to minimize popular clustering objective functions, such as Normalized Cut or the k-means objective. The method is statistically consistent, thanks to pooling neighboring points. (with Ulrike von Luxburg, Sebastien Bubeck, Michael Kaufmann)
Download Demo Code
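A hedged sketch of the underlying idea, independent of the demo code above: restrict candidate clusterings to those that are constant on the cells of a few "seed" points, then search over labelings of the seeds for the best objective value (here the k-means objective). The exhaustive search only scales to a handful of seeds, and the numbers of seeds and clusters are illustrative.

```python
import numpy as np
from itertools import product

def nn_clustering(X, k=2, m=6, seed=0):
    rng = np.random.default_rng(seed)
    seeds = X[rng.choice(len(X), size=m, replace=False)]
    # assign every point to its nearest seed (the "pooling" step)
    cells = np.argmin(((X[:, None, :] - seeds[None, :, :]) ** 2).sum(-1), axis=1)
    best_obj, best_labels = np.inf, None
    for labeling in product(range(k), repeat=m):   # label the seeds, not the points
        if len(set(labeling)) < k:
            continue
        labels = np.array(labeling)[cells]
        obj = sum(((X[labels == c] - X[labels == c].mean(0)) ** 2).sum()
                  for c in range(k))
        if obj < best_obj:
            best_obj, best_labels = obj, labels
    return best_labels, best_obj

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(5, 1, (50, 2))])
labels, obj = nn_clustering(X)
print(obj)
```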
Generalized Clustering via Kernel Embeddings: The concept of clustering can be generalized to finding distributions that are maximally separated. The standard approach measures separation by the means of the cluster distributions (as in k-means). MMD, the maximum mean discrepancy in a feature space, also takes higher-order moments into account. As a result, it is possible, e.g., to separate two Gaussians with the same mean but different variances. Interestingly, maximizing MMD is closely related to kernel k-means and various other clustering criteria. (with Ulrike von Luxburg, Bernhard Schoelkopf, Arthur Gretton, Bharath Sriperumbudur)
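A minimal illustration of MMD as a separation measure, using an RBF kernel and the biased (V-statistic) estimator; the bandwidth is an illustrative choice. The two Gaussians share the same mean but differ in variance, so their sample means coincide, yet MMD still distinguishes the correct split from a random one.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    # biased estimate of squared MMD between the samples X and Y
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0, 0.3, (100, 2)),    # narrow Gaussian
               rng.normal(0, 2.0, (100, 2))])   # wide Gaussian, same mean
true_split = np.r_[np.zeros(100, bool), np.ones(100, bool)]
random_split = rng.permutation(true_split)
print("MMD^2, true split:  ", mmd2(Z[true_split], Z[~true_split]))
print("MMD^2, random split:", mmd2(Z[random_split], Z[~random_split]))
```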
Approximation Algorithms for Tensor Clustering: We prove an approximation factor for tensor clustering (i.e., "cutting a big cube into little cubes") for arguably the simplest possible algorithm. The analysis accommodates various divergences, such as the Euclidean distance and Bregman divergences. (with Suvrit Sra, Arindam Banerjee)
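One simple strategy of this flavor, sketched below, clusters each mode of a 3-way tensor separately (k-means on its unfolding) and takes the product of the per-mode partitions as the "little cubes". Whether this matches the algorithm analyzed in the paper is an assumption on my part; the cluster counts are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def tensor_cluster(T, ks=(2, 2, 2), seed=0):
    labels = []
    for mode, k in enumerate(ks):
        # unfold the tensor along `mode`: one row per slice indexed by that mode
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        _, lab = kmeans2(unfolding, k, minit="++", seed=seed)
        labels.append(lab)
    return labels  # one label vector per mode; their product defines the blocks

T = np.random.default_rng(0).normal(size=(8, 10, 6))
row_lab, col_lab, tube_lab = tensor_cluster(T)
print(row_lab, col_lab, tube_lab)
```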
Solution Stability in Linear Programming Relaxations: Graph Partitioning and Unsupervised Learning. Data is often noisy. If an optimization problem (here, graph partitioning) has several good solutions, the noise can be what makes one of them optimal. But how much should we trust that solution if a small perturbation of the data suddenly makes another solution the best? This work proposes a method to test the stability of an optimal graph cut under perturbations of the edge weights (i.e., noise). We also show that several common clustering and graph partitioning objectives fit into our framework. (with Sebastian Nowozin)
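The sketch below is not the LP-relaxation method from the paper; it is only a naive Monte Carlo probe of the same question: perturb the edge weights with noise and count how often the optimal s-t cut partition changes. The graph, noise level, and trial count are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.complete_graph(8)
for u, v in G.edges():
    G[u][v]["capacity"] = rng.uniform(1, 2)

def min_cut_partition(H, s=0, t=7):
    # return the source side of a minimum s-t cut as a hashable set
    _, (S, T) = nx.minimum_cut(H, s, t, capacity="capacity")
    return frozenset(S)

baseline = min_cut_partition(G)
changes = 0
for _ in range(200):
    H = G.copy()
    for u, v in H.edges():
        H[u][v]["capacity"] = max(1e-6, G[u][v]["capacity"] + rng.normal(0, 0.1))
    changes += min_cut_partition(H) != baseline
print(f"cut changed in {changes}/200 perturbed instances")
```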
ICA
Fast Kernel ICA: We developed a Newton-like method for kernel ICA that uses HSIC as the independence criterion (with Hao Shen, Arthur Gretton).
Download Code for Fast kernel ICA
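For reference, here is a minimal sketch of the HSIC independence criterion that fast kernel ICA optimizes; it is not the downloadable code above. RBF kernels with unit bandwidth and the biased estimator are illustrative choices.

```python
import numpy as np

def rbf(x, sigma=1.0):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y):
    # biased HSIC estimate: trace(K H L H) / (n - 1)^2
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = rbf(x), rbf(y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 500)
t = rng.uniform(-1, 1, 500)
print("independent:", hsic(s, t))        # close to 0
print("dependent:  ", hsic(s, s ** 2))   # clearly larger
```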