
Use of the Zero-Norm with Linear Models and Kernel Methods




We explore the use of the so-called zero-norm of the parameters of linear models in learning. Minimization of such a quantity has many uses in a machine learning context: variable or feature selection, minimization of training error, and enforcement of sparsity in solutions. We derive a simple but practical method for achieving these goals and discuss its relationship to existing techniques for minimizing the zero-norm. The method boils down to a simple modification of vanilla SVM, namely an iterative multiplicative rescaling of the training data. Applications we investigate, which aid our discussion, include variable and feature selection on biological microarray data, and multicategory classification.
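The iterative multiplicative rescaling mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' reference code: the function name `zero_norm_rescaling`, the iteration count, and the pruning tolerance `tol` are all assumptions, and scikit-learn's `LinearSVC` stands in for the "vanilla SVM" of the paper. Each round trains a plain linear SVM on the rescaled data and multiplies each feature's scale factor by the magnitude of its learned weight, driving irrelevant features toward zero.

```python
import numpy as np
from sklearn.svm import LinearSVC

def zero_norm_rescaling(X, y, n_iter=10, tol=1e-6):
    """Sketch of zero-norm approximation via iterative rescaling.

    X : (n_samples, n_features) data matrix
    y : binary labels
    Returns a nonnegative scale vector z; its nonzero entries
    index the features retained by the procedure.
    """
    n_features = X.shape[1]
    z = np.ones(n_features)               # multiplicative scaling factors
    for _ in range(n_iter):
        clf = LinearSVC(C=1.0, max_iter=10000)
        clf.fit(X * z, y)                 # vanilla linear SVM on rescaled data
        w = clf.coef_.ravel()
        z = z * np.abs(w)                 # rescale each feature by |w_j|
        z[z < tol] = 0.0                  # prune features driven to zero
    return z
```

A feature is effectively selected when its entry in `z` stays nonzero across iterations; in a feature-selection setting one would then retrain on the surviving columns only.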

Author(s): Weston, J. and Elisseeff, A. and Schölkopf, B. and Tipping, M.
Journal: Journal of Machine Learning Research
Volume: 3
Pages: 1439-1461
Year: 2003
Month: March

Department(s): Empirical Inference
Bibtex Type: Article (article)

Digital: 0
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik


@article{Weston2003,
  title = {Use of the Zero-Norm with Linear Models and Kernel Methods},
  author = {Weston, J. and Elisseeff, A. and Sch{\"o}lkopf, B. and Tipping, M.},
  journal = {Journal of Machine Learning Research},
  volume = {3},
  pages = {1439--1461},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = mar,
  year = {2003}
}