2000


Observational Learning with Modular Networks

Shin, H., Lee, H., Cho, S.

In Lecture Notes in Computer Science (LNCS 1983), pages: 126-132, Springer-Verlag, Heidelberg, International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), July 2000 (inproceedings)

Abstract
The observational learning algorithm (OLA) is an ensemble algorithm in which each network is initially trained on a bootstrapped data set and virtual data generated from the ensemble are then used for further training. Here we propose a modular OLA approach in which the original training set is partitioned into clusters and each network is instead trained on one of the clusters. Networks are combined with weighting factors that are inversely proportional to the distance from the input vector to the cluster centers. Comparison with bagging and boosting shows that the proposed approach reduces generalization error while employing a smaller number of networks.
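A minimal sketch of the combination rule described here, assuming k-means for the partitioning and small scikit-learn MLPs as the member networks (both are illustrative choices, not necessarily the paper's exact setup):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))            # toy training data
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

# Partition the training set and train one network per cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
nets = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
            .fit(X[km.labels_ == i], y[km.labels_ == i])
        for i in range(km.n_clusters)]

def ensemble_predict(x):
    # Weights inversely proportional to distance from x to the cluster centers.
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    w = 1.0 / (d + 1e-12)
    w /= w.sum()
    return float(w @ [net.predict(x[None, :])[0] for net in nets])

print(ensemble_predict(np.array([0.5, -1.0])))
```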

PDF [BibTex]

The Infinite Gaussian Mixture Model

Rasmussen, CE.

In Advances in Neural Information Processing Systems 12, pages: 554-560, (Editors: Solla, S.A., T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the "right" number of mixture components. Inference in the model is done using an efficient parameter-free Markov chain that relies entirely on Gibbs sampling.
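In the infinite limit, the conditional prior over component assignments takes the Chinese-restaurant-process form, which is what makes the Gibbs sampler parameter-free with respect to the number of components. A small sketch of sampling that prior (illustrative only; the full sampler also resamples component parameters and hyperparameters):

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Sample component indicators for n points under a CRP(alpha) prior."""
    counts = []                        # occupation counts of existing components
    z = []
    for _ in range(n):
        # Join existing component j with prob. counts[j], open a new one
        # with prob. alpha (both normalized).
        p = np.array(counts + [alpha], dtype=float)
        j = rng.choice(len(p), p=p / p.sum())
        if j == len(counts):
            counts.append(1)           # a brand-new component appears
        else:
            counts[j] += 1
        z.append(j)
    return z, counts

z, counts = crp_assignments(100, alpha=1.0, rng=np.random.default_rng(0))
print(len(counts), "components used for 100 points")
```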

PDF Web [BibTex]

Generalization Abilities of Ensemble Learning Algorithms

Shin, H., Jang, M., Cho, S.

In Proc. of the Korean Brain Society Conference, pages: 129-133, Korean Brain Society Conference, June 2000 (inproceedings)

[BibTex]

Support vector method for novelty detection

Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J., Platt, J.

In Advances in Neural Information Processing Systems 12, pages: 582-588, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
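This method is now widely known as the one-class SVM; scikit-learn's OneClassSVM implements it, with the nu parameter playing the role of the specified fraction ν. A minimal usage sketch on toy data:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))             # samples from the distribution P

# nu upper-bounds the fraction of training points lying outside the region.
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X_train)

X_test = np.vstack([rng.normal(size=(3, 2)),               # in-distribution
                    rng.normal(loc=5.0, size=(3, 2))])     # novelties
print(clf.predict(X_test))                      # +1 = inside region, -1 = novelty
```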

PDF Web [BibTex]

ν-Arc: Ensemble Learning in the Presence of Outliers

Rätsch, G., Schölkopf, B., Smola, A., Müller, K., Onoda, T., Mika, S.

In Advances in Neural Information Processing Systems 12, pages: 561-567, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
AdaBoost and other ensemble methods have successfully been applied to a number of classification tasks, seemingly defying problems of overfitting. AdaBoost performs gradient descent in an error function with respect to the margin, asymptotically concentrating on the patterns which are hardest to learn. For very noisy problems, however, this can be disadvantageous. Indeed, theoretical analysis has shown that the margin distribution, as opposed to just the minimal margin, plays a crucial role in understanding this phenomenon. Loosely speaking, some outliers should be tolerated if this has the benefit of substantially increasing the margin on the remaining points. We propose a new boosting algorithm which allows for the possibility of a pre-specified fraction of points to lie in the margin area or even on the wrong side of the decision boundary.

PDF Web [BibTex]

Transductive Inference for Estimating Values of Functions

Chapelle, O., Vapnik, V., Weston, J.

In Advances in Neural Information Processing Systems 12, pages: 421-427, (Editors: Solla, S.A., T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
We introduce an algorithm for estimating the values of a function at a set of test points $x_1^*,\dots,x_m^*$ given a set of training points $(x_1,y_1),\dots,(x_\ell,y_\ell)$ without estimating (as an intermediate step) the regression function. We demonstrate that this direct (transductive) way of estimating values of the regression (or, in pattern recognition, classification) function is more accurate than the traditional one based on two steps, first estimating the function and then calculating its values at the points of interest.

PDF Web [BibTex]

Invariant feature extraction and classification in kernel spaces

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.

In Advances in neural information processing systems 12, pages: 526-532, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

PDF Web [BibTex]

Model Selection for Support Vector Machines

Chapelle, O., Vapnik, V.

In Advances in Neural Information Processing Systems 12, pages: 230-236, (Editors: Solla, S.A., T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
New functionals for parameter (model) selection of Support Vector Machines are introduced, based on the concepts of the span of support vectors and rescaling of the feature space. It is shown that, using these functionals, one can both predict the best choice of parameters of the model and assess the relative quality of performance for any value of the parameters.
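The span functionals themselves are not available in common libraries; as a rough stand-in illustrating the model-selection task they address, here is a plain cross-validated grid search over the SVM parameters (C, gamma):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]},
    cv=5,                    # cross-validated error replaces the span estimate
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```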

PDF Web [BibTex]

The entropy regularization information criterion

Smola, A., Shawe-Taylor, J., Schölkopf, B., Williamson, R.

In Advances in Neural Information Processing Systems 12, pages: 342-348, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Effective methods of capacity control via uniform convergence bounds for function expansions have been largely limited to Support Vector machines, where good bounds are obtainable by the entropy number approach. We extend these methods to systems with expansions in terms of arbitrary (parametrized) basis functions and a wide range of regularization methods covering the whole range of general linear additive models. This is achieved by a data dependent analysis of the eigenvalues of the corresponding design matrix.

PDF Web [BibTex]

Generalization Abilities of Ensemble Learning Algorithms: OLA, Bagging, Boosting

Shin, H., Jang, M., Cho, S., Lee, B., Lim, Y.

In Proc. of the Korea Information Science Conference, pages: 226-228, Conference on Korean Information Science, April 2000 (inproceedings)

[BibTex]

A simple iterative approach to parameter optimization

Zien, A., Zimmer, R., Lengauer, T.

In RECOMB2000, pages: 318-327, ACM Press, New York, NY, USA, Fourth Annual Conference on Research in Computational Molecular Biology, April 2000 (inproceedings)

Abstract
Various bioinformatics problems require optimizing several different properties simultaneously. For example, in the protein threading problem, a linear scoring function combines the values for different properties of possible sequence-to-structure alignments into a single score to allow for unambiguous optimization. In this context, an essential question is how each property should be weighted. As the native structures are known for some sequences, the implied partial ordering on optimal alignments may be used to adjust the weights. To resolve the arising interdependence of weights and computed solutions, we propose a novel approach: iterating the computation of solutions (here: threading alignments) given the weights and the estimation of optimal weights of the scoring function given these solutions via a systematic calibration method. We show that this procedure converges to structurally meaningful weights, that also lead to significantly improved performance on comprehensive test data sets as measured in different ways. The latter indicates that the performance of threading can be improved in general.
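A toy sketch of the alternating scheme under stated assumptions: candidate solutions are reduced to feature vectors scored linearly, candidate 0 plays the role of the known native structure, and the calibration step is a simple perceptron-style update (the paper uses a more systematic calibration method):

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.normal(size=(50, 4))   # property scores of candidate alignments
native = 0                              # index of the known correct solution

w = np.ones(4)                          # initial weights of the scoring function
for _ in range(100):
    # Step 1: compute the optimal solution under the current weights.
    best = int(np.argmax(candidates @ w))
    if best == native:
        break                           # weights and solutions are consistent
    # Step 2: re-estimate weights so the native solution scores higher.
    w += 0.1 * (candidates[native] - candidates[best])
print(w)
```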

Web DOI [BibTex]

Choosing ν in support vector regression with different noise models — theory and experiments

Chalimourda, A., Schölkopf, B., Smola, A.

In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, IJCNN 2000, Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, International Joint Conference on Neural Networks, 2000 (inproceedings)

[BibTex]

Bayesian modelling of fMRI time series

Højen-Sørensen, PADFR., Rasmussen, CE., Hansen, LK.

In Advances in Neural Information Processing Systems 12, pages: 754-760, (Editors: Sara A. Solla, Todd K. Leen and Klaus-Robert Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical methods and a variety of Markov chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible, since inference is based only on single trial experiments.

PDF PostScript [BibTex]

A High Resolution and Accurate Pentium Based Timer

Ong, CS., Wong, F., Lai, WK.

In 2000 (inproceedings)

PDF [BibTex]

Robust Ensemble Learning for Data Mining

Rätsch, G., Schölkopf, B., Smola, A., Mika, S., Onoda, T., Müller, K.

In Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 1805, pages: 341, Lecture Notes in Artificial Intelligence, (Editors: H. Terano), Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2000 (inproceedings)

[BibTex]

Sparse greedy matrix approximation for machine learning.

Smola, A., Schölkopf, B.

In 17th International Conference on Machine Learning, Stanford, 2000, pages: 911-918, (Editors: P Langley), Morgan Kaufmann, San Francisco, CA, USA, 17th International Conference on Machine Learning (ICML), 2000 (inproceedings)

[BibTex]

Entropy Numbers of Linear Function Classes.

Williamson, R., Smola, A., Schölkopf, B.

In 13th Annual Conference on Computational Learning Theory, pages: 309-319, (Editors: N Cesa-Bianchi and S Goldman), Morgan Kaufmann, San Francisco, CA, USA, 13th Annual Conference on Computational Learning Theory (COLT), 2000 (inproceedings)

[BibTex]


1999


Engineering Support Vector Machine Kernels That Recognize Translation Initiation Sites in DNA

Zien, A., Rätsch, G., Mika, S., Schölkopf, B., Lemmen, C., Smola, A., Lengauer, T., Müller, K.

In German Conference on Bioinformatics (GCB 1999), October 1999 (inproceedings)

Abstract
In order to extract protein sequences from nucleotide sequences, it is an important step to recognize points from which regions encoding proteins start, the so-called translation initiation sites (TIS). This can be modeled as a classification problem. We demonstrate the power of support vector machines (SVMs) for this task, and show how to successfully incorporate biological prior knowledge by engineering an appropriate kernel function.
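The engineered, locality-aware kernels of the paper are not reproduced here; as a generic illustration of kernel-based sequence classification for a TIS-style task, here is a k-mer spectrum feature map with a linear SVM (all data below are synthetic):

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

def spectrum_features(seqs, k=3):
    """Count k-mer occurrences; a linear kernel on these is the spectrum kernel."""
    idx = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    F = np.zeros((len(seqs), len(idx)))
    for r, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            F[r, idx[s[i:i + k]]] += 1
    return F

rng = np.random.default_rng(0)
def rand_seq(n):
    return "".join(rng.choice(list("ACGT"), size=n))

pos = [rand_seq(20) + "ATG" + rand_seq(20) for _ in range(100)]  # TIS-like signal
neg = [rand_seq(43) for _ in range(100)]
X, y = spectrum_features(pos + neg), [1] * 100 + [0] * 100
print(SVC(kernel="linear").fit(X, y).score(X, y))
```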

Web [BibTex]

Shrinking the tube: a new support vector regression algorithm

Schölkopf, B., Bartlett, P., Smola, A., Williamson, R.

In Advances in Neural Information Processing Systems 11, pages: 330-336, (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, 12th Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

PDF Web [BibTex]

Kernel PCA and De-noising in feature spaces

Mika, S., Schölkopf, B., Smola, A., Müller, K., Scholz, M., Rätsch, G.

In Advances in Neural Information Processing Systems 11, pages: 536-542, (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, 12th Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

Abstract
Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question of how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high-dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real-world data.
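scikit-learn's KernelPCA exposes de-noising through approximate pre-images; note that its inverse_transform learns pre-images via kernel ridge regression rather than the fixed-point iteration developed in this paper. A minimal sketch on a noisy toy data set:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.normal(size=(300, 2))  # noisy circle

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
Z = kpca.fit_transform(X)
X_denoised = kpca.inverse_transform(Z)   # approximate pre-images in input space
print(X_denoised.shape)
```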

PDF Web [BibTex]

Semiparametric support vector and linear programming machines

Smola, A., Friess, T., Schölkopf, B.

In Advances in Neural Information Processing Systems 11, pages: 585-591, (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, Twelfth Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

Abstract
Semiparametric models are useful tools in the case where domain knowledge exists about the function to be estimated or emphasis is put on understandability of the model. We extend two learning algorithms, Support Vector machines and Linear Programming machines, to this case and give experimental results for SV machines.

PDF Web [BibTex]

Classification on proximity data with LP-machines

Graepel, T., Herbrich, R., Schölkopf, B., Smola, A., Bartlett, P., Müller, K., Obermayer, K., Williamson, R.

In Artificial Neural Networks, 1999. ICANN 99, 470, pages: 304-309, Conference Publications, IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]

Kernel-dependent support vector error bounds

Schölkopf, B., Shawe-Taylor, J., Smola, A., Williamson, R.

In Artificial Neural Networks, 1999. ICANN 99, 470, pages: 103-108, Conference Publications, IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]

Linear programs for automatic accuracy control in regression

Smola, A., Schölkopf, B., Rätsch, G.

In Artificial Neural Networks, 1999. ICANN 99, 470, pages: 575-580, Conference Publications, IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]

Classifying LEP data with support vector algorithms.

Vannerem, P., Müller, K., Smola, A., Schölkopf, B., Söldner-Rembold, S.

In Artificial Intelligence in High Energy Nuclear Physics 99, Artificial Intelligence in High Energy Nuclear Physics 99, 1999 (inproceedings)

[BibTex]

Is the Hippocampus a Kalman Filter?

Bousquet, O., Balakrishnan, K., Honavar, V.

In Proceedings of the Pacific Symposium on Biocomputing, 3, pages: 619-630, Proceedings of the Pacific Symposium on Biocomputing, 1999 (inproceedings)

[BibTex]

A Comparison of Artificial Neural Networks and Cluster Analysis for Typing Biometrics Authentication

Maisuria, K., Ong, CS., Lai, WK.

In Proceedings of the International Joint Conference on Neural Networks, 1999 (inproceedings)

PDF [BibTex]

Regularized principal manifolds.

Smola, A., Williamson, R., Mika, S., Schölkopf, B.

In Lecture Notes in Artificial Intelligence, Vol. 1572, pages: 214-229, Lecture Notes in Artificial Intelligence, (Editors: P Fischer and H-U Simon), Springer, Berlin, Germany, Computational Learning Theory: 4th European Conference, 1999 (inproceedings)

[BibTex]

Entropy numbers, operators and support vector kernels.

Williamson, R., Smola, A., Schölkopf, B.

In Lecture Notes in Artificial Intelligence, Vol. 1572, pages: 285-299, Lecture Notes in Artificial Intelligence, (Editors: P Fischer and H-U Simon), Springer, Berlin, Germany, Computational Learning Theory: 4th European Conference, 1999 (inproceedings)

[BibTex]

Fisher discriminant analysis with kernels

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Müller, K.

In Proceedings of the 1999 IEEE Signal Processing Society Workshop, 9, pages: 41-48, (Editors: Y-H Hu and J Larsen and E Wilson and S Douglas), IEEE, Neural Networks for Signal Processing IX, 1999 (inproceedings)

DOI [BibTex]


1998


Navigation mit Schnappschüssen

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H., Zell, A.

In Mustererkennung 1998, pages: 421-428, (Editors: P Levi and R-J Ahlers and F May and M Schanz), Springer, Berlin, Germany, 20th DAGM-Symposium, October 1998 (inproceedings)

Abstract
A biologically inspired algorithm is presented for relocating a place at which a 360-degree view of the surroundings was previously recorded. The direction to the goal is computed from the shift in the image positions of the surrounding landmarks relative to the snapshot. The convergence properties of the algorithm are analyzed mathematically and tested on mobile robots.

PDF Web [BibTex]

Prior knowledge in support vector kernels

Schölkopf, B., Simard, P., Smola, A., Vapnik, V.

In Advances in Neural Information Processing Systems 10, pages: 640-646, (Editors: M Jordan and M Kearns and S Solla), MIT Press, Cambridge, MA, USA, Eleventh Annual Conference on Neural Information Processing (NIPS), June 1998 (inproceedings)

PDF Web [BibTex]

From regularization operators to support vector kernels

Smola, A., Schölkopf, B.

In Advances in Neural Information Processing Systems 10, pages: 343-349, (Editors: M Jordan and M Kearns and S Solla), MIT Press, Cambridge, MA, USA, 11th Annual Conference on Neural Information Processing (NIPS), June 1998 (inproceedings)

PDF Web [BibTex]

Qualitative Modeling for Data Miner’s Requirements

Shin, H., Jhee, W.

In Proc. of the Korean Management Information Systems, pages: 65-73, Conference on the Korean Management Information Systems, April 1998 (inproceedings)

[BibTex]

Übersicht durch Übersehen

Schölkopf, B.

Frankfurter Allgemeine Zeitung, Wissenschaftsbeilage, March 1998 (misc)

[BibTex]

Fast approximation of support vector kernel expansions, and an interpretation of clustering as approximation in feature spaces.

Schölkopf, B., Knirsch, P., Smola, A., Burges, C.

In Mustererkennung 1998, pages: 125-132, Informatik aktuell, (Editors: P Levi and M Schanz and R-J Ahlers and F May), Springer, Berlin, Germany, 20th DAGM-Symposium, 1998 (inproceedings)

Abstract
Kernel-based learning methods provide their solutions as expansions in terms of a kernel. We consider the problem of reducing the computational complexity of evaluating these expansions by approximating them using fewer terms. As a by-product, we point out a connection between clustering and approximation in reproducing kernel Hilbert spaces generated by a particular class of kernels.

Web [BibTex]

Kernel PCA pattern reconstruction via approximate pre-images.

Schölkopf, B., Mika, S., Smola, A., Rätsch, G., Müller, K.

In 8th International Conference on Artificial Neural Networks, pages: 147-152, Perspectives in Neural Computing, (Editors: L Niklasson and M Boden and T Ziemke), Springer, Berlin, Germany, 8th International Conference on Artificial Neural Networks, 1998 (inproceedings)

[BibTex]

Convex Cost Functions for Support Vector Regression

Smola, A., Schölkopf, B., Müller, K.

In 8th International Conference on Artificial Neural Networks, pages: 99-104, Perspectives in Neural Computing, (Editors: L Niklasson and M Boden and T Ziemke), Springer, Berlin, Germany, 8th International Conference on Artificial Neural Networks, 1998 (inproceedings)

[BibTex]

Support vector regression with automatic accuracy control.

Schölkopf, B., Bartlett, P., Smola, A., Williamson, R.

In ICANN'98, pages: 111-116, Perspectives in Neural Computing, (Editors: L Niklasson and M Boden and T Ziemke), Springer, Berlin, Germany, International Conference on Artificial Neural Networks (ICANN'98), 1998 (inproceedings)

[BibTex]

General cost functions for support vector regression.

Smola, A., Schölkopf, B., Müller, K.

In Ninth Australian Conference on Neural Networks, pages: 79-83, (Editors: T Downs and M Frean and M Gallagher), 9th Australian Conference on Neural Networks (ACNN'98), 1998 (inproceedings)

[BibTex]

Asymptotically optimal choice of ε-loss for support vector machines.

Smola, A., Murata, N., Schölkopf, B., Müller, K.

In 8th International Conference on Artificial Neural Networks, pages: 105-110, Perspectives in Neural Computing, (Editors: L Niklasson and M Boden and T Ziemke), Springer, Berlin, Germany, 8th International Conference on Artificial Neural Networks, 1998 (inproceedings)

[BibTex]


1997


The view-graph approach to visual navigation and spatial memory

Mallot, H., Franz, M., Schölkopf, B., Bülthoff, H.

In Artificial Neural Networks: ICANN ’97, pages: 751-756, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
This paper describes a purely visual navigation scheme based on two elementary mechanisms (piloting and guidance) and a graph structure combining individual navigation steps controlled by these mechanisms. In robot experiments in real environments, both mechanisms have been tested, piloting in an open environment and guidance in a maze with restricted movement opportunities. The results indicate that navigation and path planning can be brought about with these simple mechanisms. We argue that the graph of local views (snapshots) is a general and biologically plausible means of representing space and integrating the various mechanisms of map behaviour.

PDF PDF DOI [BibTex]

Predicting time series with support vector machines

Müller, K., Smola, A., Rätsch, G., Schölkopf, B., Kohlmorgen, J., Vapnik, V.

In Artificial Neural Networks: ICANN’97, pages: 999-1004, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
Support Vector Machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for Support Vectors: training with (i) an ε-insensitive loss and (ii) Huber's robust loss function, and discuss how to choose the regularization parameters in these models. Two applications are considered: data from (a) a noisy Mackey-Glass equation (normal and uniform noise) and (b) the Santa Fe competition (set D). In both cases Support Vector Machines show an excellent performance. In case (b) the Support Vector approach improves the best known result on the benchmark by 29%.
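A minimal sketch of the setup, assuming a time-delay embedding and scikit-learn's SVR, whose epsilon parameter is the width of the ε-insensitive loss discussed above (Huber loss is not exposed by this class):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(600)) + 0.05 * rng.normal(size=600)  # toy series

d = 6                                      # embedding dimension
X = np.array([x[i:i + d] for i in range(len(x) - d)])
y = x[d:]                                  # predict the next value

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:500], y[:500])
print(model.score(X[500:], y[500:]))       # out-of-sample R^2
```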

PDF DOI [BibTex]

Kernel principal component analysis

Schölkopf, B., Smola, A., Müller, K.

In Artificial neural networks: ICANN ’97, LNCS, vol. 1327, pages: 583-588, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible d-pixel products in images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
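A self-contained sketch of the method as described: build the kernel matrix, center it in feature space, and project onto the leading eigenvectors (an RBF kernel is chosen here for illustration):

```python
import numpy as np

def kernel_pca(X, gamma=1.0, n_components=2):
    sq = (X ** 2).sum(axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    n = len(X)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J           # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    # Scale eigenvectors so the feature-space components have unit norm.
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                           # projections of the training data

X = np.random.default_rng(0).normal(size=(100, 3))
print(kernel_pca(X).shape)                       # (100, 2)
```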

PDF DOI [BibTex]

Homing by parameterized scene matching

Franz, M., Schölkopf, B., Bülthoff, H.

In Proceedings of the 4th European Conference on Artificial Life, pages: 236-245, (Editors: P Husbands and I Harvey), MIT Press, Cambridge, MA, USA, 4th European Conference on Artificial Life (ECAL97), July 1997 (inproceedings)

Abstract
In visual homing tasks, animals as well as robots can compute their movements from the current view and a snapshot taken at a home position. Solving this problem exactly would require knowledge about the distances to visible landmarks, information, which is not directly available to passive vision systems. We propose a homing scheme that dispenses with accurate distance information by using parameterized disparity fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that the approximation does not prevent the scheme from approaching the goal with arbitrary accuracy. Mobile robot experiments are used to demonstrate the practical feasibility of the approach.

PDF [BibTex]

Das Spiel mit dem künstlichen Leben.

Schölkopf, B.

Frankfurter Allgemeine Zeitung, Wissenschaftsbeilage, June 1997 (misc)

[BibTex]

Improving the accuracy and speed of support vector learning machines

Burges, C., Schölkopf, B.

In Advances in Neural Information Processing Systems 9, pages: 375-381, (Editors: M Mozer and MJ Jordan and T Petsche), MIT Press, Cambridge, MA, USA, Tenth Annual Conference on Neural Information Processing Systems (NIPS), May 1997 (inproceedings)

Abstract
Support Vector Learning Machines (SVM) are finding application in pattern recognition, regression estimation, and operator inversion for ill-posed problems. Against this very general backdrop, any methods for improving the generalization performance, or for improving the speed in the test phase, of SVMs are of increasing interest. In this paper we combine two such techniques on a pattern recognition problem. The method for improving generalization performance (the "virtual support vector" method) does so by incorporating known invariances of the problem. This method achieves a drop in the error rate on 10,000 NIST test digit images from 1.4% to 1%. The method for improving the speed (the "reduced set" method) does so by approximating the support vector decision surface. We apply this method to achieve a factor of fifty speedup in the test phase over the virtual support vector machine. The combined approach yields a machine which is both 22 times faster than the original machine and has better generalization performance, achieving 1.1% error. The virtual support vector method is applicable to any SVM problem with known invariances; the reduced set method is applicable to any support vector machine.
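A small sketch of the virtual support vector idea on toy 2-D data, assuming rotation invariance in place of the image translations used in the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)   # rotation-invariant labels

svm = SVC(kernel="rbf", gamma=1.0).fit(X, y)

# Virtual SV method: augment only the support vectors with copies
# transformed under a known invariance, then retrain.
sv, sv_y = svm.support_vectors_, y[svm.support_]

def rotate(P, a):
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return P @ R.T

X_virt = np.vstack([sv, rotate(sv, 0.2), rotate(sv, -0.2)])
y_virt = np.tile(sv_y, 3)
svm_virtual = SVC(kernel="rbf", gamma=1.0).fit(X_virt, y_virt)
print(svm_virtual.score(X, y))
```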

PDF Web [BibTex]

Learning view graphs for robot navigation

Franz, M., Schölkopf, B., Georg, P., Mallot, H., Bülthoff, H.

In Proceedings of the 1st Intl. Conf. on Autonomous Agents, pages: 138-147, (Editors: Johnson, W.L.), ACM Press, New York, NY, USA, First International Conference on Autonomous Agents (AGENTS '97), February 1997 (inproceedings)

Abstract
We present a purely vision-based scheme for learning a parsimonious representation of an open environment. Using simple exploration behaviours, our system constructs a graph of appropriately chosen views. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. Simulations and robot experiments demonstrate the feasibility of the proposed approach.
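A toy sketch of navigating such a view graph, assuming the graph is already learned: nodes are snapshot views, edges mean the homing behaviour can move the robot between the two views, and breadth-first search yields the sequence of intermediate views:

```python
from collections import deque

# Hypothetical learned view graph.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D", "E"],
         "D": ["C"], "E": ["C"]}

def plan(start, goal):
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        v = frontier.popleft()
        if v == goal:                     # reconstruct the view sequence
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for u in graph[v]:
            if u not in prev:
                prev[u] = v
                frontier.append(u)

# Each hop in the returned sequence is executed by visual homing.
print(plan("A", "E"))                     # ['A', 'B', 'C', 'E']
```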

PDF Web DOI [BibTex]
