
Marco Wiering - Deep Extensions of Support Vector Machines

22 January 2013

Machine learning algorithms are very useful for regression and classification problems. These algorithms learn to extract a predictive model from a dataset of examples containing input vectors and target outputs. Among these algorithms, one of the most popular methods is the Support Vector Machine (SVM). SVMs have been used for many engineering applications, such as object recognition, face recognition, fMRI-scan classification, and medical diagnosis.
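As a minimal illustration of the standard setting described above, the sketch below fits a single-layer SVM regressor on a toy dataset using scikit-learn. The dataset and hyperparameters are assumptions for illustration, not the experiments from the presentation.

```python
# Minimal sketch: a standard (shallow) SVM for regression.
# The toy dataset and hyperparameters are illustrative assumptions.
from sklearn.svm import SVR
from sklearn.datasets import make_regression

# Toy dataset: input vectors X with continuous target outputs y.
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# A single-layer SVM with an RBF kernel: one layer of kernel functions,
# hence a "shallow" model in the sense used here.
svm = SVR(kernel="rbf", C=1.0)
svm.fit(X, y)
predictions = svm.predict(X)
print(predictions.shape)  # one prediction per input vector
```

The same pattern applies to classification by swapping `SVR` for `SVC`.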

In this presentation we will study two different extensions of the standard SVM. The first method that will be presented is the deep Support Vector Machine (DSVM). The original SVM has a single layer of kernel functions and is therefore a shallow model. The DSVM can use an arbitrary number of layers, in which lower-level layers contain support vector machines that learn to extract relevant features from the outputs of the layer below. The highest-level model then performs the actual prediction using the extracted features as inputs. The DSVM is compared to the regular SVM on a large number of regression datasets, and the results show that the DSVM significantly outperforms the SVM on most of them.

The second extension that will be presented is the neural SVM (NSVM). The NSVM uses neural networks to extract features that are given to an SVM, which makes the final prediction. We have used the NSVM to construct an autoencoder, essentially a method for non-linear principal component analysis (PCA). The autoencoding NSVM is compared to standard PCA and a neural-network autoencoder on the task of dimensionality reduction of eye images, and it compares very favorably to these state-of-the-art methods.
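The layered idea behind the DSVM can be sketched as follows: a first layer of SVMs produces extracted features, and a top-level SVM predicts from those features. Note that this is only an illustrative sketch of the architecture; the training scheme here (fitting each feature SVM on a bootstrap resample of the targets) is a stand-in assumption, not the authors' actual optimization procedure.

```python
# Illustrative two-layer sketch of the DSVM architecture.
# Assumptions: the number of feature-extracting SVMs and the bootstrap
# training of the lower layer are placeholders for the real method.
import numpy as np
from sklearn.svm import SVR
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

n_hidden = 3  # assumed number of feature-extracting SVMs in the lower layer
rng = np.random.default_rng(0)

# Lower layer: each SVM is fit on a bootstrap resample so the
# extracted features differ from one another.
feature_svms = []
for _ in range(n_hidden):
    idx = rng.choice(len(X), size=len(X), replace=True)
    feature_svms.append(SVR(kernel="rbf").fit(X[idx], y[idx]))

# Extracted features: the outputs of the lower-layer SVMs.
Z = np.column_stack([m.predict(X) for m in feature_svms])

# Top layer: the actual predictor, using the extracted features as inputs.
top = SVR(kernel="rbf").fit(Z, y)
predictions = top.predict(Z)
print(Z.shape, predictions.shape)
```

More layers would repeat the feature-extraction step, each layer consuming the outputs of the layer below.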

Last modified: 13 June 2019, 1:40 p.m.