Method of Parallel Sequential Minimal Optimization for Fast Training Support Vector Machine

Abstract:


This paper presents a fast method for training support vector machines (SVMs) using parallel sequential minimal optimization. Sequential minimal optimization (SMO) remains one of the principal algorithms for training SVMs, but it still requires substantial computation time on problems with large sample sets. Unlike traditional SMO, parallel SMO first partitions the entire training data set into small subsets and then runs multiple CPU processors, each dealing with one of the partitioned subsets. Experiments show that the new algorithm offers a significant speed advantage on problems with large training sets and high-dimensional spaces, without reducing the generalization performance of the SVM.
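The partition-and-parallelize scheme described in the abstract can be sketched as follows. This is a hypothetical illustration of the data flow only, assuming a generic worker pool: the `train_subset` function is a placeholder standing in for the actual per-subset SMO solver, which the paper does not reproduce here, and thread workers stand in for the paper's multiple CPU processors.

```python
# Sketch of the parallel SMO workflow: split the training set into small
# subsets, hand each subset to its own worker, then combine the partial
# results. `train_subset` is a placeholder, NOT the actual SMO solver.
from concurrent.futures import ThreadPoolExecutor


def partition(data, n_parts):
    """Split the training set into n_parts roughly equal subsets."""
    base, extra = divmod(len(data), n_parts)
    parts, start = [], 0
    for i in range(n_parts):
        end = start + base + (1 if i < extra else 0)
        parts.append(data[start:end])
        start = end
    return parts


def train_subset(subset):
    """Placeholder for running SMO on one subset.

    A real implementation would solve the subset's QP problem and return
    its candidate support vectors; here the subset is returned unchanged
    purely to illustrate the data flow.
    """
    return list(subset)


def parallel_smo(data, n_workers=4):
    """Partition the data, process each subset in parallel, combine results."""
    subsets = partition(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partial_results = pool.map(train_subset, subsets)
    # Combine per-subset results (e.g. a union of candidate support vectors).
    return [x for part in partial_results for x in part]
```

In a full implementation, the combination step would typically merge the candidate support vectors from all subsets and refine them with a final SMO pass; that refinement is omitted here.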

Info:

Periodical: Applied Mechanics and Materials
Edited by: Honghua Tan
Pages: 947-951
DOI: 10.4028/www.scientific.net/AMM.29-32.947
Citation: L. Y. Tian and X. G. Hu, "Method of Parallel Sequential Minimal Optimization for Fast Training Support Vector Machine", Applied Mechanics and Materials, Vols. 29-32, pp. 947-951, 2010
Online since: August 2010
