An Empirical Study on Feature Subsampling-Based Ensembles

Abstract:

Feature subsampling techniques help to create diverse classifiers for ensembles. In this article we investigate two feature subsampling-based ensemble methods, the Random Subspace Method (RSM) and the Rotation Forest Method (RFM), to explore their usability with different learning algorithms and their robustness on noisy data. The experiments show that RSM with IBk works better than RFM and AdaBoost, and that RFM with tree and rule classifiers achieves a more prominent improvement than the others. We also find that the Logistic algorithm is not suitable for any of the three ensembles. When classification noise is added to the original data sets, the ensembles outperform single classifiers at lower noise levels but fail to maintain this superiority at higher noise levels.
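To illustrate the idea behind the Random Subspace Method discussed above, the sketch below trains several base learners on random feature subsets and combines them by majority vote. It is a minimal illustration, not the paper's experimental setup: a plain 1-nearest-neighbour learner stands in for WEKA's IBk, and the toy dataset, ensemble size, and subspace size are all assumptions chosen for the example.

```python
# Minimal sketch of the Random Subspace Method (RSM).
# Assumptions: a 1-NN base learner (an analogue of WEKA's IBk),
# an illustrative synthetic dataset, and arbitrary ensemble/subspace sizes.
import random

def nn_predict(train, labels, x, feats):
    """1-NN prediction using only the features listed in `feats`."""
    dists = [sum((t[f] - x[f]) ** 2 for f in feats) for t in train]
    return labels[dists.index(min(dists))]

def rsm_predict(train, labels, x, n_models=11, subspace=2, seed=0):
    """Majority vote over 1-NN models, each built on a random feature subset."""
    rng = random.Random(seed)
    n_feats = len(train[0])
    votes = []
    for _ in range(n_models):
        feats = rng.sample(range(n_feats), subspace)  # draw a random subspace
        votes.append(nn_predict(train, labels, x, feats))
    return max(set(votes), key=votes.count)  # majority class

# Tiny synthetic example: class 0 near the origin, class 1 near (1, 1, 1).
train = [(0.0, 0.1, 0.0), (0.1, 0.0, 0.2), (1.0, 1.1, 0.9), (0.9, 1.0, 1.1)]
labels = [0, 0, 1, 1]
print(rsm_predict(train, labels, (0.05, 0.05, 0.1)))  # -> 0
```

Because each base model sees only a subset of the features, the models disagree on borderline points, which is the diversity that the vote then exploits.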

Pages: 848-852

Online since: December 2012

Copyright: © 2013 Trans Tech Publications Ltd. All Rights Reserved

