The Improvements of Traffic Data NN Rules Based on Hypothesis Interval

Abstract:

In practical applications of the Intelligent Transportation System (ITS), the data collected and stored for nearest neighbor queries are easily contaminated by noise. Because the Nearest Neighbor Rule (NN Rule) is sensitive to noisy samples, the practical applicability of nearest neighbor queries is limited. To address this problem, this paper exploits the insensitivity of the hypothesis interval (hypothesis margin) to noise, improves the NN Rule, and proposes a nearest-neighbor classification model for collected traffic data. When the model predicts a test sample, it considers not only the distance from the test sample to its nearest neighbor, but also the hypothesis margin of the class to which that nearest neighbor belongs.
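The idea in the abstract can be illustrated with a minimal sketch. The Python code below is an illustration under stated assumptions, not the paper's exact model: the function names and the filtering rule are ours. It computes each training sample's hypothesis margin (half the gap between its distance to the nearest sample of a different class and the nearest sample of its own class) and discards non-positive-margin samples, which are likely noise, before applying the 1-NN rule.

```python
import numpy as np

def hypothesis_margin(X, y, i):
    """Hypothesis margin of training sample i: half the difference
    between the distance to its nearest miss (closest sample of a
    different class) and its nearest hit (closest sample of the
    same class). Noisy samples tend to have negative margins."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                      # exclude the sample itself
    same = (y == y[i])
    near_hit = d[same].min()
    near_miss = d[~same].min()
    return 0.5 * (near_miss - near_hit)

def margin_filtered_nn(X, y, x_test):
    """1-NN prediction that ignores training samples whose
    hypothesis margin is non-positive (noise candidates)."""
    margins = np.array([hypothesis_margin(X, y, i) for i in range(len(X))])
    d = np.linalg.norm(X - x_test, axis=1)
    d = np.where(margins > 0, d, np.inf)   # mask out noise candidates
    return y[np.argmin(d)]
```

On a toy set where a mislabeled point sits inside the other class's cluster, the plain NN Rule copies the noisy label, while the margin filter removes that point first and the prediction follows the surrounding clean samples.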

Info:

Pages: 3518-3522

Online since: August 2013

Copyright: © 2013 Trans Tech Publications Ltd. All Rights Reserved
