Search results
Analysis of Attribute Reduction of Rough Set and CNC Machine Fault Diagnosis Based on Particle Swarm
Online since: November 2012
Authors: Zhuang Wu
Finally, the correctness and superiority of this algorithm are demonstrated by the reduction results on the related data sets.
Due to factors such as the large amount of data and multi-sample properties, attribute reduction in rough set theory often fails to find the smallest reduction within a limited time.
Fault features are selected using the particle swarm attribute reduction algorithm for rough sets; the particle encoding scheme is shown in Fig. 2. For evaluating particle fitness, 200 sets each are chosen randomly from the normal data and the three kinds of fault data in the original simulation data as the training set, while 300 sets each are chosen randomly as the test set.
The obtained data are first preprocessed, then made dimensionless and normalized, before the corresponding fault features are extracted and selected.
Table 1. Results of attribute reduction on each data set

Data set       Attributes  Records  Reduction attrs  Min. reduction attrs  Excellent rate (%)  Running time
Vote           17          435      9                8                     90                  6453.751
Wine           14          178      6                5                     80                  1082.538
Soybean_large  36          307      9                9                     100                 4031.657
Zoo            17          101      6                5                     80                  463.136
Lymphography   19          148      6                6                     100                 737.375
Sponge         45          76       8                8                     100                 417.636

The following indicators are used to compare and measure the influence of this algorithm on reduction performance: (1) the number of reduction attributes, (2) the excellent rate, and (3) the running time. Judging by the reduction results, the algorithm acquires reductions with a relatively small number of attributes on all data sets, which is attributed to the larger particle search space afforded by the optimization capability of the PSO algorithm.
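As a rough illustration of the approach described above, a binary PSO searching for a small attribute subset that still discerns the decision classes can be sketched as follows; the toy decision table, the fitness function, and the simplified update rule are assumptions for illustration, not the paper's exact encoding:

```python
import random

# Toy decision table (an assumption for illustration): condition
# attributes a0..a3, last column is the decision class.
TABLE = [
    (0, 1, 1, 0, 'A'),
    (1, 1, 0, 0, 'A'),
    (0, 0, 1, 1, 'B'),
    (1, 0, 0, 1, 'B'),
]
N_ATTRS = 4

def consistent(subset):
    """True if no two objects agree on the subset but differ in decision."""
    seen = {}
    for row in TABLE:
        key = tuple(row[i] for i in subset)
        if key in seen and seen[key] != row[-1]:
            return False
        seen[key] = row[-1]
    return True

def fitness(bits):
    """Smaller is better: subset size, with inconsistency penalized."""
    subset = [i for i, b in enumerate(bits) if b]
    if not subset or not consistent(subset):
        return N_ATTRS + 1
    return len(subset)

def binary_pso(n_particles=10, iters=50, seed=0):
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in range(N_ATTRS)]
             for _ in range(n_particles)]
    best_bits, best_fit = swarm[0][:], fitness(swarm[0])
    for _ in range(iters):
        for p in swarm:
            f = fitness(p)
            if f < best_fit:
                best_bits, best_fit = p[:], f
            # Crude velocity step: pull each bit toward the global best,
            # with occasional random flips to keep exploring.
            for i in range(N_ATTRS):
                if rng.random() < 0.3:
                    p[i] = best_bits[i] if rng.random() < 0.7 else rng.randint(0, 1)
    return [i for i, b in enumerate(best_bits) if b]

reduct = binary_pso()
print("reduct:", reduct)
```

The found subset is reported as a list of attribute indices; on this toy table a single attribute already suffices to separate the classes, which is the kind of small reduction the table above reports.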
Online since: August 2014
Authors: Bo Zhang, Fang Cheng Lv, Zi Jian Wang, Hu Jin
Results show that the extracted features contain all the information of the original data, that their dimension is less than the number of GIS insulation defect categories, and that the method realizes data dimension reduction without information loss, which improves the pattern recognition rate.
Feature Extraction Based on the Class Mean Kernel Principal Component Dimension Reduction and Compression of PD Data.
According to the experiment in Section 2.2, 24 characteristic parameters were extracted from every set of data, and 50 groups of sample data were obtained from every discharge model.
The N kinds of sample data were set as .
Matrix-based Kernel Method for Large-scale Data Set.
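The dimension-reduction step can be sketched with plain kernel PCA; this is a generic RBF-kernel version standing in for the class-mean kernel variant the paper proposes, applied to made-up data of 50 samples with 24 characteristic parameters each:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=None):
    """Plain kernel PCA with an RBF kernel (a generic stand-in for the
    class-mean kernel variant)."""
    n, d = X.shape
    gamma = gamma if gamma is not None else 1.0 / d
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Centre the kernel matrix in feature space
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose (eigh returns eigenvalues in ascending order)
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas  # projections of the training samples

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 24))      # 50 samples x 24 characteristic parameters
Z = kernel_pca(X, n_components=3)  # compressed to 3 dimensions
print(Z.shape)
```

The projection keeps as many components as desired; choosing fewer components than defect categories, as the paper describes, is a matter of setting `n_components`.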
Online since: October 2011
Authors: Hai Zhong Tan
However, researchers have illustrated that Pawlak's rough set model has some limitations when handling practical problems, especially when noisy data are included.
Therefore, the rule properties need to be preserved after attribute reduction.
Attribute reduction in variable precision rough set model In this section, Ziarko’s and Mi’s attribute reduction models will be briefly recalled.
Rule acquisition based on rough set theory should correspond to the original data set.
If the rule properties are changed when reducing attributes (for example, deterministic rules become probabilistic rules, or some deterministic rules are even eliminated), the obtained rule set will not conform to the original data set.
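Ziarko's variable precision model relaxes the classical lower approximation by a majority-inclusion threshold beta; a minimal sketch, with an invented partition and target set, is:

```python
def beta_lower(blocks, target, beta):
    """Ziarko-style variable precision lower approximation: union of the
    equivalence-class blocks whose inclusion degree in the target set is
    at least beta (beta = 1.0 recovers Pawlak's classical model)."""
    lower = set()
    for block in blocks:
        if len(block & target) / len(block) >= beta:
            lower |= block
    return lower

blocks = [{1, 2}, {3, 4, 5}, {6}]   # a partition of the universe
target = {1, 2, 3, 4}               # the decision class to approximate
print(sorted(beta_lower(blocks, target, beta=1.0)))  # classical: [1, 2]
print(sorted(beta_lower(blocks, target, beta=0.6)))  # VPRS admits {3, 4, 5}
```

Lowering beta admits noisier blocks into the lower approximation, which is exactly why deterministic rules can turn probabilistic under reduction in this model.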
Online since: September 2013
Authors: Zlatan Soskic, Jelena Tomić, Nebojša Bogojević, Snežana Ćirić Kostić
Data processing of the measured data is performed by computers, which are essentially digital data processing devices.
Processing uniform data is significantly easier and faster than processing nonuniform data, because equal time intervals between measurements simplify the calculations performed during the data processing procedures.
However, when strong electromagnetic induction is present, long series of consecutive data are corrupted and lost during the data transmission process, and such data transmission losses are called heavy data transmission losses.
The received data in cases of heavy data transmission losses cannot be made uniform in a meaningful way.
It is obvious that data losses cause a reduction of signal power and hence of spectral power, but even basic considerations show that the distribution of power losses depends on the signal spectrum and on the duration and distribution in time of the loss events.
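The effect of heavy transmission losses on spectral power can be illustrated numerically; the test signal, sampling rate, and loss window below are assumptions chosen for the sketch:

```python
import numpy as np

fs, n = 1000, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t)   # assumed 50 Hz test signal

# Simulate a heavy transmission loss: one long run of consecutive samples
x_lost = x.copy()
x_lost[1000:1600] = 0.0          # 600 consecutive samples dropped

def spectral_energy(sig):
    """Total energy computed from the spectrum (Parseval's relation:
    sum |x|^2 == sum |X|^2 / N)."""
    X = np.fft.fft(sig)
    return np.sum(np.abs(X)**2) / len(sig)

e_full = spectral_energy(x)
e_lost = spectral_energy(x_lost)
frac = 1 - e_lost / e_full
print(round(frac, 3))  # fraction of spectral power lost, = 600/4096 here
```

The total power drop matches the fraction of lost samples for this stationary tone; how the loss distributes across frequency bins, however, depends on the spectrum and on where the loss events fall in time, as the text notes.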
Online since: May 2012
Authors: Ya Xin Su, A Long Su, Hao Cheng
Reduction of NO by iron.
Fig. 3 presents the NO reduction efficiency by iron.
According to thermodynamic calculations using basic data from the JANAF tables (1985), metallic iron can be completely oxidized to Fe2O3 at 700-900 °C when the NO concentration is lower than 500 ppm.
In the NO reduction experiments, the final temperature was 1100°C.
The NO reduction efficiencies increased when CO was added.
Online since: August 2014
Authors: Xiao Lin Tian, Ao Ao Xu, Han Liu
The new algorithm has been tested based on the Chang’E Data in the Matlab environment.
Their mission is to collect many different types of data at different times and even from different viewpoints.
Results have been put together and compared with the original data (Figure 4).
Results of the new algorithm for the No. 2 area were then combined and compared with the original data (Figure 6).
[3] Liu Han, Jiang HongKun, Tian XiaoLin, Xu AoAo, "A New Fast Auto-Extraction Algorithm of Lunar Craters Based on the Chang'E Data", DEStech Publications, E113.
Online since: June 2011
Authors: Hong Sheng Xu, Ting Zhong Wang
Formal concept lattices and rough set theory are two kinds of complementary mathematical tools for data analysis and data processing.
In addition to being a technique for classifying and defining concepts from data, FCA may be exploited to discover implications among objects and their properties.
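The two derivation operators of FCA, from which concepts and implications are obtained, can be sketched on a toy formal context (the context itself is invented for illustration):

```python
# Toy formal context (invented): objects -> set of attributes they have.
CONTEXT = {
    'frog': {'lives_in_water', 'lives_on_land'},
    'fish': {'lives_in_water'},
    'dog':  {'lives_on_land', 'has_fur'},
}

def common_attrs(objects):
    """Derivation operator A': attributes shared by all given objects."""
    sets = [CONTEXT[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def common_objects(attrs):
    """Dual derivation B': objects possessing all given attributes."""
    return {o for o, a in CONTEXT.items() if attrs <= a}

# A formal concept is a pair (extent, intent) closed under both operators.
extent = common_objects({'lives_in_water'})
intent = common_attrs(extent)
print(sorted(extent), sorted(intent))
```

The pair (extent, intent) printed here is closed under both derivations, which is the defining property of a formal concept; implications among attributes follow from comparing such closures.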
References
[1] Y.Y. Yao, "A Comparative Study of Formal Concept Analysis and Rough Set Theory in Data Analysis", Rough Sets and Current Trends in Computing.
[3] P. Valtchev, R. Missaoui, R. Godin, "Formal Concept Analysis for Knowledge Discovery and Data Mining: The New Challenges".
Online since: June 2014
Authors: Hong Lian Shen
This paper takes 7 indexes and, using AHP, obtains the weights of the five power plants by integrated evaluation, which makes it possible to evaluate the effect of energy saving and emission reduction.
1 Evaluation indexes of energy-saving and emission reduction
Many factors affect the energy saving and emission reduction of coal-fired power generation.
We want to choose the power plant with the best effect by using the data of the 7 indexes for the 5 power plants.
According to the data of the 7 indexes of the five power plants in Table 2, the same method is used to construct the judgment matrices of the third layer with respect to each criterion of the second layer.
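The judgment-matrix step can be sketched as follows: in AHP, the weights are the normalized principal eigenvector of a pairwise-comparison matrix, checked by Saaty's consistency ratio. The 3x3 matrix below is a hypothetical example, not the paper's Table 2 data:

```python
import numpy as np

# Hypothetical 3x3 judgment matrix for three criteria; entry a_ij is the
# relative importance of criterion i over criterion j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Weights = normalized principal eigenvector of A
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI/RI
n = A.shape[0]
CI = (vals.real[k] - n) / (n - 1)
RI = 0.58                 # Saaty's random index for n = 3
CR = CI / RI
print(np.round(w, 3), "CR =", round(CR, 3))  # acceptable when CR < 0.1
```

Repeating this for each criterion's judgment matrix and combining the layer weights yields the integrated evaluation scores used to rank the five power plants.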
Integrated Evaluation of Energy-saving and Emission Reduction.
Study on Evaluation System of coal-fired power generation energy-saving emission reduction.
Online since: July 2014
Authors: Jia Yi Ruan, Jin Cheng Zhu, Kai Jie Chen, Zhen Wang, Yue Qi Zhong
Figure 3. Data after noise reduction.
Data processing and fit evaluation of clothing: data processing.
Figure 10. Data of torso.
Figure 11. Data of left arm.
Figure 12. Data of right arm.
Online since: February 2011
Authors: Yan Hong Xie
Obviously, {a1} is not a true reduction.
This time, the attribute reduction set E is NULL.
So E={a2,a3} is a reduction and it is a true reduction.
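A minimal sketch of reduction via a discernibility (Boolean) matrix: a subset of attributes is a true reduction iff it intersects every matrix entry. The decision table here is invented for illustration, not the example from the text:

```python
from itertools import combinations

# Invented decision table: rows are (a1, a2, a3, decision)
ROWS = [
    (0, 0, 1, 'y'),
    (0, 1, 0, 'n'),
    (1, 1, 1, 'y'),
    (1, 0, 0, 'n'),
]
ATTRS = ['a1', 'a2', 'a3']

# Discernibility matrix: for each pair of rows with different decisions,
# record the attributes on which they differ.
entries = []
for r, s in combinations(ROWS, 2):
    if r[-1] != s[-1]:
        entries.append({ATTRS[i] for i in range(3) if r[i] != s[i]})

def is_reduction(subset):
    """A subset is a (true) reduction iff it hits every matrix entry."""
    return all(subset & e for e in entries)

# The smallest hitting subsets are the minimal (true) reductions
minimal = []
for size in range(1, len(ATTRS) + 1):
    minimal = [set(c) for c in combinations(ATTRS, size)
               if is_reduction(set(c))]
    if minimal:
        break
print(minimal)
```

Checking candidate subsets against every matrix entry is what distinguishes a true reduction from a subset like the {a1} rejected above.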
"An Attribute and Value Reduction and Rule Extraction Algorithm", Computer Engineering & Science, Vol. 30, pp. 61-63 (2008).
[3] L.J. Huang, Z.M. Zhang, C.Y. Zhou, M.H. Huang, "Method of Data Reduction Based on Boolean Matrix".
“A New Method of Complete Data Reduction Based on the Boolean Matrix”.