Search results
Online since: August 2013
Authors: Qin Zeng Xue, Gang Xue, Guo Ku Liu
The collected vibration signals can be used for data mining and for extracting fault rules based on rough set theory.
Vibration data mining based on rough set theory. Sixty vibration signal samples from the rotor experimental platform were analyzed; the normalized energies served as the condition attributes and the fault types as the decision attribute, forming the fault diagnosis decision table.
We chose 80% of the data as training samples (16 samples per fault type) and kept the remaining 20% as test samples.
The data partition obtained by this method is given in Table 3.
Table 4 shows that attribute reduction produces 37 diagnosis rules.
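As a rough illustration (not the authors' code), the following Python sketch builds such a decision table from normalized band energies and performs the per-fault 80/20 split; the feature dimension, fault labels, and random data are placeholders.

```python
# Minimal sketch: decision table of normalized energies + 80/20 split.
# Feature count (8 bands) and fault names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# 60 samples x 8 frequency-band energies (placeholder for rotor data)
energies = rng.random((60, 8))
energies /= energies.sum(axis=1, keepdims=True)   # normalize per sample

# 3 fault types x 20 samples each (decision attribute)
labels = np.repeat(["unbalance", "misalignment", "rub"], 20)

# 80/20 split: 16 training samples per fault type, 4 for testing
train_idx, test_idx = [], []
for fault in np.unique(labels):
    idx = np.flatnonzero(labels == fault)
    rng.shuffle(idx)
    train_idx.extend(idx[:16])
    test_idx.extend(idx[16:])

X_train, y_train = energies[train_idx], labels[train_idx]
X_test, y_test = energies[test_idx], labels[test_idx]
print(X_train.shape, X_test.shape)   # (48, 8) (12, 8)
```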
Online since: July 2014
Authors: Chi Man Pun, Cong Lin
A novel adaptive image feature reduction approach for object tracking using vectorized texture features is proposed in this paper.
Dimension reduction has the advantage of lowering the computational cost of the classification stage. 3) An adaptive learning rate is proposed to handle drifts caused by long-term occlusion.
The definition is given as follows:
$L(x_i, y_i) = \|f(x_i) - y_i\|^2$ (1)
where $L$ is the least squares loss, which measures how much the training data differ from an assumed distribution, and $y_i$ is the label denoting which class the sample $x_i$ belongs to (Fig. 1 shows how the dimension reduction affects image quality).
A 15% reduction rate is roughly the highest at which the image can be restored without noticeable loss of visual quality.
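The paper's exact reduction operator is not reproduced here; the sketch below uses a PCA-style projection as a stand-in to show how keeping 15% of the dimensions trades size against reconstruction quality. The data shapes are placeholders.

```python
# Minimal sketch (an assumption, not the paper's method): reduce
# vectorized texture features to 15% of their dimension, reconstruct,
# and measure the loss introduced by the reduction.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 400))            # 200 patches, 400-d texture vectors

mean = X.mean(axis=0)
Xc = X - mean
k = int(0.15 * X.shape[1])            # keep 15% of dimensions -> 60
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                     # reduced features (200, 60)
X_rec = Z @ Vt[:k] + mean             # restore from the reduced space

err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```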
Online since: October 2006
Authors: X.G. Hua, Jan Ming Ko, Yi Qing Ni
Reliability-Based Assessment of Bridges Using Long-Term Monitoring Data
From the monitoring data, bridge managers want answers to serviceability and reliability questions: (i) has the load capacity or resistance of the structure changed?
Consistent with reliability analysis, structural damage is first identified from the monitoring data using a probabilistic approach, so that the damage identification results account for the uncertainty and randomness inherent in the measurement data and the structure.
The uncertainties in the measured modal data are assumed to be normally distributed, uncorrelated random variables with known statistical properties.
In this example, the simulated stress measurement data (histories) for each member are obtained by applying the random loads V to a finite element model of the structure.
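A minimal sketch of this simulation idea, assuming a simple linear influence-coefficient matrix in place of the authors' finite element model; all sizes and statistics are placeholders.

```python
# Minimal sketch: simulated member stress histories from random loads V,
# stress = C @ V, with Gaussian, uncorrelated load samples.
import numpy as np

rng = np.random.default_rng(2)
n_members, n_loads, n_steps = 5, 3, 1000

# Influence coefficients mapping loads to member stresses (placeholder)
C = rng.normal(size=(n_members, n_loads))

# Random loads V: normally distributed with known statistics
mu_V, sigma_V = 100.0, 15.0
V = rng.normal(mu_V, sigma_V, size=(n_loads, n_steps))

stress_history = C @ V                 # (n_members, n_steps)
print(stress_history.mean(axis=1))     # mean stress per member
```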
Online since: June 2011
Authors: Fan Yang, Cai Li Zhang
Its calculation procedure is as follows:
Step 1: Take a record in the standard fault set as the reference sequence and the pattern to be diagnosed as the comparative sequence, then make the data dimensionless.
Step 2: Calculate the difference sequence between the comparative sequence (the unknown pattern) and the reference sequence (the historical diagnosis data sequence).
Each record in the reduced diagnosis knowledge base is taken as a reference sequence, and the observed data, transformed into the reduced space, serve as the comparative sequence.
Part of the data, sampled repeatedly and modeled by AR time series in the normal and wear states, is shown in Table 1.
Of the 20 data sets with known states, 10 are used to construct the learning sample set and the other 10 as test data. After normalizing the original data, grey relational analysis (method 1) and the rough-set-based weighted grey diagnosis method (method 2) are each applied to recognize the states; a minimal sketch of method 1 follows.
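The sketch below computes grey relational grades with Deng's coefficient, assuming a resolution coefficient rho = 0.5 and the two-level (global) minimum and maximum differences; the feature values are placeholders, not Table 1's data.

```python
# Minimal sketch of grey relational analysis (method 1): the unknown
# pattern is assigned the state whose reference sequence yields the
# highest grey relational grade.
import numpy as np

references = {"normal": np.array([0.9, 0.8, 0.7]),
              "wear":   np.array([0.3, 0.4, 0.5])}
observed = np.array([0.85, 0.75, 0.72])   # normalized unknown pattern

rho = 0.5
diffs = {s: np.abs(r - observed) for s, r in references.items()}
d_min = min(d.min() for d in diffs.values())   # two-level minimum
d_max = max(d.max() for d in diffs.values())   # two-level maximum

# Deng's coefficient, averaged into a grade per candidate state
grades = {s: ((d_min + rho * d_max) / (d + rho * d_max)).mean()
          for s, d in diffs.items()}
print(max(grades, key=grades.get))             # -> "normal"
```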
Online since: November 2012
Authors: Shu Cong Liu, Er Gen Gao, Chun Sheng Guo
Noise mixed into the recorded seismic signals often affects the data analysis results.
In addition, the high sampling frequency of seismic recording produces massive monitoring data; applying the split-radix FFT algorithm for spectral analysis of these data reduces computation and makes the analysis faster than the radix-2 and radix-4 FFTs, thereby improving the timeliness of the seismic monitoring system.
Principle of the wavelet packet decomposition technique. In field acquisition of seismic data it is inevitable that some regular or irregular interference noise is recorded, which can have a significant impact on geological data interpretation.
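The following sketch illustrates wavelet packet denoising of a single channel with PyWavelets; the wavelet ('db4'), decomposition level, and universal soft threshold are assumptions, not the paper's settings.

```python
# Minimal sketch: wavelet packet decomposition, soft-threshold the
# leaf coefficients, and reconstruct the denoised channel.
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 12 * t) + 0.4 * rng.normal(size=t.size)

# Noise level from the finest detail coefficients, universal threshold
_, cD = pywt.dwt(signal, "db4")
sigma = np.median(np.abs(cD)) / 0.6745
thr = sigma * np.sqrt(2 * np.log(signal.size))

wp = pywt.WaveletPacket(data=signal, wavelet="db4", mode="symmetric",
                        maxlevel=4)
for node in wp.get_level(4):
    node.data = pywt.threshold(node.data, thr, mode="soft")

denoised = wp.reconstruct(update=False)[:signal.size]
```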
Figs. 4 and 5 show the data of the two channels and the corresponding denoised data.
Fig. 4 Data processing of channel one. Fig. 5 Data processing of channel two.
Acknowledgment: This work was supported by team-funded projects of the Central Universities basic research expenses and special funds for innovative projects (ZY20120101).
Online since: February 2013
Authors: Xiao Liu Shen, Li Ma, Zhen Li
How to manage energy consumption appropriately and achieve energy saving and emission reduction is becoming a crucial issue.
Research on the energy consumption, conservation, and emission reduction system of Beijing. 1.3 Factor analysis of the system. First, a comparative analysis of raw coal consumption intensity was made, as shown in Figure 3 (panel A: structure share; panel B: efficiency share).
On the basis of the analysis theory, the raw coal consumption intensity decomposes as shown in Table 1.
Table 1 Energy intensity of Beijing

Structure share
Year    Total      Primary Industry   Secondary Industry   Tertiary Industry
2006    -0.9336    0.0434             -0.6256              -0.3514
2007    -0.2675    -0.0340            -0.4108              0.1773
2008    0.1738     0.0139             0.2541               -0.0942
2009    0.1582     0.0140             0.2277               -0.0835
2010    -0.1491    -0.0198            -0.2097              0.0804

Efficiency share
Year    Total      Primary Industry   Secondary Industry   Tertiary Industry
2006    1.9336     -0.0860            0.0222               1.9974
2007    1.2675     -0.0050            0.3278               0.9447
2008    0.8262     0.0075             0.4608               0.3579
2009    0.8418     0.0123             0.4653               0.3642
2010    1.1491     -0.0212            -0.6413              1.8116

As the data in Table 1 show, the structure share reflects an unreasonable industrial structure.
The data are normalized using the min-max method.
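For reference, min-max normalization rescales each indicator to [0, 1]; a minimal sketch with placeholder values (not the study's data):

```python
# Minimal sketch of per-column min-max normalization to [0, 1].
import numpy as np

X = np.array([[12.0, 0.31],
              [18.5, 0.27],
              [15.2, 0.40]])
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_norm)
```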
Online since: March 2011
Authors: Xu Yang Wang
Formal concept analysis and rough set theory provide two different methods for data analysis and knowledge processing.
For an initial data set described by a formal context, the absolutely necessary attribute set (the core) is sought by applying rough set theory.
Introduction. Rough set theory, put forward by Pawlak, is a new mathematical method in the domain of intelligent data analysis and data mining.
As a powerful tool for data analysis and knowledge processing, the concept lattice has already been applied widely in knowledge engineering, data mining, information retrieval, software engineering, etc. [5,6].
Data sets are described exactly by formal contexts.
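A minimal sketch of one way to find the core on a toy decision table; this illustrates the idea (an attribute is absolutely necessary if dropping it makes two objects indiscernible yet differently classified), not the paper's algorithm.

```python
# Minimal sketch: the core = attributes whose removal breaks the
# decision consistency of the indiscernibility classes.
table = [  # rows = objects; last entry = decision attribute
    (0, 0, 0, "no"),
    (0, 1, 0, "yes"),
    (1, 0, 1, "no"),
    (1, 1, 1, "yes"),
]

def consistent(attrs):
    """True if objects identical on `attrs` always share a decision."""
    groups = {}
    for row in table:
        key = tuple(row[a] for a in attrs)
        groups.setdefault(key, set()).add(row[-1])
    return all(len(dec) == 1 for dec in groups.values())

all_attrs = range(len(table[0]) - 1)
core = [a for a in all_attrs
        if not consistent([b for b in all_attrs if b != a])]
print("core attributes:", core)   # -> [1]
```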
Online since: December 2013
Authors: Chen Ming Li, Zhe Chen, Jie Shen, Xin Wang, Hui Bin Wang
In this paper, principal component analysis is applied to underwater image data to detect image objects.
For these problems, the most promising solution is to perform dimensionality reduction on the image data.
Under this assumption, PCA is limited to expressing the data as a linear combination of its basis vectors.
Let $X$ be the original underwater image data set, in which each column corresponds to a single sample.
Experimental results. 4.1 Feature extraction by dimensionality reduction. (Figure 3: Dimensionality reduction.) To visually represent and compare the inter-class difference and the intra-class difference of the features extracted by the linear dimensionality reduction methods, we embed the image data from the high-dimensional space into a two-dimensional space.
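A minimal sketch of this kind of 2-D embedding via PCA on toy data (the underwater images themselves are not reproduced; the class means and dimensions are placeholders):

```python
# Minimal sketch: project high-dimensional image vectors onto the top
# two principal components to visualize inter- vs. intra-class spread.
import numpy as np

rng = np.random.default_rng(4)
# Two toy classes of 50 image vectors each, 1024-d
class_a = rng.normal(0.0, 1.0, size=(50, 1024))
class_b = rng.normal(0.5, 1.0, size=(50, 1024))
X = np.vstack([class_a, class_b])

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
embedding = Xc @ Vt[:2].T          # (100, 2): 2-D coordinates
print(embedding.shape)
```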
Online since: July 2014
Authors: Yu E Lin, Xing Zhu Liang
In recent years, a variety of manifold-based dimensionality reduction techniques have been proposed, which attempt to project the original data into a lower-dimensional feature space while preserving the local neighborhood structure.
However, when the data are distributed in a nonlinear way, LDA may fail to discover the essential data structure.
LPP can preserve the intrinsic geometry of the data and yields an explicit linear mapping applicable to both training and testing samples.
To construct the objective function using labeled and unlabeled data, we give the total scatter matrix $S_t$ and the local scatter matrix $S_l$, respectively.
As a result, OSMFA is more effective and efficient in face recognition. Experiments on face data show that the proposed algorithm has more discriminative power than SDA, MFA, and ODLPP.
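Using conventional notation (the symbols above were lost in extraction), a minimal sketch of computing a total scatter matrix and an LPP-style local scatter matrix from a k-nearest-neighbor graph; the binary weighting is an assumption, not necessarily OSMFA's exact construction.

```python
# Minimal sketch: S_t over all samples (labeled + unlabeled) and
# S_l = X^T L X from a k-NN affinity graph Laplacian L = D - W.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 30))          # 100 samples, 30-d features
k = 5

# Total scatter over labeled and unlabeled samples together
Xc = X - X.mean(axis=0)
S_t = Xc.T @ Xc

# Binary k-NN affinity W, symmetrized, then graph Laplacian
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.zeros_like(d2)
for i in range(len(X)):
    nbrs = np.argsort(d2[i])[1:k + 1]   # skip self
    W[i, nbrs] = 1.0
W = np.maximum(W, W.T)
L = np.diag(W.sum(axis=1)) - W

S_l = X.T @ L @ X                       # local scatter matrix
print(S_t.shape, S_l.shape)             # (30, 30) (30, 30)
```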
Online since: May 2013
Authors: Jing Su, Hai Feng Su, Yun Fei Long, Yan Xuan Wen, Xian Jia Ye
Generally, pyrolusite can be treated by roasting reduction followed by leaching, or by direct reductive leaching in an acid medium [1].
On the other hand, roasting reduction in N2 is similar to that in conventional roasters.
After the roasting reduction was complete, the roasted product was cooled to room temperature under inert gas.
Compared with data in the literature, the roasting temperature with bagasse is much lower than that with coal [1].
These gases provide a reducing atmosphere that promotes pyrolusite reduction.
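For orientation, representative balanced reactions by which pyrolysis gases such as CO and H2 reduce MnO2 stepwise to MnO are sketched below; the paper's measured mechanism and gas composition may differ.

```latex
% Representative stepwise reduction of pyrolusite by CO, plus the
% analogous H2 route (a sketch, not the paper's measured pathway).
\begin{align*}
2\,\mathrm{MnO_2} + \mathrm{CO} &\rightarrow \mathrm{Mn_2O_3} + \mathrm{CO_2}\\
3\,\mathrm{Mn_2O_3} + \mathrm{CO} &\rightarrow 2\,\mathrm{Mn_3O_4} + \mathrm{CO_2}\\
\mathrm{Mn_3O_4} + \mathrm{CO} &\rightarrow 3\,\mathrm{MnO} + \mathrm{CO_2}\\
\mathrm{MnO_2} + \mathrm{H_2} &\rightarrow \mathrm{MnO} + \mathrm{H_2O}
\end{align*}
```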