Search results
Online since: July 2014
Authors: Xiao Yan Li, Xiao Bao Chen
Wavelet denoising is a noise reduction method that transforms noisy data into the wavelet domain and applies thresholding to the wavelet coefficients.
This means decomposing the noisy data down to decomposition level j and estimating the noise scale of the coefficients at each level: scale = median(|d_j|)/0.6745 (Eq. 2). Once the wavelet coefficients are normalized by the noise scale at level j, the threshold defined by Eq. (3) is used to select the WT coefficients.
We have presented a new, alternative algorithm for noise reduction in neural activity signals using the wavelet transform and a level-dependent threshold estimator.
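The level-dependent thresholding described above can be sketched as follows. This is a generic illustration using the Haar wavelet, the standard MAD noise estimate, and the universal threshold; it is not necessarily the authors' exact transform or threshold estimator, and all function names are hypothetical:

```python
import math
import statistics

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = 2 ** 0.5
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar transform level."""
    s = 2 ** 0.5
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

def denoise(x, levels=2):
    """Level-dependent soft thresholding: at each level j the noise
    scale is estimated as median(|d_j|)/0.6745, and the universal
    threshold t_j = scale * sqrt(2 * ln(n_j)) is applied."""
    approx, details = x, []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    out = approx
    for d in reversed(details):
        scale = statistics.median(abs(v) for v in d) / 0.6745
        t = scale * math.sqrt(2 * math.log(max(len(d), 2)))
        d = [math.copysign(max(abs(v) - t, 0.0), v) for v in d]
        out = haar_idwt(out, d)
    return out
```

Input length is assumed to be divisible by 2^levels; a production version would pad the signal and choose the wavelet to match the data.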
Online since: July 2014
Authors: Zhi Chuan Liu, Li Wei Tang, Li Jun Cao
It is necessary to perform some noise-reduction pre-processing on the signal.
The Kalman filter, a recursive filtering method proposed by R. E. Kalman, is widely used for dynamic data processing [5].
At each time step, the Kalman recursion produces a new state estimate, which, combined with incoming data, yields the optimal estimate of the next state.
In this way, noise reduction is achieved.
The process noise covariance Q is determined from a large body of test data during the filtering process.
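The recursion described above can be illustrated with a minimal scalar Kalman filter, assuming a random-walk state model; the values of Q and R below are hypothetical placeholders for the test-determined values:

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state model.
    q: process noise covariance Q, r: measurement noise covariance R."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: state unchanged, uncertainty grows by Q.
        p = p + q
        # Update: blend the prediction with the measurement via the gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Feeding noisy measurements of a constant level through the filter drives the estimate toward the true value while suppressing measurement noise.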
Online since: February 2013
Authors: Shu Min, Chun Yan Yang
Figure 2. Access graph and justified graph (source: Bafna 2003)
As shown in Figure 2, reducing building plans to abstract maps known as “access graphs” depicts their topological properties.
This reduction reveals a set of patterns that partially manifest the social presuppositions of a culture in a way that plans themselves do not immediately reveal.
These qualitative data can be obtained in many ways but are hard to analyze against the quantitative “space configuration” values.
Thus, the qualitative data on spatial activities (who, when, where, and how) and the quantitative spatial-configuration values may have comparable counterparts.
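One common quantitative “space configuration” value, mean depth, can be computed directly from an access graph. The sketch below uses a hypothetical four-room plan and standard breadth-first search; it is an illustration, not the authors' method:

```python
from collections import deque

def mean_depth(graph, root):
    """Mean depth of all other spaces from `root` in an access graph,
    a basic space-syntax configuration value (BFS distance)."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    others = [d for n, d in dist.items() if n != root]
    return sum(others) / len(others)

# Hypothetical plan: entrance -> hall -> {kitchen, living}.
plan = {
    "entrance": ["hall"],
    "hall": ["entrance", "kitchen", "living"],
    "kitchen": ["hall"],
    "living": ["hall"],
}
```

A space with low mean depth is topologically “shallow” (well integrated); comparing such values with observed activity data is one way to relate the qualitative and quantitative sides.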
Online since: August 2011
Authors: Bin Yang, Tao Xu
Although each hydride has different characteristics, the electrolysis process can be divided into two parts: electrolytic oxidation (Eq. 1) and electrolytic reduction (Eq. 2).
It was found that, after La was injected into the catalyst, the cathode not only showed better oxygen reduction activity, but the La also reduced the solubility of the catalytic phase [3]. It was also shown that a multi-layer catalytic membrane electrode manufactured by alternate deposition had better catalytic activity and stability than a single-layer membrane electrode [4].
Using the change-of-base formula for logarithms, Eq. (4) can be rewritten as Eq. (5). Using Eq. (5), we fitted and plotted the cathode hydrogen evolution polarization data for each of the 10 samples, with ΔE varied from -0.04 V to -0.005 V, and calculated i0 for each of the 10 samples; the results are shown in Tab. 2.
The data in Tab. 1 were processed and are shown in Fig. 2.
A comprehensive comparison of these data shows that sample A-1 had the maximum i0, a lower Ed, and the best hydrogen evolution properties.
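Extracting i0 by fitting polarization data, as described above, can be sketched with an ordinary least-squares fit of the Tafel line ΔE = a + b·log10(i), where i0 is the current at which ΔE = 0. The data in the test are synthetic, not the values from Tab. 1 or Tab. 2:

```python
import math

def fit_i0(delta_e, currents):
    """Least-squares fit of dE = a + b*log10(i); the exchange
    current density i0 is the current where dE = 0."""
    xs = [math.log10(i) for i in currents]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(delta_e) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, delta_e)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    # dE = 0  =>  log10(i0) = -a/b.
    return 10 ** (-a / b)
```

For data generated exactly from a Tafel line, the fit recovers the underlying i0.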
Online since: December 2012
Authors: Mu Hee Song
Selecting classification features amounts to reducing the feature space.
Larger training data may yield better classification performance, yet the larger the data, the larger the number of features, requiring more time for learning and classification.
Each training set is assigned a DF threshold from 1 to 4, and the terms whose DF exceeds that threshold are organized into the training data and studied.
To organize the SVM training data, TF, TF*IDF, and the combination of both feature types are used.
For Naïve Bayes, TF is used as the feature type to organize the training data.
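DF-based selection of TF features, as described above, might be sketched as follows; the tokenized documents and thresholds here are hypothetical:

```python
from collections import Counter

def df_filter(docs, min_df):
    """Keep terms whose document frequency (DF) exceeds min_df."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    return {t for t, c in df.items() if c > min_df}

def tf_vector(doc, vocab):
    """Term-frequency feature vector over the retained vocabulary."""
    counts = Counter(doc)
    return {t: counts[t] for t in sorted(vocab)}
```

Raising the DF threshold shrinks the vocabulary, and hence the feature space, at the cost of discarding rarer terms.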
Online since: October 2014
Authors: Si Chen, Hui Chang Zhao, Yong Chen, Shu Ning Zhang
Because the Doppler parameters vary with slant range, the resolution obtained with the traditional pulse-compression imaging algorithm is low when processing raw echo data from missile-borne synthetic aperture radar (SAR).
Finally, the performances of the algorithms are assessed using simulated and real Radarsat-1 data sets.
After LRCMC, the signal is given by Eq. (10), where the terms are defined in Eq. (11).
Azimuth FrFT
After the range FrFT, the same approach is used to focus the data along the range gates, obtaining the azimuth compression.
Results show that the proposed algorithm offers better focusing capabilities and greater side-lobe reduction ratios.
The proposed approach has been tested with simulated data.
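For reference, the traditional pulse compression mentioned above amounts to matched filtering (cross-correlation of the echo with the transmitted pulse). The sketch below is this baseline only, not the proposed FrFT algorithm; the chirp and delay are hypothetical:

```python
import math

def pulse_compress(echo, reference):
    """Matched-filter pulse compression: correlate the echo with
    the transmitted reference pulse at every lag."""
    n, m = len(echo), len(reference)
    return [sum(echo[lag + k] * reference[k] for k in range(m))
            for lag in range(n - m + 1)]

# Hypothetical chirp reference buried at delay 5 in an otherwise empty echo.
ref = [math.cos(0.3 * k * k) for k in range(16)]
echo = [0.0] * 5 + ref + [0.0] * 5
```

The compressed output peaks at the lag where the echo matches the reference, i.e. at the target delay.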
Online since: November 2013
Authors: Yuan Chao, Zhi Sheng Zhang, Min Dai
If the product type or the amount of test data changes, both the hardware and the control software have to be adjusted, which greatly reduces the productivity and generality of the test handler.
If a large amount of test data or complicated test procedures slows down the E-test, thereby affecting the movement of other workstations and reducing productivity, this method of E-test cannot be adopted.
3) Traceability.
A test thread is created to run the E-test: it sends test commands, reads test data from the testing instruments, stores the data in the database, and sorts qualified products.
Problems of data synchronization can be solved in this way.
A CTestDataBase class is created to store test data to the database immediately.
Online since: March 2012
Authors: Yu Qing Zhou
Data acquisition and pre-processing
2.1 Data acquisition
Data acquisition refers to converting the object surface into discrete geometric point-coordinate data using special measuring equipment and collection methods; the result is known as a point cloud [2].
Measurement data can be joined together in real time [4], which makes the method very suitable for auto-part data acquisition.
2.2 Data pre-processing
Due to the machining precision of the surveying instrument and the influence of human and environmental factors during measurement, noise points are unavoidable.
Pre-processing of the point cloud includes noise-point removal, data compaction, and data extraction [5].
Data compaction reduces the total amount of data and speeds up pre-processing, but too much data reduction will affect the precision of model construction, so the point cloud is reduced only as far as precision can be guaranteed. The separated parts should be kept after point-cloud reduction: this is useful for making surfaces during surface design and also provides precise data for subsequent surface modeling.
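One simple form of data compaction under a precision constraint is grid-based downsampling, keeping one centroid per grid cell; the cell size bounds the geometric error. This is an illustrative sketch, not necessarily the reduction method used here:

```python
def grid_reduce(points, cell):
    """Reduce a point cloud by keeping one representative point
    (the centroid) per cubic grid cell of side `cell`."""
    cells = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)
        cells.setdefault(key, []).append(p)
    reduced = []
    for pts in cells.values():
        n = len(pts)
        reduced.append(tuple(sum(c) / n for c in zip(*pts)))
    return reduced
```

A smaller cell keeps more points (higher precision, less compaction); a larger cell compacts more aggressively.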
Online since: August 2010
Authors: Zhi Xiong Li, Xiang Shun Chen, Hu Biao Zeng
Fortunately, PCA is an effective data dimensionality reduction technique: by applying an orthogonal transformation, it makes the selected features uncorrelated and lets each feature play its full role [9].
PCA is a powerful tool for data dimensionality reduction, widely used in a variety of applications.
As a result, the FKNN is able to deal with problems where pattern data overlap.
The vibration data were acquired under heavy load at a motor speed of 660 rpm.
Then, the PCA was used to reduce the data dimensionality.
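For two-dimensional data, the orthogonal transformation underlying PCA can be made explicit: the first principal axis is the dominant eigenvector of the covariance matrix. A minimal sketch restricted to 2-D (hypothetical data, not the vibration features used here):

```python
import math

def pca_2d_first_component(points):
    """First principal axis of 2-D data: unit eigenvector of the
    2x2 covariance matrix belonging to the largest eigenvalue."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]].
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    # Corresponding eigenvector, normalized to unit length.
    if abs(sxy) > 1e-12:
        v = (lam - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)
```

Projecting each point onto this axis reduces the data to one uncorrelated dimension; higher-dimensional PCA works the same way with a full eigendecomposition.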
Online since: January 2017
Authors: Amine Bakhti, Farouk Benallel Boukhoulda
In the first part of this study, a data acquisition system is used to carry out a signal-processing analysis of the contact force and the acceleration of the projectile, in order to deduce the oscillations of the responses.
The curves show a reduction of the resonance frequencies as the impact energy level increases.
This frequency drop, seen in Table 4, is linked to the reduction of the stiffness of the structure.
This is explained by a reduction of the stiffness of the composite plate caused by damage in the structure.
This drop in frequency is related to the reduction of the stiffness of the impacted structure under hygrothermal aging, which becomes more flexible and less rigid.
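The link between a frequency drop and a stiffness reduction follows from f = √(k/m)/(2π): with the mass unchanged, the stiffness ratio equals the squared frequency ratio. A worked example with hypothetical frequencies (not the Table 4 values):

```python
def stiffness_ratio(f_damaged, f_intact):
    """For f = sqrt(k/m)/(2*pi) with unchanged mass, the stiffness
    ratio k_damaged/k_intact equals the squared frequency ratio."""
    return (f_damaged / f_intact) ** 2

# Hypothetical example: a 10 % frequency drop (100 Hz -> 90 Hz)
# implies a stiffness loss of about 19 %.
loss = 1.0 - stiffness_ratio(90.0, 100.0)
```

Because stiffness scales with the square of frequency, even a modest frequency drop corresponds to a noticeably larger stiffness loss.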