Search:

  • Data Reduction

Search results

Online since: February 2011
Authors: Xue Feng She, Qing Guo Xue, Xiu Wei An, Yin Gui Ding, Wan Hua Yu, Jing Song Wang
Based on experimental reduction data and other researchers' work [3,4,5,6], and considering reduction-process factors such as the carbon content, the reductive removal of ZnO, the changing pellet size, and the partial pressure of the reducing gas, together with coupled heat transfer, mass transfer, and chemical reactions, a mathematical model of the direct reduction of carbon-bearing pellets containing zinc has been established.
Figures 1 and 2 compare the experimental data with the calculated results.
The simulated results fit the experimental data well.
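The snippet does not reproduce the model equations, so the following is only a minimal illustrative sketch, not the authors' coupled heat/mass-transfer model: it integrates a single lumped first-order conversion law dX/dt = k(1 − X) for the reduction degree X, with a purely hypothetical rate constant k.

```python
import numpy as np

def reduction_degree(k, t_end=600.0, dt=1.0):
    """Euler integration of a first-order conversion law dX/dt = k*(1 - X).

    A minimal stand-in for a reduction-degree curve; k (1/s) is a
    hypothetical lumped rate constant, not a fitted parameter.
    """
    n = int(t_end / dt)
    X = np.zeros(n + 1)
    for i in range(n):
        X[i + 1] = X[i] + dt * k * (1.0 - X[i])
    return X

X = reduction_degree(k=0.01)
# X rises monotonically from 0 toward complete reduction (X -> 1)
```

A real model of this kind would additionally solve the coupled energy and species-transport equations inside the pellet.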
Online since: December 2014
Authors: Xing Qi He, Qi Jun Xian, Ping Yan
Research of Loss Reduction Efficiency Based on DEA. State Grid Sichuan Electric Power Company, Chengdu, China. Key words: line loss management; energy conservation; data envelopment analysis. Abstract: An evaluation system of loss-reduction efficiency was established for power supply companies using data envelopment analysis methods, and an empirical study of the loss-reduction efficiency of power supply companies in Sichuan was conducted. The evaluation results indicate that institutional reform, grid construction, and informatization have promoted the loss-reduction efficiency of the power supply companies in Sichuan.
Moreover, a low line-loss rate does not by itself indicate a better management level or higher loss-reduction efficiency.
Therefore, an objective method for evaluating the loss-reduction efficiency of power supply enterprises is urgently needed. Accordingly, based on the 2012–2014 statistical data of 21 power supply companies of Sichuan Power Grid Company, this article establishes an evaluation system of loss-reduction efficiency for power supply companies using the variable-returns DEA model.
The results indicated that the loss-reduction effort of some power supply companies still needs improvement and has not yet produced a scale effect.
Conclusions: An evaluation system of loss-reduction efficiency was established for power supply companies using data envelopment analysis methods, and the panel data of 21 power supply companies in Sichuan were used as an example for quantitative evaluation and analysis. The results can provide a reference for the development of investment strategies and for the evaluation of loss-reduction efficiency.
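The variable-returns DEA model itself is not reproduced in the snippet. A minimal sketch of an input-oriented, variable-returns-to-scale (BCC) envelopment program, with hypothetical single-input/single-output company data, might look like this (using `scipy.optimize.linprog`):

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, o):
    """Input-oriented BCC (variable returns to scale) efficiency of DMU o.

    X: (m_inputs, n_dmus), Y: (s_outputs, n_dmus).
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],
                             Y @ lam >= Y[:, o],  sum(lam) == 1,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[:, o]                         # -theta * x_o ...
    A_ub[:m, 1:] = X                               # ... + sum(lam_j x_j) <= 0
    A_ub[m:, 1:] = -Y                              # -sum(lam_j y_j) <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # convexity: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Hypothetical data: 3 companies, input = line losses, output = energy delivered
X = np.array([[2.0, 4.0, 3.0]])
Y = np.array([[1.0, 1.0, 1.5]])
scores = [bcc_efficiency(X, Y, o) for o in range(3)]
```

An efficiency score of 1 marks a company on the VRS frontier; scores below 1 quantify how far its inputs could be proportionally contracted.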
Online since: June 2019
Authors: Evgenii K. Papynov, Arseniy Portnyagin, Alexey Golikov, Valentin Avramenko
Using a spline, one can define the temperature dependence of the generalized rate constants from experimental data without any model assumptions.
Published kinetic parameters of iron oxide reduction span a rather broad range of values (18–246 kJ mol⁻¹ [9]), caused not only by differences in the composition and morphology of the investigated materials but also by inaccurate kinetic analysis.
Multiple-heating-rate data were used to improve the reliability of the kinetic analysis, following the ICTAC Kinetics Project recommendations [10].
Avramenko, An alternative approach to kinetic analysis of temperature-programmed reaction data, RSC Adv. 8 (2018) 3286–3295. doi:10.1039/c7ra09848k
Galwey, The significance of “compensation effects” appearing in data published in “computational aspects of kinetic analysis”: ICTAC project, 2000, Thermochim.
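As a sketch of the spline idea above (the paper's actual data and spline construction are not shown in the snippet; the temperature/rate pairs below are hypothetical), a cubic spline can define k(T) directly from measured points, with no Arrhenius form assumed:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical (T, k) pairs measured at several temperatures; the spline
# alone defines k(T) between the data points -- no model assumptions.
T = np.array([600.0, 650.0, 700.0, 750.0, 800.0])        # K
k = np.array([1.2e-4, 5.8e-4, 2.1e-3, 6.4e-3, 1.6e-2])   # 1/s

k_of_T = CubicSpline(T, k)

# The spline reproduces the measured constants exactly at the knots and
# interpolates smoothly in between, e.g. k_of_T(675.0).
```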
Online since: August 2013
Authors: Chun Yang, Xiao Fang Liu
Training Data Reduction and Classification Based on Greedy Kernel Principal Component Analysis and Fuzzy C-means Algorithm
Given the data set, the data standardization method in (8) is used to obtain normalized data and to balance the impact of the data on the classification results.
(Flowchart: feature extraction for the test data, followed by a yes/no check of the validity of the classification.)
Results of training-data reduction by the GKPCA method. The GKPCA method uses the radial basis function (RBF) kernel in (9).
Extracted subsets from training data by GKPCA:

Data Set           | Kernel Parameter | Samples in Subset | Training Data Reduction (%)
Iris               | 3                | 22                | 75.6
Landsat Satellite  | 8                | 783               | 82.3
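Equations (8) and (9) are not reproduced in the snippet, so the sketch below substitutes ordinary z-score standardization and the standard RBF kernel; the `gamma` parameter is an assumption and may differ from the paper's kernel parameter:

```python
import numpy as np

def standardize(X):
    """Z-score each column so no single attribute dominates the kernel."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

def rbf_kernel(X, gamma):
    """RBF (Gaussian) kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = standardize(rng.normal(size=(20, 4)))  # 20 hypothetical 4-attribute samples
K = rbf_kernel(X, gamma=0.5)               # symmetric, ones on the diagonal
```

Greedy KPCA then selects a small subset of samples whose kernel columns span K well, which is what yields the reduction percentages in the table.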
Online since: October 2013
Authors: Cheng Zhe Xu, Wei Hong Zhu
Dimensionality Reduction.
The hyperspectral data used in this study have a high dimensionality.
The low-frequency component generally contains most of the information in the data; thus, the corresponding approximation coefficients can describe the original data.
Figure 2 shows the result of dimensionality reduction using the DWT: the shape of the approximation-coefficient data is very similar to that of the original data, which indicates that the DWT can be used for dimensionality reduction.
(A) 512-dimensional original hyperspectral data; (B) 16-dimensional approximation-coefficient data after performing a 5-level DWT.
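The snippet does not say which wavelet was used, but the 512 → 16 reduction in the caption corresponds to keeping only the approximation coefficients of a 5-level DWT. A minimal sketch with the Haar wavelet (chosen here only for simplicity):

```python
import numpy as np

def haar_approx(signal, levels):
    """Keep only the approximation (low-frequency) coefficients of a
    multi-level Haar DWT: each level halves the length."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a

x = np.cos(np.linspace(0.0, 4.0 * np.pi, 512))  # stand-in for one 512-band spectrum
approx = haar_approx(x, levels=5)               # 512 -> 16 coefficients
```

For a smooth spectrum, the 16 approximation coefficients trace the same shape as the original 512 bands, which is exactly the similarity the figure illustrates.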
Online since: November 2011
Authors: Jian Yang Lin, Yong Yu, Ya Jie Xu, Zhou Mi Kan
Obtain each clustering-centre datum for Acanthopanax senticosus Harms using linear-reduction PSO and conjugate-gradient optimization.
Model the Acanthopanax senticosus Harms criterion data as a Gamma distribution, compute similarity against the normal distribution of the sample data, and determine the final similarity by the cosine of the angle between the vectors.
Introduction to reduction: in rough set theory [1], knowledge can be understood as an ability to classify, that is, to partition the data, and it can be expressed by sets; for example, if a given data set U is divided by the equivalence relation P, that partition is called knowledge.
The final aim of knowledge reduction is not only to reduce the condition attributes in the repository and compress the data, but also to lay a good foundation for knowledge discovery.
The cluster centre for recommended products is 0.0062±0.00003, 0.0754±0.00038; the cluster centre for non-recommended products is 0.0127±0.00006, 1.6375±0.00819; and the cluster centre for counterfeits is 0.0221±0.00011, 1.8280±0.00915.
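The final similarity "by the cosine of the angle" mentioned above is the standard cosine similarity; a minimal sketch, pairing the recommended-products cluster centre from the text with a hypothetical sample vector:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors: 1.0 means the same
    direction, 0.0 means orthogonal."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Recommended-products cluster centre from the text; the sample is hypothetical
centre = np.array([0.0062, 0.0754])
sample = np.array([0.0060, 0.0760])
sim = cosine_similarity(sample, centre)  # close to 1 -> assign to this cluster
```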
Online since: July 2014
Authors: Yun Peng, Hong Xin Wan
The evaluation algorithm is based on the attributes of data objects.
Customer clustering involves a great deal of uncertain data, so traditional classification methods become very complex when applied to incomplete data.
Example analysis: the data objects are shown in Table 1.
As verified by analysis of actual data, attribute reduction can reduce the size of the customer data and the data noise, and improve the clustering accuracy for web customers.
Data Analysis Approaches Of Soft Sets Under Incomplete Information.
Online since: March 2011
Authors: Hao Jiang, Jie Xu, Zhuo Chen
Classical linear dimensionality-reduction methods are ineffective for high-dimensional nonlinear input data; thus, nonlinear dimensionality-reduction methods are needed.
On the other hand, the input data for nonlinear dimensionality-reduction methods are usually real data, such as images and videos captured by cameras.
Real data are complex and noisy, so they are hard to process directly.
Real data need preprocessing, such as segmentation and alignment.
To avoid dealing with real data while still obtaining useful conclusions, we artificially generate simple 2D motion data for motion analysis.
Online since: August 2012
Authors: Yu Hua Dong, Hai Chun Ning
The SVD method is a data-processing method that has attracted broad interest over the past 10 years [1].
In this way, the aims of noise reduction and elimination of abnormal data are achieved.
The increment criterion of singular entropy is adopted to select the abnormal data.
Simulation analysis and example: exterior trajectory data are used to verify the performance of noise reduction and abnormal-data elimination, adopting the wavelet transform method combined with SVD; the db30 orthogonal wavelet is used to decompose the data.
The method shows good performance in noise reduction and abnormal-data detection.
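The snippet does not give the authors' exact SVD procedure, but a common form of SVD noise reduction embeds the series in a Hankel matrix, truncates the SVD to a few singular values, and averages the anti-diagonals back into a series. The window length and rank below are hypothetical choices:

```python
import numpy as np

def svd_denoise(x, window, rank):
    """Embed the series in a Hankel matrix, truncate its SVD to `rank`
    singular values, and average the anti-diagonals back into a series."""
    n = len(x)
    H = np.array([x[i:i + window] for i in range(n - window + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(n)
    count = np.zeros(n)
    for i in range(Hr.shape[0]):          # anti-diagonal averaging
        out[i:i + window] += Hr[i]
        count[i:i + window] += 1
    return out / count

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
clean = np.sin(2.0 * np.pi * 5.0 * t)     # stand-in trajectory component
noisy = clean + 0.2 * rng.normal(size=t.size)
denoised = svd_denoise(noisy, window=50, rank=2)
```

A pure sinusoid gives a rank-2 Hankel matrix, so the discarded singular values carry mostly noise; this is the same intuition behind using singular-entropy increments to flag abnormal data.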
Online since: October 2014
Authors: Hong Li Lv
Rough set theory is then applied to the fault diagnosis of power transformers because it can effectively analyse and handle imprecise and incomplete data, extract implicit knowledge, and effectively simplify data.
Fig. 1. Block diagram of the RSNN system: sample data are discretized and attribute-reduced before training the neural network, and test data are reduced in the same way before the result is output.
Application of the attribute reduction algorithm: according to the DGA method, the contents of hydrogen (H2), methane (CH4), acetylene (C2H2), ethylene (C2H4) and ethane (C2H6) form the input data of the BP neural network.
Fifteen sets of data are selected as the training samples, and the original decision table is shown in Table 1. Each data set has eight input values as the condition attributes, denoted by {C1, …, C8}.
The output data are defined as the decision attribute, denoted by D.
After data reduction, the BP neural network is rebuilt, and the training result is shown in Fig. 2.
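The attribute-reduction step can be sketched with a standard rough-set positive-region criterion (a generic sketch, not the paper's exact algorithm; the tiny decision table below is hypothetical):

```python
def partition(rows, attrs):
    """Group row indices by their values on the given attributes
    (the equivalence classes of the indiscernibility relation)."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return blocks.values()

def positive_region_size(rows, cond, dec):
    """Number of objects whose decision is fully determined by `cond`."""
    return sum(len(b) for b in partition(rows, cond)
               if len({rows[i][dec] for i in b}) == 1)

def reduce_attributes(rows, cond, dec):
    """Greedy backward elimination: drop an attribute whenever the
    positive region (classification power) is unchanged without it."""
    full = positive_region_size(rows, cond, dec)
    reduct = list(cond)
    for a in cond:
        trial = [c for c in reduct if c != a]
        if trial and positive_region_size(rows, trial, dec) == full:
            reduct = trial
    return reduct

# Hypothetical discretized gas-content table: C2 duplicates C1, so one
# of the two condition attributes is redundant for the decision D.
table = [
    {"C1": "low",  "C2": "low",  "D": "normal"},
    {"C1": "high", "C2": "high", "D": "fault"},
    {"C1": "low",  "C2": "low",  "D": "normal"},
    {"C1": "high", "C2": "high", "D": "fault"},
]
reduct = reduce_attributes(table, ["C1", "C2"], "D")  # one attribute suffices
```

The reduced attribute set then defines the (smaller) input layer of the BP network, which is why the network is rebuilt after reduction.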
Showing 41 to 50 of 40196 items