Search:

  • Data Reduction

Search results

Online since: May 2012
Authors: Lei Chen, Wen Quan Tao, Hao Qiang Pang, Tian Yuan Liu, Guang Mu Zhu
Study on Energy Saving and Emission Reduction Using Data Envelopment Analysis Model
Lei Chen 1,2,a, Haoqiang Pang 2,b, Tianyuan Liu 2,c, Guangmu Zhu 2,d and Wen-quan Tao 1,2,e
1 Key Laboratory of Thermo-Fluid Science and Engineering, MOE, Xi'an Jiaotong University, China
2 School of Energy & Power Engineering, Xi'an Jiaotong University, Xi'an 710049, China
a Chenlei.09@stu.xjtu.edu.cn, b pang1991314@me.com, c tianyuannn@163.com, d guangmuzhu@gmail.com, e wqtao@mail.xjtu.edu.cn
Keywords: energy saving and emission reduction; Data Envelopment Analysis (DEA); efficiency
Abstract: In this paper, on the basis of the comprehensive weight method, the total energy saving and emission reduction target is distributed to every region.
The original data table is established, as shown in Table 1.
The energy consumption and pollutant data are taken from the China Energy Statistical Yearbook and the China Statistical Yearbook [7, 8].
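The excerpt does not give the paper's DEA formulation or data. As a hedged illustration of the kind of efficiency evaluation DEA performs, the sketch below computes input-oriented CCR efficiency scores with a linear-programming solver; the regions, inputs, and output are invented numbers, not the authors' model or data.

```python
# Minimal sketch of an input-oriented CCR DEA efficiency score (envelopment form),
# solved with scipy.optimize.linprog.  The regions, inputs (energy use, emissions)
# and output (economic output) below are made-up illustrative numbers.
import numpy as np
from scipy.optimize import linprog

# rows = inputs, columns = DMUs (regions); one output row
X = np.array([[120.0, 90.0, 150.0, 60.0],    # energy consumption
              [ 30.0, 22.0,  41.0, 15.0]])   # pollutant emissions
Y = np.array([[500.0, 430.0, 510.0, 300.0]]) # economic output

def ccr_efficiency(o, X, Y):
    """Efficiency of DMU o: min theta s.t. X@lam <= theta*X[:,o], Y@lam >= Y[:,o]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    # input constraints:  X @ lam - theta * X[:, o] <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # output constraints: -Y @ lam <= -Y[:, o]
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.hstack([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(X.shape[1]):
    print(f"region {o}: CCR efficiency = {ccr_efficiency(o, X, Y):.3f}")
```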
Online since: October 2010
Authors: Jin Liang, Zhen Zhong Xiao, Bao Quan Shi, Qing Liu, Xiao Qiang Zhang
This paper presents a new data reduction algorithm that reduces redundancy in range images and three-dimensional point clouds.
The effectiveness of the proposed k-means clustering data reduction algorithm is demonstrated through simplification results on a practical range image and a three-dimensional point cloud.
Since the input data carries no topological structure, reducing redundant data while preserving features is a difficult task.
The Proposed K-means Clustering Data Reduction Method. The proposed method mainly includes five steps: data pre-processing, cluster initialization, boundary cluster detection, recursive subdivision, and refinement.
Figure, top left panel: the input data (304202 points).
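The excerpt lists the five steps but not their details. A minimal sketch of the core idea, clustering the points with k-means and keeping one representative point per cluster, is given below; the boundary-cluster detection, recursive subdivision and refinement stages of the paper are not reproduced, and the data and cluster count are illustrative.

```python
# Minimal sketch: k-means based point-cloud simplification.
# Cluster the 3-D points and keep each cluster centroid as the reduced data set.
# Only the clustering/representative step is shown, not the paper's full pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = rng.normal(size=(10_000, 3))          # stand-in for a scanned point cloud

def kmeans_simplify(points: np.ndarray, n_clusters: int) -> np.ndarray:
    """Reduce a point cloud to n_clusters representative points (cluster centroids)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(points)
    return km.cluster_centers_

reduced = kmeans_simplify(points, n_clusters=500)
print(points.shape, "->", reduced.shape)        # (10000, 3) -> (500, 3)
```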
Online since: November 2013
Authors: Hua Min Yang, Fei He
The theory proposed by Pawlak in 1981 makes it possible to reason directly from data [2].
In recent years, with technological advances, the increasing amount of data has brought tremendous difficulties to data analysis [3].
Removing unnecessary data is the starting point of this article.
General mode of reduction. Reduction is a process of removing redundant data, and we define it as a triple.
E represents the end of the reduction, at which point all redundant data has been removed.
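The excerpt describes reduction only in general terms. As one concrete, everyday form of redundancy removal, the sketch below drops duplicate records and constant attributes from a small table; the table and its columns are invented for illustration and are not tied to the paper's triple-based definition.

```python
# Minimal illustration of redundancy removal on a small data table:
# duplicate rows and constant (information-free) columns are dropped.
# The table below is invented for the example; it is not the paper's data.
import pandas as pd

df = pd.DataFrame({
    "a": [1, 1, 2, 2, 3],
    "b": [0, 0, 0, 0, 0],      # constant column: carries no information
    "c": ["x", "x", "y", "y", "z"],
})
df = df.drop_duplicates()                        # remove redundant records
df = df.loc[:, df.nunique(dropna=False) > 1]     # remove constant attributes
print(df)
```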
Online since: October 2014
Authors: Yi Zhang
Attribute Reduction Based on Continuous Attribute Domains
Yi Zhang 1
1 Department of Mathematics and Computer Science, Nanchang Normal University, Nanchang, Jiangxi 330032, China
Keywords: data mining; continuous attribute domain; attribute reduction
Abstract: For the reduction of discrete data attributes there are many mature methods, but general algorithms do not handle the reduction of continuous data attributes well. In real life, feature extraction from continuous data is as important as from discrete data. Taking newly acquired EEG (brain wave) data as the analysis object, and through a comparison of continuous EEG feature extraction, the feasibility of the proposed method is verified.
Introduction. Attribute reduction is the process of removing, from a large set of attributes, those that are not essential while keeping the classification ability unchanged, so that only the core attributes remain. It is one of the key problems in rough set theory. Since rough set theory was proposed by Pawlak, reduction theory, algorithms and applications have developed fully, including reduction algorithms based on the positive region, reduction algorithms based on information theory, reduction algorithms based on the discernibility matrix, and reduction algorithms that combine rough set methods with other data mining and machine learning algorithms. These reduction methods are very fruitful for fixed amounts of data, but they often have great limitations for incremental or changing data.
In order to adapt to growing or changing amounts of data, Liu Zongtian proposed an incremental reduction algorithm based on rough set theory, and Wang Jue proposed a distributed reduction algorithm based on the discernibility matrix; F.
For discrete data there are many mature attribute reduction methods that have achieved good results, but for continuous attribute data, applying the discrete analysis methods directly does not work well. The existing approach is to discretize the continuous attribute domain using various kinds of information so as to obtain discrete data and then apply discrete attribute analysis methods, but the discretization of continuous data may cause information loss. Therefore, in order to carry out continuous attribute analysis better, this paper designs an attribute reduction method for continuous data and applies it to EEG data collected in the laboratory to perform the corresponding feature extraction.
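The excerpt does not describe the proposed reduction algorithm itself. As a stand-in, the sketch below ranks continuous attributes with a k-nearest-neighbour mutual-information estimate, which works directly on continuous values without discretization; the synthetic "EEG-like" data, the selection criterion and the number of retained attributes are assumptions, not the paper's method.

```python
# Minimal sketch: selecting informative continuous attributes without discretization,
# using a k-NN based mutual-information estimate.  The synthetic "EEG-like" features
# and the selection criterion are stand-ins, not the paper's algorithm.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(42)
n_samples, n_features = 200, 16
X = rng.normal(size=(n_samples, n_features))            # continuous attributes
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)           # labels depend on features 0 and 3

mi = mutual_info_classif(X, y, random_state=0)          # MI estimated on continuous data
keep = np.argsort(mi)[::-1][:4]                         # keep the 4 most informative attributes
print("selected attribute indices:", sorted(keep.tolist()))
X_reduced = X[:, keep]
```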
Online since: July 2011
Authors: Zhen Dong Mu, Hua Bo Xiao, Yong Ping Xing, Jian Zheng
Data Reduction Used in CAPP: Application to the Reasoning of Processing Methods
Zhendong Mu 1,a, Huabo Xiao 2,b, Jian Zheng 3,c and Yongping Xing 4,d
1 Institute of Information and Technology, Jiangxi BlueSky University
2 Department of Computer Science and Technology, Jiangxi BlueSky University, Jiangxi
3 Teaching Affairs Office, Jiangxi BlueSky University
4 Scientific Research Office, Jiangxi BlueSky University
zd_mu@sohu.com
Keywords: CAPP (Computer Aided Process Planning); processing methods of reasoning; data reduction
Abstract. This paper uses the theory of data reduction, based on successful cases, to derive the rule base.
Introduction. CAPP (Computer Aided Process Planning) is a tool that engineers use to design the manufacturing process of parts from rough stock to finished product; it is the method by which a designer converts design data into manufacturing data.
In order to improve reasoning accuracy and efficiency, this paper uses data reduction to deduce the rules.
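The excerpt states only that rules are derived from successful cases via data reduction. The sketch below illustrates that idea on an invented decision table of past machining cases: a condition attribute is dropped when the remaining attributes still determine the process method unambiguously, and the surviving condition-to-method pairs form a small rule base. The attributes, values and greedy reduction heuristic are illustrative, not the paper's procedure.

```python
# Minimal sketch: deriving process-selection rules from past CAPP cases after a
# simple attribute reduction (drop a condition attribute if the remaining attributes
# still determine the process unambiguously).  The case table and attribute names
# are invented for illustration.
cases = [  # (feature_type, material, precision) -> process method
    {"feature": "hole",  "material": "steel", "precision": "high", "method": "boring"},
    {"feature": "hole",  "material": "steel", "precision": "low",  "method": "drilling"},
    {"feature": "plane", "material": "steel", "precision": "high", "method": "grinding"},
    {"feature": "plane", "material": "cast",  "precision": "low",  "method": "milling"},
]
conds = ["feature", "material", "precision"]

def consistent(attrs):
    """True if the chosen condition attributes still determine the method uniquely."""
    seen = {}
    for c in cases:
        key = tuple(c[a] for a in attrs)
        if seen.setdefault(key, c["method"]) != c["method"]:
            return False
    return True

# greedy reduction: try to drop each condition attribute in turn
reduct = list(conds)
for a in conds:
    trial = [x for x in reduct if x != a]
    if trial and consistent(trial):
        reduct = trial

rules = {tuple(c[a] for a in reduct): c["method"] for c in cases}
print("reduct:", reduct)
for cond, method in rules.items():
    print(dict(zip(reduct, cond)), "->", method)
```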
Online since: August 2013
Authors: Hua Min Yang, Fei He
The theory proposed by Pawlak in 1981 makes it possible to reason directly from data [2].
In recent years, with technological advances, the increasing amount of data has brought tremendous difficulties to data analysis [3].
Removing unnecessary data is the starting point of this article.
General mode of reduction. Reduction is a process of removing redundant data, and we define it as a triple.
E represents the end of the reduction, at which point all redundant data has been removed.
Online since: February 2013
Authors: Yong De Hu, Xin Tan, Jing Chang Pan
This structure is related to the Rényi entropy of the data.
This paper describes the original data by several components for the purpose of dimension reduction.
Experimental results show that KECA is a good method for high-dimensional data reduction.
Introduction. The dimension reduction of high-dimensional data has always been a very important issue in data mining owing to the curse of dimensionality.
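The excerpt names kernel entropy component analysis but not its computation. The sketch below follows the standard KECA recipe: eigendecompose an RBF kernel matrix and keep the components that contribute most to the quadratic Renyi entropy estimate, rather than those with the largest eigenvalues. The kernel width, component count and random data are illustrative choices.

```python
# Minimal sketch of kernel entropy component analysis (KECA):
# eigendecompose an RBF kernel matrix and keep the eigenpairs that contribute most
# to the quadratic Renyi entropy estimate, projecting the data onto them.
# Kernel width, component count and the random data are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                 # stand-in for high-dimensional data
sigma = 5.0                                    # RBF kernel width (assumed)
k = 5                                          # number of components to keep

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma ** 2))             # RBF kernel matrix

lam, E = np.linalg.eigh(K)                     # eigenvalues in ascending order
lam, E = lam[::-1], E[:, ::-1]                 # sort descending
# entropy contribution of eigenpair i: (sqrt(lam_i) * 1^T e_i)^2
contrib = lam * (E.sum(axis=0) ** 2)
idx = np.argsort(contrib)[::-1][:k]            # top-k entropy-preserving components

Phi_eca = E[:, idx] * np.sqrt(np.clip(lam[idx], 0, None))   # KECA projection (N x k)
print(Phi_eca.shape)
```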
Online since: February 2014
Authors: Fan Yang, Hua Mei Li, Jun Hua Chen, Xiao Zhou Chen
Comparison of LLE and PCA Algorithms for Gene Expression Data Analysis
Xiaozhou Chen a, Fan Yang, Huamei Li, Junhua Chen
School of Mathematics and Computer Science, Yunnan University of Nationalities, Kunming 650031, China
a ch_xiaozhou@163.com
Keywords: LLE algorithm; PCA algorithm; dimensional reduction; gene expression data
Abstract. Addressing the problem that linear dimension reduction is not effective for understanding gene expression data, this work takes manifold learning as a guide, analyses the dimensionality reduction of gene expression data, selects colon cancer and leukaemia gene expression datasets for investigation, and uses inter-category distances as the criterion to quantitatively evaluate the effect of dimensionality reduction.
It helps us to effectively discover the true meaning of high-dimensional data and explore its internal structure; at the same time, it facilitates dimension reduction and preserves the data's local structure [4][5].
Consider a data matrix.
Test results. The data sample set [11] after dimension reduction was taken to contain g categories of samples.
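The colon cancer and leukaemia datasets are not reproduced here. The sketch below shows how PCA and LLE embeddings of labelled data can be compared with a simple inter-class separation criterion, in the spirit of the abstract; the synthetic data, neighbour count and the exact criterion are assumptions.

```python
# Minimal sketch: compare PCA and LLE 2-D embeddings of labelled data using a simple
# class-separation criterion (between-class centroid distance over within-class scatter).
# The synthetic two-class data, neighbour count and criterion are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           n_classes=2, random_state=0)

def class_separation(Z, y):
    """Between-class centroid distance divided by mean within-class scatter (2 classes)."""
    classes = np.unique(y)
    centroids = np.array([Z[y == c].mean(axis=0) for c in classes])
    between = np.linalg.norm(centroids[0] - centroids[1])
    within = np.mean([np.linalg.norm(Z[y == c] - Z[y == c].mean(axis=0), axis=1).mean()
                      for c in classes])
    return between / within

Z_pca = PCA(n_components=2).fit_transform(X)
Z_lle = LocallyLinearEmbedding(n_components=2, n_neighbors=12).fit_transform(X)

print("PCA class separation:", class_separation(Z_pca, y))
print("LLE class separation:", class_separation(Z_lle, y))
```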
Online since: December 2014
Authors: Xin Ying Chen, Guan Yu Li
Experiment results. We have chosen 10 UCI data sets [11] and tested the algorithm proposed here (Algorithm 2) against the algorithm proposed by Hu K.
At the same time, Algorithm 2 obtains accurate reductions on 8 of the data sets, from which we can see that Algorithm 2 is more accurate than Algorithm 1.
In any case, the newly proposed method is suitable for massive data and provides a theoretical method for large-scale data mining.
Online since: November 2013
Authors: Peng Wu
The Analysis of a Dimensionality Reduction Mathematical Model Based on Feedback Constraint for High-Dimensional Information
Wu Peng
Department of Computer Engineering, Sichuan College of Architectural Technology, Deyang, Sichuan 618000, China
Keywords: high-dimensional data; data mining; dimensionality reduction
Abstract.
It uses a feedback restriction technique to construct a dimensionality reduction model for multidimensional product data.
Experimental results show that the proposed method increases the data coverage rate of high-dimensional data mining by 66% and has a great optimizing effect. 2 Mathematical model of dimensionality reduction. The data obtained are high-dimensional, and a large proportion of the data are subject to standardized restrictions on the components involved.
Showing 11 to 20 of 40196 items