Search results
Online since: May 2010
Authors: Li Min Wang, Xiong Fei Li, Xue Cheng Wang
Towards Efficient Dimensionality Reduction for Evolving Bayesian Network Classifier
Wang LiMin (a), Li XiongFei (b) and Wang XueCheng (c)
College of Computer Science and Technology, JiLin University, ChangChun 130012, China
(a) wanglim@jlu.edu.cn, (b) lxf@jlu.edu.cn, (c) wxc@jlu.edu.cn (corresponding author)
Keywords: Bayesian Network; Dimensionality Reduction; Mixed Data
Abstract.
Experimental Study: We chose 12 data sets from the UCI machine learning repository for our experiments.
For almost every data set, HBC achieves slightly better classification performance than Naïve Bayes, since most of the data sets have just a few relevant attributes.
Experiments were performed on data sets from the UCI repository.
It has been observed that for certain data sets only a few parameters are used.
Online since: July 2014
Authors: Zhi Hao Peng, An Sheng Deng, Wei Luo
Data Processing in Attributes Reduction Based on Rough Sets and FCA
Zhihao Peng (1,2,a), Wei Luo (1,2,b), Ansheng Deng (1,c)
1 Faculty of Information Science and Technology, Dalian Maritime University, Dalian 116024, China
2 Department of Computer Science, Dalian Neusoft Institute of Information, Dalian 116626, China
(a) pengzhihao@neusoft.edu.cn, (b) luowei@neusoft.edu.cn, (c) ashdeng@dlmu.edu.cn
Corresponding author.
Introduction: Rough set theory (RST), proposed by Pawlak, is a relatively new soft-computing tool for conceptualizing and analyzing various types of data.
It is now a widely used mathematical framework for image processing, feature selection, pattern recognition, rule extraction, conflict analysis, granular computing, decision support, data mining, and knowledge discovery from given data sets.
In other words, useful features can be selected from a given data set based on rough set theory.
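As a rough illustration of that idea (not the authors' algorithm), the sketch below computes the rough-set degree of dependency gamma_B(D) on a hypothetical toy decision table; a feature subset that already yields gamma = 1 fully determines the class, which is the basis for rough-set feature selection.

```python
from collections import defaultdict

def indiscernibility(rows, attrs):
    """Group object indices by their values on the given attributes."""
    blocks = defaultdict(set)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].add(i)
    return list(blocks.values())

def dependency(rows, cond_attrs, dec_attr):
    """Rough-set degree of dependency gamma_B(D): the fraction of objects whose
    decision value is fully determined by the chosen condition attributes."""
    decision_blocks = indiscernibility(rows, [dec_attr])
    positive = sum(len(b) for b in indiscernibility(rows, cond_attrs)
                   if any(b <= d for d in decision_blocks))
    return positive / len(rows)

# Hypothetical toy data set: columns 0-2 are candidate features, column 3 the class.
table = [(1, 0, 1, 'yes'), (1, 1, 0, 'no'), (0, 0, 1, 'yes'), (0, 1, 0, 'no')]
print(dependency(table, [1], 3))   # feature 1 alone determines the class -> 1.0
print(dependency(table, [0], 3))   # feature 0 alone does not -> 0.0
```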
Concept Data Analysis: Theory and Applications. John Wiley & Sons.
Online since: January 2010
Authors: Cong Feng Jiang, Ying Hui Zhao, Jian Wan
Conventional power reduction techniques such as dynamic voltage/frequency scaling (DVS/DFS) have disadvantages when they are ported to current data centers with virtualization deployments.
Moreover, power reduction is particularly valuable for data centers with thermal emergencies, cooling constraints, and power-supply constraints.
Since data centers experience large periods of low utilization, they present opportunities for power reduction with minimal performance losses.
The input and output are the real-time power consumption data.
This paper also presents the challenges of power reduction in large data centers.
Online since: February 2011
Authors: Yan Kun Li
Construction Method of Fuzzy Attribute Information System Based on DEA and Attribute Reduction
Li Yankun
College of Light Industry, Hebei Polytechnic University, TangShan, Hebei, China
lyk_917@163.com
Keywords: fuzzy attribute information system; attribute reduction; data envelopment analysis; relative efficiency
Abstract: To address the subjectivity of attribute values in fuzzy attribute information systems, this paper proposes a construction method for the fuzzy attribute information system based on DEA (Data Envelopment Analysis).
The continuous data should first be discretized before reduction.
Data Envelopment Analysis [4] evaluates the relative efficiency of decision making units (DMUs) with multiple inputs and multiple outputs by means of a mathematical programming model.
The DEA model can then be built for the original data to evaluate the relative efficiency of its elements.
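For orientation, the classical input-oriented CCR ratio model is one common form of such a programming model (the abstract does not say which DEA variant the paper uses); in standard notation, DMU 0 with inputs x_{i0} and outputs y_{r0} is evaluated by

\[
\max_{u,v}\ \theta_0 \;=\; \frac{\sum_r u_r\,y_{r0}}{\sum_i v_i\,x_{i0}}
\qquad \text{s.t.} \qquad
\frac{\sum_r u_r\,y_{rj}}{\sum_i v_i\,x_{ij}} \;\le\; 1 \ \text{ for every DMU } j,
\qquad u_r,\, v_i \;\ge\; 0.
\]

A unit scoring θ₀ = 1 is relatively efficient; in the paper's setting, the elements of the information system would play the role of the DMUs, and their relative efficiency is what the construction evaluates.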
Data Envelopment Analysis [M].
Online since: March 2017
Authors: Alinda Samsuri, Mohd Ambar Yarmo, Mohamed Wahab Mohamed Hisham, Mohd Nor Latif
It seems that increasing the temperature causes rapid reduction, which correlates with the thermodynamic data showing that the formation of metallic molybdenum becomes feasible as the temperature increases.
Thermodynamic data were used to discuss the possibility of the reduction stages in terms of the Gibbs-Helmholtz relationship, as shown below.
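The relation referred to is presumably the standard free-energy form (reconstructed here, since the equation itself did not survive extraction):

\[
\Delta G^{\circ} \;=\; \Delta H^{\circ} \;-\; T\,\Delta S^{\circ}.
\]

A reduction step becomes thermodynamically favourable once ΔG° turns negative, which, according to the data discussed above, happens as the temperature is increased.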
For the second reduction stage, the non-isothermal reduction mode continued up to 900 °C.
Table 1 shows the thermodynamic data for the reduction of MoO3 using hydrogen as the reducing agent.
This result agrees with the thermodynamic data in Table 1, which indicate that a high temperature is required for the formation of molybdenum to become favourable.
Online since: February 2014
Authors: Hua Bin Wang, Di Jian Xue, Dong Xie, Jin Liang Shi
Through a scientific experiment or observation, a set of data pairs of the quantities x and y is obtained.
Least squares fitting algorithm based on orthogonal functions: By analyzing the variation of the experimental data points, a curve-fitting equation can be created for the target data.
But if the chosen family of functions is orthogonal with respect to the weights of the set of data points, i.e.
For such data computation, a well-prepared program is often available to the user.
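As a minimal illustration of such a ready-made routine (the data and polynomial degree below are hypothetical, not from the paper), NumPy's Chebyshev class performs a least-squares fit in an orthogonal polynomial basis:

```python
import numpy as np

# Hypothetical measurements: x is the controlled quantity, y the observed one.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 1.5 * x - 0.3 * x**2 + np.random.default_rng(0).normal(0.0, 0.2, x.size)

# Least-squares fit in the Chebyshev basis; Chebyshev.fit maps x onto [-1, 1],
# where the basis is orthogonal, which keeps the normal equations well conditioned.
cheb = np.polynomial.Chebyshev.fit(x, y, deg=2)
y_fit = cheb(x)

rms_error = np.sqrt(np.mean((y - y_fit) ** 2))
print("Chebyshev coefficients:", cheb.coef)
print("RMS fitting error:", rms_error)
```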
An electronic scale provides real-time measurement of the ore weight in the reduction furnace, which is then sent to the upper computer for data processing.
Online since: December 2012
Authors: Ying Huan Zou, Jing Jie Chen
This further improves the reliability of the data analysis results.
Reduction of the decision table: An important part of decision table reduction is simplifying the condition attributes in the decision table. A reduced decision table has the same features as the unreduced one, but with fewer condition attributes, reflecting the real core attributes of the data set. For this experiment, the aim is to identify the core attributes required to estimate fuel consumption from the QAR data set.
The QAR data attribute reduction experiment extracts these core attributes from the experimental data and lays a good foundation for the next step of establishing the fuel consumption estimation model.
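A minimal sketch of the core-attribute idea follows (the table and attribute roles are invented for illustration; the paper's QAR attributes are not reproduced here): an attribute belongs to the core if dropping it makes two objects indistinguishable on the conditions yet different in decision.

```python
def consistent(rows, cond_idx, dec_idx):
    """True if rows that agree on the chosen condition attributes
    also agree on the decision attribute."""
    seen = {}
    for row in rows:
        key = tuple(row[i] for i in cond_idx)
        if key in seen and seen[key] != row[dec_idx]:
            return False
        seen[key] = row[dec_idx]
    return True

def core_attributes(rows, cond_idx, dec_idx):
    """Attributes whose removal makes the decision table inconsistent (the core)."""
    return [a for a in cond_idx
            if not consistent(rows, [b for b in cond_idx if b != a], dec_idx)]

# Hypothetical toy table: columns 0-2 are condition attributes (say altitude band,
# speed band, flap setting), column 3 is a fuel-flow label. Not real QAR data.
table = [(0, 1, 0, 'low'), (0, 1, 1, 'low'), (1, 0, 0, 'high'), (1, 1, 0, 'high')]
print(core_attributes(table, [0, 1, 2], 3))   # -> [0]; only column 0 is indispensable
```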
Jet fuel estimation model based on flight data analysis [J].
Data preprocessing for data mining system design and realization [D]. Beijing: Beijing Jiaotong University, 2011.
Online since: June 2010
Authors: Guang Bin Wang, Liang Pei Huang
In noise reduction algorithms based on manifold learning, the phase space data may be distorted and the reduction target dimension is chosen at random, which lowers the efficiency and effect of noise reduction.
The global alignment by affine transformation is replaced with mean reconstruction, the intrinsic dimension is estimated as the target reduction dimension using the maximum likelihood estimation (MLE) method, and the data outside the intrinsic dimension space are eliminated.
The algorithm performs better when appropriate parameters are chosen, but the data may be distorted during dimension reduction, global alignment, and signal reconstruction, leading to a weaker noise reduction effect.
Conclusion: In the LTSA noise reduction algorithm, the reduction target dimension is chosen at random and the phase space data may be distorted during dimension reduction, global alignment, and signal reconstruction, which lowers the efficiency and effect of noise reduction. We therefore propose a local tangent space mean reconstruction algorithm.
First, the intrinsic dimension is obtained by MLE; then the high-dimensional data in the phase space are reduced to the intrinsic dimension space; finally, the data in the local tangent space are returned to the global high-dimensional phase space.
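A rough sketch of the two standard ingredients, MLE intrinsic-dimension estimation and an LTSA embedding, is given below; it uses scikit-learn's stock LTSA rather than the authors' mean-reconstruction variant, and the synthetic signal, delay, and neighbourhood sizes are assumptions for illustration (the mapping back to the original phase space is not shown).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.manifold import LocallyLinearEmbedding

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel maximum-likelihood estimate of the intrinsic dimension."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    dist = dist[:, 1:]                              # drop the zero self-distance
    logs = np.log(dist[:, -1][:, None] / dist[:, :-1])
    m = (k - 1) / np.sum(logs, axis=1)              # per-point estimate
    return max(1, int(round(np.mean(m))))

# Hypothetical noisy signal, embedded in a phase space by time-delay reconstruction.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 600)
signal = np.sin(t) + 0.05 * rng.normal(size=t.size)
delay, emb_dim = 5, 8
rows = t.size - delay * emb_dim
X = np.stack([signal[i:i + rows] for i in range(0, delay * emb_dim, delay)], axis=1)

d = mle_intrinsic_dim(X)
# Reduce the phase-space data to the estimated intrinsic dimension with stock LTSA.
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=max(d, 2), method='ltsa')
low_dim = ltsa.fit_transform(X)
print("estimated intrinsic dimension:", d, "reduced shape:", low_dim.shape)
```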
Online since: December 2004
Authors: X.D. Zhang, C. Wang, Chang Ku Sun, S.H. Ye
A data reduction method is presented based on the grid reduction method, with color-boundary preservation taken into consideration.
Therefore, the research work of this paper provides a preprocessing flow, including data arrangement, noise filtering, smoothing, data reduction, division, and reorganization.
The noise filtering and data reduction methods are analyzed in detail.
The research on preprocessing places emphasis on data reduction for the over-crowded point cloud data.
Data reduction realizes the sampling of the point cloud and reduces the quantity of data.
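A minimal sketch of grid-based reduction is given below (the cell size and the random test cloud are assumptions, and the paper's color-boundary preservation criterion is not modelled): each occupied grid cell keeps a single representative point, its centroid.

```python
import numpy as np

def grid_reduce(points, cell_size):
    """Grid-based data reduction: keep one representative point (the centroid)
    of every occupied grid cell, shrinking the cloud while preserving its shape."""
    cells = np.floor(points / cell_size).astype(np.int64)
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    reduced = np.empty((counts.size, points.shape[1]))
    for dim in range(points.shape[1]):
        reduced[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return reduced

# Hypothetical over-crowded scan: 100,000 random points on a unit sphere surface.
rng = np.random.default_rng(1)
pts = rng.normal(size=(100_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(len(pts), "->", len(grid_reduce(pts, cell_size=0.05)), "points")
```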
Online since: February 2011
Authors: Xue Feng She, Qing Guo Xue, Xiu Wei An, Yin Gui Ding, Wan Hua Yu, Jing Song Wang
Based on reduction experimental data, and considering reduction process factors such as the carbon content, the reductive removal of ZnO, the changing pellet size, and the partial pressure of the reducing gas, together with coupled heat transfer, mass transfer, and chemical reactions, a direct reduction mathematical model for carbon-bearing pellets containing zinc has been established.
Based on the reduction experiment data, a direct reduction mathematical model for preparing carbon-bearing pellets is built.
Establishment of the Model: Based on the reduction experimental data and other researchers' work [3,4,5,6], and considering reduction process factors such as the carbon content, the reductive removal of ZnO, the changing pellet size, and the partial pressure of the reducing gas, together with the coupled heat transfer, mass transfer, and chemical reaction processes, a direct reduction mathematical model for carbon-bearing pellets containing zinc has been established.
Figures 1 and 2 show a comparison between the experimental data and the calculated results.
The theoretical simulation data fit the experimental results well.