Search results
Online since: August 2013
Authors: Tong Wang, Yan Xia Pang, Yue Ping Wu, Yi Du
Dimension reduction is one of the most widely used machine learning tools.
Few such attempts have been made for classification of high-dimensional protein data sets.
The major obstacle to accurately classifying these massive high-dimensional data sets is the high computing time and classifier complexity.
Data sets. The protein–RNA complexes used in our experiments were retrieved from the PDB.
The accuracy of the low-dimensional representations of the high-dimensional data obtained by the different DR methods was evaluated with the KNN algorithm.
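A minimal sketch of this evaluation idea: reduce the data to a low-dimensional representation, then score it with a k-nearest-neighbour classifier. PCA and the synthetic data below are placeholders for the DR methods and protein data sets used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# High-dimensional labelled data (stand-in for the protein feature vectors).
X, y = make_classification(n_samples=500, n_features=200, n_informative=20,
                           random_state=0)

# Dimension reduction to a low-dimensional representation.
X_low = PCA(n_components=10, random_state=0).fit_transform(X)

# Evaluate the representation via KNN classification accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("KNN accuracy on the reduced data:", accuracy_score(y_te, knn.predict(X_te)))
```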
Online since: February 2011
Authors: Jin Peng Wang, Bao Xiang Liu, Zhen Dong Li, Li Chao Feng
The concept lattice, as a powerful tool for data analysis and processing, has grown rapidly in recent years [7].
On the other hand, we can better understand data analysis and data rules through the interoperability established between the two theories [10].
Theoretical Aspects of Reasoning about Data.
He: Concept Lattice and Rough Set Data Analysis.
Theoretical aspects of reasoning about data.
Online since: December 2010
Authors: Zhao Qi Wu, Shao Fei Jiang
This model is used for structural damage detection, particularly for cases where the measurement data has many uncertainties.
RSPNN Model for Damage Detection. Data Preprocessing.
There are four methods used to discretize the data: first, the S method.
Clustering is the process of dividing a data set into several groups so that data within the same group are similar to each other while data in different groups are dissimilar.
The main merits of K-means clustering are its simplicity, speed, and efficiency in processing large databases.
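A minimal K-means illustration of the clustering step described above; the synthetic measurements are assumptions for demonstration, not the structural damage data used by the RSPNN model.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two well-separated groups standing in for discretized measurement features.
data = np.vstack([rng.normal(0.0, 0.3, (50, 4)),
                  rng.normal(2.0, 0.3, (50, 4))])

# K-means assigns each sample to the nearest of k cluster centres.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("cluster centres:\n", kmeans.cluster_centers_)
```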
Online since: November 2012
Authors: Xiao Han, Le Le Qu, Xuan Li, Rui Guo
This paper studies noise reduction methods based on the CFA image, considering noise reduction in combination with interpolation.
Interpolation-noise reduction process and noise reduction-interpolation process. The interpolation of the CFA image after noise reduction is studied.
Imaging performance for different orders of interpolation and noise reduction. In order to see the difference between the two imaging sequences, the experimental data are shown in Fig. 2 and Fig. 3.
For the missing data, the Least Squares Method noise reduction process applied before Bilinear Interpolation uses the two nearest points on the straight line to determine the corresponding point.
Fig. 2: MSE data. Fig. 3: s-CIELAB data. Table 1 lists the serial numbers (1-12) assigned to the combinations of the interpolation methods (Bilinear Interpolation, POSC, AHA) with the noise reduction methods (Linear Filtering, BM3D, Bilateral Filtering, Homogeneity, Least Squares Method). Fig. 4: (a) AHA interpolation with Linear Filtering, (b) AHA interpolation with Bilateral Filtering, (c) AHA interpolation with Least Squares Method. The data of the Linear Filtering are the most scattered.
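A minimal sketch of the interpolation idea mentioned above: a missing sample is estimated from the two nearest known points on a straight line (linear interpolation along a CFA row). The sample values are illustrative assumptions, not the paper's image data.

```python
import numpy as np

# One row of samples from a CFA; NaN marks the missing positions.
row = np.array([0.20, np.nan, 0.30, np.nan, 0.44, np.nan, 0.50])

known = ~np.isnan(row)
x = np.arange(row.size)
# np.interp fills each missing point from the two nearest known neighbours
# on the line, i.e. linear interpolation between them.
filled = row.copy()
filled[~known] = np.interp(x[~known], x[known], row[known])
print(filled)
```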
Online since: June 2011
Authors: Di Wu, Zhao Zheng
With the rapid development of computer vision, more and more problems involve data dimensionality reduction, which has driven rapid progress in dimensionality reduction algorithms.
Furthermore, high-dimensional data always lead to low computational efficiency.
These problems can be solved by data dimensionality reduction algorithms.
Many data dimensionality reduction algorithms have emerged to date.
While PCA aims to preserve the global structure of the data, LPP aims to preserve the local structure of the data, assuming that the samples from the same class are likely close to each other in the input space.
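A minimal sketch contrasting PCA (global structure) with a basic LPP (local structure). The LPP variant below, with a binary k-nearest-neighbour graph and a small regularizer, is a simplified illustration under those assumptions, not the exact formulation used in the paper.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import PCA
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, n_neighbors=5):
    # Adjacency of the k-nearest-neighbour graph (binary weights for simplicity).
    W = kneighbors_graph(X, n_neighbors, mode='connectivity').toarray()
    W = np.maximum(W, W.T)                  # symmetrize
    D = np.diag(W.sum(axis=1))              # degree matrix
    L = D - W                               # graph Laplacian
    # Generalized eigenproblem X^T L X a = lambda X^T D X a; the projections
    # with the smallest eigenvalues preserve local neighbourhoods best.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # regularize for stability
    vals, vecs = eigh(A, B)
    return X @ vecs[:, :n_components]       # low-dimensional embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X_pca = PCA(n_components=2).fit_transform(X)   # preserves global variance
X_lpp = lpp(X, n_components=2)                 # preserves neighbourhood structure
print(X_pca.shape, X_lpp.shape)
```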
Online since: January 2013
Authors: Hui Xin He, Hui Xia He, Da Ren Yu, Xiao Xian Zhang, Jin Fu Liu
In order to explore the internal characteristics of large amounts of data, VDM obtains the overall structural features by dimensionality reduction of the original data, after which other visualization techniques can be used to analyze the data.
The network has an inherent dimension reduction ability together with a topology preservation ability.
When the data dimension is greater than 3, one dimension is fixed to a constant value to achieve data dimension reduction.
LLE is a nonlinear reduction method that preserves local geometric features.
Visual expression of the reduction result. There has been barely any research into inlet start or unstart.
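A minimal sketch of nonlinear reduction with LLE, which preserves local geometric features; the Swiss-roll data set is a standard stand-in, not the inlet start/unstart data analysed in the paper.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A 3-D manifold embedded in 3-D space.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# LLE reconstructs each point from its neighbours and keeps those local
# relationships in the low-dimensional embedding.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
X_2d = lle.fit_transform(X)
print(X_2d.shape)
```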
Online since: April 2012
Authors: De Yong Wang, Mao Fa Jiang, Yan Liu
The mixture calculation for producing stainless steel crude melts by chromium ore smelting reduction in a 150 t converter is carried out using the empirical data and the calculation method of refining plain carbon steel in a converter, according to the blowing conditions of the 185 t smelting reduction converter of the No. 4 steelmaking shop in the Chiba Works of JFE Steel.
Mixture Calculation. The Required Raw Data for the Calculation.
These can be calculated respectively according to the basic raw data and the set values of process parameters.
The total amount of slag and its components are calculated respectively according to the basic raw data and the set values of process parameters.
Chen, Diagram and Data Manual in Steelmaking, Metallurgy Industry Publications, Beijing, 1984.
Online since: February 2011
Authors: Mao Fa Jiang, De Yong Wang, Yan Liu
The process calculation for producing stainless steel crude melts by smelting reduction in a 150 t converter is carried out for the first time using the empirical data and calculation method of refining plain carbon steel in a converter, according to the blowing conditions of the 185 t smelting reduction converter of the No. 4 steelmaking shop in the Chiba Works of JFE Steel.
Using the self-programming procedure for the processing calculation, the proper coke quantity and a complete set of calculation data for the process of smelting reduction in a converter are obtained, including the batch calculation, the material balance and heat balance calculations.
They are calculated respectively according to the base data and set values of processing parameters.
Using the self-programming procedure with the proper coke quantity for the process calculation (shown in Fig. 1), a complete set of calculation data for the process of smelting reduction in a converter is obtained.
Online since: March 2015
Authors: Da Qiang Chen, Er Tian Hua, Xiao Juan Gong, Lei Hu, Yan Zhen He, Dan Zhi Sun
This paper takes the historical data of customers' product purchases as the foundation and suggests a method of personalized product design knowledge acquisition based on knowledge reduction and knowledge mining.
Thirdly, the product design information was taken as the initial data for knowledge mining, and the product design knowledge of a specific customer group was obtained by a classification consistency algorithm.
Thus, the main idea of product design knowledge acquisition proposed in this paper is as follows: a) based on the historical data of the customers' online shopping and the customer segmentation results, the core matrices of each specific customer group are taken as the decision variables of the decision table based on the key customer segmentation; b) using the knowledge reduction algorithm based on the Skowron discernibility matrix, the redundant and unnecessary product matrices are deleted; c) with the product design information as the initial data for knowledge mining, the product design knowledge of specific customer groups is eventually obtained by the classification consistency algorithm.
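A toy sketch of attribute reduction with a Skowron discernibility matrix, as used in step b). The decision table, attribute names, and the brute-force search for a minimal reduct are illustrative assumptions, not the paper's actual data or algorithm.

```python
from itertools import combinations

# Rows: objects; condition attributes a1..a3; "d" is the decision attribute.
table = [
    {"a1": 1, "a2": 0, "a3": 1, "d": "buy"},
    {"a1": 1, "a2": 1, "a3": 0, "d": "buy"},
    {"a1": 0, "a2": 0, "a3": 1, "d": "skip"},
    {"a1": 0, "a2": 1, "a3": 0, "d": "skip"},
]
conds = ["a1", "a2", "a3"]

# Discernibility matrix: for each pair of objects with different decisions,
# record the condition attributes on which they differ.
matrix = []
for x, y in combinations(table, 2):
    if x["d"] != y["d"]:
        matrix.append({a for a in conds if x[a] != y[a]})

def covers(attrs):
    # A set of attributes is a reduct candidate if it intersects every entry.
    return all(attrs & entry for entry in matrix)

# Smallest attribute subset that discerns all decision-different pairs.
reduct = next(
    set(c)
    for k in range(1, len(conds) + 1)
    for c in combinations(conds, k)
    if covers(set(c))
)
print("discernibility matrix:", matrix)
print("one minimal reduct:", reduct)
```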
Case Study. In this case study, a baby carrier brand's product information is obtained via Web technology through online interaction, including a mass of historical purchase data for a certain brand of stroller.
Stefanowski: Rough Sets in Data Mining and Knowledge Discovery, Physica-Verlag Vol. 1 (1998), p. 500 [8] J.H.
Online since: July 2015
Authors: Gerhard Hirt, Markus Bambach, Johannes Lohmar, Alexander Kraemer
The natural decrease in accuracy from using less data is analysed and weighed against the gain due to the reduction of experimental effort.
Using the vertices and a symmetrical distribution of the data within the full experimental matrix allows a drastic reduction of experimental effort while maintaining the initial accuracy.
The data distribution for reduced data sets achieving the best accuracy can then be optimized inversely.
Fitting with reduced data. For fitting with reduced data, two key questions have to be answered: 1) how much data are necessary to maintain the same accuracy as the initial fit, and 2) how should the data be distributed within the full experimental matrix?
The material model imposes a variety of conditions on the data distribution.
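A minimal sketch of the reduced-data fitting idea: fit the same model on the full experimental matrix and on a reduced subset (grid vertices plus centre), then compare accuracy over the full grid. The synthetic response function, the linear surrogate model, and the subset choice are assumptions made for illustration, not the paper's material model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Full experimental matrix: strain-rate / temperature grid with a noisy response.
rates = np.linspace(0.1, 10.0, 6)
temps = np.linspace(900.0, 1200.0, 6)
R, T = np.meshgrid(rates, temps)
y = 50 + 5 * np.log(R) - 0.03 * (T - 900) + rng.normal(0, 0.5, R.shape)

def fit_and_score(mask):
    # Least-squares fit y ~ b0 + b1*log(rate) + b2*temp on the selected points,
    # scored by RMSE over the full grid.
    X = np.column_stack([np.ones(mask.sum()), np.log(R[mask]), T[mask]])
    beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    X_full = np.column_stack([np.ones(R.size), np.log(R).ravel(), T.ravel()])
    return np.sqrt(np.mean((X_full @ beta - y.ravel()) ** 2))

full = np.ones_like(R, dtype=bool)
# Reduced set: the four vertices of the grid plus the centre point.
reduced = np.zeros_like(R, dtype=bool)
reduced[[0, 0, -1, -1, R.shape[0] // 2], [0, -1, 0, -1, R.shape[1] // 2]] = True

print("RMSE, full data:   ", fit_and_score(full))
print("RMSE, reduced data:", fit_and_score(reduced))
```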