Online since: August 2013
Authors: Fu Hai Huang, Yan Xue Dong
Concepts and Features of Rough Set
Rough set theory was proposed by the Polish mathematician Z. Pawlak in the early 1980s. It is a theory of mathematical analysis for studying incomplete data and for the expression, learning, and summarization of inexact knowledge.
The data set of all the objects analyzed based on rough set theory is called the information system (IS).
Before extracting texture features, the data must be normalized (Eq. 2), where R is a normalization constant equal to the sum of all elements in the GLCM.
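As a concrete illustration, a minimal sketch of the GLCM normalization described above; the matrix values here are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical 4x4 gray-level co-occurrence matrix (raw counts).
glcm = np.array([[2, 1, 0, 0],
                 [1, 3, 1, 0],
                 [0, 1, 2, 1],
                 [0, 0, 1, 2]], dtype=float)

R = glcm.sum()   # normalization constant: sum of all GLCM elements
p = glcm / R     # normalized GLCM: entries now sum to 1

assert abs(p.sum() - 1.0) < 1e-12
```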
Reduction of Knowledge
In this paper, MIBARK (Mutual Information-Based Algorithm for Reduction of Knowledge) is used for knowledge reduction, and a summarized value-reduction algorithm is used for attribute-value reduction.
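The core quantity behind a mutual-information-based reduction such as MIBARK is I(C; D) between a condition attribute and the decision attribute. A minimal sketch on a hypothetical toy decision table (not the paper's data):

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a sequence of symbols."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def mutual_information(cond, dec):
    """I(C; D) = H(D) - H(D | C) for paired condition/decision columns."""
    n = len(dec)
    h_d_given_c = 0.0
    for c_val, cnt in Counter(cond).items():
        sub = [d for c, d in zip(cond, dec) if c == c_val]
        h_d_given_c += cnt / n * entropy(sub)
    return entropy(dec) - h_d_given_c

# Toy decision table: the condition attribute fully determines the decision,
# so it carries the full 1 bit of information about the label.
texture = ['coarse', 'coarse', 'fine', 'fine']
label   = ['bad',    'bad',    'good', 'good']
print(mutual_information(texture, label))  # 1.0
```

An attribute whose mutual information with the decision is (near) zero is a candidate for removal in the reduct.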
Different discretization and attribute-reduction algorithms were tested; the final classification results are shown in Fig. 4.
Online since: July 2012
Authors: Yan Xiang Lou, Gang Liu, Sheng Li Liu
The data repair problem [8, 9] has been studied in much of the literature, covering exact repair, functional repair, and exact repair of systematic parts, which can achieve a significant reduction in repair overhead.
A major concern for a storage system is convincing the data owner that the server actually stores the data, i.e., data-integrity checking, as in provable data possession (PDP) [4] and proofs of retrievability (PoR) [5].
Data Storage.
Data Integrity Check.
The server sends the corresponding data to the TPA. 3) Data Check.
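The challenge-response loop above can be sketched with a deliberately simplified hash-based spot check. Real PDP/PoR schemes use homomorphic tags so the verifier need not store digests per block; all names and parameters here are illustrative:

```python
import hashlib
import os
import random

# Owner splits the file into blocks and keeps a digest of each block
# (a stand-in for the homomorphic tags of real PDP/PoR schemes).
blocks = [os.urandom(256) for _ in range(16)]
digests = [hashlib.sha256(b).hexdigest() for b in blocks]

def server_respond(challenge_indices):
    """Step 2: the server returns the challenged blocks."""
    return [blocks[i] for i in challenge_indices]

def tpa_check(challenge_indices, returned_blocks):
    """Step 3: the TPA verifies each returned block against its digest."""
    return all(hashlib.sha256(b).hexdigest() == digests[i]
               for i, b in zip(challenge_indices, returned_blocks))

challenge = random.sample(range(len(blocks)), 4)        # step 1: random challenge
print(tpa_check(challenge, server_respond(challenge)))  # True while data intact
```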
Online since: April 2013
Authors: Xin Xi Xu, Wei Hua Su, Song Bai, Meng Yang, Xiao Hui Liu, Fu Nu
Based on the test data of an ambulance, a multi-input, single-output linear system model is established using the partial coherence analysis method.
Partial testing points and the data acquisition and processing equipment are shown in Figure 2.
Fig. 3: A-weighted SPL spectrum within the frequency range of 20–300 Hz.
Generally, the overall sound pressure at a field point decreases along with the reduction of the major response peaks.
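The reasoning behind that statement is energetic summation: band levels combine as 10·log10 of the summed 10^(L/10) terms, so damping the dominant peak lowers the overall level most. A sketch with hypothetical band levels (not the measured ambulance spectrum):

```python
import math

def overall_spl(band_levels_db):
    """Energetically sum band sound-pressure levels into an overall level (dB)."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in band_levels_db))

# Hypothetical A-weighted band levels with one dominant response peak at 70 dB.
bands = [62.0, 70.0, 58.0, 55.0]
print(round(overall_spl(bands), 1))   # ~71.0 dB

# Damping the dominant peak by 6 dB lowers the overall level markedly.
damped = [62.0, 64.0, 58.0, 55.0]
print(round(overall_spl(damped), 1))  # ~67.0 dB
```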
Online since: September 2013
Authors: Qun Le Du, Chao Hui Wang, Lu Yang, Yan Wei Li
The data from Chang’an University indicate that there is little difference among different mixing amounts, and the viscosity reduction of several asphalts at different dosages exceeds 20% at 120 ˚C.
The data from Tongji University show that when Evotherm was used to modify SBS asphalt, a good viscosity-reduction effect was achieved even at a small dosage (1%), with a maximum reduction of 22% at 135 ˚C.
The data from Chongqing Jiaotong University indicate that, for different dosages of Sasobit-modified asphalt, viscosity reduction is more significant at 120 ˚C, exceeding 20%.
As can be seen from the data of Chongqing University, with increasing temperature the viscosity-reduction extent first increases and then decreases, reaching its maximum at 120 ˚C; the extent is basically 19% to 26%.
Under the optimum dosages at 120–135 ˚C, the viscosity-reduction effect is significant: the reduction is roughly 10–30% with Evotherm, 20–40% with Sasobit, and 15–25% with Aspha-min.
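The viscosity-reducing extent quoted throughout is the relative drop versus the base asphalt. A minimal sketch with hypothetical viscosities (not the universities' measured values):

```python
def viscosity_reduction(eta_base, eta_modified):
    """Viscosity-reduction extent (%) of a modified asphalt vs. the base binder."""
    return (eta_base - eta_modified) / eta_base * 100

# Hypothetical rotational viscosities (Pa·s) at 120 °C, for illustration only.
base, with_additive = 0.80, 0.56
print(f"{viscosity_reduction(base, with_additive):.0f}%")  # 30%
```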
Online since: May 2015
Authors: Amir Abidov, Bunyod Allabergenov, Sung Jin Kim, Soon Wook Jeong, Hee Joon Kim, Fei Yi Xiao, Xing Jin, Beom Hyeok Park, Kwang Hwan Jhee
Photocatalytic reduction of CO2 to useful compounds is a key technology for the future.
Strong absorbance at 195 nm is observed, which agrees well with reference and standard data.
Detailed HPLC data for the photosynthesized sample are summarized in Table 1.
The data are summarized in Table 2.
Online since: October 2006
Authors: Gang Chen
Experimental data so far show that the most beneficial effect of nanostructures comes from the phonon thermal conductivity reduction, while the electron performance has been maintained at a level comparable to that of the bulk materials.
The experimental data from these superlattice structures show that thermal conductivity reduction plays a central role in the reported ZT enhancement, while the electron performance that determines the numerator of ZT, σS², has been maintained at a level close to that of the parent materials.
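For reference, the thermoelectric figure of merit is ZT = σS²T/κ, so halving the lattice thermal conductivity at fixed σ and S roughly doubles ZT. A sketch with illustrative bulk-like numbers (not measured superlattice values):

```python
def figure_of_merit(sigma, seebeck, kappa, T):
    """Thermoelectric figure of merit ZT = sigma * S^2 * T / kappa."""
    return sigma * seebeck ** 2 * T / kappa

# Illustrative values: sigma in S/m, S in V/K, kappa in W/(m K), T in K.
sigma, S, T = 1.0e5, 200e-6, 300.0
kappa_bulk, kappa_superlattice = 2.0, 1.0   # phonon conductivity halved

print(round(figure_of_merit(sigma, S, kappa_bulk, T), 3))          # 0.6
print(round(figure_of_merit(sigma, S, kappa_superlattice, T), 3))  # 1.2
```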
Theories have been developed to explain the thermal conductivity reduction in quantum wells and superlattices, first preceding the experiments and later concurrently with increasing experimental data.
These models can explain experimental data for superlattices with periods thicker than ~5 mono-atomic layers.
Comparison with experimental data (Fig.1), however, shows that the group velocity reduction alone is insufficient to explain the magnitude of the thermal conductivity reduction perpendicular to the film plane, and it fails completely to explain the thermal conductivity reduction along the film plane [13].
Online since: October 2009
Authors: Tonio Buonassisi, Mariana I. Bertoni, Clémence Colin
In order to quantify the data shown in Fig. 1, we used the software ImageJ to count the dislocation content per unit area of each sample.
When solving equation 2 for 10, 60, and 360 min, we observed very good agreement with our experimental data.
Even though the error bars of the data points in Fig. 3 are on the order of 25%, we can confine our data between the 360- and 10-minute lines, which correspond to our longest and shortest experimental holding times, respectively.
In order to extract the activation energy from our experimental data using equation 3, the dislocation density reduction versus time was plotted for a constant annealing temperature (1350°C) and fitted using a nonlinear code in MATLAB.
Given the fit to our experimental data, the mechanism of dislocation reduction appears to be pairwise annihilation, with an activation energy of 2.1 ± 0.2 eV.
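Pairwise annihilation, dρ/dt = −kρ², integrates to 1/ρ(t) = 1/ρ₀ + kt, so the rate constant follows from a straight-line fit of 1/ρ versus t; repeating at several temperatures and fitting Arrhenius behavior, k ∝ exp(−Ea/kBT), yields the activation energy. A simplified, linearized sketch on synthetic data (the paper's MATLAB fit is nonlinear; all values here are illustrative):

```python
import numpy as np

# Pairwise annihilation: d(rho)/dt = -k * rho**2  =>  1/rho(t) = 1/rho0 + k*t,
# so 1/rho is linear in t and k is the slope of a straight-line fit.
rho0, k_true = 1.0e6, 1.0e-9              # cm^-2 and cm^2/s, hypothetical
t = np.array([10.0, 60.0, 360.0]) * 60.0  # 10-, 60-, 360-min holds, in seconds
rho = 1.0 / (1.0 / rho0 + k_true * t)     # synthetic noise-free densities

# Linear fit of 1/rho vs t: slope = k, intercept = 1/rho0.
k_fit, inv_rho0_fit = np.polyfit(t, 1.0 / rho, 1)
print(k_fit)  # recovers ~1e-9 for noise-free data
```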
Online since: January 2012
Authors: Bing Li, Feng Ming Guo, Yi Gang He
In this paper, to alleviate the influence of noise and inaccurate measurement in complicated environments, a novel kernel-matrix Isomap algorithm is proposed, based on the robustness of partial least squares (PLS) in multivariate linear regression combined with the nonlinear dimensionality reduction of manifold learning.
Compared with traditional Isomap and MDS, simulation results indicate that the algorithm has good topological stability, generalization, robustness, and lower computational complexity.
1 Introduction
Isomap is a manifold-learning, nonlinear dimensionality-reduction algorithm built on the framework of the MDS algorithm, which can recover meaningful low-dimensional structures hidden in high-dimensional data [1], [2], [3].
In the Isomap algorithm, geodesic distances between data points are extracted instead of simply taking the Euclidean distance, thus a geometric distance graph is constructed and the embedding is obtained from the graph.
The Isomap and KIsomap algorithms solve the embedding by least squares (LS), which is highly sensitive to abnormal (outlier) points in the data [4], [5], [6], [7].
The edge lengths between data points in the neighborhood graph are set as their Euclidean distances.
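The pipeline just described (kNN graph with Euclidean edge lengths, geodesic distances by shortest paths, then classical MDS) can be sketched minimally. This is a didactic O(n³) version, not the paper's kernel-matrix variant:

```python
import numpy as np

def isomap(X, n_neighbors=3, n_components=2):
    """Minimal Isomap: kNN graph with Euclidean edge lengths, geodesic
    distances via Floyd-Warshall, then classical MDS on the result."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Keep only each point's k nearest neighbors as graph edges.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    # Floyd-Warshall: shortest-path lengths approximate geodesic distances.
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS: double-center the squared distances, then eigendecompose.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Points on a 1-D curve embedded in 3-D; Isomap unrolls it to one coordinate.
t = np.linspace(0.0, 3.0, 20)
X = np.column_stack([np.cos(t), np.sin(t), t])
Y = isomap(X, n_neighbors=3, n_components=1)
print(Y.shape)  # (20, 1)
```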
Online since: January 2009
Authors: Yun Ping Di, Xiao Meng Zhang, Ming Liu, Li Hua Xu, Hong Shun Hao
Based on the analysis of its components, Carbothermal Reduction Nitridation (CRN) method
was selected to synthesize eco-friendly composite material.
Here, graphite (obtained from Beijing Chemical Reagents Company, China) was used as the reducing agent.
Table 2 lists the sintering temperature, holding time, reducing-agent content, and amount of black clay, respectively.
At 1450°C, the principal reactions in this system proceeded according to the following equations:
FeTiO3 + 3.3C + 0.35N2 = Fe + TiC0.3N0.7 + 3CO (1)
Fe3O4 + SiO2 + 6C = Fe3Si + 6CO (2)
3Fe + SiO2 + 2C = Fe3Si + 2CO (3)
Based on thermodynamic data from the literature [6], the Gibbs free energy (ΔG0) was calculated.
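The sign of ΔG0 = ΔH0 − TΔS0 decides whether each reaction proceeds. A sketch with illustrative (not handbook) values, showing how the large entropy gain from CO gas release drives ΔG0 negative at high temperature:

```python
def gibbs_free_energy(delta_h, delta_s, T):
    """Delta G0 = Delta H0 - T * Delta S0 (J/mol, J/(mol K), K)."""
    return delta_h - T * delta_s

# Hypothetical values for a carbothermal reduction step: strongly endothermic,
# but with a large positive entropy change from the CO gas released.
dH, dS = 500e3, 350.0
for T in (1000.0, 1723.0):  # 1723 K = 1450 degrees C
    print(T, gibbs_free_energy(dH, dS, T))  # positive, then negative
```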
Practical Data Handbook of Inorganic Thermodynamics (Metallurgy Industry Press, Beijing, China, 2002), p. 26.
Online since: June 2011
Authors: Rui Ling Zhang, Hong Sheng Xu
Formal concept analysis (FCA), originally proposed by Wille (1982), is an important theory for data analysis and knowledge discovery.
By a(x) we denote a data pattern (a1(x), …, an(x)) defined by the object x and attributes from A = {a1, …, an}.
A data pattern of the IS is any feature-value vector v = (v1, …, vn), where vi ∈ Vai for i = 1, …, n, such that v = a(x) for some x ∈ U [4].
In our approach, individual traces are the data points that the FCA algorithm uses for learning.
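The data-pattern definition above can be made concrete. A minimal sketch with a hypothetical information system (object and attribute names are illustrative, not the paper's traces):

```python
# Toy information system: objects in U, attributes A = {a1, a2}, and the
# data pattern a(x) = (a1(x), ..., an(x)) for each object x.
table = {
    'x1': {'a1': 'low',  'a2': 'red'},
    'x2': {'a1': 'low',  'a2': 'blue'},
    'x3': {'a1': 'high', 'a2': 'red'},
    'x4': {'a1': 'low',  'a2': 'red'},
}
attributes = ['a1', 'a2']

def pattern(x):
    """a(x): the feature-value vector of object x over all attributes."""
    return tuple(table[x][a] for a in attributes)

# The data patterns of the IS are exactly the vectors v = a(x)
# realized by some object x in U; x1 and x4 share one pattern.
patterns = {pattern(x) for x in table}
print(sorted(patterns))  # [('high', 'red'), ('low', 'blue'), ('low', 'red')]
```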
[4] Petko Valtchev, Rokia Missaoui, Robert Godin: Formal Concept Analysis for Knowledge Discovery and Data Mining: The New Challenges[C].