Search results
Online since: December 2010
Authors: Zhi Hui Zhong, Shu Yan, Zu Jue Chen
To adapt to different environments, RFID technology requires different antenna designs to achieve data exchange.
It can achieve both multi-band operation and antenna size reduction.
Compared with a conventional square patch antenna, this antenna offers a marked size reduction.
The size-reduction characteristic comes from the nth-iteration Minkowski fractal applied to the patch.
It therefore combines good antenna characteristics with a reduced size, making it well suited to RFID applications.
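As a rough illustration of why the fractal geometry shrinks the antenna (the abstract gives no code, and the generator ratio w below is a hypothetical parameter): each Minkowski iteration replaces every straight edge segment of length s with five segments of total length s(1 + 2w/3), so the perimeter, and hence the electrical length, grows while the bounding square stays fixed.

```python
def minkowski_perimeter(side, w, n):
    """Perimeter of a square patch after n Minkowski iterations.

    Each iteration replaces a segment of length s with five segments
    (s/3, w*s/3, s/3, w*s/3, s/3), i.e. total length s * (1 + 2*w/3),
    where w (0 < w <= 1) is the indentation-depth ratio.
    """
    perimeter = 4.0 * side                # plain square patch
    for _ in range(n):
        perimeter *= 1.0 + 2.0 * w / 3.0  # growth factor per iteration
    return perimeter

# With w = 1, a 2nd-iteration fractal packs (5/3)^2 ~ 2.78x the edge
# length of a plain 30 mm square into the same outline.
print(minkowski_perimeter(30.0, 1.0, 2))
```

This is why the same resonant length fits into a smaller footprint; the actual indentation depth used in the paper is not stated in the abstract.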
Online since: November 2014
Authors: Yi Chen, Si Cheng Deng
Age estimation is an important approach to face recognition under age change. During feature extraction in age-estimation studies, the dimensionality is often excessive, and PCA is usually used to reduce it. PCA first transforms each sample matrix into a one-dimensional vector, assembles these vectors into a matrix, and then solves for the eigenvectors. The 2D-PCA applied in this paper does not require transforming the sample matrix into a one-dimensional vector; it constructs the scatter matrix directly from the data matrices, so the computing time is reduced, and good performance is achieved in the tests.
PCA generally transforms each sample matrix into a one-dimensional vector, assembles a number of these sample vectors into a matrix [4], and finally solves for the eigenvectors of the covariance matrix. 2D-PCA skips this transformation and builds the scatter matrix directly from the 2-D data matrices; as a result, the features extracted by 2D-PCA are better than those of PCA, and the calculation time is shorter.
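The contrast between the two reductions can be sketched as follows (a minimal illustration, not the authors' code; the image sizes and random data are assumptions standing in for face images):

```python
import numpy as np

def pca_2d(images, k):
    """2D-PCA: project h-by-w images onto the top-k eigenvectors of the
    w-by-w image scatter matrix, built directly from the 2-D samples
    (no flattening into long vectors, unlike classical PCA)."""
    mean = np.mean(images, axis=0)                            # h x w mean image
    scatter = sum((a - mean).T @ (a - mean) for a in images)  # w x w scatter
    vals, vecs = np.linalg.eigh(scatter)                      # ascending order
    proj = vecs[:, -k:]                                       # top-k eigenvectors
    return [a @ proj for a in images]                         # h x k features

rng = np.random.default_rng(0)
faces = rng.random((20, 32, 32))      # 20 toy 32x32 "face images"
features = pca_2d(faces, k=4)
print(features[0].shape)              # (32, 4)
```

The scatter matrix here is only 32x32, whereas classical PCA on the same data would eigendecompose a 1024x1024 covariance matrix, which is the source of the speed-up the abstract reports.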
The time spent and the recognition rates of face image identification using the PCA and 2D-PCA dimension-reduction methods, compared with the calibration-point method, are shown in Figure 1.
Training samples were selected randomly, the remainder served as test samples, and both PCA and 2D-PCA dimension reduction were applied.
Table 1. Experimental results

Test method          Group 1 accuracy   Group 2 accuracy   Group 3 accuracy   Running time
Calibration points   70.0%              71.2%              71.1%              22.62 s
PCA                  69.4%              69.1%              70.4%              91.65 s
2D-PCA               71.1%              72.4%              70.6%              35.94 s

The results show that the calibration-point method did not need to read the original image information, so no dimension reduction was required and its running time was the shortest. The traditional PCA method had the longest running time, while the 2D-PCA method used in this paper ran considerably faster than traditional PCA.
Online since: October 2011
Authors: Yen Kuei Tseng
The measured data for CO, CO2, and waste heat expelled to the environment remained the same, but they were actually lower when the 10% reduction in total inlet air volume is taken into account.
In this study, the average fuel consumption and the combustion exhaust emissions are measured; the positive effects on burning efficiency and emission reduction can be seen from these data.
The main purpose of this study is to confirm the reduction in fuel consumption after the spoiler is installed.
To make the measured data reflect real operating conditions, a furnace with a light-duty burner for zinc alloy casting is used.
While the fuel consumption of the burner is measured with different types of spoiler, the emissions of CO, CO2, O2, SOx, and NOx and the tailpipe temperature are measured at the same time. From these data, the reductions in waste gas and waste heat emissions are obtained and compared with the fuel-saving trend for burners with and without the spoiler.
Online since: September 2013
Authors: Shi Qi Li, Rong Zhu, Run Zao Liu, Yu Gang Wang, Ming Shan Hou
The experimental equipment includes a reduction furnace, a reduction tube, an electronic balance, a silicon-controlled voltage regulator, a temperature controller, a flowmeter, and so on.
Fig. 4 Experimentation of hydrogen reduction
a. Experimental conditions: (1) hydrogen flow: 2 L/min; (2) nitrogen flow: 2 L/min (gas shield), 5 L/min (reduction process); (3) sample weight: 50 g; (4) corundum crucible: inner diameter Φ49 mm, outer diameter Φ54 mm, height 80 mm.
b. Experimental results and analysis: weight-loss curve and analysis of reducibility. Ten reduction experiments were carried out, and the weight-loss data were recorded every minute; the weight-loss curve of the iron ore is shown in Figure 5.
According to Figure 5, the final reduction temperature of the 0.18 mm ore was about 950 °C and that of the 2 μm ore about 730 °C; the reduction temperature of the micron-scale ore dropped by about 100 °C compared with the millimeter-scale ore.
In the weight-loss experiments, the degree of reduction exceeded 95%.
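The degree of reduction quoted above is conventionally computed from the weight-loss record: the mass lost is the oxygen removed by hydrogen, divided by the removable-oxygen content of the charge. A minimal sketch (the pure-Fe2O3 oxygen fraction and the example masses are assumptions for illustration; the real ore composition is not given in the abstract):

```python
# Fraction of Fe2O3 mass that is removable oxygen: 3*O / (2*Fe + 3*O)
O_FRACTION_FE2O3 = 3 * 16.0 / (2 * 55.85 + 3 * 16.0)   # ~0.30

def percent_reduction(m0, m_t, ore_mass=50.0, o_fraction=O_FRACTION_FE2O3):
    """Reduction degree from charge masses m0 (start) and m_t (time t).

    Mass loss equals oxygen removed; the maximum possible loss is the
    removable-oxygen content of the charge.
    """
    return 100.0 * (m0 - m_t) / (ore_mass * o_fraction)

# e.g. a 50 g charge that has lost 14.5 g over the run:
print(percent_reduction(m0=50.0, m_t=35.5))
```

Logging this value each minute against furnace temperature reproduces the kind of weight-loss curve shown in Figure 5.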
Online since: November 2013
Authors: Qiu Jing Yang
Rough set theory was introduced into the data-reduction process, and a new granular BP neural network method was proposed.
Experiment and simulation: to evaluate the effectiveness and authority of the detection algorithm, the KDD Cup 99 data set was chosen for the simulation experiment. Data were selected at fixed intervals: 1/5 of the 10% training set was used as training data, half of the remaining 10% training set as the first test set, and half of the original 10% test set, which contains some new attack types, as the second test set.
Data set        Number of samples   Accuracy   False positives   Omissions
Training data   98804               0.9991     0.0003            0.0006
Test data 1     197608              0.9887     0.0054            0.0059
Test data 2     155514              0.9689     0.0114            0.0197

Summary: rough set theory, one of the main models of granular computing, is a mathematical tool for dealing with imprecise, incomplete, and uncertain knowledge.
Rough set theory analyzes data while keeping the classification capacity of the decision system unchanged: the data are reduced granule by granule, and the classification rules of the decision system are then extracted from the final reduction result.
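The idea of reducing attributes "while keeping the classification capacity unchanged" can be sketched with a toy consistency test (an illustrative fragment, not the paper's granular BP algorithm; the decision table is invented): an attribute subset preserves classification capacity if every remaining attribute-value combination still maps to a single decision.

```python
from itertools import combinations

def consistent(rows, attrs):
    """True if the chosen attribute columns still determine the decision d."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, row["d"]) != row["d"]:
            return False          # same condition values, different decisions
    return True

def reducts(rows, attrs):
    """All minimal attribute subsets preserving classification capacity."""
    found = []
    for r in range(1, len(attrs) + 1):
        for subset in combinations(attrs, r):
            if consistent(rows, subset):
                found.append(subset)
        if found:
            return found          # smallest consistent size reached
    return [tuple(attrs)]

# Toy decision table: condition attributes a, b, c and decision d.
table = [
    {"a": 0, "b": 0, "c": 1, "d": 0},
    {"a": 0, "b": 1, "c": 0, "d": 1},
    {"a": 1, "b": 0, "c": 1, "d": 1},
    {"a": 1, "b": 1, "c": 0, "d": 1},
]
print(reducts(table, ["a", "b", "c"]))   # -> [('a', 'b'), ('a', 'c')]
```

Here attribute c is dispensable given a and b, so rules need only be extracted over the reduced attribute set, which is what shrinks the input fed to the BP network.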
Online since: October 2011
Authors: Hai Ping Xiao, Lei Huang, Qin Jian Yu
The reaction rate constants and thermodynamic data in the elementary chemical reaction equations come from the Leeds model and the NASA database.
For example, rate-of-production (ROP) decomposition methods are adopted to analyze CH3 and CH2. At 1100 °C, the ten elementary reactions that most influence production and reduction are studied with an excess air coefficient of 0.7. Because the reactions of CH4 finish within 0.15 s, the calculation time is set to 0.15 s. The reaction equations are numbered as in the elementary reaction model, and the value after each equation is the ROP coefficient of that elementary reaction.
The free radicals produced during CH2 reduction favor the reduction of NO.
Direct reduction of NO by CH2 is also demonstrated.
J. Phys. Chem. Ref. Data, vol. 21, pp. 411-737, 1992.
Online since: August 2014
Authors: Ying Yang, Lin Li, Xue Jun Zhang, Lei Yang
The characteristics of medical data make medical data mining quite different from conventional data mining [1].
A. Preprocessing of the massive data in the hepatopathy information system. Data preprocessing is an important step that usually takes about 70% of the time in a complete data mining process [3].
The characteristics of medical data, such as privacy, diversity, integrity, redundancy, and timeliness, make medical data mining very different from conventional data mining [4].
Preprocessing of the massive data in the liver disease patient information system includes the following steps (Fig. 1, data mining and aided diagnosis system for hepatopathy). 1) Anonymization and identity transformation: because medical information involves patient privacy, medical data requires, beyond general preprocessing, special processing such as name anonymization and identity transformation of patient records, which severs the link between patients and their records. 2) Medical text standardization: the explanations that medical experts give for images, signals, and other clinical data are usually nonstandard, so the text data must be standardized [5].
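Step 1 above (name anonymization and identity transformation) can be sketched as a one-way pseudonymization pass. This is a generic illustration, not the system's actual code; the salt, field names, and toy liver-panel values are all assumptions.

```python
import hashlib

SALT = b"hospital-secret-salt"   # hypothetical per-deployment secret

def pseudonymize(record):
    """Replace direct identifiers with a salted one-way hash so the
    record can no longer be linked back to a person by name or ID,
    while leaving the clinical attributes intact for mining."""
    out = dict(record)
    for field in ("name", "id_number"):
        token = hashlib.sha256(SALT + record[field].encode()).hexdigest()[:12]
        out[field] = token
    return out

patient = {"name": "Zhang San", "id_number": "440000199001010011",
           "alt": 38.0, "ast": 41.5}            # toy liver-panel values
clean = pseudonymize(patient)
print(clean["name"] != patient["name"])         # True: identity removed
```

Because the hash is deterministic for a fixed salt, repeat visits by the same patient still map to the same pseudonym, so longitudinal analysis remains possible without exposing identity.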
C. The classification algorithm for medical image data. We study a new classification method based on a combination of rough sets and the SLIQ decision tree, which unites the advantages of both: rough set data reduction removes the redundant attributes, and the decision tree method then generates the classification rules efficiently.
Online since: February 2013
Authors: Ai Min Fan, Liang Hong Zhao, Xiao Lei Zhang, Fei Wang
Experimental Research on Energy Conservation and Emission Reduction of LPG Vehicles
Aimin Fan 1,a, Lianghong Zhao 1,b, Xiaolei Zhang 1, Fei Wang 2
1 Electromechanical Engineering Department, Shunde Polytechnic, Foshan 328300, Guangdong Province, China
2 Foshan Powergate Energy Developing Co., Ltd., Guangdong Province, China
a email: fam99999@163.com, b email: zhaolianghong@163.com
Keywords: LPG vehicles, buses, synergist, energy conservation and emission reduction
Abstract.
This paper presents a comparative experimental study in which a proper amount of CPG-4 synergist is added to the fuel of LPG vehicles to improve their combustion performance and achieve energy conservation and emission reduction.
At the same time, a low-nitrogen combustion state can be formed: the energy field of the charges, supplemented by selective reduction characteristics, reduces the emission of NOx.
To ensure the reliability of the test data, the tests were carried out at several authoritative testing institutes, including the Automobile Parts Test Center (Guangzhou), the Thermal Mechanical Engineering Technology Institute of South China University of Technology (Guangzhou), the Transport Vehicles Comprehensive Performance Test Station of Tianhe District, Guangzhou, and Guangzhou AnXun Automobile Testing Service Co., Ltd.
(3) The effect of the CPG-4 synergist in reducing HC emissions is obvious, with a reduction rate of more than 50%.
Online since: August 2013
Authors: Sheng Zhong, Qing Ju Guo, Wen Tian Ji
Clustering can be applied not only to data segmentation but also to outlier detection.
Establishment of the model and realization of the algorithm. 1) Establishment of the semantic web data model: the data objects studied in this paper come from the MovieLens database, a noncommercial website used for research.
The MovieLens data set, which contains 1,000,000 ratings given by 6,040 users for 3,900 films, is widely used as experimental material.
After dimensionality reduction of the vector space, we obtain a new matrix.
In the experiments, data from six types of films, each with 100-200 user ratings, are taken as the usage data set for the improved ontology-based semantic web K-means algorithm.
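The pipeline described above — build a rating matrix, reduce its dimensionality, then cluster into six groups — can be sketched as follows. This is a generic illustration with synthetic data standing in for the MovieLens ratings; the paper's improved ontology-based distance measure is not reproduced here.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means on the row vectors of X (after dimension reduction)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each row to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Recompute centers, keeping the old one if a cluster empties.
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

rng = np.random.default_rng(1)
ratings = rng.random((120, 40))            # 120 toy user-film rating rows
# Dimensionality reduction of the vector space via truncated SVD:
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
reduced = U[:, :6] * s[:6]                 # keep 6 components
labels = kmeans(reduced, k=6)
print(len(labels))                         # one cluster id per user row
```

Clustering in the reduced space rather than on the raw sparse rating matrix is what makes the distance computations cheap and stable.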
Online since: October 2010
Authors: Der Ho Wu, Hsun Heng Tsai
The objective of the current study is to perform a Taguchi design of experiments to establish the MWCNT/NR vibration isolators that maximize the force reduction in transmissibility of the resultant nanotube-reinforced material.
The corresponding results are summarized in Table 2, together with the average force reduction in transmissibility (R) for each sample, calculated from Eq. (2).
[Figure: schematic layout of the vibration-force transmission testing. Table 2 fragment spilled here: a stray S/N value of -12.79 from the preceding run; run 3 (levels 2, 1, 2): R = 0.435, 0.442, 0.437, s.d. 0.004, S/N = -7.17; run 4 (levels 2, 2, 1): R = 0.357, 0.368, 0.378, s.d. 0.011, S/N = -8.69.]
Data Analysis.
Applying the ANOM statistical technique to the data yields the S/N response table and the S/N response graph presented in Table 2 and Fig. 4, respectively.
Fig. 4 shows that Factor A2 has the most significant effect in enhancing the force reduction in transmissibility of the composite material, while the choice of sonication time has relatively little effect.
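The S/N values visible in the Table 2 fragment are consistent with the larger-the-better Taguchi ratio (the force reduction R is to be maximized): S/N = -10 log10((1/n) Σ 1/y_i²). A small check against the two runs whose repeated measurements survived above:

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio when the response is to be maximized."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / n)

# Three repeated R measurements per run, from the Table 2 fragment.
print(round(sn_larger_is_better([0.435, 0.442, 0.437]), 2))   # -7.17, as listed
print(round(sn_larger_is_better([0.357, 0.368, 0.378]), 2))   # -8.7 (table: -8.69)
```

ANOM then averages these S/N values per factor level; the level with the highest mean S/N (here A2) is the one Fig. 4 identifies as most effective.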