Search results

Online since: October 2011
Authors: Hai Ping Xiao, Qin Jian Yu, Lei Huang
The reaction rate constants and thermodynamic data in the elementary chemical reaction equations come from the Leeds model and the NASA database.
For example, rate-of-production (ROP) decomposition methods are adopted to analyze CH3 and CH2. At 1100 °C, the ten elementary reactions that most influence production and reduction are studied with an excess air coefficient of 0.7. Because the reactions of CH4 are finished within 0.15 s, the calculation time is set to 0.15 s. The numbers of the reaction equations are the numbers of the elementary reactions in the elementary reaction model, and the data following each equation are the ROP coefficients of that elementary reaction.
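Rate constants of the kind taken from such databases are conventionally given in modified Arrhenius form, k = A·T^b·exp(−Ea/RT). A minimal sketch of evaluating one (the parameter values A, b, and Ea below are illustrative placeholders, not Leeds-model data):

```python
import math

def arrhenius_k(A, b, Ea, T, R=8.314):
    """Modified Arrhenius rate constant k = A * T**b * exp(-Ea / (R*T)).
    Units must be consistent: Ea in J/mol, T in K, R in J/(mol*K)."""
    return A * T**b * math.exp(-Ea / (R * T))

# Hypothetical parameters, evaluated at 1100 degC (1373.15 K) for illustration.
k_1373 = arrhenius_k(A=1.0e13, b=0.0, Ea=1.5e5, T=1373.15)
```

The rate constant grows steeply with temperature for positive activation energy, which is why the dominant elementary reactions are identified at a fixed temperature.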
The free radicals produced during CH2 reduction favor the reduction of NO.
Direct reduction of NO by CH2 is also demonstrated.
J. Phys. Chem. Ref. Data, vol. 21, pp. 411-737, 1992.
Online since: March 2015
Authors: Wei Gao, Sheng Xie, Wen Li, Juan He, Yong Peng Lyu
Each set of tests was repeated 5 times, with samples taken from every stratified soil layer (6 layers) in each of 7 periods, using 12 experimental sets that combine 3 categories of factors (2 kinds of soil, 3 types of soil cover, 4 typical rainfall events), yielding a valid set of 840 hierarchical data points.
Data collation and plotting used Excel 2003 and GraphPad software (Version 5.01, GraphPad Software, Inc.).
SPSS software (Version 18, IBM Corporation) was used for normality testing, correlation analysis, ANOVA, t-tests, etc.
Each data set passed the Kolmogorov-Smirnov (KS) normality test at the 0.05 level and could therefore be given routine statistical analysis.
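The KS check above was done in SPSS; as a rough illustration of what a 0.05-level normality test computes, here is a minimal one-sample Kolmogorov-Smirnov statistic against a normal distribution fitted to synthetic data (the sample and the large-n critical-value approximation are illustrative only, not the study's data):

```python
import math
import random

def ks_normal_stat(data):
    """One-sample KS statistic: max gap between the empirical CDF and
    a normal CDF with mean and sd estimated from the data."""
    n = len(data)
    mean = sum(data) / n
    sd = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))
    d = 0.0
    for i, x in enumerate(sorted(data)):
        f = cdf(x)
        # empirical CDF jumps from i/n to (i+1)/n at x
        d = max(d, abs(f - i / n), abs((i + 1) / n - f))
    return d

random.seed(0)
sample = [random.gauss(10.0, 2.0) for _ in range(200)]
D = ks_normal_stat(sample)
# Rough large-sample critical value at alpha = 0.05.
passes = D < 1.358 / math.sqrt(len(sample))
```

Note that when the mean and sd are estimated from the same data, the strictly correct critical values are the (smaller) Lilliefors ones; SPSS handles this internally.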
In general terms, the average reduction rate of CODCr for B is about 3.24% higher than that of A, the average reduction rate of ammonia for B is about 5.21% higher than that of A, and the average reduction rate of TP for B is about 3.70% higher than that of A.
Online since: February 2014
Authors: Jian Yong Wang, Xi Guo
Test Case Optimization Based on State Transition Reduction
Xi Guo1, Jianyong Wang1,a
1Department of Computer Science, College of Science, Huazhong Agriculture University, Wuhan 430070, China
awjy01@mail.hzau.edu.cn
Keywords: state reduction, test case generation, predicate abstraction, equivalence class
Abstract. Due to the large number of state transitions in a large-scale software system, an efficient test case generation method based on predicate abstraction is proposed to address the problem of state space explosion.
System State Reduction and Equivalence Classes. For a software system, the ideal test suite should be small while still satisfying the test requirements.
If one test case cannot satisfy every state in the equivalence class, then the above reduction rules should be applied; this process reduces the scale of the test cases.
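The paper's own reduction rules are not reproduced here, but the general shape of equivalence-class-based suite reduction can be sketched as a greedy cover: keep the fewest test cases whose covered state classes still cover everything the full suite covers (the test names and class labels below are hypothetical):

```python
def reduce_suite(coverage):
    """Greedy suite reduction. `coverage` maps a test-case name to the set
    of state-equivalence classes it exercises; repeatedly pick the test
    covering the most still-uncovered classes."""
    remaining = set().union(*coverage.values())
    kept = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        kept.append(best)
        remaining -= coverage[best]
    return kept

# Hypothetical suite: t3 subsumes t1 and t2, t4 is the only cover of c4.
suite = {
    "t1": {"c1", "c2"},
    "t2": {"c2", "c3"},
    "t3": {"c1", "c2", "c3"},
    "t4": {"c4"},
}
reduced = reduce_suite(suite)
```

Greedy cover is not guaranteed minimal in general, but it is the standard cheap approximation for this NP-hard problem.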
Data flow testing as model checking.
On the completeness of a test suite reduction strategy.
Online since: February 2012
Authors: Qing Ying Zhang, Jun Liang Wang, Li Yin
Therefore, “energy conservation and emission reduction” has become significant.
In 2007, 2008, and 2009, the energy consumption per unit of GDP in Wuhan was 12.6, 11.9, and 11.1 thousand tons of coal equivalent per ten thousand Yuan respectively, which is higher than the corresponding national averages of 11.6, 11.02, and 10.77.
Then ten experts grade the current situation of energy saving and emission reduction in transportation in this city.
Based on these values, the current situation of energy saving and emission reduction in Wuhan is evaluated scientifically and objectively.
The total score for energy saving and emission reduction is 79.1085.
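A total score like 79.1085 typically comes from weighting each indicator's average expert grade; a minimal sketch with hypothetical indicators, weights, and grades (not the paper's actual evaluation data):

```python
def total_score(weights, grades):
    """Weighted evaluation score: sum over indicators of
    (indicator weight) * (average of the experts' grades for it)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(w * sum(grades[k]) / len(grades[k]) for k, w in weights.items())

# Hypothetical indicators, weights, and expert grades for illustration.
weights = {"energy_intensity": 0.4, "emissions": 0.35, "transport_structure": 0.25}
grades = {
    "energy_intensity":    [80, 78, 82, 79],
    "emissions":           [75, 77, 76, 74],
    "transport_structure": [85, 83, 84, 86],
}
score = total_score(weights, grades)
```

The four decimal places in the reported score suggest exactly this kind of weighted arithmetic over fractional expert grades.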
Online since: November 2013
Authors: Qiu Jing Yang
Rough set theory was introduced into the data reduction process, and a new method of granular BP neural networks is proposed.
The experiment and simulation. To evaluate the effectiveness and reliability of the detection algorithm, the KDDCUP99 dataset was chosen for the experimental simulation. Data were selected at fixed intervals from the 10% subset: one fifth of the training data set served as the training data; half of the remaining 10% training set served as the first test data set; and half of the original 10% test data set, which contains some new attack types, served as the second test data set.
Data set         Number of samples   Accuracy   False positives   Omission
Training data    98804               0.9991     0.0003            0.0006
Test data 1      197608              0.9887     0.0054            0.0059
Test data 2      155514              0.9689     0.0114            0.0197

Summary. Rough set theory, one of the main models of granular computing, is a mathematical tool for dealing with imprecise, incomplete, and uncertain knowledge.
Rough set analysis of the data keeps the classification capability of the decision system unchanged while reducing the granularity of the data; the classification rules of the decision system are then extracted according to the final reduction.
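The core check behind such a reduction, namely that a smaller attribute set leaves the decision system's classification ability unchanged, can be sketched as follows (the toy decision table is illustrative, not KDDCUP99 data):

```python
def preserves_classification(rows, attrs, all_attrs, decision):
    """A subset of condition attributes preserves classification ability if
    rows that are identical on the subset never disagree on the decision,
    given that the full attribute set is itself consistent."""
    def groups(keys):
        g = {}
        for r in rows:
            g.setdefault(tuple(r[k] for k in keys), set()).add(r[decision])
        return g
    if any(len(v) > 1 for v in groups(all_attrs).values()):
        return False  # decision table inconsistent even with all attributes
    return all(len(v) == 1 for v in groups(attrs).values())

# Toy decision table: attribute "a" alone determines decision "d", "b" does not.
rows = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "no"},
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 1, "d": "yes"},
]
```

A reduct is then a minimal attribute subset passing this check; the granular BP network is trained only on the reduced attributes.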
Online since: September 2014
Authors: Shuai Shuai Zhu, Ling Zhang
The data recovery availability of the client is the main design goal of data dispersal algorithms.
In terms of technological details, there are three main design approaches: data backup or mirroring, data partitioning or striping, and erasure coding.
Data partitioning and striping technology. Data partitioning divides the original data into several segments and stores them on physically different storage devices.
Piece P: the original data D is partitioned into smaller data units of the same length.
Fragment F: the atomic unit of redundant data.
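A minimal sketch of partitioning data D into equal-length pieces and striping them across devices, under the assumption of simple zero-padding and round-robin placement (the paper's actual dispersal algorithm may differ):

```python
def partition(data: bytes, piece_len: int) -> list:
    """Split original data D into equal-length pieces P, zero-padding the
    tail so every piece has the same length (caller tracks the pad size)."""
    padded = data + b"\x00" * (-len(data) % piece_len)
    return [padded[i:i + piece_len] for i in range(0, len(padded), piece_len)]

def stripe(pieces, n_devices):
    """Round-robin the pieces across n physically separate devices."""
    devices = [[] for _ in range(n_devices)]
    for i, p in enumerate(pieces):
        devices[i % n_devices].append(p)
    return devices

pieces = partition(b"ABCDEFGH12", 4)
striped = stripe(pieces, 2)
```

Redundant fragments F (parity or erasure-code output) would then be computed over these pieces to make recovery possible when a device fails.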
Online since: November 2012
Authors: Yusak Tanoto
A walk-through motor energy audit is carried out in a typical type of industry in Indonesia to collect some important data.
The data needed to estimate the energy savings, and thus the payback period, include: average motor usage hours, average electricity cost, motor efficiency under various load conditions, the incremental price of the VSD, and the average installation cost of the capacitor per kVAR.
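Given those inputs, a simple payback period is just the incremental investment divided by the annual monetary savings; a sketch with hypothetical figures (not the audit's actual data):

```python
def simple_payback_years(incremental_cost, annual_kwh_saved, tariff_per_kwh):
    """Simple payback period in years:
    incremental investment / (annual kWh saved * electricity tariff)."""
    annual_saving = annual_kwh_saved * tariff_per_kwh
    return incremental_cost / annual_saving

# Hypothetical VSD retrofit: $1200 extra cost, 8000 kWh/yr saved, $0.10/kWh.
years = simple_payback_years(incremental_cost=1200.0,
                             annual_kwh_saved=8000.0,
                             tariff_per_kwh=0.10)
```

Simple payback ignores discounting; for multi-year motor investments a discounted measure such as NPV is the usual refinement.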
Several data items are required, including electricity consumption over a year, the amount of sandals produced in tons, and the monetary value of the total sandals produced in a year.
The required data on technical characteristics as well as operational and electricity consumption patterns are collected through a walk-through motor energy audit.
The important findings include a 10% reduction in total annual kWh (excluding the loss minimization obtained from installing the capacitor bank), reductions in SEC and EI, and a roughly threefold emission reduction amounting to at least 47.13 tons of CO2.
Online since: October 2012
Authors: A.K.M. Nurul Amin, Syidatul Akma Sulaiman, M.D. Arif
Fast Fourier Transform (FFT) was subsequently used to transform the vibration data into the frequency domain.
It was observed that the magnetic fields contributed to reduction of chatter amplitudes.
The vibration data was transformed into the frequency domain using the Fast Fourier Transform (FFT) analyzer included in the software.
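As a rough illustration of what the FFT step extracts, the sketch below finds the dominant frequency of a synthetic signal with a direct DFT (an FFT computes the same spectrum, only faster; the 50 Hz test tone is illustrative, not the measured chatter data):

```python
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the largest spectral peak of a real
    signal, via a direct DFT over bins between DC and Nyquist."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# Synthetic "vibration" signal: a 50 Hz tone sampled at 1 kHz for 0.2 s.
fs = 1000
signal = [math.sin(2 * math.pi * 50 * t / fs) for t in range(200)]
peak_hz = dominant_frequency(signal, fs)
```

Comparing such peak amplitudes with and without the magnets applied is what quantifies the chatter reduction.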
As shown in Fig. 3 (b), the highest percentage reduction was observed in run number 4.
This reduction was due to the damping effect of the magnets on the vibratory motion of the tool.
Online since: August 2013
Authors: Sheng Zhong, Qing Ju Guo, Wen Tian Ji
Clustering can be applied not only to data segmentation but also to outlier detection.
The establishment of the model and realization of the algorithm. 1) The establishment of the semantic web data model. The data objects studied in this paper come from the MovieLens database, a noncommercial practical website for research.
Containing 1,000,000 ratings given by 6,040 users on 3,900 films, the MovieLens data set is widely used as experimental material.
After dimensionality reduction of vector space we will get a new matrix.
In the experiments, data from six types of films, each with 100-200 user ratings, is taken as the usage data set of the improved ontology-based semantic web K-means algorithm.
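The improved algorithm itself is not reproduced here, but plain K-means on dimensionality-reduced vectors can be sketched as follows (the deterministic farthest-point initialization and 2-D toy points are assumptions for illustration, not the MovieLens vectors):

```python
import math

def kmeans(points, k, iters=20):
    """Plain K-means (Lloyd's algorithm) with deterministic farthest-point
    initialization: assign each point to its nearest centroid, then move
    each centroid to its cluster's mean, and repeat."""
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points,
                             key=lambda p: min(math.dist(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: math.dist(p, centroids[i]))].append(p)
        centroids = [tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two well-separated 2-D blobs standing in for reduced rating vectors.
pts = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(pts, 2)
```

After dimensionality reduction of the rating matrix, each film (or user) becomes one such low-dimensional point, and the clusters group semantically similar items.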
Online since: February 2020
Authors: Gow Yi Tzou, Tim Pasang, Wayne Stalls, Un Chin Chai
The pre-processor of the DEFORM software is designed to assist users in preparing and verifying input data during the forming analysis.
The compression forces at each reduction step were extracted from the DEFORM software.
The flow stress was determined by fitting the obtained data using the Grapher 3 software.
… 118.083 (a-d)/a*100% 23.31% 19.95% 31.61%
Fig. 13 Comparison of compression force (kN) for angular velocities (w = 0.5, 2.5, 6.5 rad/s) with a friction factor of m = 0.9.
Table 8. Data shown in Fig. 13:
Force (kN)         R%(20)   R%(40)   R%(60)
(a) m=0.9          55.833   93.831   243.84
(b) m=0.9, w=0.5   42.902   68.772   182.988
(a-b)/a*100%       23.16%   26.71%   24.96%
(c) m=0.9, w=2.5   27.75    45.736   93.578
(a-c)/a*100%       50.30%   51.26%   61.62%
(d) m=0.9, w=6.5   25.325   41.317   79.536
(a-d)/a*100%       54.64%   55.97%   67.38%
Fig. 11 (data in Table 6), Fig. 12 (data in Table 7), and Fig. 13 (data in Table 8) show the differences in compression force required to deform the ring specimen at each reduction.
At 60% reduction with an angular velocity of 0.5 rad/s, friction factors (m) of 0.1, 0.5, and 0.9 showed reductions in the compressive force of 3.04%, 11.15%, and 24.96%, respectively.
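The percentage reductions in Table 8 follow (a − x)/a·100%; a minimal check against one entry of the table:

```python
def pct_reduction(baseline, value):
    """Percentage reduction of a force relative to the baseline case:
    (a - x) / a * 100."""
    return (baseline - value) / baseline * 100.0

# Row (b) of Table 8 against baseline (a) at the 60% reduction column:
r = pct_reduction(243.84, 182.988)  # table reports 24.96%
```

Reproducing the tabulated percentages from the raw forces is a quick consistency check on the extracted DEFORM data.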