Search results
Online since: March 2009
Authors: Kevin Matocha, Greg Dunne, Vinayak Tilak
A reduction in interface trap density has been observed when the SiO2 is grown using an N2O precursor followed by an anneal in NO [3], leading to an increase in the inversion-layer mobility [4].
The dashed line is a fit of the data and the solid line is the charge expected from theory.
In figure 3, Dit is plotted versus surface potential to illustrate the incompatibility of the interface state model with observed data.
The reduction in the mobility at higher doping concentration cannot be explained by reductions in the bulk mobility with increased doping.
Online since: September 2013
Authors: Tao Tao Fan
Keywords: graphene, copper, electrical conductivity, hardness
Abstract: This paper studies the change in properties when a certain percentage of graphene or high-temperature-reduced graphene is added to copper powder, which is then sintered by spark plasma sintering (SPS) under controlled temperature and pressure.
Sample No. 1 contains 20 mg of graphene, while sample No. 2 contains 20 mg of high-temperature-reduced graphene.
The measurement results are shown in Figure 4 and the data in Table 1. From these we calculate that the electrical conductivity of the graphene-doped sample increased by 4.39% compared with the sample without graphene.
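As a quick arithmetic check, the reported figure is a simple relative-change calculation; a minimal sketch below, with placeholder conductivity values (the paper's measured values are in its Table 1 and are not reproduced here).

```python
def percent_change(sigma_ref, sigma_new):
    """Relative change in electrical conductivity, in percent."""
    return (sigma_new - sigma_ref) / sigma_ref * 100.0

# Placeholder values (units arbitrary); only the ratio matters.
sigma_plain = 55.0       # hypothetical sample without graphene
sigma_doped = 57.415     # hypothetical graphene-doped sample
print(f"{percent_change(sigma_plain, sigma_doped):.2f}%")  # prints 4.39%
```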
Online since: June 2011
Authors: Yong Xiang Zhao, Guo Xiang Song, J.Y. Liu
Test details and primary test data are given in Table 1.
The data obtained in Table 1 are therefore scattered.
The actual distribution shape of a set of data can be judged using a so-called statistical parameter, the coefficient of skewness, g, which is defined for a set of data $x_i$ as

$$g = \frac{1}{n_s}\sum_{i=1}^{n_s}\left(\frac{x_i - \bar{x}}{s}\right)^3 \quad (1)$$

where $\bar{x}$ and $s$ are respectively the average value and standard deviation of the set of x data with sampling size $n_s$.
For the present data, the g values of five properties are all greater than zero.
It indicates that EMV1 and E are not reasonable models for the present data.
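A minimal sketch of this skewness test, assuming the plain moment definition reconstructed in Eq. (1); the sample values are illustrative, not the paper's fatigue data.

```python
import math

def skewness(x):
    """Coefficient of skewness g of a sample x, per Eq. (1)."""
    ns = len(x)
    mean = sum(x) / ns
    std = math.sqrt(sum((xi - mean) ** 2 for xi in x) / ns)
    return sum(((xi - mean) / std) ** 3 for xi in x) / ns

# Illustrative right-skewed sample: g > 0, so a symmetric model
# (e.g. a plain normal fit) would be questionable for such data.
sample = [312.0, 318.2, 325.5, 331.7, 345.0, 360.9, 402.3]
print(f"g = {skewness(sample):.3f}")
```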
Online since: March 2015
Authors: Ting Wang
Since the order-of-magnitude estimate of the amount of biomass required for the reduction in C emissions was within the range of existing coal-biomass feed systems, it is reasonable to assume that existing technology is adequate for this option.
The smallest time step in the simulation is the base time step, which is selected according to input data availability and/or module requirements (generally one hour). Modules that run on longer time steps (daily, yearly) are called once the accumulated base time steps (i.e. hours) reach the module-specific time step.
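A minimal sketch of that scheduling rule, assuming an hourly base step; the module lists and names are hypothetical, not taken from the paper.

```python
HOURS_PER_DAY = 24
HOURS_PER_YEAR = 8760

def run_simulation(n_hours, hourly_modules, daily_modules, yearly_modules):
    """Advance on the base time step and call slower modules once the
    accumulated base steps reach the module-specific time step."""
    for hour in range(1, n_hours + 1):
        for module in hourly_modules:          # base-step modules, every hour
            module(hour)
        if hour % HOURS_PER_DAY == 0:          # 24 base steps accumulated
            for module in daily_modules:
                module(hour)
        if hour % HOURS_PER_YEAR == 0:         # a full model year accumulated
            for module in yearly_modules:
                module(hour)
```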
D01 in turn can be roughly estimated from DBH by linear extrapolation, using the current height/diameter relation and the height difference between the two stem positions. This calculation is purely empirical and is used here only for transferring the initialized DBH to D01, and D01 back into DBH, in order to provide evaluation data that can be compared with repeated DBH measurements. It does not affect the model calculations.
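The described transfer might look as follows; a sketch under an assumed linear taper (diameter falling linearly to zero at the tree top) with a hypothetical stem position for D01, since the paper's actual height/diameter relation is not given here.

```python
BREAST_HEIGHT = 1.3   # m, stem position of the DBH measurement
D01_HEIGHT = 0.1      # m, assumed stem position of D01 (hypothetical)

def dbh_to_d01(dbh, tree_height):
    """Estimate D01 from DBH by linear extrapolation along the stem."""
    taper = dbh / (tree_height - BREAST_HEIGHT)   # diameter change per metre
    return dbh + taper * (BREAST_HEIGHT - D01_HEIGHT)

def d01_to_dbh(d01, tree_height):
    """Inverse transfer, so an initialized DBH survives the round trip."""
    taper = d01 / (tree_height - D01_HEIGHT)
    return d01 - taper * (BREAST_HEIGHT - D01_HEIGHT)
```

Under the assumed taper the two functions are exact inverses, which is all the round-trip transfer described above requires.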
Online since: November 2012
Authors: Hui Fen Dong, Qing Ji Gao, Feng Li, Bao Lei Dong
A new control method of Fuzzy-PID controller with RSDA (Rough Set Data Analysis) is proposed.
The RSDA (Rough Set Data Analysis) theory can deal with imprecise and incomplete problems.
It is used to analyze the dependencies among the properties of the information decision system and the necessity of a data value without a priori knowledge.
a) Collect data from 100 experiments, tune the PID parameters by the classical method, and build a decision table from ev, ∆ev and u.
b) Discretize the measured data in the decision tables, performing cluster analysis and interval division by rough set theory (a sketch of both steps follows below).
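A minimal sketch of steps a) and b), assuming equal-width interval division as a simple stand-in for the rough-set discretization; the attribute names ev, dev and u follow the text, everything else is illustrative.

```python
import random

def discretize(value, lo, hi, n_intervals=5):
    """Map a continuous measurement to an interval index
    (equal-width bins as a stand-in for interval division)."""
    idx = int((value - lo) / (hi - lo) * n_intervals)
    return min(max(idx, 0), n_intervals - 1)

# a) Collect 100 experiment records with condition attributes ev, dev
#    and decision attribute u (random placeholders here).
records = [{"ev": random.uniform(-1, 1),
            "dev": random.uniform(-1, 1),
            "u": random.uniform(0, 10)} for _ in range(100)]

# b) Discretize the measured data to build the decision table.
decision_table = [(discretize(r["ev"], -1, 1),
                   discretize(r["dev"], -1, 1),
                   discretize(r["u"], 0, 10)) for r in records]
print(decision_table[:5])
```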
Online since: July 2013
Authors: Jamie Goggins, David Byrne
Some sections contain additional gauges with the intention of extracting more detailed data.
In addition, an alternative load-capacity method will be illustrated utilising data from the instrumentation of the void-form flat slab.
Cost analysis
A cost analysis was carried out to illustrate the benefit of utilising instrumentation data when estimating structural capacity and subsequently retrofitting structures.
However, there was a considerable reduction in the amount of FRP (i.e. number of layers) required when utilising the data from the instrumented slab.
Study on Prediction Model of Grain Yield Based on Principal Component Analysis and BP Neural Network
Online since: January 2015
Authors: Xing Mei Xu, Li Ying Cao, Jing Zhou
Taking the grain yield data of Jilin Province from 1980 to 2012 as an example, this paper analyzes the main factors that influence grain yield using the principal component analysis method.
Principal component analysis mainly studies how to explain the internal structure of multiple variables with a few principal components, using the idea of dimension reduction [11].
The experimental and simulation results show that using the combined model for prediction greatly improves the prediction accuracy of the model.
1 Materials and methods
1.1 Test data source
This experiment uses the 1980-2012 grain yield data of Jilin Province, taken from the Statistical Yearbook of Jilin Province.
Combining expert opinions, many influence factors related to grain yield (y) were selected for analysis, including the amount of chemical fertilizer (x1), the number of large cattle at the end of the year (x2), the sown area of grain (x3), the total power of agricultural machinery (x4), the effective irrigation area (x5), rural consumption (x6), rural households' per capita annual net income (x7), rural per capita residential area (x8), and the consumption level of rural residents (x9).
1.2 Principal component analysis
Principal component analysis has two purposes: the first is compression of the data; the second is interpretation of the data.
The data from 1980-2009 are selected as the network training samples, and the data from 2010-2012 as the network test samples.
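A minimal sketch of this pipeline, assuming scikit-learn's PCA and MLPRegressor as a stand-in for the BP network; the shapes mirror the nine factors x1-x9 and yield y, but the data below are random placeholders, not the Jilin Province series.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Placeholder data: 33 years (1980-2012), nine factors x1..x9, yield y.
rng = np.random.default_rng(0)
X = rng.random((33, 9))
y = rng.random(33)

# Standardize, then compress the nine factors into a few components.
X_std = StandardScaler().fit_transform(X)
X_pc = PCA(n_components=0.95).fit_transform(X_std)  # keep 95% of variance

# First 30 years (1980-2009) train the network, last 3 (2010-2012) test it.
X_train, y_train = X_pc[:30], y[:30]
X_test, y_test = X_pc[30:], y[30:]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("test predictions:", model.predict(X_test))
```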
Online since: February 2015
Authors: K.N. Basim, A.O. Hussein, Shamsuddin Shahid, Shreeshivadasan Chelliapan
Rainfall and hydraulic data measured at the different catchments were used for model calibration and verification.
The first set of parameters was defined based on existing data for sub-catchment and drainage structure parameters and slope.
Sewer flow and sewer quality data at two sub-catchments were collected over dry and wet weather periods at each sub-catchment.
Locations of the data collection points in the two sub-catchments are shown in Figure 1.
Online since: November 2010
Authors: Xin Li, Qun Yan, Da Tai Yu
The simulated and experimental data of a PEMFC example illustrate the effectiveness and availability of the proposed algorithm.
$$f(\theta) = \sum_{i=1}^{N}\left[V_{sm}(i) - V_{FC}(i)\right]^2 \quad (15)$$

where $\theta$ is the set of parameters to be optimized, $f$ is the objective function, $V_{sm}$ is the experimental fuel-cell voltage, $V_{FC}$ is the model output voltage, and $N$ is the number of experimental data points.
Sets of experimental data are used to determine the optimized parameters by PSO.
The other sets of data are used for model validation.
From Figure 1, it is clear that the curve with parameters optimized by the improved PSO algorithm fits the data better, and the value of the objective function is reduced.
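A minimal sketch of plain global-best PSO minimizing the sum-of-squares objective of Eq. (15); the voltage model and data are toy placeholders, and the paper's specific PSO improvement is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: current density i and measured cell voltage Vsm.
i_data = np.linspace(0.1, 1.0, 20)
v_sm = 1.0 - 0.4 * i_data + 0.02 * rng.standard_normal(20)

def v_fc(theta, i):
    """Toy two-parameter stand-in for the fuel-cell voltage model."""
    v0, r = theta
    return v0 - r * i

def objective(theta):
    """Eq. (15): summed squared error between experiment and model."""
    return np.sum((v_sm - v_fc(theta, i_data)) ** 2)

# Plain global-best PSO over theta = (v0, r).
n_particles, n_iter, dim = 20, 100, 2
pos = rng.uniform(0.0, 2.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("optimized parameters:", gbest, " objective:", objective(gbest))
```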
Online since: May 2016
Authors: Muhammad Azizi bin Azizan, Sk. Muiz bin Sk Abd Razak, Nurfadzillah binti Ishak
This is some of the data collected from the interviews, based on the experience of Aurum Synergy Group members.
In addition, other relevant data for this report, including articles and journals, were collected as primary data.
Reduced Supervision
A reduction in on-site labour means a reduction in supervision time and costs.
Research Methodology
Data collection was carried out in two ways: interviews and observations.
The chosen research sites are the main data sources, and the research begins with the selection of the research title.