Search results
Online since: October 2011
Authors: Ya Fei Li, Gao Huan Liu
On the basis of data pretreatment, research was carried out on two aspects.
Data sources and processing methods.
Data from two sensors were used: Landsat ETM and SPOT HRG and PAN.
Remote sensing data fusion.
Such a comprehensive image analysis method was conducive to vegetation protection, especially when it was difficult to obtain sequential time-series images from the same sensor and a large number of field samples.
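The excerpt does not state which fusion algorithm was applied to the Landsat ETM and SPOT HRG/PAN imagery; as a minimal illustration of multispectral/panchromatic fusion of this kind, a sketch using a simple Brovey transform on synthetic arrays (stand-ins for the real imagery) might look like:

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Fuse a low-resolution multispectral stack (bands, H, W) with a
    co-registered panchromatic band (H, W) using the Brovey transform."""
    ms = ms.astype(float)
    pan = pan.astype(float)
    total = ms.sum(axis=0) + 1e-9      # avoid division by zero
    return ms * (pan / total)          # each band rescaled by pan / band sum

# Synthetic stand-ins for resampled multispectral bands and a PAN band
rng = np.random.default_rng(0)
ms = rng.uniform(0, 255, size=(3, 100, 100))
pan = rng.uniform(0, 255, size=(100, 100))
fused = brovey_fusion(ms, pan)
print(fused.shape)  # (3, 100, 100): sharpened bands at PAN resolution
```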
Online since: July 2013
Authors: Bo Huang, Hong Bing Luo, Jia Fa Tang, Xiao Ling Liu, Wei Jie Deng, Qiang Xu, Ke Zhang
In this study, sedimentation was assessed and forecast with GIS technology using the measured sediment data.
They can be useful in simulating hydrology or soil erosion, but are site specific and require long-term data.
Based on the DEM data, the different slopes are calculated.
Sediment yield. Calculated sediment yield results: the sediment yields were derived mainly from the MUSLE equation based on the DEM data, precipitation data, vegetation data and management data.
In the future, all hydrological information ought to be measured, collected and estimated to revise the relevant data from this report during EPWS processes.
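The excerpt does not give the exact MUSLE formulation applied; assuming the standard form proposed by Williams, a hedged sketch of an event sediment-yield calculation (with purely illustrative input values) could be:

```python
def musle_sediment_yield(q_surf_mm, q_peak_m3s, area_ha, K, C, P, LS, CFRG=1.0):
    """Standard MUSLE form (Williams): event sediment yield in metric tons.

    q_surf_mm  -- surface runoff volume (mm of water over the area)
    q_peak_m3s -- peak runoff rate (m^3/s)
    area_ha    -- drainage area (ha)
    K, C, P, LS, CFRG -- USLE soil erodibility, cover, practice,
                         slope length/steepness and coarse-fragment factors
    """
    return 11.8 * (q_surf_mm * q_peak_m3s * area_ha) ** 0.56 * K * C * P * LS * CFRG

# Illustrative (hypothetical) values only
print(musle_sediment_yield(q_surf_mm=25.0, q_peak_m3s=3.2, area_ha=120.0,
                           K=0.30, C=0.12, P=0.8, LS=1.5))
```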
Online since: August 2013
Authors: Yi Fan Zeng, Kai Lei
Experimental results on the DUC2005 and DUC2006 benchmark data sets demonstrate the effectiveness of our proposal.
Each task has a gold standard data set and reference summaries.
Table I gives a short summary of the above data sets.
Tables II and III present the performance of these systems on DUC2005 and DUC2006 data sets, respectively.
Schölkopf, "Ranking on data manifolds," Advances in Neural Information Processing Systems, vol. 16, pp. 169–176, 2004.
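The cited manifold-ranking approach (Zhou et al., NIPS 2004) propagates prior relevance scores over a similarity graph; a minimal sketch of the closed-form solution f* = (I - alpha*S)^{-1} y, using synthetic feature vectors rather than the actual DUC sentence data, might look like:

```python
import numpy as np

def manifold_ranking(X, y, sigma=1.0, alpha=0.85):
    """Manifold ranking on the rows of X.

    X     -- (n, d) feature matrix (e.g. sentence vectors)
    y     -- (n,) prior relevance scores (e.g. query similarity)
    alpha -- propagation weight; returns the closed-form ranking scores.
    """
    # Gaussian affinity with zeroed diagonal
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    d = W.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = Dinv @ W @ Dinv
    # Closed-form solution f* = (I - alpha * S)^{-1} y
    n = X.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
y = np.array([1.0, 0, 0, 0, 0, 0])   # one "query-like" seed point
print(manifold_ranking(X, y))
```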
Online since: July 2019
Authors: Filippo Montevecchi, William Hackenhaar, Antonio Scippa, Gianni Campatelli
The first two indices were derived from the side-surface data, while the wall heights were derived from the top surfaces.
This section presents the results of the analyses, first discussing the thermocouple data and then the surfaces’ geometry.
Thermal data.
Thermocouple data are summarized in Fig. 5, which presents the signals of TC2 (Fig. 1), the thermocouple closest to the wall.
Surface data.
Online since: August 2011
Authors: Qiong Qiong Liu, Lin Zhao, Lu Hua You, Xin Tan
Five isotherm models were used to describe the isotherm data.
The removal of NH4+ ions from aqueous solution using the ammonium ion-exchange material was studied in the pH range from 3 to 10, and the data obtained are plotted in Fig. 3.
The other parameters are different isotherm constants, which can be determined by regression of the experimental isotherm data.
A lower S.E. value and higher R2 value are considered to represent goodness of conformity between measured and estimated NH4+ exchanged data.
A comparison of their reported ammonium adsorption capacity data is given in Table 2.
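The five isotherm models used are not named in the excerpt; as one illustrative example, a Langmuir isotherm could be fitted to (hypothetical) equilibrium data and scored with the S.E. and R2 criteria described above:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g), stand-ins for Fig. 3
Ce = np.array([5., 10., 20., 40., 80., 160.])
qe = np.array([3.1, 5.4, 8.6, 11.9, 14.2, 15.5])

popt, _ = curve_fit(langmuir, Ce, qe, p0=[16.0, 0.05])
pred = langmuir(Ce, *popt)

resid = qe - pred
se = np.sqrt(np.sum(resid ** 2) / (len(qe) - len(popt)))      # standard error
r2 = 1 - np.sum(resid ** 2) / np.sum((qe - qe.mean()) ** 2)   # coefficient of determination
print(f"qmax={popt[0]:.2f} mg/g, KL={popt[1]:.4f} L/mg, S.E.={se:.3f}, R2={r2:.4f}")
```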
Online since: October 2007
Authors: Mark A. Miodownik, Andrew Godfrey, Qing Liu, Yu Bin Zhang, Wei Liu
The procedure consists of first aligning the data to an ideal orientation and then selectively adjusting those orientations belonging to the texture component of interest.
In the following we show how such microstructures can be generated by the manipulation of an EBSD data set from a sample in the fully recrystallized state.
The procedures described could, however, be applied to a sample that has any other near-single-component texture.
2.1 Data realignment algorithm: Here we adjust the data so that the cube orientation component is centered exactly on the ideal cube orientation (i.e. has a mean orientation exactly equal to {100}<001>).
For example, for EBSD data taken from a 99.999% Ni sample cold-rolled to 98% reduction and then annealed at 300°C for 1 hour, after squeezing by α = 0.2 the cube texture is tightened so that the mean deviation from the average orientation, θ̄_av,i, is reduced from 10° to 2°.
A series of Potts model simulation results using this EBSD data set with two different squeeze factors (as well as α = 1, corresponding to the directly imported EBSD data) are shown in Fig. 1.
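The exact implementation of the squeeze step is not given in the excerpt; a minimal sketch, assuming the orientations are stored as axis-angle vectors relative to the ideal cube orientation and that the deviation of each near-cube orientation is simply scaled by α, might be:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def squeeze_orientations(rotvecs, alpha, theta_max_deg=15.0):
    """Scale the deviation of each orientation from the ideal cube
    orientation (taken here as the identity) by a squeeze factor alpha.

    rotvecs -- (n, 3) axis-angle vectors of the measured orientations
    alpha   -- squeeze factor (alpha = 1 leaves the data unchanged)
    Only orientations within theta_max_deg of cube are adjusted, mimicking
    'selectively adjusting those orientations belonging to the component of interest'.
    """
    rotvecs = np.asarray(rotvecs, float)
    theta = np.linalg.norm(rotvecs, axis=1)      # deviation angle (rad)
    near_cube = theta < np.radians(theta_max_deg)
    out = rotvecs.copy()
    out[near_cube] *= alpha                      # shrink deviation toward ideal
    return out

# Hypothetical orientations scattered around cube (small random deviations)
rv = R.random(5, random_state=1).as_rotvec() * 0.1
print(np.degrees(np.linalg.norm(rv, axis=1)))
print(np.degrees(np.linalg.norm(squeeze_orientations(rv, alpha=0.2), axis=1)))
```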
Online since: August 2013
Authors: Chun Shan Yang, Gui Jun Liu
The traditional method is to use a DTO (data transfer object) to transform data from the database into objects in the problem domain, which requires additional coding at application development time.
In addition, business domain model objects are used to pass data between tiers and to perform data transformation, acting as carriers of data communication between the layers; this minimizes the overhead of passing data between systems, and because the business objects represent real-world objects, data transfer between parts of the system is unaffected when the system is expanded or remodeled [3].
A mature approach is to organize the presentation layer with the MVC (model-view-controller) pattern, separating the user interface code (view) from the application data and business logic (model) and using a controller to match the view with the model; this reduces the coupling between data description and application operations, ultimately yielding a responsive and attractive interface layer.
The design goal of the data persistence layer is to provide a high-level, unified, secure and concurrent data persistence mechanism for the entire project, separating complex business logic from data logic so that tightly coupled relationships in the system become loosely coupled ones; it handles the programming of data persistence and provides services to the business logic layer.
The data persistence layer provides the data access logic, so programmers do not have to hand-write code to access it and can focus on developing the business logic. It can also be reused across different projects, simplifying the development of data insert, delete, query and update features, while retaining the natural advantages of a multi-tier structure and inheriting the scalability and extensibility characteristic of J2EE.
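The original work targets J2EE, but the layering idea is language-agnostic; a minimal Python sketch (with hypothetical UserDTO and UserDAO names) of separating a data transfer object from the persistence-layer data access logic could be:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UserDTO:
    """Data transfer object: a plain carrier of data between layers."""
    user_id: int
    name: str

class UserDAO:
    """Persistence-layer object: hides the data access logic so the
    business layer never touches storage details directly."""
    def __init__(self):
        self._rows = {}               # stand-in for a database table

    def save(self, dto: UserDTO) -> None:
        self._rows[dto.user_id] = dto.name

    def find_all(self) -> List[UserDTO]:
        return [UserDTO(uid, name) for uid, name in self._rows.items()]

# Business logic depends only on the DAO interface, not on the storage
dao = UserDAO()
dao.save(UserDTO(1, "Alice"))
print(dao.find_all())
```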
Online since: May 2014
Authors: Li Yang Xie, Xue Hong He, Zhi Liang Hu, Xi Jie Zhai
Malik introduced the age reduction factor and put forward the concept of effective age [5].
Researchers selected a group of fault data, which is presented in Table 1.

Table 1. System fault data
Maintenance frequency i   Maintenance interval T (h)   Maintenance frequency i   Maintenance interval T (h)
1    1998    11    735
2    1808    12    665
3    1636    13    602
4    1480    14    544
5    1339    15    493
6    1212    16    446
7    1097    17    403
8    992     18    365
9    898     19    330
10   812     20    299

Preliminary statistical treatment was done with the above fault data, and then the exponential model proposed in this section was established.
The researchers selected a group of fault data, which is presented in Table 3. Preliminary statistical treatment was done with the above fault data, and then the above model was established.
The model parameters η, m and k can be obtained through statistical analysis combined with the system's historical and empirical data.
We obtain approximate function expressions for the system maintainability parameters of these devices through curve fitting of the data, and analyze the maintenance policy for real-world examples.
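The exponential model itself is not reproduced in the excerpt; as a hedged illustration of the curve-fitting step, the Table 1 intervals can be fitted with a simple geometric decay T_i = T_1 · k^(i-1) (an assumption, not necessarily the authors' model):

```python
import numpy as np

# Maintenance intervals T (h) from Table 1, indexed by maintenance frequency i
T = np.array([1998, 1808, 1636, 1480, 1339, 1212, 1097, 992, 898, 812,
              735, 665, 602, 544, 493, 446, 403, 365, 330, 299], float)
i = np.arange(1, 21)

# Linear regression on log T fits T_i = T_1 * k**(i-1)
slope, intercept = np.polyfit(i - 1, np.log(T), 1)
k = np.exp(slope)
T1 = np.exp(intercept)
print(f"T1 ~= {T1:.0f} h, k ~= {k:.3f}")   # roughly T1 = 1998 h, k = 0.905
```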
Online since: June 2013
Authors: Radu Negru, Liviu Marsavina, Niculai Pasca, Sebastian Muntean
Considering the failure event in the statistical analysis, the data reduction technique proposed in [14] is used for fatigue limit estimation.
As can be seen, scatter in the FCG data is observed, especially at low crack lengths.
Crack length vs. number of cycles curve (CT specimen no. 3). FCG data analysis.
Rilly, A practical method for statistical analysis of strain-life fatigue data, Int.
Mood, A method for obtaining and analyzing sensitivity data, J.
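The excerpt does not detail the FCG data analysis; a common approach, shown here as an assumption rather than the authors' method, is to fit the Paris law da/dN = C·(ΔK)^m by linear regression in log-log coordinates:

```python
import numpy as np

def fit_paris_law(delta_K, dadN):
    """Fit the Paris law da/dN = C * (delta_K)**m by linear regression
    in log-log coordinates; returns (C, m)."""
    m, logC = np.polyfit(np.log10(delta_K), np.log10(dadN), 1)
    return 10 ** logC, m

# Hypothetical FCG data: stress intensity range (MPa*sqrt(m)) vs growth rate (m/cycle)
delta_K = np.array([8., 10., 12., 15., 20., 25.])
dadN = 1e-11 * delta_K ** 3.1 * (1 + 0.05 * np.random.default_rng(0).normal(size=6))

C, m = fit_paris_law(delta_K, dadN)
print(f"C = {C:.3e}, m = {m:.2f}")
```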
Online since: March 2024
Authors: José Mota, João Pires, Tiago Pires, M. Filomena Teodoro
The objective of this work is to filter the data obtained from measurements of the anemometer at NRP Sagres, with the purpose of reducing the previously mentioned errors.
In this project we used a database available in [23] relating to an anemometer from NRP Sagres.
These data were provided to the author of [23].
However, the data classification was considered perfectly suited to the scope of this work, and it was only necessary to import the “Speed” variable.
Notice that the data are expressed in the usual unit of wind speed, the knot (1 nautical mile per hour = 0.51 m/s = 1.15 mph).
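The filtering method applied to the anemometer record is not specified in the excerpt; a minimal sketch of one plausible approach, a rolling median filter on the imported “Speed” variable followed by conversion from knots to m/s, might be:

```python
import numpy as np

KNOT_TO_MS = 0.51444  # 1 knot = 1 nautical mile per hour ~= 0.51 m/s

def median_filter(speed_kn, window=5):
    """Simple rolling median filter to suppress spikes in anemometer readings."""
    speed_kn = np.asarray(speed_kn, float)
    half = window // 2
    padded = np.pad(speed_kn, half, mode="edge")
    return np.array([np.median(padded[j:j + window]) for j in range(len(speed_kn))])

# Hypothetical raw "Speed" samples (knots) with an outlier spike
raw = np.array([12.1, 12.4, 12.0, 45.0, 12.3, 12.5, 12.2])
filtered = median_filter(raw)
print(filtered)                 # spike replaced by a local median
print(filtered * KNOT_TO_MS)    # converted to m/s
```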