Search results

Online since: June 2012
Authors: István Mészáros
Our novel data evaluation technique was applied to evaluate the magnetic measurement data.
The method is based on the multiphase hyperbolic model of magnetization and is called the model-based data evaluation (MBDE) technique.
The magnetic tester was controlled by a computer in which a 16-bit input/output data acquisition card performed the measurements.
The model-based data evaluation (MBDE) technique was found to be a promising tool for evaluating magnetic measurement data.
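The excerpt does not reproduce the model's functional form; a minimal fitting sketch, assuming each magnetic phase contributes a saturating hyperbolic-tangent term to the measured magnetization curve (the two-phase form, parameter names, and data are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_phase_hyperbolic(H, M1, H1, M2, H2):
    # Assumed two-phase hyperbolic magnetization model:
    # each phase i contributes a saturating term M_i * tanh(H / H_i).
    return M1 * np.tanh(H / H1) + M2 * np.tanh(H / H2)

# Synthetic curve standing in for the acquired M(H) measurement data.
H = np.linspace(-5000, 5000, 401)                     # field, A/m
M_meas = two_phase_hyperbolic(H, 0.9, 600.0, 0.3, 2500.0)
M_meas += np.random.normal(0, 0.005, H.size)          # measurement noise

# Fit the model to the data; the fitted amplitudes M1, M2 indicate
# the contribution of each magnetic phase.
popt, _ = curve_fit(two_phase_hyperbolic, H, M_meas,
                    p0=[1.0, 500.0, 0.5, 2000.0])
print("fitted parameters:", popt)
```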
Online since: January 2012
Authors: Masoud Navazandeh Sajoudi, M.F.M. Kamal, Arham Abdullah, M.A Kazerooni Sadi, Fatemeh Torshizi, R Taherkhani
When reduction and reuse become difficult, recycling is desired.
Data analysis: reliability test
The purpose of the reliability test is to verify that the instrument is reliable.
Figure 1: Type of projects. According to Figure 1, the majority of respondents came from housing projects, contributing 47 percent of the data (19 respondents).
Commercial projects came second, contributing 23 percent of the data (9 respondents).
Analysis of the relationship between the 4R techniques and waste produced on sites
In this analysis, the correlation method is used to analyze the data.
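The excerpt names a reliability test and a correlation analysis without giving the statistics used; a minimal sketch, assuming Cronbach's alpha for instrument reliability and a Spearman correlation between a 4R adoption score and waste produced (all variables and data are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    # items: respondents x questionnaire-items matrix
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
answers = rng.integers(1, 6, size=(40, 10))      # 40 respondents, 10 Likert items
print("Cronbach's alpha:", cronbach_alpha(answers))

r4_score = answers[:, :5].mean(axis=1)           # stand-in 4R adoption score
waste = 10 - r4_score + rng.normal(0, 1, 40)     # stand-in waste volume
rho, p = spearmanr(r4_score, waste)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```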
Online since: December 2012
Authors: Tao Shen Li, Dan Luo, Shu Zhuang Huang
They may use Hive, Pig, ZooKeeper, Cascading, and other modules to solve hard problems in terabyte/petabyte-scale data storage, scalable computing, data analysis, and so on.
Experiments showed that as the data volume increases, MR_CD and MR_DD run out of memory.
To address this deficiency, references [6] and [7] made a similar improvement: a Combiner is added before the Map output to perform local aggregation, much like a local Reduce function, reducing data transfer over the network and read I/O on disk.
Map scans each record of its data partition for k-candidate itemsets, emits each candidate itemset with a count of 1, and the Combiner then aggregates the intermediate results locally.
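A minimal sketch of this map/combine/reduce pattern for candidate itemset counting, in plain Python standing in for Hadoop (the itemset size K and the toy partitions are illustrative):

```python
from collections import Counter
from itertools import combinations

K = 2  # size of candidate itemsets

def map_phase(partition):
    # emit (candidate k-itemset, 1) for every record in this partition
    for record in partition:
        for itemset in combinations(sorted(set(record)), K):
            yield itemset, 1

def combine(pairs):
    # local aggregation before anything leaves the mapper,
    # playing the role of the Combiner / local Reduce
    local = Counter()
    for itemset, n in pairs:
        local[itemset] += n
    return local.items()

def reduce_phase(all_pairs):
    total = Counter()
    for itemset, n in all_pairs:
        total[itemset] += n
    return total

partitions = [[("a", "b", "c"), ("a", "b")], [("b", "c"), ("a", "c")]]
combined = [p for part in partitions for p in combine(map_phase(part))]
print(reduce_phase(combined))
```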
We then use existing terabyte/petabyte-scale data sets to run simulated tests of this algorithm.
Online since: June 2012
Authors: Fei Hu Hu, Jie Jiang, Ling Ma, Lu Lu Liu
Then the concepts of synchronization edge, process, synchronization process, base process, process tree, data type, and trace-set of types are given.
Process models and data models are two essential aspects of a workflow management system.
Definition (data type). The set of data types Dt is defined by the following syntax, where A is a set of atomic types that cannot be decomposed in the type system.
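The grammar itself does not survive in this excerpt; a plausible minimal sketch, assuming a data type is either atomic or built from other types (the Product and ListOf constructors are illustrative assumptions, not the paper's definition):

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Atomic:
    """An atomic type from the set A; cannot be decomposed."""
    name: str

@dataclass(frozen=True)
class Product:
    """An assumed composite type built from component types."""
    components: Tuple["Dt", ...]

@dataclass(frozen=True)
class ListOf:
    """An assumed collection type over an element type."""
    element: "Dt"

Dt = Union[Atomic, Product, ListOf]

# Example: an order record carrying an id and a list of item names.
order = Product((Atomic("int"), ListOf(Atomic("string"))))
print(order)
```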
Koehler, “The refined process structure tree,” Data & Knowledge Engineering, vol. 68, 2009, pp. 793-818.
Online since: November 2012
Authors: Qiao Yi Wang, Yao Zhu, Yong Zhao
A few formulae commonly used to predict the coefficient of friction from forward slip measurements of a rolling process have been evaluated against measured forward slip and coefficient of friction data.
At the same time, the contact arc increases because of the large reduction and roll flattening.
It is this variation that poses severe difficulties for researchers to secure reproducible forward slip data and that perhaps well explains the scatter found in many forward slip measurements.
Fig. 2 Neutral point to contact arc; Fig. 3(a) Reduction vs. forward slip vs. …
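The evaluated formulae are not reproduced in the excerpt; a minimal sketch of one widely quoted small-angle chain from bite angle to neutral angle to friction coefficient, where every relation and symbol is an assumption rather than the paper's own:

```python
import math

def friction_from_forward_slip(Sf, R, h1, h2):
    """Estimate the friction coefficient mu from a forward slip measurement.

    Assumed small-angle relations (illustrative, not the paper's formulae):
      bite angle     alpha ~ sqrt((h1 - h2) / R)
      neutral angle  phi_n ~ sqrt(Sf * h2 / R)   from  Sf ~ (R / h2) * phi_n**2
      friction       phi_n = alpha/2 - alpha**2 / (8 * mu)  (Ekelund/Siebel form)
    """
    alpha = math.sqrt((h1 - h2) / R)
    phi_n = math.sqrt(Sf * h2 / R)
    return alpha ** 2 / (8 * (alpha / 2 - phi_n))

# Illustrative pass: 10 mm -> 8 mm on a 150 mm radius roll, 3% forward slip.
print(friction_from_forward_slip(Sf=0.03, R=150.0, h1=10.0, h2=8.0))
```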
Online since: April 2012
Authors: Hong Jing Zhang, Shuang Bo Dong, Zhe Kui Zheng
The experimental results indicated that in-situ carbon dioxide generation technology can produce enough carbon dioxide to achieve good oil expansion, viscosity reduction, and enhanced oil displacement.
The gas mixture generated by the exothermic chemical reaction, including the CO2, preferentially enters the high-permeability layers of the formation, where it expands the crude oil and reduces its viscosity.
Table 1 Comparison of model oil and kerosene expansion (experimental data)
Model oil (crude oil : kerosene) | Pressure increase / MPa | Volume increase / mL | Expansion / %
0:1 | 0.48 | 4 | 10
1:1 | 0.54 | 5 | 12.5
2:1 | 0.66 | 8 | 20
1:0 | 0.74 | 10 | 25
Note: crude oil from heavy-oil well B1 of NB35-2, viscosity 2010 mPa·s at 60 °C; a 20% (mass concentration) solution of salt A was added at an initial pressure of 10 MPa.
Table 2 Experimental results of viscosity reduction under reservoir conditions
Temperature / °C | Initial viscosity / mPa·s | Viscosity after reaction / mPa·s | Viscosity reduction rate Δη / %
60 | 2010 | 842.708 | 58.1
60 | 2010 | 1012.356 | 49.7
60 | 2010 | 968.48 | 51.8
CO2 and crude oil bubble. Early researchers agreed.
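The reduction rates in Table 2 follow from the usual definition Δη = (η0 − η)/η0 × 100%; a quick check (the function name is ours):

```python
def viscosity_reduction_rate(eta0, eta):
    # Delta_eta = (eta0 - eta) / eta0 * 100, in percent
    return (eta0 - eta) / eta0 * 100.0

eta0 = 2010.0                                # initial viscosity, mPa*s at 60 C
for eta in (842.708, 1012.356, 968.48):      # viscosities after reaction
    print(f"{viscosity_reduction_rate(eta0, eta):.1f} %")
# -> 58.1 %, 49.6 %, 51.8 %  (Table 2 lists 49.7 % for the second row,
#    consistent within rounding)
```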
Research on viscosity reduction technology for ultra-heavy oil emulsions [J].
Online since: December 2012
Authors: Si Yuan Cheng, Tian Li, Xue Peng Wang, Xue Rong Yang
In this paper, we propose a new method of data repairing based on Anatomical CAD.
Generally, Reverse Engineering refers to the process of reconstructing CAD models from existing parts, which consists of data acquisition, data pre-processing, incomplete data repairing and CAD modeling [1].
During data capture, the measured data of a product can be incomplete because some surfaces are inaccessible or invisible to the measuring tools, so the data must be post-processed and the model repaired.
Fig. 3 The scanned point cloud of the model; Fig. 4 The STL data. Generally, a portable CMM acquires far more points than necessary, which significantly slows data processing.
After the point cloud operations Noise Reduction, Sub Sample, and Mesh Data, an STL file is obtained, as shown in Fig. 4.
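The excerpt does not say which subsampling algorithm the software applies; a voxel-grid downsampling sketch, one common way to thin an oversized point cloud (the function name and voxel size are illustrative):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Thin a point cloud by keeping one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # group points that fall into the same voxel
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    centroids = np.zeros((counts.size, points.shape[1]))
    for dim in range(points.shape[1]):
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

cloud = np.random.rand(100_000, 3) * 100.0    # stand-in scanned cloud, mm
thin = voxel_downsample(cloud, voxel_size=2.0)
print(cloud.shape, "->", thin.shape)
```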
Online since: September 2014
Authors: Chong Zhao Han, Tao Yan
Rough set theory, proposed by Z. Pawlak, is an extension of set theory for the study of intelligent systems characterized by uncertain, imprecise, incomplete, and inconsistent data.
Preliminaries. Information tables were first defined by Pawlak for rough data representation [1].
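A minimal sketch of the core rough-set construction over an information table: indiscernibility classes under a set of attributes, then the lower and upper approximations of a target set (the toy table is illustrative):

```python
from collections import defaultdict

def indiscernibility_classes(table, attrs):
    """Partition objects that agree on every attribute in attrs."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    return list(classes.values())

def approximations(classes, target):
    lower, upper = set(), set()
    for c in classes:
        if c <= target:      # class entirely inside the target set
            lower |= c
        if c & target:       # class overlapping the target set
            upper |= c
    return lower, upper

table = {                     # toy information table: object -> attribute values
    1: {"color": "red",  "size": "big"},
    2: {"color": "red",  "size": "big"},
    3: {"color": "blue", "size": "big"},
    4: {"color": "blue", "size": "small"},
}
classes = indiscernibility_classes(table, ["color", "size"])
print(approximations(classes, target={1, 3}))
# lower = {3}: the class {1, 2} is not inside the target since it contains 2;
# upper = {1, 2, 3}
```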
Pawlak, Rough Sets: Theoretical Aspects of Reasoning About Data, Kluwer Academic Publishers, Boston, 1991.
Luo, “Data analysis based on discernibility and indiscernibility,” Information Sciences, vol. 177, 2007, pp. 4959-4976.
Xie, “Information-preserving hybrid data reduction based on fuzzy-rough techniques,” Pattern Recognition Letters, vol. 27, 2006, pp. 414-423.
Online since: February 2013
Authors: Yan Tong, Hua Lin
School of Surveying and Land Information Engineering, Henan Polytechnic University, Jiaozuo 454003, China; Jiaozuo Teachers College, Jiaozuo 454003, China
Email: xiaoxiaotuan@163.com
Keywords: emergency shelter; urban disaster reduction; green space system; Jiaozuo
Abstract: The urban green space system has a strong disaster prevention and reduction function.
In addition, the three types of shelter are functionally unified: urban green space designated as an emergency shelter can, where possible, also serve as a temporary or long-term shelter.
2.3 Landscape pattern index and the disaster reduction function of urban greenbelt
A landscape pattern is a set of landscape elements arranged in a range of different sizes and shapes [6, 7].
The performance of the landscape pattern determines the various ecological processes, as well as how effective the green space is for disaster reduction and shelter (Table 1).
Table 1 Analysis indicators of green patterns
Structure index of green space | Effect on disaster shelter | Effect on ecological function
Diversity index H | + | +
Evenness index E | + | +
Fragmentation index C | - | -
Dominance index D | - | -
Fractal dimension index FD | - | -
Note: Pi is the proportion of green landscape type i in the total; k is the number of green landscape types; Ni is the number of patches of type i; N is the total number of green patches; L is the average perimeter of green patches; A is the average area of green patches.
2.4 Data collection and analysis
The statistical program SPSS for Windows version 15.0, GIS (Geographic Information System) software, and the ERDAS 8.4 program were used to perform the data analysis.
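The index formulas themselves do not survive in this excerpt; a sketch of the standard Shannon forms that match the Note's symbols (an assumption for H and E; the other indices are omitted because their exact definitions vary):

```python
import math

def diversity_index(p):
    """Shannon diversity H = -sum(Pi * ln Pi) over green landscape types."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def evenness_index(p):
    """Evenness E = H / ln k, with k the number of landscape types."""
    return diversity_index(p) / math.log(len(p))

# Illustrative proportions Pi of k = 4 green landscape types.
p = [0.4, 0.3, 0.2, 0.1]
print(f"H = {diversity_index(p):.3f}, E = {evenness_index(p):.3f}")
```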
All data come from the statistical yearbook of Jiaozuo (1999-2008), SPOT images, and field surveys carried out from 2004 to 2008.
Online since: February 2013
Authors: Zong Hai Sun, Osman Osman
Dimensionality reduction is the mapping of high-dimensional data into some representation of lower dimensionality.
We apply the GPLVM method to reduce the dimensionality of a high-dimensional data set, evaluate the quality of the resulting low-dimensional representation by comparing it with the results of intrinsic dimensionality estimation techniques, and finally compare the performance of GPLVM and PCA on the same data set.
The data set consists of 53 data points with a dimensionality of 87 features.
With the data set reduced by the dimensionality reduction techniques illustrated above, the next step is to use this outcome for effective and fast clustering.
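A minimal sketch of the PCA-then-cluster half of this pipeline on data of the stated shape (53 points, 87 features); the random data, target dimensionality, and cluster count are illustrative, and a GPLVM run would need a dedicated library:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(53, 87))            # stand-in for the 53 x 87 data set

# Reduce to a low-dimensional representation (2 components assumed).
Z = PCA(n_components=2).fit_transform(X)

# Cluster the reduced data; 3 clusters is an illustrative choice.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print(Z.shape, np.bincount(labels))
```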
Fig. 3 shows the resulting data clusters.