Search results
Online since: October 2013
Authors: Shao Yun Song, Bao Hua Zhang
The Apriori algorithm has two main disadvantages. First, it generates a large set of candidate itemsets and many frequent itemsets, which occupy a large amount of memory. Second, it scans the database repeatedly, which consumes a lot of CPU time, so its speed drops sharply as the data volume grows.
When generating candidate (k+1)-itemsets from the frequent k-itemsets, the candidates can be pruned.
The ABTM algorithm can thus save storage space.
5.2 Experimental results and analysis
The algorithms were simulated in Matlab on the Mushroom data set in order to compare the time spent at different support thresholds; the experimental machine had an Intel Pentium 3.0 GHz CPU and 2 GB of memory. The run times of the Apriori algorithm, the matrix algorithm, and the ABTM algorithm, tested at different supports, are shown in Fig. 5 (run time of the different algorithms at different supports).
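The prune step described above can be sketched as follows. A candidate (k+1)-itemset is kept only if every k-subset of it is frequent; the itemsets below are toy data, not from the paper:

```python
from itertools import combinations

def prune_candidates(candidates, frequent_k):
    """Apriori prune step: keep a (k+1)-candidate only if every
    k-subset of it appears among the frequent k-itemsets."""
    frequent_k = set(frequent_k)
    kept = []
    for cand in candidates:
        k = len(cand) - 1
        if all(frozenset(sub) in frequent_k for sub in combinations(cand, k)):
            kept.append(cand)
    return kept

# Toy example: frequent 2-itemsets and candidate 3-itemsets (hypothetical data).
L2 = [frozenset(s) for s in ({"a", "b"}, {"a", "c"}, {"b", "c"}, {"b", "d"})]
C3 = [frozenset({"a", "b", "c"}), frozenset({"a", "b", "d"})]
print(prune_candidates(C3, L2))  # {"a","b","d"} is dropped: {"a","d"} is not frequent
```

This pruning discards candidates before the database is scanned, which is exactly why it reduces both the memory footprint and the CPU cost noted above.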
Online since: May 2012
Authors: Wen Liang Lai, Shao Wei Liao, Chung Yi Chung, Hwa Sheng Gau
The water table data are taken from 37 wells located in the unconfined aquifer of the Pingtung Plain.
Factor analysis is a statistical method that reveals variations in data and their principal causes by matrix manipulation of vast amounts of data [13].
If we have p wells and q records in each well, the water level data can be represented by the matrix equation X = LF (1), where X is the q-by-p water level data matrix, L is the q-by-p component loading matrix, and F is the p-by-p common factor matrix. The elements (factor loadings) of the loading matrix can be interpreted as correlation coefficients between the random variables and the factors.
Table 1 Total variance explained by the factor analysis of water table data from 37 monitoring wells.
Statistics and Data Analysis in Geology, 2nd ed., Wiley, New York (1986), pp. 527-557.
Online since: January 2021
Authors: Yasuyuki Miyazawa, Mana Sakai, Tatsuya Sasaki
However, there are increasing opportunities to braze stainless steel in an inert gas atmosphere at atmospheric pressure for cost reduction and mass production.
However, high-precision data could not be obtained because the cross section was damaged when the test specimen was cut.
The void ratio of each area was calculated, and a graph was created from these data. Our data suggested that bubbles were less likely to escape than in the fillet area, and that the void ratio increased.
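The void-ratio calculation per area can be illustrated with a minimal sketch; the binary mask below is a hypothetical segmentation of a cross-section image, not the authors' actual procedure:

```python
import numpy as np

# Hypothetical binary mask of one cross-section area: True marks void pixels.
mask = np.zeros((10, 10), dtype=bool)
mask[2:4, 2:7] = True                  # a 2x5 void region

# Void ratio = void pixels / total pixels in the area.
void_ratio = mask.sum() / mask.size
print(void_ratio)  # → 0.1
```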
Online since: June 2014
Authors: Ning Wang, Fei Sun, Xiao Hong Shan
For this reason, the development of information services industry can be clearly and intuitively learned from these data.
The development status of an industry can be affected by complex external factors for which definitive data are hard to obtain. The choice of model is generally guided by the stationarity, trend, and periodicity of the historical data. Through repeated data transformation and calculation of the ACF and PACF, we can obtain estimates of the model parameters. For the government, this can provide data references and theoretical support for formulating rational policy.
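The transform-then-inspect loop can be sketched as follows; the trending series is synthetic, and the sample ACF is computed directly rather than with a statistics library:

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation function of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, nlags + 1)])

# A trending series is non-stationary; first-differencing removes the trend,
# after which the ACF/PACF shapes guide the choice of model orders.
t = np.arange(100)
series = 0.5 * t + np.random.default_rng(0).normal(size=100)
diffed = np.diff(series)
print(acf(diffed, 5))
```

In practice the differencing and ACF/PACF inspection are repeated until the correlograms suggest clear model orders, which is the iterative estimation described above.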
Online since: February 2014
Authors: Zheng Zeng, De Shan Liu, Jin Jiang Yan
Fig.2 2D GIS geographic scene function
Spatial Data Organization.
The process data fall into two categories: the previous earthquake data and the latest seismic data. The .mxd file contains no map data; the spatial data are saved in a geospatial database (GeoDatabase) or in files in shapefile format [4].
This design realizes the separation of configuration and data [5].
[5] Chen Xin: Realization of Data Auto-Conversion from MapGIS to ArcGIS.
Online since: May 2014
Authors: Chun Li Wang
We use MATLAB to analyze the data and VB to display the data analysis process. After importing the data, we can use the Calculate button to analyze them.
Online since: April 2008
Authors: Fu Guo Li, Xiao Na Wang, Yu Hong Liu
By combining the obtained data and parameters, a mathematical model of dynamic recrystallization in FGH96 P/M superalloy is established by the regression method.
The measured data are substituted into Eq. (1) and regressed.
Fig. 7 Distribution of grain size (µm) for different height reductions (deformation temperature: 1373 K, deformation velocity: 0.0001 m/s, friction factor: 0.3); 1 - upper die, 2 - lower die, 3 - symmetric line.
4 Conclusions
(1) On the basis of the analysis of the dynamic recrystallization phenomenon, a recrystallization grain size model for FGH96 superalloy under isothermal deformation is established.
They would like to thank the teachers and engineers concerned at the Numerical Simulation and Data Integration Center of the School of Materials Science and Engineering at Northwestern Polytechnical University.
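The regression step can be illustrated under an assumed power-law grain size model d = A * Z^(-n), a common form for dynamic recrystallized grain size; the Z and d values below are made up for illustration, and the paper's Eq. (1) may differ:

```python
import numpy as np

# Hypothetical measurements: Zener-Hollomon parameter Z and grain size d (um).
Z = np.array([1e10, 1e11, 1e12, 1e13])
d = np.array([40.0, 25.0, 16.0, 10.0])

# Taking logs makes the power law linear: ln d = ln A - n * ln Z,
# so ordinary least squares recovers A and n.
slope, intercept = np.polyfit(np.log(Z), np.log(d), 1)
A, n = np.exp(intercept), -slope
print(f"d ~ {A:.3g} * Z^-{n:.3g}")
```

The same log-linearization idea applies to any multiplicative recrystallization model, which is why regression on measured data suffices to fix the model constants.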
Online since: May 2012
Authors: Samaneh Zolfagharian, Rosli Mohammad Zin, Rozana Binti Zakaria, Kian Seng Foo, Jay Yang
A validation survey was carried out, and the collected data were analysed using SPSS in order to confirm the significance of retrofitting Universiti Teknologi Malaysia (UTM) buildings toward the green building initiative.
Besides, available energy-efficient and renewable energy technologies can also contribute to a reduction in energy use in high-energy-use office buildings.
The second phase studied the design concept and procedure used in GBI-NRED and then identified all the data and inputs needed for questionnaire preparation.
The data collected from the questionnaire were analysed using the frequency method, and the findings were discussed together with conclusions, limitations, and recommendations on further retrofitting of the UTM campus.
Online since: December 2012
Authors: Xin He Chen, Wei Song
This system has poor portability and cannot adapt to large-scale data processing.
The MVC (model-view-controller) pattern separates the data access and data presentation layers, provides better decoupling, and has clear advantages in building multi-tier Web applications.
Fig. 2 Software Architecture
First, the data tables and their relationships must be determined in the entity layer; then the Hibernate mapping files and entity classes, which must be consistent with the data tables, are written. The business logic is written in this layer, which can also obtain real-time data through Coherence.
These statistics mainly consist of breakdowns by date, by charging facility, and by vehicle, together with statistical curves.
Online since: April 2014
Authors: Yong Han, Mu Qiao, Xue Wan, Qing Chang
Finally, the measurement results are validated against the known data.
It is noted that most filters help reduce noise but at the same time weaken edge strength.
The pixel equivalent n = D/N is obtained from the actual distance D of the part and the pixel number N; the concrete data are shown in Table 1.
Table 1 Calculation data of the lateral pixel equivalent of the out-of-gauge freight end elevation
  Abscissa of A (left): 699
  Abscissa of B (right): 71
  Pixel number: 629
  Actual distance (mm): 2970.00
  Pixel equivalent (mm): 4.72
Determination of unknown point coordinates: there are three groups of data for the end of the out-of-gauge goods: the upper breadth, the lower breadth, and the protruding portion on the left.
The concrete data is shown in Table 1 and Table 2.
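The pixel-equivalent formula n = D/N, applied to the Table 1 values, works out as:

```python
# Pixel equivalent from Table 1: n = D / N.
D = 2970.00   # actual lateral distance of the part, in mm
N = 629       # pixel number between edge points A and B
n = D / N     # mm represented by one pixel
print(round(n, 2))  # → 4.72
```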