Search results

Online since: December 2012
Authors: Gang Li, Yong Sheng Zhu, Yong Sheng Yang, You Yun Zhang
Geodesic Distance Metric and FCM Clustering
In order to mathematically identify clusters in a set of n data, let X be the data set {x1, x2, …, xn}.
The main strength of FCM is that it detects the underlying structure in the data and yields a clustering prototype of the data set.
In this paper, FCM clustering is used as a preprocessing step to analyze the data set and generate the initial prototypes of the original data.
FCM is adopted to cluster the original data, and the resulting prototypes are taken as the initial data for the subsequent genetic clustering.
As a result, the computational cost can also be considerably reduced.
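The FCM preprocessing step described above can be sketched as follows. This is a minimal NumPy implementation of standard fuzzy c-means (the subsequent genetic-clustering stage is omitted); the fuzzifier m, iteration count, and toy data are illustrative choices, not values from the paper.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # memberships of each point sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # squared distance from every center to every point
        d2 = ((centers[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)             # avoid division by zero
        inv = d2 ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=0)           # standard FCM membership update
    return centers, U

# toy data: two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
centers, U = fcm(X, c=2)
```

The two rows of `U` are the fuzzy memberships; in the paper's scheme, `centers` would be passed on as the initial data for the genetic clustering.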
Online since: August 2013
Authors: Hai Xia Li, Hai Long Yu, Ping Chuan Zhang
The main problem with inversion methods is that they are time-consuming and occupy large amounts of memory when the quantity of data to be inverted is large.
Therefore, reduction to the pole of the magnetic anomalies was performed; the contour of the magnetic anomalies after reduction-to-the-pole processing is shown in Fig. 2.
Fig. 2. The magnetic anomaly ΔT contour of the iron-mine area after reduction to the pole.
We then carried out the 3D apparent magnetic susceptibility inversion of the magnetic anomaly data after reduction to the pole.
Only the 3D graph of the apparent magnetic susceptibility is provided, constructed from the inverted apparent-susceptibility data of 15 layers, as shown in Fig. 3.
A least-squares minimization approach to invert gravity data, Geophysics, Vol.56(1991), p. 115 [3] H.L.
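The least-squares minimization approach cited above can be sketched, in spirit, as a Tikhonov-regularized linear inversion. The forward operator G, the 15-cell model (one value per layer), and the noise level below are hypothetical stand-ins, not the paper's actual kernel or data.

```python
import numpy as np

# Hypothetical forward operator G (data = G @ model): each row maps the
# susceptibility of the subsurface cells to one anomaly observation.
rng = np.random.default_rng(0)
G = rng.normal(size=(30, 15))            # 30 observations, 15 model cells
m_true = rng.normal(size=15)
d_obs = G @ m_true + rng.normal(scale=0.01, size=30)   # noisy synthetic data

# Tikhonov-regularized least squares: minimize ||G m - d||^2 + alpha ||m||^2
alpha = 1e-3
m_est = np.linalg.solve(G.T @ G + alpha * np.eye(15), G.T @ d_obs)
```

The regularization term keeps the normal equations well conditioned, which matters once the number of cells grows to realistic 3D grid sizes.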
Online since: November 2007
Authors: Roman Pielaszek, Tomasz Wejrzanowski, Krzysztof Jan Kurzydlowski, M. Spychalski
Most of the data available in the literature relate to flat GBs obtained from bicrystals [4].
Since the GB structure and properties are influenced by its curvature (grain size) [5], these data are not applicable in the case of nanomaterials.
The data from this investigation may further be used for understanding processes that occur at different length and time scales, such as grain growth or plastic deformation.
The slope of the data from this region represents the velocity of the grain shrinkage, dV/dt (grain boundary migration).
Quantitative data on these parameters for GBs in α-iron are presented in this paper.
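Extracting dV/dt as the slope of the volume-time data in the linear region, as described above, amounts to a straight-line fit. The sample values and units below are made up for illustration; they are not the paper's measurements.

```python
import numpy as np

# Hypothetical (time, grain volume) samples from the linear shrinkage regime
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # time, ns
V = np.array([100.0, 92.1, 83.8, 76.2, 67.9])      # grain volume, nm^3

# slope of the least-squares line approximates dV/dt (migration rate)
slope, intercept = np.polyfit(t, V, 1)
```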
Online since: May 2014
Authors: Zhao Wei Cao
As advanced artificial intelligence techniques, machine learning methods have been widely used in data mining, decision support, and other areas.
Its main advantages are: (1) building a decision tree does not require the operator to understand the relevant background knowledge; (2) the result is a binary or general tree that is easy to understand and operate; (3) it is good at handling qualitative data formed by subjective judgment; (4) for data of a certain scale, both the efficiency and the accuracy of the modeling are high.
Owing to these characteristics, the method has been successfully applied to data-set preprocessing, target-set classification, data mining, decision making, forecasting, and other fields.
By feeding new input data into the established model, or by building a neural network from recent competition data, the match-prediction analysis can be completed.
Mathematical simulation of ball games was first developed by the German scholar Donald. The main idea is: (1) choose a mathematical method and train it on existing input and output data to obtain a closely fitting mathematical model; (2) feed test input data to the trained neural network model to simulate an actual game; (3) change one input indicator while keeping the other input indicators unchanged, and observe the effect on the probability of winning the game; (4) rank the input indicators according to the size of the change in the winning probability, which completes the race simulation.
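The simulation procedure described above (train a model on past data, perturb one indicator at a time, and rank indicators by the change in winning probability) can be sketched as follows. A simple logistic model trained by gradient descent stands in for the neural network, and the three indicators and their underlying weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 3 match indicators, win/lose label.
# Indicator 0 is constructed to matter most, indicator 2 least.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, 0.8, 0.1])
y = (X @ true_w + rng.normal(scale=0.3, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a logistic model by plain gradient descent (stand-in for the
# neural network described in the text).
w = np.zeros(3)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y)) / len(y)

# Sensitivity analysis: bump one indicator at a time, hold the others
# fixed, and measure the shift in the predicted win probability.
x0 = X.mean(axis=0)
base = sigmoid(x0 @ w)
effects = []
for i in range(3):
    x = x0.copy()
    x[i] += 1.0                        # unit increase in indicator i
    effects.append(abs(sigmoid(x @ w) - base))

ranking = np.argsort(effects)[::-1]    # most influential indicator first
```

With this synthetic setup the ranking recovers the constructed importance order of the indicators.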
Online since: May 2012
Authors: A.I.O. Zaid, M.M. Al-Tamimi
The analysis of their experimental data led to the conclusion that the growth of cavities was controlled primarily by plastic deformation, although diffusion may play a role in the early stages of growth [14].
Fig. 4. Reduction in Bore Diameter Percentage vs. Reduction in Height Percentage.
Regarding the effect of reduction in height on the cavity closure, it can be seen from Fig. 6 that the reduction in bore volume percentage increases as the reduction in height percentage increases.
Fig. 6. Reduction in Bore Volume Percentage vs. Reduction in Height Percentage.
Online since: December 2013
Authors: Hong Xu, Xue Ping Mao, Yong Zhong Ni, Chao Li, Sai Dong Huang, Yang Yu
The stress- and temperature-dependent constants in Eq. (1) are obtained from the uniaxial creep rupture test data.
A fit to the data gives its value. The failure mechanism control parameter value is 0.44.
It is clear in Fig. 6 that when the representative stress is used, the rupture data are in good agreement with the uniaxial data.
A code of practice for notched bar creep rupture testing: Procedures and interpretation of data for design [S].
A code of practice for conducting notched bar creep rupture tests and for interpreting the data [S].
Online since: December 2014
Authors: Heng Luo, Ning Wang, Xiao Yan Ye, Lu Gao, Jun Chen Li, Tong Jing
As shown, the system consists of four parts: data collection, transmission, analysis, and central control.
The data collection unit obtains environment parameters such as light, temperature, and humidity.
The analysis part processes the data from the transmission units before delivering them to the central control unit.
As seen, the required data are transmitted only when a data request occurs and the request is authenticated.
The control unit analyzes the received data and, after authentication, starts the shading and irrigation.
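The authenticated-request step described above could, for example, be realized with an HMAC over the request message. The shared key, field names, and message format below are hypothetical, since the paper does not specify the scheme.

```python
import hashlib
import hmac

SHARED_KEY = b"hypothetical-shared-key"   # provisioned to node and controller

def sign_request(request: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Controller side: tag a data request with an HMAC-SHA256 code."""
    return hmac.new(key, request, hashlib.sha256).digest()

def handle_request(request: bytes, tag: bytes, readings: dict):
    """Node side: release sensor data only if the request authenticates."""
    expected = hmac.new(SHARED_KEY, request, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None                        # reject unauthenticated requests
    return readings

readings = {"light": 512, "temperature": 23.4, "humidity": 0.61}
req = b"GET light,temperature,humidity"
data = handle_request(req, sign_request(req), readings)     # accepted
bad = handle_request(req, b"\x00" * 32, readings)           # rejected
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels during tag comparison.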
Online since: January 2012
Authors: Lin Li Wu, Zhi Jun Lei
Rough set theory is a mathematical theory for data analysis, proposed by the Polish mathematician Z. Pawlak.
This theory has been widely applied in data mining, artificial intelligence, pattern recognition and other cognitive areas.
Through experiments based on UCI data sets, this paper compares the C4.5 algorithm based on information entropy with the decision-tree generation algorithm based on weighted average roughness [3].
The intersection of all attribute reductions of C is called the core, written as Core(C).
The comparison of the two generation algorithms on UCI data sets:

Data set        Objects  Attributes (C/D)  C4.5 accuracy  C4.5 leaves  CCF accuracy  CCF leaves
Breast          699      9/2               95             6            96            5
Diabetes        768      8/2               74             14           78            12
Lymph           148      18/4              77             7            72            5
Iris            150      4/3               76             5            76            5
Primary tumor   339      3/21              34             167          59            135

From the table, it can be seen that, compared with C4.5 based on the information-entropy method, the method based on classification contribution has better average classification accuracy.
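Core(C) as defined above can be computed, for a small decision table, by checking which condition attributes shrink the positive region when removed. The four-row table below is a made-up example, not one of the UCI data sets.

```python
def positive_region(rows, cond_idx, dec_idx):
    """Objects whose condition-attribute class maps to a single decision."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[j] for j in cond_idx), []).append(i)
    pos = set()
    for members in blocks.values():
        decisions = {rows[i][dec_idx] for i in members}
        if len(decisions) == 1:        # consistent block -> in positive region
            pos.update(members)
    return pos

def core(rows, cond_idx, dec_idx):
    """Core(C): attributes whose removal shrinks the positive region."""
    full = positive_region(rows, cond_idx, dec_idx)
    return [a for a in cond_idx
            if positive_region(rows, [b for b in cond_idx if b != a],
                               dec_idx) != full]

# Hypothetical decision table: columns a, b, c are conditions, d the decision.
table = [
    (0, 0, 0, "no"),
    (0, 1, 0, "yes"),
    (1, 0, 1, "yes"),
    (1, 1, 1, "yes"),
]
print(core(table, cond_idx=[0, 1, 2], dec_idx=3))
```

Here columns a and c are duplicates, so each can substitute for the other and neither belongs to the core; only b does.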
Online since: August 2012
Authors: D.S. van Vuuren
As an approximation it was assumed that the diffusion coefficient of dissolved sodium in molten NaCl is the same as that of sodium cations in molten NaCl for which the data of Bockris and Hooper [28] can be used.
Pre-Reduction of TiCl4.
Unfortunately no data could be accessed for the solubility of TiCl4 in molten CaCl2.
Janz, Thermodynamic and Transport Properties of Molten Salts: Correlation Equations for Critically Evaluated Density, Surface Tension, Electrical Conductance, and Viscosity Data, J. Phys. Chem. Reference Data, Volume 17, 1988, Supplement No. 2.
Online since: August 2009
Authors: Guang Bin Wang, Y.I. Liu, X.Q. Zhao
Locally linear embedding (LLE) algorithm is an unsupervised technique recently proposed for nonlinear dimension reduction.
By the LLE algorithm, the original sample data are mapped directly into their intrinsic-dimension space, in which the data still keep their primary nonlinear form; then, via kernel Fisher discriminant analysis (KFDA), the characteristic data in the intrinsic-dimension space are mapped into a high-dimensional kernel linear space, and the different fault data are discriminated based on the criterion of maximizing the ratio of between-class to within-class deviation.
Since two papers [1, 2] on manifold learning were published in Science in 2000, using manifold learning methods to carry out dimension reduction and data analysis, and to find the intrinsic geometric structure of nonlinear high-dimensional data sets, has become a hot research topic.
Differential manifold theory holds that the high-dimensional geometric distribution of observational data is determined by its intrinsic nature, so pattern recognition problems need to take the data set's specific geometric constraints into account.
The LLE-KFDA algorithm starts from the original observational data, performs locally linear embedding, and maps the sample data into the intrinsic-dimension subspace, thus avoiding the problems caused by an improper choice of feature parameters. Because the embedded subspace largely retains the nonlinear features of the original data, fault identification through linear LDA would have many limitations.
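The LLE mapping described above can be sketched in plain NumPy (the KFDA stage is omitted). The neighborhood size, regularization constant, and toy curve below are illustrative choices, not the authors' settings.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal locally linear embedding (Roweis & Saul style) in NumPy."""
    n = X.shape[0]
    # pairwise distances -> k nearest neighbors of each point (excluding itself)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]

    # reconstruction weights: each point as a combination of its neighbors
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                          # centered neighborhood
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)   # regularize Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()

    # embedding: bottom eigenvectors of M = (I - W)^T (I - W),
    # skipping the constant eigenvector
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# toy data: a noisy 1-D curve embedded in 3-D
t = np.linspace(0, 2, 100)
X = np.c_[np.cos(3 * t), np.sin(3 * t), t]
Y = lle(X, n_neighbors=8, n_components=1)
```

In the LLE-KFDA scheme, `Y` (the intrinsic-dimension coordinates) would then be passed to the kernel Fisher discriminant stage for fault discrimination.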