Search results

Online since: June 2025
Authors: Adib Adib, Radhiyullah Armi, Sabrian Tri Anda, Azwanda Azwanda
Evaluating the Reliability of Satellite Gravity Data for Disaster Risk Mapping in West Coast Aceh's Coal Mines
Adib Adib 1,a,*, Radhiyullah Armi 2,b, Sabrian Tri Anda 2,c, Azwanda Azwanda 2,d
1 University of Teuku Umar, West Aceh, Indonesia; 2 University of Samudra, Langsa, Indonesia
a adib@utu.ac.id, b radhiyullah@unsam.ac.id, c sabriantrianda@unsam.ac.id, d azwanda@utu.ac.id
Keywords: Satellite Gravity Data, Disaster Risk Reduction, FHD, SVD.
This integration aligns with recent advancements in geophysical data analysis, such as those discussed by Pohan et al. [8], which emphasize the importance of combining multiple data sources for robust risk assessment.
The spatial resolution in latitude and longitude is 1 arc-minute per grid cell, with an accuracy of roughly 0.1 mGal for the gravity data and 1 meter for the elevation data.
Future efforts should focus on creating integrated geophysical models that combine multiple data sources to improve predictive capabilities and address emerging challenges in disaster risk reduction.
This access was essential to the research and data analysis performed.
Online since: January 2014
Authors: Piotr Czarnocki, Kamila Czajkowska, Zbigniew Lorenc
For the data reduction, two procedures were applied.
Data reduction procedures.
The data reduction procedure aimed at obtaining a Paris-type relationship, Eq. 1.
For the purpose of comparison, two variants of the data reduction procedure were used.
Unfortunately, the data reduction method applied was not reported.
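Eq. 1 itself is not reproduced in this excerpt; a Paris-type relationship conventionally takes the form da/dN = C·(ΔK)^m, and reducing crack-growth data to it amounts to a straight-line fit in log-log coordinates. The sketch below assumes that conventional form and uses illustrative values for C and m, not data from the paper.

```python
import math

# A Paris-type relationship conventionally has the form da/dN = C * (dK)**m.
# Data reduction then reduces to a straight-line fit in log-log coordinates:
# log(da/dN) = log(C) + m * log(dK).
def fit_paris(dK, dadN):
    xs = [math.log(k) for k in dK]
    ys = [math.log(r) for r in dadN]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    C = math.exp(my - m * mx)
    return C, m

# Synthetic data generated with C = 1e-11, m = 3 (illustrative values only):
dK = [10.0, 15.0, 20.0, 30.0]
dadN = [1e-11 * k ** 3 for k in dK]
C, m = fit_paris(dK, dadN)   # recovers C and m from the synthetic data
```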
Online since: December 2013
Authors: Zhi Bo Ren, Lei Sun, Chun Miao Yan, Yu Zhou Wei
Data streams contain many continuous attributes, and classic rough set theory can handle only discrete data; therefore, the continuous attributes must be discretized, which directly affects the subsequent attribute reduction and ensemble model construction.
As the data flow in, old data are deleted synchronously so that the window contains only the data that need processing, thus realizing dynamic updating of the data set.
The 12-attribute data set obtained by rough set reduction is used to train neural network classifiers, and the results of each classifier are integrated into the ensemble.
On the last data chunk, concept drift occurred suddenly; even so, the accuracy after reduction remained higher than before reduction.
In conclusion, traditional data mining methods face two major challenges in data streams: unlimited data and concept drift.
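The sliding-window updating described above can be sketched with a fixed-size window over the stream; the window size and chunk values below are illustrative assumptions, not taken from the paper.

```python
from collections import deque

# A fixed-size sliding window over a data stream: appending new records
# automatically evicts the oldest ones, so the window always holds only
# the data that currently need processing.
def make_window(size):
    return deque(maxlen=size)

def update(window, chunk):
    """Append a chunk of new records; records beyond the window size are dropped."""
    window.extend(chunk)
    return list(window)

window = make_window(5)
update(window, [1, 2, 3, 4, 5])
current = update(window, [6, 7])   # the two oldest records are evicted
```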
Online since: February 2017
Authors: Lei Jiao Ge, Fu Jian Chi, Zhang You Xu, Yi Fang Su, Pan Zhang, Cong Liu, Shu Yan Yu, Rong Gang Ge
Fault Location Principle Based on the RS-IA Data Mining Model
Overview.
It finds laws in a large volume of data by analyzing each datum, and expresses them in as understandable a form as possible [13, 15, 16].
Fault Location Based on the RS-IA Data Mining Model
Fault location based on the RS-IA data mining model mainly uses RS to extract domain knowledge and obtains correlation rules of input and output vectors [14].
Fault Location of the Distribution Network Based on the RS-IA Data Mining Model
When a fault occurs in some region of the power distribution network, it is located by applying the fault location rules obtained from the RS-IA data mining model.
Sun, Data mining technology and its application on power system, Autom.
Online since: November 2010
Authors: Li Yang, Xin Zhang, Yan Zhang
Data-driven control refers to realizing the expected functions of the system, such as prediction, appraisal, dispatch, monitoring, diagnosis, decision-making and optimization, by using the controlled system's online and offline data [4].
Fig. 1 Functional block diagram of the data-driven multi-source information fusion system.
Data-Level Information Fusion.
Suppose n sensors measure some object X and each sensor records m data points; the sampled data matrix is then:
Suppose the system input includes n samples, each sample Xi composed of m data points, producing the training sample data matrix (omitted).
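A minimal sketch of the n-sensor sampled data matrix and one data-level fusion step; the excerpt does not specify the fusion rule, so the per-instant mean used here is an assumption.

```python
# n sensors each take m measurements of object X, giving an n-by-m
# sampled data matrix (rows: sensors, columns: sampling instants).
data = [
    [1.0, 1.2, 0.9],   # sensor 1
    [1.1, 1.0, 1.0],   # sensor 2
    [0.9, 1.1, 1.1],   # sensor 3
]

# Data-level fusion here is a simple per-instant mean across sensors
# (the excerpt does not state the fusion rule; the mean is an assumption).
def fuse_mean(matrix):
    n = len(matrix)
    m = len(matrix[0])
    return [sum(row[j] for row in matrix) / n for j in range(m)]

fused = fuse_mean(data)   # one fused value per sampling instant
```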
References
[1] Waltz E., Llinas J. Multisensor Data Fusion [M].
Online since: September 2014
Authors: Li Ma, Gui Fen Chen
Rough set theory has some advantages in processing data, eliminating redundant information and dealing with uncertain information, so it is widely used in data preprocessing, attribute reduction and so on.
Applications Based on Rough Sets and Decision Trees
Experimental Data Acquisition. The experimental data come from the Nong'an survey, provided by the Agricultural Technology Promotion Center.
Data Mining Based on the Combination of Rough Sets and Decision Trees.
The rough set attribute reduction algorithm requires discrete data; according to the characteristics of the soil data, the Entropy/MDL discretization algorithm is used for the data processing.
Data Mining Results. Of 1400 records, 1298 are correctly classified (about 92.7%); the others are not.
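A minimal sketch of the rough-set attribute dependency that underlies attribute reduction, on a toy discrete decision table; the values are illustrative, not the Nong'an soil data.

```python
from collections import defaultdict

# Minimal rough-set dependency: an object lies in the positive region of
# the decision attribute if every object sharing its condition values also
# shares its decision value. Dependency = |positive region| / |universe|.
def dependency(rows, cond_idx, dec_idx):
    groups = defaultdict(set)
    for row in rows:
        key = tuple(row[i] for i in cond_idx)
        groups[key].add(row[dec_idx])
    pos = sum(1 for row in rows
              if len(groups[tuple(row[i] for i in cond_idx)]) == 1)
    return pos / len(rows)

# Toy discrete decision table (two condition attributes, one decision):
table = [
    ('low',  'dry', 'poor'),
    ('low',  'wet', 'poor'),
    ('high', 'dry', 'good'),
    ('high', 'wet', 'poor'),
]
full = dependency(table, [0, 1], 2)    # both attributes determine the decision
partial = dependency(table, [0], 2)    # attribute 0 alone is insufficient
```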
Online since: September 2007
Authors: N. Ivanović, Milesa Srećković, N. Popović, V. Kusigerski, Lj. Vulićević, S. Vardić, Ž. Tomić
Reduction of Nanometric Magnetite Powder
The experimental data suggest that the reoxidation process proceeds in two stages, at various activation energies.
The mass loss (∆mT ∼ 31%) measured after the reduction process at T = 833 K is somewhat larger than the theoretically predicted one (∆mT = 27.64%) for the reduction of magnetite to iron.
Assuming that the reaction is diffusion controlled, Jander's model was applied, and the experimental data were correlated with the equation (full lines in Fig. 4) [8]: [1 - (1 - α)^(1/3)]^2 = kτ
According to the experimental and calculated data, (see Fig.5), it seems that the (re)oxidation process has two distinct stages, at activation energies of 0.17 and 0.59 eV.
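Jander's model can be fitted by least squares through the origin once conversion data are in hand; the sketch below uses synthetic (τ, α) pairs with an assumed rate constant, not the measured data from the paper.

```python
# Jander's diffusion-control model: [1 - (1 - a)**(1/3)]**2 = k * t.
# Given (t, a) measurements, estimate k by least squares through the origin:
# k = sum(t_i * y_i) / sum(t_i**2), where y_i = [1 - (1 - a_i)**(1/3)]**2.
def jander_k(times, alphas):
    ys = [(1.0 - (1.0 - a) ** (1.0 / 3.0)) ** 2 for a in alphas]
    num = sum(t * y for t, y in zip(times, ys))
    den = sum(t * t for t in times)
    return num / den

# Illustrative synthetic data generated with k = 0.002 (not from the paper),
# by inverting the model: a = 1 - (1 - sqrt(k * t))**3.
k_true = 0.002
times = [10.0, 20.0, 40.0, 80.0]
alphas = [1.0 - (1.0 - (k_true * t) ** 0.5) ** 3 for t in times]
k_est = jander_k(times, alphas)   # recovers the assumed rate constant
```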
Online since: January 2014
Authors: Rui Xian Li
Sampled traffic flow data form a noisy, chaotic time series.
Traditional ESN is suitable only for noise-free or low-noise data, while prediction results for highly noisy data are unsatisfactory [2]. Under interference from complex, high-intensity noise components, merely optimizing the ESN model does not fully solve the problem of prediction accuracy; data smoothing methods are often more direct and effective than optimizing the ESN model itself. Signal processing methods filter out the noise component so that the denoised data have a higher signal-to-noise ratio (SNR), providing the necessary conditions for high-precision prediction [3, 4].
The goodness of fit is defined according to Reference [6] (Eq. 2), in terms of the original traffic flow data, the traffic flow data after noise reduction, and the sampling points; the smaller its value, the higher the similarity between the traffic flow before and after noise reduction.
Multi-State Threshold Method Based on Wavelet Transform
In wavelet-based noise reduction algorithms, no single soft-threshold method fits all types of data.
Selecting an applicable threshold rule requires fully considering the characteristics of the data to be denoised.
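The soft-threshold operator at the core of wavelet denoising can be sketched on its own; a full pipeline would also apply a wavelet transform (e.g. via a library such as PyWavelets) before and after thresholding, which is omitted here.

```python
# Soft thresholding, the operator at the heart of wavelet-based denoising:
# coefficients with magnitude below the threshold are zeroed, and the rest
# are shrunk toward zero by the threshold amount.
def soft_threshold(coeffs, thr):
    out = []
    for c in coeffs:
        if c > thr:
            out.append(c - thr)
        elif c < -thr:
            out.append(c + thr)
        else:
            out.append(0.0)
    return out

# In a full pipeline one would transform the signal, threshold the detail
# coefficients, and inverse-transform; the threshold value itself must be
# chosen to suit the data, as the excerpt notes.
denoised = soft_threshold([0.2, -1.5, 3.0, -0.1], thr=1.0)
```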
Online since: December 2014
Authors: Xiao Xue Xing, Wei Wei Shang, Li Min Du
As the data in database systems grow, attribute reduction becomes more effective relative to value reduction.
However, discernibility function simplification is an NP-hard problem [5], so this algorithm is suitable only for very small data sets.
Attribute dependence in rough set theory reflects the influence of attributes on decision rules in the current data, but it cannot capture the decision maker's prior knowledge.
The experimental system was developed in Visual C++ 6.0 under Windows 2000, using six data sets from the UCI standard repository.
After missing-value processing, the attributes are reduced using each of the two algorithms above.
Online since: September 2011
Authors: Long Zhen Duan, Zhi Xin Zou, Gui Fen Wang
The Web Classification Based on ROUGH-GA-BP
Long Zhen Duan 1,a, Zhi Xin Zou 1,b and Gui Fen Wang 1,c
1 Department of Computer Application Technology, Nan Chang University, Jiang Xi, China
a lzhduan@126.com, b zzhxin@163.com, c fenfen353@126.com
Keywords: Text Classification Algorithm, ROUGH-GA-BP, Data Reduction
Abstract.
This algorithm reduces the text input vector using a rough set based data reduction method and presents a genetic algorithm approach for feature selection.
Introduction
Because of the rapid growth of text data, automatic methods of data management are especially important.
In document [1], rough set reduction methods are used to pre-process the information and remove redundant data; the reduced decision tables serve as the design basis and training data for the neural network.
Test and Analysis
The corpus used in this study is the Chinese web page data set CWT100g, collected by Peking University's network laboratory.
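The genetic-algorithm feature selection mentioned above can be sketched with bit-mask individuals; the fitness function below is a hypothetical stand-in, since a real system would instead score a classifier trained on the selected features.

```python
import random

# A minimal genetic-algorithm feature-selection sketch: individuals are
# bit masks over the features; selection keeps the fittest half, and new
# individuals are produced by one-point crossover plus a point mutation.
def evolve(n_features, fitness, pop_size=20, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Hypothetical "informative features" target; fitness counts matching bits.
target = [1, 0, 1, 1, 0, 0]

def fitness(mask):
    return sum(m == t for m, t in zip(mask, target))

best = evolve(6, fitness)   # converges toward the target mask
```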