Search results
Online since: June 2022
Authors: Dianisa Khoirum Sandi, Fahru Nurosyid, Yofentina Iriani, Rindhi Afriani
Based on the X-Ray Diffraction (XRD) and Fourier Transform Infrared (FTIR) data, SrTi0.80Mn0.20O3 has been made.
The data also showed that impurity phases were reduced and eventually eliminated at higher sintering temperatures.
According to the UV-Vis absorption data, longer irradiation times considerably lowered the absorption peaks of methylene blue (MB) dye in the presence of SrTi0.80Mn0.20O3.
The XRD data were matched against the ICDD database and used to calculate the lattice constants and crystallite size following Eq. 1 and Eq. 2, respectively.
Based on the XRD and FTIR data, SrTi0.80Mn0.20O3 has been formed.
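The snippet does not reproduce Eq. 1 and Eq. 2. Assuming Eq. 2 is the commonly used Scherrer equation for crystallite size from XRD peak broadening, a minimal sketch of that calculation could look like the following (the peak values are purely illustrative, not taken from the paper):

```python
import math

def scherrer_crystallite_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Estimate crystallite size (nm) from one XRD peak via the Scherrer equation.

    D = K * lambda / (beta * cos(theta)), with the FWHM beta in radians.
    Cu K-alpha wavelength (0.15406 nm) and shape factor K = 0.9 are assumed.
    """
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle from the 2-theta peak position
    beta = math.radians(fwhm_deg)               # peak broadening in radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative peak only (not the paper's data):
print(scherrer_crystallite_size(two_theta_deg=32.4, fwhm_deg=0.25))
```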
Online since: November 2012
Authors: Xiang Hong Xue, Xiao Feng Xue, Lei Xu
Principal component analysis has two purposes: the first is data reduction, and the second is revealing the relations between variables.
Assume there are n samples and each sample has p characteristics (indicators); the sample data set can then be expressed as an n × p data matrix (Eq. 1). Steps of principal component analysis [6]: Step 1: data standardization; the original data are standardized first to eliminate the influence of differing dimensions, yielding the standardized data set.
The simulation data are divided into two parts: the data from 1990 to 2005 serve as the training sample set, and the data from 2006 to 2010 serve as the test sample set for the model. To eliminate the influence that the differing dimensions of the SVM prediction indicator data would have on prediction performance, the input data must be preprocessed before the SVM prediction model is built, since this directly affects the training speed and the water demand prediction accuracy.
This article standardizes all the sample data with Eq. 8, which normalizes the data to the interval [0, 1]:
xi' = (xi − xmin) / (xmax − xmin), (8)
where xi' is the normalized value, xi is the index series data, and xmin and xmax are the minimum and maximum values of the original series, respectively.
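A minimal sketch of this min–max normalization (Eq. 8), assuming the indicator series is held as a plain list of numbers:

```python
def min_max_normalize(series):
    """Normalize an indicator series to [0, 1] via (x - min) / (max - min)."""
    x_min, x_max = min(series), max(series)
    if x_max == x_min:  # guard against division by zero for a constant series
        return [0.0 for _ in series]
    return [(x - x_min) / (x_max - x_min) for x in series]

# Illustrative water-demand indicator values (not taken from the paper):
print(min_max_normalize([120.5, 133.2, 128.9, 141.0]))
```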
Online since: May 2011
Authors: Yue Zhen Zhang, Xiao Ming Guan, Chang Feng Yuan, Guang Ming Yu, Xu Chun Wang
Data flow is shown in Figure 1.
Fig. 1. Data flow in the system.
Pipeline Property Database.
Pipeline attribute data comprise the underground pipeline length, diameter, depth, construction age, period of use, pipe type, basic purpose, and other relevant attributes.
Tables 1 and 2 show part of the pipeline design data tables.
The metadata database manages the various data; through integrated management of data storage, it can accurately describe the data content, quality condition, and other characteristics. Its organizational structure covers metadata identification, data quality, data maintenance, spatial representation, distribution, and content information.
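As a rough illustration of the kind of attribute record the pipeline property database could hold (the field names and values are assumptions, not the paper's schema):

```python
from dataclasses import dataclass

@dataclass
class PipelineAttributes:
    """One underground pipeline segment's attribute record (illustrative fields only)."""
    length_m: float           # underground pipeline length
    diameter_mm: float        # pipe diameter
    depth_m: float            # burial depth
    construction_year: int    # construction age
    service_period_years: int # period of use
    pipe_type: str            # type of pipe, e.g. "steel", "PVC"
    purpose: str              # basic purpose, e.g. "water supply"

segment = PipelineAttributes(120.0, 300.0, 1.5, 1998, 25, "steel", "water supply")
print(segment)
```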
Online since: January 2014
Authors: Jian She Kang, Bao Chen Li, Mao Xing Shen, Rui Tong
However, a reliability test requires many samples and a long time, and no dedicated reliability tests are conducted at the shooting range, so it is difficult to obtain reliability data for the electronic equipment in a guided missile, and the amount of data available is small.
Also, the geometric curve of the grey prediction is relatively smooth, so when it is applied to data with great stochastic volatility, the predicted values fluctuate and the prediction accuracy is low.
If the time-series data of the prediction problem are fitted with the GM(1,1) model to capture the change tendency, the grey model can compensate for the weaknesses of Markov chain prediction, while Markov prediction built on the grey prediction can compensate for the grey model's low prediction accuracy on data series with great stochastic volatility [2].
The modeling method for GM(1,1) is as follows: (1) the original data series is processed with accumulated generation to weaken its randomness; the generated series is then fitted with a first-order differential equation to obtain the grey prediction model; the discrete solution of the equation is derived; and finally, after the accuracy is tested and meets the requirements, the future development trend of the system is predicted [3].
It is assumed that the accumulated series x(1) follows an approximately exponential rule, and a differential equation is established for it; converting the differential form to a difference form yields the grey differential equation dx(1)/dt + a·x(1) = b, where a is the development coefficient, whose value reflects the growth speed of the data series, and b is the grey action quantity.
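A minimal sketch of this GM(1,1) procedure, assuming the standard least-squares estimation of a and b (the input series is illustrative, not the paper's reliability data):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` future values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[:-1] + x1[1:])             # mean sequence of consecutive x1 values
    B = np.column_stack((-z1, np.ones(len(z1))))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # development coefficient a, grey action b
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # discrete solution of the whitening equation
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])          # inverse AGO restores the original series
    x0_hat[0] = x0[0]
    return x0_hat[n:]                          # forecast values only

# Illustrative series (not real missile-equipment reliability data):
print(gm11_forecast([2.0, 3.1, 3.9, 4.8, 6.2], steps=2))
```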
Online since: August 2014
Authors: Jing Qin Mu, Rui Qing Du, Xian Rui Deng
It provides functions for collecting and analysing real-time data, control data, and historical data.
The following functions are the most important ones:
· Collecting and analysing data.
· Storing the historical data and providing a query function.
· Performing query operations on production data (for example cutting length, measured length and speed, weight, forecast data, and tracing data) and printing the report forms.
Online since: September 2014
Authors: Rui Gu, Jian Feng Jiang
It is used to process massive data in parallel on a cluster system.
First, the input data are segmented and the segments are distributed to Map tasks; Map then maps the distributed data (generally as a set) into another set of intermediate data.
The reduction rule is likewise defined by a specified function.
Facing the massive data of web pages on the Internet, this characteristic guarantees a data storage and processing platform for the development of an efficient web crawler. A minimal sketch of the map/reduce pattern just described is given below.
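The sketch uses a simple word count over crawled page text; the function names and sample input are illustrative, not from the paper:

```python
from collections import defaultdict

def map_phase(segment):
    """Map one input segment to a set of intermediate (key, value) pairs."""
    return [(word, 1) for word in segment.split()]

def reduce_phase(intermediate):
    """Reduce intermediate pairs by key according to a specified function (here: sum)."""
    grouped = defaultdict(int)
    for key, value in intermediate:
        grouped[key] += value
    return dict(grouped)

# The input data are segmented first; each segment is handed to a Map task.
segments = ["crawl the web pages", "store the web pages"]
intermediate = [pair for seg in segments for pair in map_phase(seg)]
print(reduce_phase(intermediate))  # {'crawl': 1, 'the': 2, 'web': 2, 'pages': 2, 'store': 1}
```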
MapReduce: Simplified Data Proc.
Online since: March 2023
Authors: Anisa Nur Istiqomah, Retno Wati, Estu Salsabila, Hendika Sidiq Putra Pratama
However, there are insufficient data to eliminate the use of oral contrast and the available data are also not very clear [12,13].
The data entry steps can be seen in Fig. 3.
Patient data entry on the console computer system. The parameters used for scanning are in accordance with the data in Table 1.
The results of the interviews were then transcribed and data reduction was carried out to retrieve important data according to the research objectives.
After reduction, the interview data were grouped according to the problem formulation and research objectives.
Online since: January 2017
Authors: Wahyudi Wahyudi
Wang and his team [17] classified the method into two levels: level I identifies the liquefaction vulnerability of an area based on secondary data such as published geologic and seismic maps, and level II is defined as detailed zonation of liquefaction, which can be achieved by combining secondary data with primary data from field geotechnical investigation.
Data from two of the CPT profiles are shown in Fig. 5.
Fig. 5a. CPT data of S-11 (the second profile appears in Fig. 5b).
Muntohar, Evaluation of PGA Using CPT Data for Liquefaction Potential.
Potential Assessment using Loma Prieta Data, Proc. of 4th Japan-U.S.
Online since: July 2022
Authors: Mariana Conde, João Henriques, António Andrade-Campos, Sam Coppieters
It is very important to have adequate input data for the FEMU.
Generally, this approach compares experimental with numerical data.
Additionally, the numerically generated data produced at each iteration are not compared directly with the reference material data.
Again, synthetically deformed images were created iteratively based on the numerical analysis, so that the numerical data are brought to the same level as the DIC data (DIC-levelled numerical data).
The superscripts “num” and “exp” refer to the data generated iteratively during the optimization process and to the virtual experimental (reference) data, respectively.
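As a rough sketch of the kind of least-squares cost function implied by the “num”/“exp” notation (the strain-field arrays and the simple unweighted sum are assumptions, not the paper's exact formulation):

```python
import numpy as np

def femu_cost(strain_num, strain_exp):
    """Sum of squared differences between the iteratively computed numerical data
    and the (virtual) experimental reference data, as FEMU-type approaches minimize."""
    strain_num = np.asarray(strain_num, dtype=float)
    strain_exp = np.asarray(strain_exp, dtype=float)
    return float(np.sum((strain_num - strain_exp) ** 2))

# Illustrative strain values at a handful of DIC measurement points:
eps_exp = np.array([0.0010, 0.0021, 0.0032, 0.0040])  # "exp": virtual experimental data
eps_num = np.array([0.0011, 0.0019, 0.0033, 0.0042])  # "num": current finite-element prediction
print(femu_cost(eps_num, eps_exp))
```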
Online since: November 2019
Authors: T.P. Jeevan, S.R. Jayaram, P.S. Suvin, Prasanna Vineeth Bharadwaj
Data pre-processing
Missing data: The first step in data pre-processing is to take care of the missing data.
The entire row of the dataset containing missing data can be removed when the missing values are few or when a single row contains multiple missing values.
Encoding categorical data: The “operations” column under the machining conditions contains categorical data, i.e., the data consist of words instead of numerical values, which need to be encoded into numbers.
Principal Component Analysis (PCA): This is a data manipulation technique used to reduce the dimensionality of the original data [14].
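A minimal sketch of these three pre-processing steps with pandas and scikit-learn; the file name and column names ("operations", "tool_wear") are assumptions for illustration only:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.decomposition import PCA

# Hypothetical machining dataset; "operations" is the categorical column.
df = pd.read_csv("machining_data.csv")

# 1. Missing data: drop rows with missing values (appropriate when they are few).
df = df.dropna()

# 2. Encoding categorical data: turn the word-valued "operations" column into numbers.
df["operations"] = LabelEncoder().fit_transform(df["operations"])

# 3. PCA: reduce the dimensionality of the (standardized) feature matrix.
features = StandardScaler().fit_transform(df.drop(columns=["tool_wear"]))  # assumed target column
reduced = PCA(n_components=2).fit_transform(features)
print(reduced.shape)
```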