Search results
Online since: March 2015
Authors: Xiao Fei Huang
(2) The original data are standardized, and it is then determined whether the original variables are suitable for factor analysis.
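The standardization step can be sketched as follows; the numbers are hypothetical, not the paper's stock data:

```python
import numpy as np

def standardize(X):
    """Z-score standardization: zero mean, unit variance per column."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Hypothetical data: 5 observations of 3 variables.
X = np.array([[1.0, 10.0, 100.0],
              [2.0, 12.0, 110.0],
              [3.0, 11.0, 105.0],
              [4.0, 15.0, 120.0],
              [5.0, 14.0, 118.0]])
Z = standardize(X)
print(Z.mean(axis=0))  # ~0 per column
print(Z.std(axis=0))   # 1 per column
```

After this step, each variable contributes on the same scale, which is a precondition for meaningful factor extraction.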
OPTIMIZATION MODELING. Data were collected from TongDa Communications Co., Ltd. About 50 kinds of stock data listed during 01/04/2013-04/24/2013 were available. Data covering the mean value, maximum/minimum closing price, trade volume, share price and total mean price are shown in Fig. 2.
(2) Data from Jan. 4, 2013 to Apr. 24, 2013 are taken as the original source, so the model may not fully reflect all of the problems.
Online since: February 2012
Authors: Yu Gang Fan, Xu Yi Yuan, Jian De Wu, Xiao Dong Wang
Starting from the resistance coefficient, the calculation models, drag-reduction measures and so on, Defang Shu reviewed the state of research on friction drag loss; the common models include the Durand friction drag loss calculation model and Guangwen Chen's improved model [3].
In fact, when tested against actual production operation data from the Dahongshan pipeline, currently the longest pipeline with the most complex terrain, neither formula yields accurate frictional drag loss results, and both lead to larger errors.
The basic idea of LS-SVM is similar to that of SVM; the regression function is constructed as y(x) = w^T φ(x) + b, (1) where φ(·) is the nonlinear mapping, w is the weight vector and b is the threshold. Given the learning pattern data {(x_i, y_i)}, i = 1, ..., N, and based on the principle of structural risk minimization, LS-SVM regression learning can be described as the optimization problem min J(w, e) = (1/2)||w||^2 + (γ/2) Σ_i e_i^2, subject to y_i = w^T φ(x_i) + b + e_i, (2) where J is the objective function, γ is the regularization parameter and e_i is the relaxation (error) variable of the loss function.
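As an illustration of the LS-SVM formulation above (a generic sketch, not the authors' implementation), the dual of problem (2) with an RBF kernel reduces to a single linear system; the data and hyperparameters below are made up:

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]  # alpha, b

def lssvm_predict(Xq, X, alpha, b, sigma=1.0):
    return rbf(Xq, X, sigma) @ alpha + b

# Hypothetical 1-D example: fit y = sin(x) on [0, 3].
X = np.linspace(0, 3, 30)[:, None]
y = np.sin(X).ravel()
alpha, b = lssvm_fit(X, y, gamma=100.0, sigma=0.5)
yhat = lssvm_predict(X, X, alpha, b, sigma=0.5)
print(np.abs(yhat - y).max())  # small training error
```

Unlike standard SVM, all training points receive a (generally nonzero) coefficient alpha, since the inequality constraints are replaced by equalities with squared error terms.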
The target of the local regression model is to predict the current output on the basis of a historical database. For a given current measurement input, a training set for local regression modeling is obtained first. In this paper, a parameter value is taken as the similarity standard between the predicted value and the historical database, and a similarity coefficient is measured as in Eq. (4), evaluated over the points in the historical data. After calculation, the values are ranked by size, and the front data with the highest matching degree are chosen as the local regression modeling data [11].
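The selection of a local modeling set by similarity can be sketched as follows; Euclidean distance is assumed here as the similarity measure, since the paper's exact coefficient is not reproduced in this excerpt:

```python
import numpy as np

def select_local_data(x_query, X_hist, y_hist, k=5):
    """Pick the k historical points most similar (smallest Euclidean
    distance) to the current measurement input."""
    d = np.linalg.norm(X_hist - x_query, axis=1)
    idx = np.argsort(d)[:k]
    return X_hist[idx], y_hist[idx]

# Hypothetical historical database of input/output pairs.
X_hist = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y_hist = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])
Xl, yl = select_local_data(np.array([2.2]), X_hist, y_hist, k=3)
print(sorted(Xl.ravel().tolist()))  # the 3 nearest inputs: [1.0, 2.0, 3.0]
```

A local model fitted on this reduced set then predicts the output for the current input, rather than fitting one global model over the whole database.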
Experimental verification. Experiment description and definition of performance indicators. The experiments test the models on actual production data from the Dahongshan pipeline: in each experiment, the first 50 hours of data are taken as the training samples and the following 70 hours as the test samples.
Online since: October 2011
Authors: Wei Wei, Rui Qin Li
At the same time, the construction of an intelligent data platform also provides a reference for the control indices of detailed planning.
Urban System Planning of Smart City. By comprehensively evaluating the supporting elements and constraints of urban development, such as land resources, water resources, energy and the capacity of eco-environments, data are analyzed and calculated to determine the requirements for the protection and rational utilization of resources, eco-environmental protection, and disaster preparedness and reduction.
In the planning process, the Internet and the Internet of Things should be used to collect data, including data on topography, resources, transportation, the economy and society.
Through comprehensive analysis of data on land resources, water resources and the eco-environment, the spatial control ranges of constructible zones, restricted-construction zones and non-constructible zones are delimited according to the requirements of resource protection, the eco-environment and the optimization of spatial allocation.
The new public service system will process isolated data into one integrated platform and create a unified process focused on management systems and data, reducing the constraints on the layout of public service facilities.
Online since: October 2013
Authors: Gang Shao, Lei Wan, Xiao Di Shen
The most commonly used sensors to measure navigation data on an AUV are the Inertial Measurement Unit (IMU) and the Doppler Velocity Log (DVL), while acoustic cameras or side-scan sonar are used to gather data from the environment.
Once a loop is detected, the data association between those maps that are closing the loop is computed.
The experiment was conducted to gather the data; our algorithms were not run on-line during the mission.
The gathered data were post-processed and run through our algorithms.
The experiments show a reduction of the effects of the linearisation error and also a more precise reconstruction of the map, since the drift suffered over shorter distances is smaller and the data association can be solved more robustly.
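As a rough illustration of data association (a generic gated nearest-neighbour sketch, not the authors' SLAM algorithm):

```python
import numpy as np

def associate(obs, landmarks, gate=1.0):
    """Greedy nearest-neighbour data association: each observation is
    matched to the closest landmark if it falls within the gate radius,
    otherwise it is left unmatched (-1)."""
    matches = []
    for o in obs:
        d = np.linalg.norm(landmarks - o, axis=1)
        j = int(np.argmin(d))
        matches.append(j if d[j] <= gate else -1)
    return matches

# Hypothetical 2-D map landmarks and noisy observations.
landmarks = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
obs = np.array([[0.2, -0.1], [4.8, 0.3], [9.0, 9.0]])
print(associate(obs, landmarks))  # [0, 1, -1]
```

In practice SLAM systems gate on Mahalanobis rather than Euclidean distance, so that the match threshold accounts for the estimated uncertainty.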
Online since: May 2013
Authors: Elaine Aspinwall, Stephen Boakye Twum
The minimum cut-sets leading to a series-parallel reduction of such systems were formulated as criteria, and their reliability was maximised while the cost of improving the system's reliability was minimised.
In general, MCSs are not pair-wise mutually exclusive, so some cut-sets share components; this suggests that component mixing is a likely feature of this type of series-parallel reduction and distinguishes it from the standard series-parallel system.
The minimum reliability value of a component may be taken as its current reliability value, which can be obtained from the component's failure data and the corresponding statistical distribution.
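For reference, the reliability of the series and parallel arrangements used in such reductions can be computed as follows (a minimal sketch with hypothetical component reliabilities):

```python
def series(rels):
    """Reliability of components in series: all must work."""
    p = 1.0
    for r in rels:
        p *= r
    return p

def parallel(rels):
    """Reliability of redundant components in parallel: at least one works."""
    q = 1.0
    for r in rels:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical system: two parallel pairs connected in series.
R = series([parallel([0.9, 0.8]), parallel([0.95, 0.7])])
print(round(R, 4))  # 0.98 * 0.985 = 0.9653
```

These two formulas are the building blocks that a series-parallel reduction applies repeatedly; component mixing across cut-sets is what prevents the standard decomposition from applying directly.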
Online since: July 2013
Authors: Cătălin Alexandru, Vlad Totu
The goodness-of-fit has been verified by computing the variance in the predicted results versus the real data, the probability that the fitted model has no useful terms, and the significance of the regression.
R-squared indicates the variance in the predicted results versus the real data.
This is the proportion of total variability in the data which is explained by the regression model, a score of "1" indicating a perfect fit.
Range-to-variance ratio indicates how well the model predicts values at the data points.
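The R-squared indicator described above can be computed as follows (a minimal sketch with made-up values):

```python
def r_squared(y_true, y_pred):
    """Proportion of total variability explained by the model:
    R^2 = 1 - SS_res / SS_tot, where 1 indicates a perfect fit."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - f) ** 2 for y, f in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

# Hypothetical test data vs. regression predictions.
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.9]
print(round(r_squared(y_true, y_pred), 4))
```

A value near 1 means the regression model accounts for almost all the variability in the data, which is the criterion used to judge the fitted model here.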
The results shown in figures 3-7 indicate that the regression model for the selected strategy, DOE Screening, D-Optimal design, Interactions, matches the test data very well.
Online since: October 2022
Authors: Pinki Rani, Ajay Singh Verma, Sunil Rohilla
Using the XRD data, the crystallite size of V2O5 was estimated to be 33.98 nm and that of TiO2 to be 35.04 nm.
After a number of adjustments to the obtained pattern, the final XRD pattern data are obtained.
These data give structural and diffraction information about the nanoparticles.
Wyckoff positions: Ti, site symmetry m.mm, coordinates (0, 0, 0), site 2a; O, site symmetry m.2m, coordinates (0.327, 0.327, 0), site 4f. The profile of the X-ray diffraction data is fitted with Rietveld refinement, and the obtained spectrum of V2O5/TiO2 is shown in the figure.
The Rietveld refinement of the sample was performed with the FullProf software; there is good agreement between the observed and calculated data, and the refinement yields a low value of the R-factor.
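Crystallite sizes such as those quoted above are commonly obtained from XRD peak broadening via the Scherrer equation; the sketch below uses a hypothetical peak, not the paper's measured values:

```python
import math

def scherrer(beta_deg, theta_deg, wavelength_nm=0.15406, K=0.9):
    """Scherrer estimate D = K * lambda / (beta * cos(theta)), with beta
    the peak FWHM (converted to radians) and theta the Bragg angle.
    The default wavelength is Cu K-alpha; K is the shape factor."""
    beta = math.radians(beta_deg)
    theta = math.radians(theta_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical peak: FWHM 0.25 deg at 2-theta = 25.3 deg (theta = 12.65 deg).
D = scherrer(0.25, 12.65)
print(round(D, 2))  # crystallite size in nm
```

Instrumental broadening must be subtracted from the measured FWHM before applying the formula; Rietveld refinement packages such as FullProf handle this as part of the profile fit.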
Online since: May 2020
Authors: Alexander N. Kalitaev, Vlasta D. Tutarova, Aleksey N. Shapovalov
In order to determine the reasonable and balanced treatment parameters that ensure the required level of nitrogen content in the above steel grades, an analysis of production data for the period of November-December 2016 has been carried out.
Table 1 shows the averaged degassing results for the above steel grades.
According to the data in Table 1, the operating parameters of the degassing process vary within sufficiently wide limits, which causes significant fluctuations in the content of hydrogen and nitrogen in the steel.
Therefore, for further processing, heats with a freeboard exceeding 500 mm were rejected from the initial production data, and the effect of the thickness of the slag layer, assessed visually, was not taken into account.
To assess the cumulative quantitative effect of the main degassing parameters on nitrogen removal, a regression analysis of the production data has been carried out.
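A regression analysis of production data of this kind can be sketched as an ordinary least-squares fit of the target quantity on the process parameters; the records below are invented for illustration:

```python
import numpy as np

# Hypothetical production records: two process parameters per heat.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 5.0]])
# Exactly linear response, for illustration only.
y = 0.5 + 2.0 * X[:, 0] - 1.0 * X[:, 1]

A = np.column_stack([np.ones(len(X)), X])     # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # [intercept, b1, b2]
print(np.round(coef, 3))  # recovers [0.5, 2.0, -1.0]
```

The fitted coefficients quantify the cumulative effect of each parameter on the response; with real, noisy heats one would also inspect the residuals and significance of each term.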
Online since: August 2013
Authors: Y. K. Chen, G.J. Williams, F.S. Bhinde
One approach in the literature [J Power Sources 92 (2001) 26-34] used interpolation of empirical data, thus negating the need for localised equations to be solved.
However, the large number of variables to take into account and the lack of wide-ranging data for these variables seriously limit the usefulness of this approach.
As such, empirical data for these values would be required at each temperature in question, which is clearly not useful for the purposes of the model.
Thus, tables of empirical data for the entropy and enthalpy of hydrogen, oxygen and water were collected [ ].
These calculated data were then analysed graphically by plotting the individual data points of entropy and enthalpy against temperature.
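Interpolating tabulated thermodynamic data, as described above, can be done with simple linear interpolation between table entries; the table values below are hypothetical:

```python
import numpy as np

# Hypothetical table: temperature (K) vs. enthalpy (kJ/mol) for a gas.
T_tab = np.array([300.0, 400.0, 500.0, 600.0])
H_tab = np.array([8.5, 11.4, 14.4, 17.5])

def enthalpy(T):
    """Linear interpolation between tabulated points."""
    return np.interp(T, T_tab, H_tab)

print(enthalpy(450.0))  # midway between 11.4 and 14.4 -> 12.9
```

This lets the model query any temperature within the table's range without solving localised equations, at the cost of requiring a sufficiently dense table.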
Online since: November 2012
Authors: Lan Min Wang, Wang Nai, Wang Qian, Zhong Xia Yuan
The study is expected to provide a scientific basis for seismic landslide disaster prevention and reduction in valley cities of the loess area.
The landslide data used for the assessment are obtained by field measurement.
Hence, field investigation, interviews and local historical data can be important means of verifying a seismic landslide.
Assessing the specific position, scale and damage of a landslide caused by an earthquake requires local data.
With a full grasp of the slope data, it is possible to predict landslide disasters and designate an emergency relief program in advance, so that rescue can be timely when a landslide occurs.