Search results
Online since: January 2012
Authors: Zhe Ming Zhu, Ling Zhi Xie, Ru Zhang, Ting Ai, Li Ren
In the analysis of slope stability, a non-linear shear strength reduction method was applied.
A power function was used to approximate the shear failure envelope which is determined by the non-linear criterion or experimental data, and then the coefficients of that function were determined by using the collocation points.
To implement the shear strength reduction simply and effectively in slope stability calculations, we used a power function to approximate the shear strength failure envelope, which is determined by the non-linear criterion or experimental data in the normal-shear stress plane (we chose the most popular non-linear criterion, the Hoek-Brown criterion, as an example), and then calculated the coefficients of that function using the collocation points.
In order to obtain the EM-C parameters, we suppose that the cohesion ce has an initial value ce0, and then obtain the friction angle φe0 by the least squares method (LSM), which is a procedure to determine the best-fit line to data.
In this paper, we used a power function to approximate the shear strength failure envelope which is determined by the non-linear criterion or experimental data in the normal and shear stress plane, and calculated the coefficients of the function.
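The collocation-and-least-squares procedure described above can be sketched as follows. The power-function envelope coefficients A and B, and the normal-stress range, are hypothetical values chosen for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical power-function envelope tau = A * sigma_n ** B
# (the coefficients A, B are illustrative, not from the paper).
A, B = 1.2, 0.7

# Collocation points over an assumed normal-stress range (MPa).
sigma_n = np.linspace(0.1, 2.0, 20)
tau = A * sigma_n ** B

# Least-squares linear fit tau = c_e + sigma_n * tan(phi_e)
# gives the equivalent Mohr-Coulomb (EM-C) parameters.
tan_phi, c_e = np.polyfit(sigma_n, tau, 1)
phi_e = np.degrees(np.arctan(tan_phi))

print(f"c_e = {c_e:.3f} MPa, phi_e = {phi_e:.1f} deg")
```

Because the power-law envelope is concave, the fitted line has a positive intercept (cohesion) and a slope between 0 and 1, so the recovered friction angle lies in (0°, 90°).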
Online since: October 2011
Authors: De Hong Xia, Han Bing Bi, Ling Ren
In particular, the energy consumption of the reduction furnace in the thermal reduction process is more than 70% of the total energy consumption.
The reduction jar is the core equipment of the reduction furnace; the reduction reaction that produces Mg occurs in it at high temperature and under high vacuum.
The Mg-reduction jar is a consumable piece of equipment rather than an energy-consuming one in the thermal reduction process of the silicothermic method.
Cross Section Design of Reduction Jar
The cross section of the traditional reduction jar is a regular ring, as shown in Fig. 1(a).
Based on the data fitting of Table 1, the dependence of the elastic modulus E on the temperature T is E = -0.0843T + 234.
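The fitting step behind the relation above can be reproduced with ordinary least squares. The (T, E) sample points below are synthetic values placed exactly on the reported line purely to illustrate the procedure (Table 1 is not reproduced here), and the units (T in °C, E in GPa) are assumptions:

```python
import numpy as np

# Synthetic (temperature, elastic modulus) samples lying on the
# reported fit E = -0.0843*T + 234 (assumed units: T in deg C, E in GPa).
T = np.array([20.0, 200.0, 400.0, 600.0, 800.0, 1000.0])
E = -0.0843 * T + 234.0

# A linear least-squares fit recovers the slope and intercept.
slope, intercept = np.polyfit(T, E, 1)
print(f"E = {slope:.4f}*T + {intercept:.0f}")
```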
Online since: January 2015
Authors: Wei Jiang Zhang
It is shown, through the processing and analysis of test data, that the application of fuzzy control to vehicle emission reduction is feasible.
Fuzzification and defuzzification of the data are not needed in this study, because only 13 operating conditions were measured in the tests.
Substituting the original test data into formulas (1)–(7), the urea solution theoretically required to completely eliminate NOx under ESC conditions is obtained, as shown in Table 3.
Through a look-up table, the corresponding output values for the different operating conditions were retrieved and the measured NOx values were processed; the data are shown in Table 6.
The application of fuzzy control in vehicle emission reduction is feasible.
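The look-up-table step described above can be sketched as follows. The operating-condition labels and urea injection rates are hypothetical placeholders, not the measured values of Table 3 or Table 6:

```python
# Hypothetical look-up table mapping ESC operating conditions
# (speed point, load %) to urea-solution injection rates (mL/min);
# all values are illustrative only.
urea_table = {
    ("A", 25): 120.0,
    ("A", 50): 210.0,
    ("B", 25): 150.0,
    ("B", 50): 260.0,
}

def urea_rate(speed_point: str, load_pct: int) -> float:
    """Return the tabulated urea injection rate for a measured condition."""
    return urea_table[(speed_point, load_pct)]

print(urea_rate("B", 50))
```

With only a small, fixed set of measured conditions, a direct table look-up like this avoids the fuzzification and defuzzification stages of a full fuzzy controller, which matches the simplification described in the abstract.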
Online since: October 2011
Authors: Hai Zhong Tan
However, researchers have illustrated that Pawlak’s rough set model has some limitations when handling practical problems, especially when noisy data are included.
Therefore, the rule properties need to be preserved after attribute reduction.
Attribute Reduction in the Variable Precision Rough Set Model
In this section, Ziarko’s and Mi’s attribute reduction models will be briefly recalled.
Rule acquisition based on rough set theory should correspond to the original data set.
If the rule properties are changed when reducing attributes, for example if deterministic rules are changed to probabilistic rules, or some deterministic rules are even eliminated, the obtained rule set will not conform to the original data set.
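The requirement that deterministic rules stay deterministic after attribute reduction can be checked on a toy decision table. The table, its attribute names, and its values below are hypothetical, chosen only to show the consistency test:

```python
from collections import defaultdict

# Toy decision table: rows are objects, a1..a3 are condition
# attributes, 'd' is the decision (all values hypothetical).
table = [
    {"a1": 0, "a2": 0, "a3": 1, "d": "yes"},
    {"a1": 0, "a2": 1, "a3": 0, "d": "no"},
    {"a1": 1, "a2": 0, "a3": 1, "d": "yes"},
    {"a1": 1, "a2": 1, "a3": 1, "d": "no"},
]

def is_consistent(attrs):
    """True iff objects indiscernible on `attrs` share one decision,
    i.e. every rule induced by `attrs` stays deterministic."""
    classes = defaultdict(set)
    for row in table:
        key = tuple(row[a] for a in attrs)
        classes[key].add(row["d"])
    return all(len(ds) == 1 for ds in classes.values())

print(is_consistent(["a2", "a3"]))  # a candidate reduct
print(is_consistent(["a1"]))        # a single attribute may not suffice
```

An attribute subset is a valid (classical) reduct only if this check still succeeds after the removal; variable precision models relax the single-decision requirement by a precision threshold.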
Online since: July 2017
Authors: Bing Yin Qu, Li Na Hao, Zhang Wei, Shu Bing Liu, Zhi Xing Tian
(2) Data preprocessing, feature extraction and data fusion.
The available sensor data and status monitoring data include online data and off-line historical data.
Appropriate data preprocessing methods include noise reduction, smoothing, imputing missing values, and so on.
(2) Data reduction methods are applied to PHM research, together with methods of extending the existing data to drive PHM.
PCA [2] is a typical data reduction method.
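As a minimal sketch of PCA-based data reduction, the example below projects synthetic condition-monitoring data onto its leading principal components; the data shape and values are assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical condition-monitoring data: 100 samples, 6 features.
X = rng.normal(size=(100, 6))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first k principal components (reduction 6 -> 2).
k = 2
X_reduced = Xc @ Vt[:k].T
print(X_reduced.shape)  # (100, 2)
```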
Online since: October 2018
Authors: Aleksandr Bogatov, Dmitry Pavlov, E.A. Pavlova
The Investigation of Pipe Ends Formation under Reduction Mill Rolling
It is established that the greatest "contribution" to the wall thickness data spread is made by the pipe facets, which is caused by the influence of the reduction regimes and the rolls calibration due to the metal flow into the tapers of the groove.
The reduction regime is chosen so that the wall thickness increases from 4.5 to 4.8 mm.
Thus, at the rear end the average wall thickness is greater than at the front end, which corresponds to the literature data [2-8].
Summary
In the course of the investigation of the longitudinal and transverse wall thickness variation of the oil-well tubing of size 60 × 5 mm rolled at EWPM 20-102, it was established that the greatest "contribution" to the wall thickness data spread is made by the pipe facetedness, which is caused by the influence of the reduction regimes at the reduction mill and the rolls calibration due to the metal flow into the tapers of the groove.
Online since: August 2014
Authors: Bo Zhang, Fang Cheng Lv, Zi Jian Wang, Hu Jin
Results show that the features extracted by this method contain all the information of the original data, their dimension is less than the number of GIS insulation defect categories, and the method realizes data dimension reduction without information loss, which improves the pattern recognition rate.
Feature Extraction Based on the Class Mean Kernel Principal Component Dimension Reduction and Compression of PD Data.
According to the experiment in Section 2.2, 24 characteristic parameters were extracted from every set of data, and 50 groups of sample data were obtained from every discharge model.
The N kinds of sample data were set as .
Matrix-based Kernel Method for Large-scale Data Set.
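A minimal sketch of standard kernel PCA on synthetic partial-discharge feature data follows. The class-mean variant described in the paper additionally works with class means, which is not reproduced here; the RBF kernel, its gamma, and the data shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PD feature samples: 30 samples x 24 characteristic
# parameters, mirroring the described experiment (values synthetic).
X = rng.normal(size=(30, 24))

# RBF kernel matrix (gamma is an assumed hyperparameter).
gamma = 0.05
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)

# Double-center the kernel matrix, then eigendecompose.
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J
eigvals, eigvecs = np.linalg.eigh(Kc)

# Project onto the top-k kernel principal components.
k = 3
idx = np.argsort(eigvals)[::-1][:k]
alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
Z = Kc @ alphas
print(Z.shape)  # (30, 3)
```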
Online since: February 2011
Authors: Yan Hong Xie
Obviously, {a1} is not a true reduction.
This time, the attribute reduction set E is NULL
So E={a2,a3} is a reduction and it is a true reduction.
“An Attribute and Value Reduction and Rule Extraction Algorithm”, Computer Engineering & Science, Vol. 30, pp. 61-63 (2008). [3] L.J. Huang, Z.M. Zhang, C.Y. Zhou, M.H. Huang, “Method of Data Reduction Based on Boolean Matrix”.
“A New Method of Complete Data Reduction Based on the Boolean Matrix”.
Online since: September 2013
Authors: Zlatan Soskic, Jelena Tomić, Nebojša Bogojević, Snežana Ćirić Kostić
Data processing of the measured data is performed by computers, which are essentially digital data processing devices.
Processing uniform data is significantly easier and faster than processing nonuniform data, because equal time intervals between measurement moments simplify the calculations performed during the data processing procedures.
However, when strong electromagnetic induction is present, long series of consecutive data are corrupted and lost during the data transmission process, and such data transmission losses are called heavy data transmission losses.
The received data in cases of heavy data transmission losses cannot be made uniform in a meaningful way.
Data losses obviously reduce the signal power, and hence the spectral power, but even basic considerations show that the distribution of the power losses depends on the signal spectrum and on the duration and temporal distribution of the loss events.
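The power-loss effect can be illustrated on a synthetic signal. The sampling rate, signal frequency, and loss interval below are assumptions, and a lost run of samples is modeled simply by zeroing it:

```python
import numpy as np

# Uniformly sampled sine signal (hypothetical measurement).
fs, f0, n = 1000.0, 50.0, 2048
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

# Heavy transmission loss: a long run of consecutive samples is lost
# (the position and length of the run are assumed).
x_lost = x.copy()
x_lost[500:900] = 0.0

# Total signal power drops with the lost samples...
p_full = np.mean(x ** 2)
p_lost = np.mean(x_lost ** 2)

# ...and, by Parseval's theorem, so does the total spectral power.
P_full = np.abs(np.fft.rfft(x)) ** 2
P_lost = np.abs(np.fft.rfft(x_lost)) ** 2
print(p_lost < p_full, P_lost.sum() < P_full.sum())
```

Where the lost power lands in the spectrum depends on the signal content within the lost interval, which is the dependence on signal spectra and loss timing noted above.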