Search results

Online since: July 2015
Authors: Sergey V. Konovalov, Nadezhda Yaropolova, Dmitry Zaguyliaev, Alexander Semin, Yurii F. Ivanov, Victor Gromov
The testing machine works in combination with a software package intended for recording the sample elongation as a function of experiment time; it also enables analysis of the creep stages and approximation of the creep curves.
Standard methods of mathematical statistics were used for data processing.
Fig. 1. Typical creep curves of copper.
From Fig. 1 it follows that the magnetic field results in a significant reduction of the creep rate on the linear section of the creep curve.
As a quantitative measure of the magnetic field effect, the parameter of relative change of the creep rate at the linear stage is used, δ = (ε̇₀ − ε̇B)/ε̇₀, where ε̇B and ε̇₀ are the values of the creep rate with the magnetic field and without it, each averaged over 20 experimental data points.
Thus, analysis of the presented dependence shows that the reduction of the creep rate has a threshold character.
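A minimal numerical sketch of how this relative-change parameter is evaluated from averaged creep rates (the rate values below are hypothetical, not the paper's data):

```python
# Relative change of the creep rate at the linear stage:
# delta = (mean rate without field - mean rate with field) / mean rate without field.
import numpy as np

rate_no_field   = np.array([2.1e-5, 2.3e-5, 2.0e-5])   # hypothetical creep rates, 1/s
rate_with_field = np.array([1.6e-5, 1.7e-5, 1.5e-5])   # hypothetical creep rates, 1/s

delta = (rate_no_field.mean() - rate_with_field.mean()) / rate_no_field.mean()
print(f"relative reduction of creep rate: {delta:.1%}")
```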
Online since: December 2013
Authors: Jian Long Zheng, Jun Hui Zhang, Jun He, Jie Feng Lin
B method (strain reduction type): Pc is calculated from the redundant deformation Δe of the compression-rebound cycle. It is convenient, but point B is difficult to identify in a timely manner, and the excess deformation differs under different test conditions.
S method (curve reduction type): takes the strength loss and the redundant deformation into consideration. It is reasonable and allows both Pc and Cc to be calculated, but it is more complicated.
Table 2. Consolidation test data of a soft soil sample from the Dongting Lake area
P (kPa):   0       25       50       100      200      400      800      1600
lg P:      0       1.39794  1.69897  2        2.30103  2.60206  2.90309  3.20412
e:         1.0934  1.0503   0.991    0.9202   0.8314   0.7328   0.6309   0.5287
Numerical method to determine the maximum curvature point and the pre-consolidation pressure
The curvature K of the compression curve e = f(lg p) is expressed by Eq. (4), K = |f''| / (1 + f'²)^(3/2); from the Harris model, formula (3), the quantities needed to evaluate K are given by Eqs. (5)-(8).
Because the e-lg p compression curve obtained from the consolidation test contains relatively few data points, the maximum curvature point of the compression curve can be calculated in Excel software on the basis of the Harris model.
The steps are as follows. First, starting from a consolidation pressure of 0 kPa, densify the data points at an interval of 0.1 kPa (0 kPa, 0.1 kPa, 0.2 kPa, and so on up to the last pressure level), and then convert the values into lg p form.
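A minimal sketch of this procedure in Python rather than Excel, assuming the Harris model form e = 1/(a + b·c^x) with x = lg p (the exact model form and construction used in the paper are not reproduced in this excerpt):

```python
# Fit a Harris-type model to the e-lg(p) data of Table 2, evaluate the
# curvature K = |e''| / (1 + e'^2)^(3/2) on a dense lg(p) grid (analogous to
# the 0.1 kPa densification described above), and locate the point of
# maximum curvature used to determine the pre-consolidation pressure Pc.
import numpy as np
from scipy.optimize import curve_fit

p = np.array([25, 50, 100, 200, 400, 800, 1600], dtype=float)  # kPa (0 kPa omitted: lg 0 is undefined)
e = np.array([1.0503, 0.991, 0.9202, 0.8314, 0.7328, 0.6309, 0.5287])
x = np.log10(p)

def harris(x, a, b, c):
    return 1.0 / (a + b * c**x)           # assumed Harris model form

params, _ = curve_fit(harris, x, e, p0=(1.0, 0.01, 3.0), maxfev=10000)

xx = np.linspace(x.min(), x.max(), 20000) # dense lg(p) grid
ee = harris(xx, *params)
d1 = np.gradient(ee, xx)                  # e'
d2 = np.gradient(d1, xx)                  # e''
K = np.abs(d2) / (1.0 + d1**2) ** 1.5     # curvature

x_max = xx[np.argmax(K)]
print(f"maximum-curvature point at lg p = {x_max:.3f} (p = {10**x_max:.0f} kPa)")
```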
Online since: January 2020
Authors: Dmitry Pavlov, Mikhail Erpalov
In addition, approaches to post-processing the experimental data, obtained in the form of a torque-twist angle curve, are reviewed.
The post-processing of the obtained experimental data can be carried out using a modified Nadai method.
The issues of post-processing the experimental data obtained during testing specimens in torsion are discussed in [17, 18].
Hodgson, A heuristic model selection scheme for representing hot flow data using the hot torsion test results, Materials and Design.
Pavlov, Control and experimental data processing in torsion testing with variable acceleration.
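As an illustration of this kind of post-processing, a sketch based on the classical Nadai construction (the paper uses a modified variant whose exact form is not given in this excerpt); the specimen dimensions and torque-twist data below are assumed:

```python
# Convert a torque-twist curve into a surface shear stress-strain curve with
# the classical Nadai formula for a solid round specimen:
#   tau = (3*T + theta_u * dT/dtheta_u) / (2*pi*r^3),  gamma = r * theta_u,
# where theta_u is the twist per unit gauge length.
import numpy as np

r = 3.0e-3   # specimen radius, m (assumed)
L = 25.0e-3  # gauge length, m (assumed)

theta = np.linspace(0.0, 2.0, 200)                          # twist angle, rad (synthetic)
torque = 12.0 * (1.0 - np.exp(-3.0 * theta)) + 1.5 * theta  # torque, N*m (synthetic)

theta_u = theta / L                         # twist per unit length, rad/m
dT = np.gradient(torque, theta_u)           # dT/d(theta_u)
tau = (3.0 * torque + theta_u * dT) / (2.0 * np.pi * r**3)  # surface shear stress, Pa
gamma = r * theta_u                         # surface shear strain

print(f"max shear stress ~ {tau.max() / 1e6:.0f} MPa at gamma = {gamma[tau.argmax()]:.2f}")
```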
Online since: October 2013
Authors: Feng Jin, Li Zhi Xing
Unlike traditional industrial parks, an EIP emphasizes cleaner production and emission reduction within industrial clusters.
Data were gathered from multiple sources, mainly GETDD annual reports from 2007 to 2010.
To quantify the performance, a gap mapping function is designed to represent the difference between the maximum environmental loads and the actual disturbance caused by EIP construction and development, where the maximum environmental load of each indicator is either a standard requirement or a prediction based on its baseline data in the EIP planning.
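The exact form of the gap mapping function is not given in this excerpt; the sketch below assumes a simple normalized gap per indicator, with hypothetical values:

```python
# Assumed gap form: gap_i = (max_load_i - actual_i) / max_load_i, i.e. the
# remaining headroom of indicator i relative to its maximum allowed
# environmental load (standard requirement or baseline-based prediction).
max_load = {"COD_discharge_t": 1200.0, "SO2_emission_t": 800.0}   # hypothetical maximum loads
actual   = {"COD_discharge_t":  950.0, "SO2_emission_t": 860.0}   # hypothetical monitored values

gap = {k: (max_load[k] - actual[k]) / max_load[k] for k in max_load}
for name, value in gap.items():
    status = "within load" if value >= 0 else "exceeds load"
    print(f"{name}: gap = {value:+.2f} ({status})")
```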
In the GDD plan, 29 goals were set in 2007, covering four domains, namely economic development, reduction and recycling, pollution control, and EIP management.
Goal achievement in the pollution control and EIP management categories indicates better performance compared with the economic development and reduction and recycling categories.
Online since: December 2013
Authors: Li Mei Liu, Jian Wen Wang, Ying Guo, Hong Sheng Lin
As a result, these methods depend strongly on the features of the data.
Wu generates a new class of training samples by a genetic crossover operation and a reconstruction method that converts imbalanced data sets into balanced data sets [12].
First, a preliminary training is carried out with the standard SVM to obtain the normal vector, and the high-dimensional data are then projected onto it to obtain one-dimensional vector data.
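A minimal sketch of this projection step, assuming a linear SVM and an illustrative imbalanced data set:

```python
# After a preliminary fit of a standard (linear) SVM, take the normal vector
# w of the separating hyperplane and project every high-dimensional sample
# onto it, giving a one-dimensional representation of the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# imbalanced toy data set (assumed for illustration)
X, y = make_classification(n_samples=500, n_features=20, weights=[0.9, 0.1], random_state=0)

svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)  # preliminary training
w = svm.coef_.ravel()                             # normal vector of the hyperplane
proj = X @ (w / np.linalg.norm(w))                # 1-D projection of the samples

print(proj[:5])
```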
Intelligent Data Analysis, Vol. 6 (2002), pp. 429-449
IEEE Transactions on Knowledge and Data Engineering, Vol. 18 (2006), pp. 63-77
Online since: June 2016
Authors: Eike Permin, Robert H. Schmitt, Tom Mannheim, Burkhard Corves, Mats Björkman, Jelena Kurilova-Palisaitiene, Kai Buhse
A detailed description of the simulation can be found in [15].
Figure 3. Physical representations of the robot concepts.
As representatives of serial robots, a KUKA KR 6 R and a KUKA KR 16 have been chosen due to the wide use of these systems in industry and the good availability of the necessary technical data.
All simulation data can be found in the appendix of this paper.
To allow for a fair comparison, the driver efficiency has been assumed to be similar for all robot systems and was based on manufacturer data from the motor supplier of the PARAGRIP system.
Based on average German energy prices for industrial consumers in 2015, the achievable reduction would lead to savings between €210 and €430 per 10,000 hours of service.
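A back-of-the-envelope check of this figure (the electricity price used below is an assumption for illustration, not a value taken from the paper):

```python
# With an assumed industrial electricity price of about 0.15 EUR/kWh, savings
# of 210-430 EUR per 10,000 h of service correspond to an average power
# reduction of roughly 0.14-0.29 kW per robot.
price_eur_per_kwh = 0.15   # assumed average German industrial price, 2015
hours = 10_000.0

for savings_eur in (210.0, 430.0):
    energy_kwh = savings_eur / price_eur_per_kwh
    avg_power_kw = energy_kwh / hours
    print(f"{savings_eur:.0f} EUR -> {energy_kwh:.0f} kWh -> {avg_power_kw:.2f} kW average reduction")
```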
Even if this reduction seems rather small for each individual IR, the large number of such robots in industrial use offers substantial opportunities for a rise in eco-efficiency.
Online since: September 2012
Authors: Chun Yang Wang, Hong Bing Li, Jian Gong
In this algorithm, the signal subspace is obtained using the MSWF, which avoids the need for a known expected signal, the estimation of the virtual data autocorrelation matrix, and its eigen-decomposition.
However, it would decrease the virtually extended array aperture; it also needs to estimate the data autocorrelation matrix and eigen-decompose it, which involves a large computational cost when the virtual sensor array is large.
According to Theorem 1 of [8], if the cross-correlation between the observation data and the desired signal lies in the signal subspace, the signal subspace can be constructed by means of the forward recursions of the MSWF, Eq. (4). To begin with, let the observation data be defined as in Eq. (5) and the desired signal as in Eq. (6).
Then the cross-correlation between the observation data and the desired signal can be calculated as in Eq. (7).
Partitioning the observation data with the matched filter in a manner similar to that of the MSWF, the desired signal and the observation data at the i-th stage are obtained from Eqs. (9) and (10), and the matched filter is then calculated by Eq. (11). From Eqs. (6)-(11) it is easy to prove that the matched filters are orthogonal to each other, so they can form the signal subspace.
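A generic sketch of such a forward recursion (an assumed, textbook-style implementation, not the authors' exact formulation; initializing the desired signal with the first sensor output is an assumption made here):

```python
# MSWF forward recursion: at each stage the matched filter h_i is the
# normalized cross-correlation between the current observation data and the
# current desired signal; the observations are then deflated by the component
# along h_i. The matched filters h_1, ..., h_D span the signal subspace.
import numpy as np

def mswf_forward(X, d, D):
    """X: M x K snapshot matrix, d: length-K desired signal, D: number of stages."""
    K = X.shape[1]
    filters = []
    for _ in range(D):
        r = X @ d.conj() / K           # cross-correlation between observations and desired signal
        h = r / np.linalg.norm(r)      # matched filter of this stage
        d = h.conj() @ X               # desired signal for the next stage
        X = X - np.outer(h, d)         # deflated observation data for the next stage
        filters.append(h)
    return np.column_stack(filters)    # columns span the estimated signal subspace

# toy example: two sources impinging on an 8-element uniform linear array (synthetic data)
rng = np.random.default_rng(0)
M, K = 8, 400
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad([10.0, -20.0]))))
S = rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))
X = A @ S + 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))

Es = mswf_forward(X, X[0].copy(), D=2)  # first sensor output taken as the initial desired signal
print(Es.shape)                         # (8, 2)
```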
Online since: February 2015
Authors: Anna V. Abakshonok, Alexander N. Eryomin, Vladimir E. Agabekov
The stability of the colloid solution was determined from the zeta potential and the hydrodynamic size of the particles using a Zetasizer Nano («Malvern Instruments», UK).
The diameter of the (HA)Ag NPs was calculated using the spectral data and the calibration dependence of the position of the absorption maximum of the plasmon resonance band on the diameter (λmax vs. d), prepared according to [4].
(HA)Ag composite was fabricated by the reduction of AgNO3 with hydrazine hydrochloride, glucose and glycerol.
When the assessment was performed using the spectral data (Table 1) and the calibration curve (λmax vs. d) given in [4], it was found that the silver NPs present in the reaction solution with hydrazine are larger (35-40 nm) than those in the systems with glucose and glycerol (≤10.0 nm).
Therefore, in an environment with hydrazine hydrochloride, reduction of Ag+ occurs over the entire surface of the nanosorbent.
Online since: October 2011
Authors: Li Bo Yuan
The main difference between them lies in the underlying data used and the effects into which the change is decomposed.
Methodology and Data
The structural decomposition analysis is based on an input-output model.
Data Sources.
The contribution of the emission intensity factor is 75%, which reflects the improvement in the efficiency of energy use during the period, and the domestic demand factor contributes a 35% reduction to the total change.
We find that energy intensity and the domestic demand structure are the main reasons for the reduction in carbon emissions.
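A minimal sketch of the idea behind such a decomposition, reduced to two factors (emission intensity and demand) with hypothetical sector data; the paper's actual SDA is built on a full input-output model with more factors:

```python
# Two-factor structural decomposition with two-polar averaging:
#   E = sum_i f_i * y_i,   dE = dE_intensity + dE_demand   (exact decomposition)
import numpy as np

f0 = np.array([2.0, 1.2, 0.8])      # base-year emission intensity by sector (hypothetical)
f1 = np.array([1.5, 1.0, 0.7])      # end-year emission intensity (hypothetical)
y0 = np.array([100.0, 80.0, 60.0])  # base-year demand-driven output (hypothetical)
y1 = np.array([110.0, 95.0, 70.0])  # end-year demand-driven output (hypothetical)

dE = f1 @ y1 - f0 @ y0
dE_intensity = 0.5 * ((f1 - f0) @ y0 + (f1 - f0) @ y1)   # effect of changing intensity
dE_demand    = 0.5 * (f0 @ (y1 - y0) + f1 @ (y1 - y0))   # effect of changing demand

print(f"total change: {dE:.1f}")
print(f"  intensity effect: {dE_intensity:.1f}, demand effect: {dE_demand:.1f}")
assert np.isclose(dE, dE_intensity + dE_demand)
```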
Online since: February 2021
Authors: Siddarth Sampath, Kusum Maharjan, Anthony Ozzello, Ashutosh Bhabhe
The system then uses these data, along with the density of the metal, to estimate the equivalent diameter of the detected particle by comparing it against reference standards of known size and density, such as gold or silver nanoparticles [4].
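A minimal sketch of this size estimate, using the generic single-particle ICP-MS mass-to-diameter relation rather than the instrument's actual algorithm; the particle masses below are illustrative:

```python
# Once the mass m of a detected particle has been obtained from the calibrated
# pulse intensity, the equivalent spherical diameter follows from the metal
# density rho as d = (6*m / (pi*rho))**(1/3).
import math

def equivalent_diameter_nm(mass_fg, density_g_cm3):
    """Equivalent spherical diameter in nm from particle mass in femtograms."""
    mass_g = mass_fg * 1e-15
    volume_cm3 = mass_g / density_g_cm3
    d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)
    return d_cm * 1e7   # cm -> nm

print(f"{equivalent_diameter_nm(0.1, 7.19):.1f} nm")   # 0.1 fg particle at the density of Cr
print(f"{equivalent_diameter_nm(0.1, 19.3):.1f} nm")   # 0.1 fg particle at the density of Au
```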
Each sample type is analyzed >20 times on the ICP-MS to ensure good statistical confidence in the data reported.
Based on these data, the following observations can be made: a) A majority of the metal particles (~79%), regardless of metal type, are present in the high-purity grade of 30% H2O2 and are <30 nm in size.
The effect of the filtration is best exemplified by the Cr-containing particles, where the overall particle count is reduced by 97%, and particles >30 nm specifically are reduced by 98%.
A very similar trend is seen for the Fe-containing particles, with an overall reduction of 82% and particles >30 nm reduced by 93%.