Search results
Online since: November 2010
Authors: Sérgio M.O. Tavares, Paulo Manuel Salgado Tavares de Castro
The demand for reduced environmental impact and operational costs and, at the same time, for greater aircraft reliability and passenger comfort, puts increasing pressure on the development of new design solutions and concepts for air transportation.
Integral structures, as exemplified in Figure 1, are examples of the reduction of non-value-added parts.
Based upon research and literature data, Table 1 a) gives the goals and their ranking using the method of Cross for the present case study, and Table 1 b) presents the benchmarking exercise.
This potential consists in a reduced part count and in the adequate fatigue behavior demonstrated in the tests presented.
Online since: June 2014
Authors: Cun Xiao Miao, Juan Juan Cao, Yang Bin Ou
Reductions in sensor cost and volume generally bring corresponding reductions in sensor accuracy and reliability.
Improved algorithms and the fusion of data from low-cost integrated navigation sensors are therefore required.
(4) MEMS-SINS/GPS/magnetometer integrated system equation. The Kalman filter is the heart of data fusion for an integrated navigation system.
The following is the error model of data fusion for filtering estimation.
It provides navigation data for the guidance and control loop.
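The role described for the Kalman filter can be illustrated with a minimal sketch. The scalar state model, noise covariances, and simulated measurements below are illustrative assumptions for a single position channel, not the paper's MEMS-SINS/GPS/magnetometer error model:

```python
import numpy as np

def kalman_update(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative 1-D example: estimate a fixed position from noisy fixes.
F = np.array([[1.0]]); Q = np.array([[1e-4]])   # assumed process model
H = np.array([[1.0]]); R = np.array([[0.25]])   # assumed measurement noise
x = np.array([0.0]); P = np.array([[1.0]])
rng = np.random.default_rng(0)
true_pos = 5.0
for _ in range(50):
    z = np.array([true_pos + rng.normal(0, 0.5)])
    x, P = kalman_update(x, P, z, F, Q, H, R)
print(x[0])  # estimate converges toward the true position
```

In a real integrated system the state vector holds the sensor error terms and the measurement is the difference between SINS and GPS/magnetometer outputs, but the predict/update structure is the same.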
Online since: May 2014
Authors: Shi Qing Dou, Xiao Yu Zhang
Data simplification is an important part of spatial data generalization.
The experimental data are multi-beam sounding data from an area in northern China; after pre-treatment, they yield water-depth data with three-dimensional geographic coordinates.
(a) Shading map of data thinning using the 0.25 threshold in the tight method.
(b) Shading map of data thinning using the 0.5 threshold in the tight method.
(c) Contour map of data thinning using the 0.5 threshold in the tight method.
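The excerpt does not define the "tight method", but threshold-based thinning of sounding data can be sketched as keeping a point only when its depth departs sufficiently from the last kept point. The profile and the thinning rule below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def thin_by_depth_threshold(points, threshold):
    """Keep a sounding point only if its depth differs from the
    last kept point by at least `threshold` (a simple 1-D sketch)."""
    kept = [points[0]]
    for p in points[1:]:
        if abs(p[2] - kept[-1][2]) >= threshold:
            kept.append(p)
    return np.array(kept)

# Synthetic water-depth profile as (x, y, depth) triples
x = np.linspace(0, 100, 200)
pts = np.column_stack([x, np.zeros_like(x), 10 + 2 * np.sin(x / 10)])
fine = thin_by_depth_threshold(pts, 0.25)
coarse = thin_by_depth_threshold(pts, 0.5)
print(len(pts), len(fine), len(coarse))  # a larger threshold keeps fewer points
```

This reproduces the trade-off the experiment explores: the 0.5 threshold compresses the data more than the 0.25 threshold, at the cost of coarser depth detail.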
Online since: July 2011
Authors: Ping Jie Cheng, Han Zhou Hu, Shu Guang Hu
This paper categorizes and analyzes some classic calculation models for concrete durability parameters, covering the deterioration of concrete and the corrosion of steel.
1 Relevant Models of Concrete Deterioration
1.1 Concrete Carbonation
Scholars have put forward many models for predicting concrete carbonation depth. These models fall into two types: the first is based on experimental data or carbonation depths measured on actual structures, fitted using mathematical methods such as statistics or neural networks; the second is the theoretical model, based on a quantitative analysis of the carbonation reaction process.
In application, one of the models is chosen according to the measured data.
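The empirical family of models is typically built around the classic square-root-of-time law for carbonation depth. A minimal sketch, with an assumed carbonation coefficient that is illustrative rather than taken from any specific code or from this paper:

```python
import math

def carbonation_depth(k, t_years):
    """Classic empirical model x = k * sqrt(t), where k (mm/yr^0.5)
    bundles concrete quality and exposure conditions."""
    return k * math.sqrt(t_years)

k = 4.0  # assumed carbonation coefficient, mm/yr^0.5
for t in (1, 10, 25, 50):
    print(t, "years:", round(carbonation_depth(k, t), 1), "mm")
```

Fitting k to measured depths on an actual structure is exactly the statistical-fitting approach the first type of model describes.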
The model of the average strength of concrete over time: (9). The model of the standard deviation of concrete strength: (10). The two prediction models above were built from a large number of long-term exposure tests and measured data; however, because the underlying information is limited and the statistical data come mainly from Japan, the models should still be adjusted to actual conditions when applied.
2 Models Related to Corrosion of Steel Bars in Concrete
2.1 Start time of reinforcement corrosion
(1) Atmospheric environment. Studies have shown [8] that for reinforcement in concrete there are two critical pH values. One is pH = 9.88, at which a passive film begins to form on the steel surface; below this critical value no passive film can exist on the reinforcement surface, i.e. the steel is completely activated. The other is pH = 11.5, at which the passive film on the steel surface becomes complete; below this critical value the film remains incomplete.
The steel corrosion loss rate is given by: (25).
2.3 Strength reduction factor model for corroded reinforcement
Hui Yun-ling's model [14]: (26), (27), where ky(t) is the yield-strength reduction factor of the corroded reinforcement, ku(t) is its ultimate-strength reduction factor, and ηs(t) is the corrosion loss rate of the steel.
Summary
(1) Although empirical models are convenient to apply, the data behind them were obtained under specific environments and materials, the physical meaning of the model coefficients is unclear, and the models apply only to a limited extent.
(2) Theoretical models have clear concepts, and all parameters have clear definitions and dimensions, but selecting and calculating those parameters is difficult, so such models lack practicability.
(3) Fitting the results of existing empirical models to test data or values measured on actual structures, using mathematical statistics, neural networks, support vector machines, or other modern optimization methods, can achieve relatively high accuracy.
Online since: September 2013
Authors: Ousanee Sangkathum, Ohm Sornil
The first term is the empirical count of feature k in the training data, and the second term is the expected count of this feature under the current trained model.
Suppose that in the training data a feature appears times, while under the current model, the expected count of is .
The training data consists of 8,000 printed characters from document images with various fonts and sizes of 16, 18, 20 and 22, and different styles of regular, italic and boldface.
The test data consists of 1,227 unknown characters gathered from 30 websites and license plates.
The training data are collected from a number of documents with various font sizes and styles.
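The empirical-versus-expected count comparison above is the gradient used in maximum-entropy training, and can be sketched for a softmax classifier. The toy features and labels below are assumptions for illustration, not the paper's OCR feature set:

```python
import numpy as np

def feature_counts(X, y, W):
    """Empirical vs. expected feature counts for a softmax
    (maximum-entropy) model; their difference is the gradient
    of the log-likelihood with respect to the weights W."""
    scores = X @ W                              # (n, classes)
    scores -= scores.max(axis=1, keepdims=True) # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(W.shape[1])[y]
    empirical = X.T @ onehot   # count of each feature with the true class
    expected = X.T @ probs     # count expected under the current model
    return empirical, expected

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0, 1, 0])
W = np.zeros((2, 2))           # untrained model: uniform class probabilities
emp, exp_ = feature_counts(X, y, W)
print(emp - exp_)              # nonzero until training reaches the optimum
```

At convergence the two count matrices match, which is exactly the condition the passage describes.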
Online since: September 2015
Authors: P. Madhan Kumar, Elizabeth Jacob, S. Savithri, G.S. Suneeth
A clustering algorithm uses the solidification time/temperature data from the simulation to divide the casting into 3D feeding sections.
In this new version, the neighbor-finding algorithm, which requires all the mesh elements to be present in memory, is replaced by a distance-based clustering scheme in which the simulation data is divided into small chunks and loaded into memory sequentially.
Virtual Feed a) Phase I: Data in Step 2 is used to divide the casting into feeding sections using the new algorithm for volumetric segmentation.
Table 1: Part data
Dimensions: 602.31 mm × 552.61 mm × 147 mm
Weight: 57.41 kg
Surface area: 7518.61 cm²
Volume: 7863.74 cm³
Fig. 1: (a) imported model (b) solidification pattern (c) sectional view (d) feeder design in AutoCAST
Analysis of the part in a cross-section through the ear portion of the rim shows an L-junction with a rapid change of thickness, a cause of stress concentration.
Manoj, Roschen Sasikumar, Volume segmentation by post-processing data from simulation of solidification in the metal casting process, International Journal of Modeling, Simulation, and Scientific Computing, 4 (2013) 1341005. DOI: 10.1142/S1793962313410055
[3] Campbell J., Castings, Butterworth-Heinemann, Oxford, 2003.
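The chunked, distance-based scheme described above can be sketched as assigning each point to the nearest running cluster centroid within a radius, processing one chunk at a time. The radius, chunking, and synthetic data below are illustrative assumptions, not the AutoCAST implementation:

```python
import numpy as np

def cluster_in_chunks(points, radius, chunk_size=1000):
    """Assign each point to the nearest existing cluster centroid
    within `radius`, otherwise start a new cluster. Points are
    processed chunk by chunk, so the full mesh never needs to be
    in memory at once."""
    centroids, counts = [], []
    labels = np.empty(len(points), dtype=int)
    for start in range(0, len(points), chunk_size):
        for i, p in enumerate(points[start:start + chunk_size], start):
            if centroids:
                d = np.linalg.norm(np.array(centroids) - p, axis=1)
                j = int(d.argmin())
                if d[j] <= radius:
                    counts[j] += 1
                    centroids[j] += (p - centroids[j]) / counts[j]  # running mean
                    labels[i] = j
                    continue
            centroids.append(p.astype(float).copy())
            counts.append(1)
            labels[i] = len(centroids) - 1
    return labels, np.array(centroids)

rng = np.random.default_rng(1)
blob_a = rng.normal([0, 0, 0], 0.1, (50, 3))   # two well-separated regions,
blob_b = rng.normal([5, 5, 5], 0.1, (50, 3))   # standing in for feeding sections
pts = np.vstack([blob_a, blob_b])
labels, cents = cluster_in_chunks(pts, radius=1.0)
print(len(cents))  # two separated blobs yield two clusters
```

The real algorithm clusters on solidification time/temperature as well as position, but the memory behavior, only one chunk plus the centroid list resident at a time, is the point of the sketch.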
Online since: March 2012
Authors: Yu Qing Zhou
Data acquisition and pre-processing
2.1 Data acquisition
Data acquisition refers to converting the object surface into discrete geometric point-coordinate data with special measuring equipment and collection methods; the resulting data set is known as a point cloud [2].
Measurement data can be joined together in real time [4], which makes the method well suited to auto-part data acquisition.
2.2 Data pre-processing
Owing to the machining precision of the surveying instrument and the influence of human and environmental factors during measurement, noise points are unavoidable.
Pre-processing of the point cloud includes noise-point removal, data compaction, and data extraction [5].
Data compaction can reduce the total amount of data and speed up pre-processing, but too much reduction affects the precision of model construction, so the point cloud is reduced only as far as precision can be guaranteed. The separation parts should be kept after point-cloud reduction; they are useful when surfaces are designed and also provide precise data for subsequent surface modeling.
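One common way to compact a point cloud while bounding the precision loss is uniform grid (voxel) downsampling; the cell size caps how far any surviving point can move. This is a generic sketch of the idea, not necessarily the compaction method the paper uses:

```python
import numpy as np

def voxel_downsample(points, cell):
    """Compact a point cloud by keeping one representative point
    (the mean) per cubic cell of side `cell`. The cell size bounds
    the displacement of any point, so precision is controlled."""
    keys = np.floor(points / cell).astype(int)
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])

rng = np.random.default_rng(2)
cloud = rng.uniform(0, 1, (5000, 3))        # synthetic dense point cloud
reduced = voxel_downsample(cloud, cell=0.2)  # at most 5^3 = 125 cells here
print(len(cloud), "->", len(reduced))
```

Shrinking the cell keeps more detail (e.g. around the separation parts the passage mentions) at the cost of less compaction.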
Online since: September 2014
Authors: Si Xi Zhu, Feng You Wang, Yun Jie Wu, Hua Zhong He, Dong Fang Yang
Based on investigation data on chromium (Cr) in Jiaozhou Bay waters in April and August 1981, this paper analyzed the content, distribution, and sources of Cr in Jiaozhou Bay, in order to provide a basis for pollution control and reduction.
The data were provided by the North China Sea Environmental Monitoring Center.
However, monitored data indicated that Jiaozhou Bay had not been polluted by Cr in 1981.
Online since: September 2013
Authors: Qiong He, Xiao Li Yang, Fen Yang
Because of the dimensionality problem, the task is especially challenging for high-throughput proteomic data.
It can compute low-dimensional and neighborhood-preserving embeddings (features) from high-dimensional data.
Ovarian cancer mass spectrometry data analysis based on the ICA algorithm [J].
A novel wavelet-based thresholding method for the pre-processing of mass spectrometry data that accounts for heterogeneous noise [J]. Proteomics, 2008, 8(15): 3019-3029.
Ovarian cancer classification based on dimensionality reduction for SELDI-TOF data [J].
Online since: September 2022
Authors: Adam Hubáček, Richard Dvořák, Zdeněk Chobola, Jan Puchýř
Data processing was conducted with Python using different libraries (numpy, pandas, scipy, matplotlib, sklearn).
An example of such data is illustrated in Fig. 2.
PCA is a method of reducing dimensionality of multivariable data while retaining most of its information [17].
These arrows represent the different axes of the multidimensional data observed in 2D.
Big Data, Cloud and Applications.
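The PCA step described above can be sketched directly with an SVD; this is the standard method, applied here to synthetic correlated data rather than the paper's measurements:

```python
import numpy as np

def pca(X, n_components):
    """Project centered data onto its top principal components via SVD,
    returning the projection and the explained-variance ratios."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()
    return Xc @ Vt[:n_components].T, explained[:n_components]

rng = np.random.default_rng(3)
# Synthetic 5-D data whose variance lies mostly along one direction
base = rng.normal(size=(200, 1))
X = base @ rng.normal(size=(1, 5)) + 0.05 * rng.normal(size=(200, 5))
Z, ratio = pca(X, 2)
print(Z.shape, ratio[0])  # first component captures most of the variance
```

The loading vectors (rows of Vt) are what the 2-D arrows in the biplot described above represent: each original axis drawn in the reduced space.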