Search results
Online since: September 2011
Authors: Jing Zhang, Hong Guang Jia, Ling Ding, Wen Hui Dong
Data testing is very important for the development of new missiles.
However, the shock isolation of the mechanical structure is important for improving system reliability and rationality, and directly affects the success of data recovery [1].
Buffer and Vibration Reduction Design of the Original Structure. The original FEA model of the data recorder is shown in Fig. 1 (Fig. 1: the FEA model of the data recorder; Fig. 2: the FEA model of the target and projectile). Figure 1 shows the 1/2 axially symmetric model of the storage, where 1 is the case, 2 is the buffer material and 3 is the storage board.
The buffer and vibration reduction structure of the data recorder was designed according to stress wave theory, with a reasonable allocation of the generalized impedance ratio. Taking the reduction of the maximum dynamic stress on the data storage chip as the objective, a finite element model of missile penetration was set up, and the optimization design of the data recorder's buffer and vibration reduction structure was completed with Isight coupled to LS-DYNA.
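For context, the classical one-dimensional stress wave relations underlying such an impedance-ratio allocation (standard theory, not quoted from this abstract) give the transmitted and reflected stresses at an interface between media of acoustic impedance Z1 and Z2:

```latex
% 1-D stress wave at an interface between media of impedance Z_1 and Z_2,
% where Z = \rho c (density times wave speed):
\sigma_t = \frac{2 Z_2}{Z_1 + Z_2}\,\sigma_i , \qquad
\sigma_r = \frac{Z_2 - Z_1}{Z_1 + Z_2}\,\sigma_i
```

A buffer layer whose impedance is much lower than that of the case transmits only a small fraction of the incident stress to the storage board, which is presumably what the allocation of the generalized impedance ratio exploits.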
Marshall Hammer Experiment. Figure 9 shows the physical composition of the data recorder, and Figure 10 shows the Marshall hammer overload curve (Fig. 9: the data recorder; Fig. 10: the overload curve of the data recorder). Conclusion. Using stress wave theory, the impact resistance problem in the narrow space was solved successfully: the maximum dynamic stress of the protected component was reduced by about 60.9%, and the design passed the Marshall hammer test.
Online since: November 2012
Authors: Xiao Han, Le Le Qu, Xuan Li, Rui Guo
This paper studies noise reduction methods based on the CFA image, considering noise reduction in combination with interpolation.
Interpolation-noise reduction process and noise reduction-interpolation process. The interpolation of the CFA image after noise reduction is investigated.
Imaging performance for different orders of interpolation and noise reduction. To see the difference between the two imaging sequences, the experimental data are shown in Fig. 2 and Fig. 3.
In the Least Squares Method noise reduction process applied before Bilinear Interpolation, the missing data are filled by using the two nearby points on a straight line to determine the corresponding point.
Fig. 2: MSE data. Fig. 3: s-CIELAB data.
Table 1. Serial numbers of the interpolation (i) and noise reduction (j) method combinations:

i \ j                    Linear Filtering   BM3D   Bilateral Filtering   Homogeneity Least Squares Method
Bilinear Interpolation          1             2             3                        4
POSC                            5             6             7                        8
AHA                             9            10            11                       12

Fig. 4: (a) AHA interpolation with Linear Filtering; (b) AHA interpolation with Bilateral Filtering; (c) AHA interpolation with Least Squares Method. The data of the Linear Filtering are the most scattered.
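As a hedged illustration of the bilinear interpolation and MSE evaluation referred to above, here is a minimal sketch for the green channel of a Bayer CFA (the paper's actual pipeline and test images are not given in this abstract; the function names are mine):

```python
import numpy as np

def interpolate_green_bilinear(cfa, green_mask):
    """Fill missing green samples on a Bayer CFA by averaging the
    available 4-neighbourhood green values (simple bilinear step)."""
    g = np.where(green_mask, cfa, 0.0)
    padded = np.pad(g, 1)  # zero-pad so border pixels have neighbours
    neighbour_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:])
    counts = np.pad(green_mask.astype(float), 1)
    neighbour_cnt = (counts[:-2, 1:-1] + counts[2:, 1:-1] +
                     counts[1:-1, :-2] + counts[1:-1, 2:])
    estimate = neighbour_sum / np.maximum(neighbour_cnt, 1.0)
    return np.where(green_mask, cfa, estimate)

def mse(a, b):
    """Mean squared error, the metric reported in Fig. 2."""
    return float(np.mean((a - b) ** 2))
```

Running `mse` on the demosaiced result against a ground-truth channel reproduces the kind of comparison the abstract describes for the two processing orders.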
Online since: February 2011
Authors: Jin Peng Wang, Bao Xiang Liu, Zhen Dong Li, Li Chao Feng
The concept lattice, as a powerful tool for data analysis and processing, has grown rapidly in recent years [7].
On the other hand, we can better understand data analysis and data rules through the interoperability established between the two theories [10].
Theoretical Aspects of Reasoning about Data.
He: Concept Lattice and Rough Set Data Analysis.
Theoretical aspects of reasoning about data.
Online since: April 2012
Authors: De Yong Wang, Mao Fa Jiang, Yan Liu
The mixture calculation for producing stainless steel crude melts by chromium ore smelting reduction in a 150 t converter is carried out using the empirical data and the calculation method for refining plain carbon steel in a converter, according to the blowing conditions of the 185 t smelting reduction converter of the No. 4 steelmaking shop at the Chiba Works of JFE Steel.
Mixture Calculation. The Raw Data Required for the Calculation.
These can be calculated respectively according to the basic raw data and the set values of process parameters.
The total amount of slag and its components are calculated respectively according to the basic raw data and the set values of process parameters.
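A hedged sketch of a balance calculation of this kind; the material list and slag-forming fractions below are hypothetical placeholders, not values from the paper:

```python
# Minimal illustrative slag balance: total slag mass is the sum of the
# slag-forming contributions of each charged material (made-up numbers).
charge = {                 # material: (mass in kg, slag-forming fraction)
    "chromium ore": (20000.0, 0.35),
    "lime":         (8000.0,  0.95),
    "coke ash":     (1500.0,  0.10),
}
slag_mass = sum(mass * fraction for mass, fraction in charge.values())
print(f"total slag: {slag_mass:.0f} kg")
```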
Chen, Diagram and Data Manual in Steelmaking, Metallurgy Industry Publications, Beijing, 1984.
Online since: July 2011
Authors: Zhi Xia He, Qian Wang, Hang Xu, Fang Yin Tu, Jun Ma
The simulation results show good agreement with the experimental data.
From the data in the figure, it can be seen that at the different operating points the error between the simulation results and the experimental data is very small, generally within 5%.
Comparison of simulation results with experimental data.
Comparison of simulation results with experimental data.
All these simulation results show good agreement with the experimental data.
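A minimal sketch of the error check described above; the numeric values are hypothetical placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical simulated and measured values at three operating points.
simulated = np.array([101.2, 87.5, 64.3])
measured  = np.array([100.0, 90.1, 66.0])

# Relative error at each operating point; the abstract reports < 5%.
relative_error = np.abs(simulated - measured) / np.abs(measured)
assert np.all(relative_error < 0.05), "error exceeds the 5% band"
print(relative_error)
```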
Online since: December 2010
Authors: Zhao Qi Wu, Shao Fei Jiang
This model is used for structural damage detection, particularly for cases where the measurement data contain many uncertainties.
RSPNN Model for Damage Detection. Data Preprocessing.
There are four methods used for data discretization; the first is the S method.
Clustering is the process of dividing a data set into several groups such that data within the same group are similar to each other, while data in different groups are dissimilar.
The main merits of K-means clustering are its simplicity, speed and efficiency in processing large databases.
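As a minimal, hedged sketch of the K-means step described above (the paper's feature vectors and parameter choices are not given in this abstract):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means: alternately assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, shape (n, k).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):  # converged
            break
        centroids = new
    return labels, centroids
```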
Online since: March 2015
Authors: Da Qiang Chen, Er Tian Hua, Xiao Juan Gong, Lei Hu, Yan Zhen He, Dan Zhi Sun
This paper takes the historical data of customers' product purchases as its foundation and proposes a method for personalized product design knowledge acquisition based on knowledge reduction and knowledge mining.
Thirdly, the product design information was taken as the initial data for knowledge mining, and the product design knowledge of a specific customer group was obtained by a classification consistency algorithm.
Thus, the main idea of the product design knowledge acquisition proposed in this paper is as follows: a) based on the historical data of the customers' online shopping and the customer segmentation results, the core matrices of each specific customer group are taken as the decision variables of the decision table based on the key customer segmentation; b) using the knowledge reduction algorithm based on the Skowron discernibility matrix, the redundant and unnecessary product matrices are deleted (see the sketch below); c) with the product design information as the initial data for knowledge mining, the product design knowledge of specific customer groups is eventually obtained by the classification consistency algorithm.
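A minimal sketch of step b)'s discernibility-matrix construction on a toy decision table (the attribute names and values are hypothetical, not from the paper's case):

```python
from itertools import combinations

def discernibility_matrix(objects, attrs, decision):
    """Skowron discernibility matrix: for every pair of objects with
    different decision values, record the condition attributes on
    which the two objects differ; attributes appearing alone in a
    cell are indispensable, which drives the knowledge reduction."""
    matrix = {}
    for i, j in combinations(range(len(objects)), 2):
        if decision[i] != decision[j]:
            matrix[(i, j)] = {a for a in attrs
                              if objects[i][a] != objects[j][a]}
    return matrix

# Toy decision table with two condition attributes.
objs = [{"colour": "red",  "size": "L"},
        {"colour": "red",  "size": "S"},
        {"colour": "blue", "size": "L"}]
dec = ["buy", "skip", "skip"]
print(discernibility_matrix(objs, ["colour", "size"], dec))
```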
Case Study. In this case study, the product information of a baby carrier brand is obtained via Web technology through online interaction, comprising a large volume of customers' historical purchase data for a certain brand of stroller.
Stefanowski: Rough Sets in Data Mining and Knowledge Discovery, Physica-Verlag Vol. 1 (1998), p. 500 [8] J.H.
Online since: June 2011
Authors: Zhao Zheng, Di Wu
With the rapid development of computer vision, more and more problems involve data dimensionality reduction, which has led to the rapid development of dimensionality reduction algorithms.
Furthermore, high-dimensional data always lead to low computational efficiency.
These problems can be solved by data dimensionality reduction algorithms.
Many data dimensionality reduction algorithms have emerged to date.
While PCA aims to preserve the global structure of the data, LPP aims to preserve the local structure of the data, assuming that the samples from the same class are likely close to each other in the input space.
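A minimal sketch of the PCA projection mentioned above, in plain NumPy (the paper's exact formulation is not quoted in this abstract):

```python
import numpy as np

def pca(X, n_components):
    """Project data onto the top principal components, i.e. the
    directions of maximum global variance, via SVD of the centred
    data matrix; this preserves global rather than local structure."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T
```

LPP differs in that it builds a neighbourhood graph and preserves local distances, per the contrast drawn above.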
Online since: July 2015
Authors: Gerhard Hirt, Markus Bambach, Johannes Lohmar, Alexander Kraemer
The natural decrease in accuracy from using less data is analysed against the gain from the reduction in experimental effort.
Using the vertices and a symmetrical distribution of the data within the full experimental matrix allows a drastic reduction of experimental effort while maintaining the initial accuracy.
The data distribution for reduced data sets achieving the best accuracy can then be optimized inversely.
Fitting with Reduced Data. For fitting with reduced data, two key questions have to be answered: 1) how much data are necessary to maintain the same accuracy as the initial fit, and 2) how the data should be distributed within the full experimental matrix.
The material model imposes a variety of conditions on the data distribution.
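A minimal sketch of how a reduced-data fit could be scored against the full data set, assuming a generic quadratic stand-in for the material model (the paper's actual model, data, and fitting procedure are not given in this abstract):

```python
import numpy as np

def fit_and_score(strain, stress, subset):
    """Fit an illustrative quadratic flow-stress model on a subset of
    the experimental points and report the RMS error over the *full*
    data set, quantifying the accuracy lost by using less data."""
    coeffs = np.polyfit(strain[subset], stress[subset], deg=2)
    residual = np.polyval(coeffs, strain) - stress
    return np.sqrt(np.mean(residual ** 2))
```

Comparing the score for the full index set against, say, only the vertices of the experimental matrix mirrors the accuracy-versus-effort trade-off analysed above.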
Online since: February 2011
Authors: Mao Fa Jiang, De Yong Wang, Yan Liu
The process calculation of producing stainless steel crude melts by smelting reduction in a 150 t converter is carried out for the first time using the empirical data and the calculation method for refining plain carbon steel in a converter, according to the blowing conditions of the 185 t smelting reduction converter of the No. 4 steelmaking shop at the Chiba Works of JFE Steel.
Using the self-programming procedure for the process calculation, the proper coke quantity and a complete set of calculation data for the process of smelting reduction in a converter are obtained, including the batch calculation and the material balance and heat balance calculations.
They are calculated respectively according to the base data and set values of processing parameters.
Using the self-programming procedure containing proper coke quantity for the process calculation (shown in Fig. 1), a complete set of calculation data for the process of smelting reduction in a converter are obtained.
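A hedged sketch of how a proper coke quantity could be searched for via the heat balance; `heat_deficit_fn` is a hypothetical stand-in for the paper's balance calculation, which is not published in this abstract:

```python
def find_coke_quantity(heat_deficit_fn, lo=0.0, hi=50.0, tol=1e-3):
    """Bisection on the coke mass (t) until the heat balance closes.
    heat_deficit_fn(coke) > 0 means the charge still lacks heat, so
    more coke is needed; the deficit is assumed to fall as coke rises."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if heat_deficit_fn(mid) > 0:
            lo = mid   # still short of heat: raise the coke mass
        else:
            hi = mid   # surplus heat: lower the coke mass
    return 0.5 * (lo + hi)
```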