Search results
Online since: July 2025
Authors: Muhammad Taufiq Yuda Saputra, Sudirman Haji Umar, Badrun Ahmad
Materials and Methods
This research employed experimental techniques to assess water quality and compared the findings against the quality standards established by applicable laws and regulations.
The data that was gathered consisted of both primary and secondary data.
The primary data collected for this study consists of direct measurements obtained through assessing the physical and chemical properties of healthy water in the Fitu sub-district.
Secondary data refers to information about water quality standards derived from existing sources such as references, legislation, and regulations.
Chemical properties (Table 3):

Parameter   Unit   Before   After   Efficiency (%)
Fe          mg/L   1.3      0.91    30
Mn          mg/L   0.6      0.45    25
pH          -      9        7.90    12

According to the data presented in Table 3, the most notable improvement in filtering efficiency is the reduction of the Fe concentration in the healthy water.
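Read this way, the efficiency column is simply the percent reduction across the filter; a quick check (a sketch, assuming efficiency = (before − after) / before × 100) reproduces the tabulated values:

```python
def removal_efficiency(c_in, c_out):
    """Percent reduction of a water-quality parameter across the filter."""
    return (c_in - c_out) / c_in * 100.0

# Values from Table 3: (parameter, before filtering, after filtering)
for name, c_in, c_out in [("Fe", 1.3, 0.91), ("Mn", 0.6, 0.45), ("pH", 9.0, 7.90)]:
    print(f"{name}: {removal_efficiency(c_in, c_out):.0f}%")
```

This yields 30 %, 25 % and 12 %, matching the efficiency figures in the table.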
Online since: July 2018
Authors: Wolfgang Schlüter, David Wagner
These models are only used to develop a methodology to evaluate model applicability beforehand by using sensitivity analysis and model reduction.
$$\mathrm{FIM} = \mathrm{VarCov}^{-1} = \left( (S^{T} S)^{-1} \cdot \frac{\mathrm{Res}}{n - h} \right)^{-1} \tag{10}$$

with $S$ being the sensitivity matrix, shown below in the case of model A:

$$S = \begin{pmatrix} \dfrac{\partial (dx/dt)}{\partial k_1} & \cdots & \dfrac{\partial (dx/dt)}{\partial k_h} \\ \dfrac{\partial (ds/dt)}{\partial k_1} & \ddots & \dfrac{\partial (ds/dt)}{\partial k_h} \\ \dfrac{\partial (dp/dt)}{\partial k_1} & \cdots & \dfrac{\partial (dp/dt)}{\partial k_h} \end{pmatrix} \tag{11}$$

Here Res is the residual sum of squared errors between the measured and estimated data sets, n is the number of measurements, and h is the number of parameters in the model.
Therefore, it has to be determined whether all of them are estimable or whether complexity reduction is necessary.
Therefore, further complexity reduction is necessary for a detailed analysis of the model structure.
As described in the introduction, a major challenge is the validation of a suitable model candidate against real measurement data.
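Equation (10) can be evaluated directly once the sensitivity matrix has been assembled; a minimal numerical sketch (assuming S is already stacked into a 2-D array of shape (n, h)):

```python
import numpy as np

def fisher_information_matrix(S, res, n, h):
    """Eq. (10): FIM = VarCov^-1 = ((S^T S)^-1 * Res / (n - h))^-1.

    S   : (n, h) sensitivity matrix of state derivatives w.r.t. parameters
    res : residual sum of squared errors between measured and estimated data
    n   : number of measurements; h : number of model parameters
    """
    var_cov = np.linalg.inv(S.T @ S) * res / (n - h)
    return np.linalg.inv(var_cov)
```

A poorly conditioned S^T S signals parameters that are not estimable from the data, which is exactly the situation motivating the complexity reduction discussed above.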
Online since: April 2011
Authors: Yi Mei, Fang Ping Wang, Qiao Ying Liu, Yu Tao Mao
Finite Element Theory of Steady Temperature Field Distribution Analysis
The temperature field is calculated by the FEA method. The thermal analysis model and the structural analysis model can be unified, so a single 3D finite element model serves for both thermal and structural analysis; no data transfer is needed between the thermal and structural models, which reduces computation and increases calculation accuracy [5].
According to the working-condition characteristics of the reduction gearbox and the method for calculating box bearing temperature in references [7-9], the two bearings on the high-speed shaft are loaded with 58 ℃, the two bearings on the intermediate shaft with 70 ℃, and the two bearings on the low-speed shaft with 60 ℃; the external temperature is defined as 25 ℃ and the convection coefficient as 0.001.
The finite element model of the heavy-load reduction gearbox is finally obtained, as shown in Fig. 1.
The steady temperature field of the heavy-load reduction gearbox is calculated by FEM, as shown in Fig. 2.
The largest stress in the reduction gearbox occurs at the location marked in the figure; the maximum stress is 304.54 MPa and the minimum stress is 0.18 MPa.
Online since: December 2012
Authors: T. Kiewkem, P. Chaiwiwatworakul, S. Chirarattananon
All measured data from the sensors are acquired by a National Instruments (NI) data logging system and recorded onto the computer hard disk every minute.
The average value of humidity reduction (W1-W2) was about 9 gw/kgda.
The results are presented in terms of humidity reduction.
Note that for each case, the inlet conditions of the dehumidified air varied following the measured data.
The simulation was made using the hourly weather data from the station.
Online since: May 2012
Authors: Da Min Liang, Hui Ling Cao, Xun Lai Tu
In the Simple Approach [3], the two primary elements (aircraft and engine types) have been simplified into a list of aircraft types for which pre-calculated emission data are provided.
(3), where N_j^LTO is the number of LTO cycles for aircraft type j. The Sophisticated Approach requires actual, refined data obtained from real-time measurements for the calculation.
We input parameters obtained from QAR data and the ICAO emissions databank, such as the reference initial mass, the number of engines, the height of initial approach, the engine type, the emission indices (EI), etc.
SAGE needs more practical data, such as the operating environment, system parameters, etc. 5) The EPA and ICAO simple and advanced approaches can provide the general emission level to be made known to the public.
During the assessment, the accuracy of the calculation is determined by the completeness and reliability of the collected data.
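Under the Simple Approach described above, the total is just a sum over aircraft types of LTO counts times pre-calculated per-LTO emissions; the sketch below uses hypothetical type names and figures, not values from the paper:

```python
def total_lto_emissions(n_lto, e_per_lto):
    """Simple-approach total: sum over aircraft types j of
    N_j^LTO (number of landing/take-off cycles for type j)
    times the pre-calculated emission per LTO cycle for that type."""
    return sum(n_lto[j] * e_per_lto[j] for j in n_lto)

# hypothetical fleet: LTO counts and kg of pollutant per LTO cycle
counts = {"type_A": 1200, "type_B": 950}
per_lto = {"type_A": 9.0, "type_B": 8.5}
print(total_lto_emissions(counts, per_lto))  # 18875.0
```

The Sophisticated Approach would replace the pre-calculated per-LTO values with figures refined from real-time measurement data.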
Online since: December 2014
Authors: Yuan Lian Huo, Yong Feng Qi, Hong Wei Yang
Introduction
Linear discriminant analysis (LDA) is a well-known linear dimension reduction technique widely used in the pattern recognition field.
For a task with very high-dimensional data, such as face recognition, the traditional LDA algorithm encounters the "small sample size problem".
Obviously, large class distances are overemphasized and small class distances are neglected; thus the resulting transformation preserves the distances of well-separated classes and causes a large overlap of neighboring classes. 2D-DLPP [9] can capture the underlying structure of the data by embedding adjacency graphs into 2D-LDA, but the overlap of neighboring classes remains unresolved. 2D-DWLDA [10] focuses only on the overlap of neighboring classes but neglects the local structure information of the data; in addition, removing the null space of the between-class matrix by dimensionality reduction also removes part of the null space of the within-class matrix and may cause the loss of important discriminative information.
Two-dimensional weighted and locality-preserving discriminant analysis (2D-WLPDA). 1 Principle. 2D-LDA considers only the Euclidean structure of the image space; it preserves the global discriminant information of the data well in a low-dimensional space when the given data are linearly distributed.
The method can discover the discriminative local structure of the sample data by embedding nearest-neighbor graphs that characterize the within-class compactness of samples of the same class.
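For context, the classical vector-form Fisher criterion that these 2D variants generalize can be sketched in a few lines (a two-class illustration, not the authors' 2D-WLPDA):

```python
import numpy as np

def fisher_lda_direction(X1, X2):
    """Two-class Fisher LDA: w = Sw^-1 (m1 - m2), i.e. maximize
    between-class scatter relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) \
       + np.cov(X2, rowvar=False) * (len(X2) - 1)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```

When Sw is singular, as happens in the small sample size problem mentioned above, this solve fails; the 2D formulations sidestep that by operating on image matrices directly.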
Online since: January 2014
Authors: Guo Bao Ding, Rui Zhu, Zhi Song Huo, Gui You Lu
When designing the CAN bus, the functions required by the automatic packing and filling control system must be defined and the meaning of the different data delivered in the process must be specified [3]. Thus, when the communication software is designed according to the CAN bus communication protocol and data interface, accurate and reliable data delivery can be achieved immediately.
To give an intuitive view of the fault-diagnosis results and meet application requirements, VS2008 is used to design the host-computer display interface, which performs data acquisition and data preprocessing and visually presents the neural-network fault-diagnosis results through a user-friendly operation interface.
The host-computer display mainly includes a real-time data acquisition and monitoring window, a rough-set attribute reduction window, and a neural-network fault-diagnosis window.
The data-reading window mainly completes the acquisition of the data delivered by the lower machine. Before the system runs, the serial port must be configured, including the port number, baud rate, data bits, parity, and stop bits; only a correctly configured serial port guarantees the accuracy of the delivered data.
The attribute reduction window performs attribute reduction on the collected data, removing unimportant attributes and redundant attribute values to reduce the amount of data.
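The serial-port setup the text describes (port number, baud rate, data bits, parity, stop bits) can be sketched as a small validated configuration step; the function name and the accepted values below are illustrative assumptions, not from the paper:

```python
def make_serial_config(port, baudrate=9600, bytesize=8, parity="N", stopbits=1):
    """Bundle and sanity-check the serial settings that must be
    configured before the host computer starts reading data."""
    if baudrate not in (4800, 9600, 19200, 38400, 57600, 115200):
        raise ValueError("unsupported baud rate")
    if bytesize not in (7, 8):
        raise ValueError("data bits must be 7 or 8")
    if parity not in ("N", "E", "O"):  # none / even / odd
        raise ValueError("parity must be N, E or O")
    if stopbits not in (1, 2):
        raise ValueError("stop bits must be 1 or 2")
    return {"port": port, "baudrate": baudrate, "bytesize": bytesize,
            "parity": parity, "stopbits": stopbits}
```

Rejecting a bad configuration up front mirrors the text's point that only a correctly configured port guarantees accurate data delivery.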
Online since: May 2013
Authors: Sergey E. Kokin, Stepan A. Dmitriev, Alexandra I. Khalyasmaa
In this paper the main data analysis methods of technical diagnostics are presented.
The numerical data is given in Table 1.
Since the data is presented in such a detailed form, one can carry out a deep analysis of cause-effect relations for all possible types of technical diagnostics data.
Nowadays, the analysis of technical diagnostics data is a pressing issue, since this data is used for operation and maintenance planning of power supply facilities.
In such a case, the selected methodology should offer an option for reducing the initial data set without distorting the final result.
Online since: December 2014
Authors: Han Wen Zhou, Xiao Ping Fan, Xue Rong Li, Zhi Ming Zhou, Bing Mei Chen
It has many useful capabilities, such as self-learning, self-organization, self-adaptation, tolerance of erroneous data, and synthetic reasoning about data it has not yet learned.
Fig. 7 shows the program flow chart of the neural expert diagnosis system, in which data transformed via Pro*C passes between the neural network and the expert system to produce the diagnosis result. For example, we give the parameter setting of the eighth neural network.
The learning samples come from the clinical data and the epidemiological data in more than a dozen nationwide hospitals.
Online since: July 2011
Authors: Jian Guo Chen, Min Chen, Xin Xin Cheng
Firstly, the data should be preprocessed through accumulation and normalization.
All the samples are collected from the cost index published by the Shanghai Construction and Building Materials Market Management Station [8] since 2005; 18 of them are taken as training data, while the others are taken as testing data.
Table 1 shows the normalized training data.
Table 1. Training database after normalization (columns: No., type, input vectors, output vector).
Table 4. Quantization of the features of the project:

T1   T2      T3   T4   T5   T6   T7   T8       T9   T10   T11
3    59.88   16   1    2    2    2    116.96   2    2     2

The quantized data is put into the testing database for computation with MATLAB, and the output is then calculated through the IAGO.
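The preprocessing named at the start of this excerpt, accumulation followed by normalization, can be sketched in a minimal form (the grey-model literature calls the running sum the 1-AGO; the IAGO mentioned above is its inverse):

```python
def accumulate(series):
    """1-AGO: replace each value with the running sum
    (the 'accumulation' preprocessing step)."""
    out, total = [], 0.0
    for x in series:
        total += x
        out.append(total)
    return out

def normalize(series):
    """Min-max normalization of a series onto [0, 1]."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

print(accumulate([1, 2, 3, 4]))     # [1.0, 3.0, 6.0, 10.0]
print(normalize([2.0, 6.0, 10.0]))  # [0.0, 0.5, 1.0]
```

Accumulation smooths the raw series before modeling; normalization puts the input vectors on a common scale for the network.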