Search results
Online since: August 2013
Authors: Yin Shu Wang, Xi Liu, Peng Han, Xiu Sheng Cheng
Signal processing: filtering, shaping, and noise reduction, which reduce the noise and bias of the sensor signals.
Road identification: road conditions, such as road adhesion and slope, are recognized from the signals. Data selection: according to the manoeuvre type and the road conditions, the data representing the driving style are selected. Driving style identification: the driving style is identified from these data and their condition. Style storage: the above condition-recognition results are stored. Decision integration: the identification results are fused with past data and a final decision on the driver type is made.
Data collection and selection. Data collection and selection are the preliminary steps of driver-type identification. The accuracy with which the data characterize the driving style limits the validity of the identification structure [2].
This part of the work has two aspects: the first is processing the signals from the sensors, and the second is selecting the processed data.
The input layer accepts the sample characterization data, and its number of nodes equals the dimension of the input feature vector. For the starting condition, the input layer has seven nodes: the vehicle speed at the end of starting, the average speed, and the five elements of the throttle-opening array at 20%, 30%, 40%, 50% and 60%.
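The seven-node input vector described above could be assembled as follows. This is a sketch only: the function name is hypothetical, and the interpretation of the five throttle elements as time fractions in bands around each opening percentage is an assumption, since the text does not define them further.

```python
import numpy as np

def starting_input_vector(speed_trace, throttle_trace):
    """Assemble a 7-node input vector for the starting condition.

    Illustrative sketch: 'speed_trace' and 'throttle_trace' are
    hypothetical sampled signals over one starting manoeuvre.
    The band-fraction reading of the five throttle elements is an
    assumption, not taken from the paper.
    """
    speed = np.asarray(speed_trace, dtype=float)
    end_speed = speed[-1]              # vehicle speed at end of starting
    avg_speed = float(speed.mean())    # average speed over the manoeuvre
    # Fraction of samples with throttle opening in a band around
    # 20%, 30%, 40%, 50% and 60% (one element per band).
    throttle = np.asarray(throttle_trace, dtype=float)
    bands = [20, 30, 40, 50, 60]
    fractions = [float(np.mean((throttle >= b - 5) & (throttle < b + 5)))
                 for b in bands]
    return np.array([end_speed, avg_speed, *fractions])

x = starting_input_vector([0.0, 5.0, 12.0, 18.0], [15, 25, 35, 45, 55, 65])
assert x.shape == (7,)
```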
Online since: September 2013
Authors: Chi Yun Xu, Fa Tang Chen
Introduction
The demand for data services from user equipment (UE) has grown rapidly.
We perform a correlation calculation, in the time domain, between the local PSS data and the received radio-frame data, and find the maximum value.
We only need to store 1024 words for each group of data, so the storage space for the local PSS is only 2048 words.
We adopt real-time data processing to avoid allocating massive amounts of memory: correlation calculations are performed while data are still being received, which requires that the processing of earlier data be completed before later data arrive.
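The detection step described above, sliding a stored PSS sequence over received samples and taking the offset with the largest correlation magnitude, can be sketched as follows. The sequences here are random stand-ins, not actual LTE PSS Zadoff-Chu sequences, and the function name is hypothetical.

```python
import numpy as np

def detect_pss(received, local_pss):
    """Slide the locally stored PSS over the received samples and return
    the offset with the largest correlation magnitude.

    Sketch only; 'received' and 'local_pss' stand in for complex
    baseband sample arrays.
    """
    n = len(local_pss)
    best_offset, best_val = 0, -1.0
    for k in range(len(received) - n + 1):
        window = received[k:k + n]
        val = abs(np.vdot(local_pss, window))  # correlation at this offset
        if val > best_val:
            best_offset, best_val = k, val
    return best_offset

rng = np.random.default_rng(0)
pss = np.exp(1j * 2 * np.pi * rng.random(64))        # stand-in PSS sequence
frame = 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
frame[100:164] += pss                                # embed PSS at offset 100
assert detect_pss(frame, pss) == 100
```

In a real-time implementation each window's correlation would be computed as the samples arrive, rather than after the whole frame is buffered.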
Online since: November 2012
Authors: You Sai Zhang, Shu Jin Zhu, Yuan Jiang Li
The proposed method first seeks the relationship between the Structural Similarity index (SSIM) [2] and the match distance in blocks, and obtains the data with good SSIM values.
Reducing the number of matched blocks directly degrades denoising performance.
The proposed technique proceeds in the following steps: (1) Seek the relation between the match distance and SSIM, a method for measuring the similarity between two images that outperforms traditional measures such as the peak signal-to-noise ratio (PSNR) and mean squared error (MSE). (2) After obtaining data with good SSIM values in the first step, compute the noise level and gradient in blocks of the same size. (3) Adopt surface fitting to obtain a formula that applies weak thresholds to flat blocks and strong thresholds to detail blocks.
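The SSIM measure used in step (1) can be sketched in a minimal form. This computes a single global SSIM value over two equal-sized blocks, following the standard SSIM formula of Wang et al.; the full index averages a local, windowed version over the whole image, so this is an illustration rather than the paper's implementation.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Global SSIM over two equal-sized image blocks (single window).

    Minimal sketch of the SSIM index; the standard method averages a
    local, windowed version of this quantity over the image.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2          # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

block = np.arange(64, dtype=float).reshape(8, 8)
assert abs(global_ssim(block, block) - 1.0) < 1e-12   # identical blocks
```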
Buckley, "Fast computation of a discretized thin-plate smoothing spline for image data," Biometrika, Vol. 81-2 (1994).
[6] Information on http://www.biomecardio.com/matlab/evar.html
[7] K.
Online since: September 2013
Authors: Cheng Zan Chu, Chuan Zheng Zhu, Yao Wang
To achieve energy saving and emission reduction, new ventilation technologies have emerged, such as improved ventilation devices, improved ventilation methods, and new ventilation control techniques.
For the object to be evaluated, it can be expressed with the indicator data as in formula (3).
Online since: July 2014
Authors: Jing Di, Dong Mei Zhou, Yu Hong Liu
The computer simulation results show that the proposed algorithm is superior to the traditional Wiener method in terms of lower CPU cost, real-time statistics, and reduced speech distortion and residual musical noise.
Experimental results. We select a piece of single-channel data as clean speech, saved as 1.wav. The speech signal is sampled at 8 kHz and quantized with 16 bits.
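A file in the format described (8 kHz, 16-bit, mono) can be created and read with Python's standard wave module. This is a sketch only: the tone below merely stands in for the paper's clean-speech recording, which is not available here.

```python
import wave, struct, math

# Write a one-second 8 kHz, 16-bit mono test tone standing in for 1.wav,
# then read it back and check its format.
rate, n = 8000, 8000
with wave.open("1.wav", "wb") as w:
    w.setnchannels(1)          # single channel
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(rate)       # 8 kHz sampling rate
    samples = [int(0.3 * 32767 * math.sin(2 * math.pi * 440 * i / rate))
               for i in range(n)]
    w.writeframes(struct.pack("<%dh" % n, *samples))

with wave.open("1.wav", "rb") as w:
    assert w.getframerate() == 8000 and w.getsampwidth() == 2
    data = struct.unpack("<%dh" % w.getnframes(),
                         w.readframes(w.getnframes()))
assert len(data) == 8000
```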
Online since: December 2013
Authors: Mohd Zaki Nuawi, Abdul Rahim Bahari, Shamsul Akmar Ab Aziz, Mohd Jailani Mohd Nor, Dian Darina Indah Daruis
However, this study aims to analyse and represent the measured WBV data by using I-kazTM method.
In order to measure the degree of scattering of the data distribution, the I-kazTM coefficient calculates the distance of each data point from the signal’s centroid.
This process produces 3D graphical representations that permit a reduction in dimensionality.
Thus, the significant relationships within the data observations or samples can be identified.
The coefficient indicates the degree of data scattering with respect to the data centroid [5].
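The description above, measuring scattering via the distance of each data point from the signal's centroid in the 3D representation, can be illustrated with a simple numeric sketch. Note this is an illustrative scattering measure in the spirit of that description, not the published I-kazTM formula.

```python
import numpy as np

def scattering_coefficient(points):
    """Mean Euclidean distance of 3D data points from their centroid.

    Illustrative measure of data scattering as described in the text;
    not the exact I-kazTM coefficient.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)                       # signal centroid
    return float(np.linalg.norm(pts - centroid, axis=1).mean())

# A widely scattered cloud should score higher than a tight one.
tight = np.random.default_rng(1).normal(0.0, 0.1, (500, 3))
loose = np.random.default_rng(1).normal(0.0, 1.0, (500, 3))
assert scattering_coefficient(loose) > scattering_coefficient(tight)
```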
Online since: April 2011
Authors: Henry Hu, Qiang Zhang, Xiao Ping Niu
The acquired temperature (T) vs. time (t) data were processed and cooling curves (T vs. t) were plotted using the Microsoft Excel spreadsheet software.
The measured nonequilibrium liquidus temperature (615.58–616.38 °C) is very close to the value of 615 °C reported in the Norsk Hydro Magnesium data sheet [4].
The nonequilibrium solidus temperature (434.08 °C) is in good agreement with the data (420–435 °C) in the literature.
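On a cooling curve, the liquidus and solidus appear as thermal arrests where the cooling rate approaches zero. A minimal sketch of reading such an arrest from T-vs-t data follows; the curve below is synthetic, and real cooling-curve analysis typically smooths the signal before differentiating.

```python
import numpy as np

def arrest_temperature(t, T):
    """Return the temperature where cooling is slowest (dT/dt closest
    to zero), a simple proxy for a thermal-arrest reading.

    Sketch on synthetic data only.
    """
    t = np.asarray(t, dtype=float)
    T = np.asarray(T, dtype=float)
    dTdt = np.gradient(T, t)
    return float(T[int(np.argmax(dTdt))])  # least-negative slope

# Synthetic cooling curve with an arrest plateau near 615 (arbitrary units).
t = np.linspace(0.0, 100.0, 1001)
T = np.interp(t, [0.0, 40.0, 60.0, 100.0], [700.0, 615.2, 615.0, 535.0])
assert abs(arrest_temperature(t, T) - 615.0) < 1.0
```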
[4] Data sheet, die casting magnesium alloys, Norsk Hydro Magnesium, August (1995)
Calvert: Pearson’s Handbook of Crystallographic Data for Intermetallic Phases, ASM, Metals Park, OH, 1985.
Online since: January 2014
Authors: D.V. Guzei, M.I. Pryazhnikov, Alexander S. Lobasov, Andrey V. Minakov
The results showed that this model described the experimental data very well in some cases.
Experimental data are often contradictory and even mutually exclusive.
The experimental data are highly contradictory and do not provide full answers to all the questions.
However, such a model does not always adequately describe the experimental data.
Doucet, New temperature dependent thermal conductivity data for water-based nanofluids, Int.
Online since: January 2012
Authors: Fang Xiao
Forest coverage prediction based on the least squares support vector regression (LSSVR) algorithm is presented in this paper. Forest coverage data of Heilongjiang from 1994 to 2005 are used to study the effectiveness of the algorithm. The prediction results of the proposed LSSVR model using training samples with input vectors of different dimensions are given.
Introduction. Least squares support vector regression (LSSVR) is a novel statistical learning method with better generalization ability than artificial neural networks [1-3].
The Lagrange function is introduced to solve the optimization problem, as in (5). The optimization conditions of the formula are given in (6). Eliminating w and e, the regression function of least squares support vector regression can be written as in (7). Experimental Analysis for Forest Coverage Prediction. Forest coverage data of Heilongjiang from 1994 to 2005 are used to study the effectiveness of the least squares support vector regression algorithm. Figs. 1-3 show the prediction results of the proposed model using training samples with input vectors of different dimensions.
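The LSSVR regression function of (7) follows from solving a small linear system in the bias and the dual weights. A minimal numpy sketch of the standard LSSVR dual system with an RBF kernel follows; the toy sine data stand in for the forest-coverage series, which is not reproduced here, and the hyper-parameters are illustrative.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """RBF kernel matrix between sample sets a and b (rows = samples)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the standard LSSVR dual linear system for (b, alpha):
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    """
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    """Regression function y(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf(X_new, X_train, sigma) @ alpha + b

X = np.linspace(0.0, 3.0, 20).reshape(-1, 1)   # toy series, not the paper's data
y = np.sin(X).ravel()
b, alpha = lssvr_fit(X, y, gamma=100.0)
pred = lssvr_predict(X, b, alpha, X)
assert np.max(np.abs(pred - y)) < 0.1
```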
Portilla-Figueras, Improving the training time of support vector regression algorithms through novel hyper-parameters search space reductions, Neurocomputing Vol. 72 (2009), pp. 3683-3691.
Online since: November 2015
Authors: Denis W. Shiers, David M. Collinson, Helen R. Watling
This research forms part of the ongoing development of a database with which to interpret the impacts of heap leaching conditions on microbial activity without disrupting metal production through invasive sampling campaigns.
There are comparatively few data on the effects of physico-chemical parameters on acidophiles utilising organic compounds.
This took place ahead of the reduction in reduced material present at pH 5.0, but lagged slightly behind at pH 7.0.
These experimental data indicate that mixotrophic and heterotrophic species can tolerate, or adapt to, pH gradients in heaps when suitable organic substrates are present.