Search results
Online since: October 2014
Authors: Duo Wang
FNMF-ITWC Algorithm Applied to the Cancer Gene Expression Data
Duo Wang 1,a
1 Party School of Shijiazhuang Municipal Committee of C.P.C., Shijiazhuang, China
a duow2000@163.com
Keywords: cancer gene expression data, biclustering algorithm, non-negative matrix factorization, iterative clustering, data dimensionality reduction
Abstract.
In the biclustering algorithm, the sums over all genes and samples of a submatrix need not equal those of the original data matrix. 2.2 FNMF-ITWC Algorithm. The FNMF-ITWC algorithm first performs gene selection on the original gene expression data and then applies non-negative matrix factorization along the rows (the gene dimension), thereby reducing the data dimensionality, diminishing data redundancy, and identifying non-redundant genes relevant to the research question.
The advantages of the FNMF-ITWC algorithm: 1) applying gene selection and a fast non-negative matrix factorization algorithm to the original expression data achieves dimensionality reduction, removes redundancy and correlation from the data, and identifies non-redundant genes related to the research problem.
The three data sets are a gastric cancer, a colon cancer, and a leukemia gene expression data set.
Applying the FNMF-ITWC algorithm to this kind of high-dimensional, small-sample gene expression data achieves dimensionality reduction and the extraction of consistent gene characteristics.
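The abstract does not reproduce the FNMF variant itself; as a rough, generic sketch of how plain non-negative matrix factorization reduces the gene dimension of an expression matrix (Lee-Seung multiplicative updates on synthetic data, not the authors' code):

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Factor V (genes x samples) into non-negative W (genes x k) and
    H (k x samples) using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficient matrix
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis ("metagenes")
    return W, H

# Toy expression matrix: 100 genes, 10 samples, built from 3 latent factors
rng = np.random.default_rng(1)
V = rng.random((100, 3)) @ rng.random((3, 10))
W, H = nmf(V, k=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # relative reconstruction error
```

The rows of H give each sample a k-dimensional representation, which is the dimensionality-reduction step the abstract describes; the paper's gene-selection and iterative two-way clustering stages are not shown here.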
Online since: January 2013
Authors: Ke Wen Xia, Zhi Chai, Jing Dong
Subjective evaluation relies on an observer's assessment of the noise-reduction effect on an image; objective evaluation, on the other hand, compares parameters of the processed image with those of the original, including Mean Square Error (MSE) [2], Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR) [2][3], entropy, and so on.
Table 1 also shows that the data grow non-linearly when large windows are chosen.
As the window size increases, the values tend toward a constant.
The data recorded in Table 2 show that the noise-reduction effect is neither simply proportional nor inversely proportional to the filter window size; the 5×5 window performs best.
Table 7. Results of data for Simulation (III)

              MSE         Entropy   PSNR
Noisy image   1.4722e+3   7.1873    16.4510
Method (1)    674.8619    6.4043    19.8387
Method (2)    748.3711    6.3485    19.3896

Fig. 20: Noisy image. Fig. 21: Steps 1 and 2 by Method (1). Fig. 22: Steps 1 and 2 by Method (2). As the data in Table 7 show, if a noisy image contains heavy salt-and-pepper noise, Method (1) is the best choice for noise reduction.
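The objective measures used in these tables can be sketched as follows (a generic NumPy implementation on a synthetic 8-bit image; the peak value of 255 is an assumption, not stated in the abstract):

```python
import numpy as np

def mse(ref, img):
    """Mean square error between a reference and a processed image."""
    return np.mean((ref.astype(float) - img.astype(float)) ** 2)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming an 8-bit peak of 255."""
    return 10 * np.log10(peak ** 2 / mse(ref, img))

def entropy(img):
    """Shannon entropy (bits/pixel) of the 256-bin gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Synthetic reference image plus Gaussian noise (sigma = 10 gray levels)
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
```

Lower MSE and higher PSNR indicate a processed image closer to the reference, which is how Methods (1) and (2) are ranked above.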
Online since: October 2013
Authors: Xue Mei Wu, Po Yu Lou, Xiao Hui Yang
The convolution algorithms calculate the convolution of the experimental data sequence with the coefficient data sequence.
According to signal processing theory, the essence of convolution algorithms is to compute the product of the frequency spectra of the experimental data sequence and the coefficient data sequence.
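This spectral view of convolution can be sketched as follows (a generic NumPy example, not the authors' code): zero-pad both sequences, multiply their spectra, and invert the transform.

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the convolution theorem: pointwise product
    of the zero-padded spectra, followed by the inverse FFT."""
    n = len(x) + len(h) - 1          # pad so circular == linear convolution
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)        # stand-in "experimental data sequence"
h = rng.standard_normal(31)          # stand-in "coefficient data sequence"
y = fft_convolve(x, h)
```

The result matches direct time-domain convolution, but for long sequences the FFT route is much cheaper (O(n log n) versus O(n^2)).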
Raw data are transformed into the wavelet domain by WT, and the information contained in the raw data can be represented by the wavelet coefficients.
If we actually perform this operation on a real digital signal, we wind up with twice as much data as we started with.
The interferogram data (with 15240 points) of samples were collected on a Fourier transform near infrared spectrometer (Bruker, VERTEX70).
Online since: November 2013
Authors: Norlia Baharun, S. Aishah Syed Salim, S.A. Rezan, Hanizam Shah Saidin, Hussin Hashim
Isothermal reduction experiments were conducted in a horizontal electric furnace with molybdenum disilicide heating elements.
Processing these statistical results yields different plots that can be used to analyze the data, as illustrated in Figures 1 and 2.
Figure 3 shows 3D surface plots for the effect of reduction time and temperature on extent of reduction using 10 wt% catalysts.
At longer reduction times, the energy provided is adequate to bring about appreciable reduction of the iron oxides.
Fig. 3: 3D surface plot of temperature and reduction time for extent of reduction at 10 wt.% of catalyst.
Online since: November 2012
Authors: Hua Hui Cai, Yan Cheng, Yong Hong Zhu
We then handle multi-degree reduction of DP curves via degree reduction of Bézier curves.
A parametric polynomial curve of degree n in the DP form can be given as in [3], Eq. (2), where the coefficients are called the DP control points of the curve and the basis functions are the NTP-DP polynomial basis, defined differently for even and odd n. The degree reduction of parameterized curves is one of the most common operations in CAGD, because it reduces the amount of data to be communicated.
Recently, degree reduction approximation of the DP curves has been studied.
Since degree-reduction algorithms for Bézier curves in various norms are very mature, it is feasible to handle degree reduction of DP curves by way of Bézier curves.
It is therefore natural to use mature Bézier degree-reduction theory for DP degree reduction.
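As a minimal illustration of Bézier degree reduction (a plain least-squares reduction of the control points via the pseudo-inverse of the degree-elevation matrix; the DP-to-Bézier conversion step and the specific norm used in the paper are not reproduced here):

```python
import numpy as np

def elevation_matrix(n):
    """Matrix E of shape (n+2, n+1) mapping degree-n Bezier control points
    to the equivalent degree-(n+1) control points."""
    E = np.zeros((n + 2, n + 1))
    for i in range(n + 2):
        if i <= n:
            E[i, i] = 1 - i / (n + 1)
        if i >= 1:
            E[i, i - 1] = i / (n + 1)
    return E

def degree_reduce(Q):
    """Least-squares degree reduction of Bezier control points Q
    (degree n+1 down to n), via the pseudo-inverse of the elevation matrix."""
    n = len(Q) - 2
    return np.linalg.pinv(elevation_matrix(n)) @ Q

# A cubic that is an exactly elevated quadratic is recovered exactly.
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 0.0]])   # quadratic control points
Q = elevation_matrix(2) @ P                           # elevate to a cubic
```

Because degree elevation is linear, reduction amounts to solving an overdetermined linear system; for curves that are not exactly degree-reducible, this yields the least-squares-closest control polygon rather than an exact inverse.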
Online since: December 2013
Authors: Na Gao, Shu Guo Xie
Therefore, the system model reduction is critical.
Thus the balanced truncation model reduction method is more suitable for order reduction of large-scale systems.
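A minimal sketch of balanced truncation for a stable state-space system (a generic textbook implementation on a toy 4th-order system, not the paper's model or its vector-fitting front end):

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system (A, B, C) to order r by balanced truncation.
    Returns the reduced matrices and the Hankel singular values."""
    P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability gramian
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                      # s = Hankel singular values
    S = np.diag(s ** -0.5)
    T = Lp @ Vt.T @ S                              # balancing transformation
    Ti = S @ U.T @ Lq.T                            # its inverse
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:r, :r], Bb[:r], Cb[:, :r], s

# Toy stable 4th-order SISO system, reduced to order 2
A = np.diag([-1.0, -2.0, -5.0, -50.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
dc_full = (-C @ np.linalg.inv(A) @ B).item()       # DC gain of the full model
dc_red = (-Cr @ np.linalg.inv(Ar) @ Br).item()     # DC gain of the reduced model
```

States with small Hankel singular values contribute little to the input-output behavior, and the truncation error is bounded by twice the sum of the discarded singular values, which is what makes the method attractive for large-scale systems.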
Fitting results and test data are shown in Fig. 1.
As can be seen from Fig. 1, the vector-fitting results match the test data to a very high degree.
Figure 1. Vector-fitting results compared with test data.
Online since: December 2012
Authors: Geng Wang, Fang Fei Gao
The index system reflects not only the current situation, but also the situation before and after this work.
1.2 Establishing an evaluation index system
According to these principles, taking the national planning goals and the research literature on energy conservation and emission reduction as references, and considering the realities of Jiangsu's environment and the limits of the data, we divide the indicator system into three layers: the goal layer, the rule layer, and the indices layer.
Step 1. Standardize the original index data to obtain the standard matrix Z; Step 2.
Using data on eighteen indexes for the thirteen cities of Jiangsu province in 2008, this paper adopts principal component analysis for an empirical evaluation. The data are taken from the Jiangsu Province 2008 Statistical Yearbook, each city's 2008 statistical yearbook, and each city's 2008 Environmental Status Bulletin. 2.1 Comprehensive evaluation of city industrial energy-saving and emission-reduction investment performance. Using the statistical analysis software Minitab 15.0, principal component analysis is performed on the 18 indicators, and the evaluation results in Table 2 are obtained.
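The standardize-then-PCA pipeline described above can be sketched as follows (the matrix here is a random placeholder of the same shape as the 13-city x 18-indicator table, not the actual Jiangsu data, and NumPy stands in for Minitab):

```python
import numpy as np

# Hypothetical stand-in for the 13-city x 18-indicator matrix (rows = cities)
rng = np.random.default_rng(0)
X = rng.random((13, 18))

# Step 1: standardize each indicator to obtain the standard matrix Z
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Step 2: principal components from the correlation matrix of Z
R = (Z.T @ Z) / (len(Z) - 1)               # correlation matrix (trace = 18)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()        # variance-contribution rates
scores = Z @ eigvecs                       # component scores for each city
```

Cities can then be ranked by a weighted sum of their leading component scores, with the variance-contribution rates as weights, which is the usual form of such comprehensive evaluations.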
Conclusion. According to the data in Tables 2-5, the following conclusions are obtained, respectively.
[4] Data Information on http://www.jssb.gov.cn/
Online since: January 2011
Authors: Xiao Ping Hu, Yun Feng Lin
Figure 1. (Flowchart of the DMFD model: Diagnostic Object, Information Collection, Data Preprocessing, Attribute Reduction, Rule Base Generation, Control Measures, Diagnosis.)
Thus we see that the DMFD model framework consists of the following three components: (1) the data pre-processing module, which takes the fault case data in the database through pretreatment steps such as data cleansing, data conversion, and data extraction and turns them into an analytical data set; this data set is established for the mining algorithm that follows.
Rough set reduction is an important concept in data analysis.
Using the reduction results, one can obtain a preliminary classification of the fault data, i.e., diagnostic rules.
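As a generic illustration of rough-set attribute reduction (a brute-force search for minimal attribute subsets that still determine the decision on a hypothetical fault table; this is not the paper's rule-base algorithm, and the table is invented):

```python
from itertools import combinations

def consistent(rows, attrs):
    """True if the attribute subset determines the decision uniquely,
    i.e. every indiscernible group of cases shares a single decision."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, row["d"]) != row["d"]:
            return False
    return True

def reducts(rows, attrs):
    """All minimal attribute subsets preserving consistency (brute force)."""
    found = []
    for k in range(1, len(attrs) + 1):
        for subset in combinations(attrs, k):
            if consistent(rows, subset) and \
               not any(set(r) <= set(subset) for r in found):
                found.append(subset)
    return found

# Toy fault table: condition attributes a, b, c and decision d
table = [
    {"a": 0, "b": 0, "c": 0, "d": 0},
    {"a": 0, "b": 1, "c": 0, "d": 1},
    {"a": 1, "b": 0, "c": 1, "d": 1},
    {"a": 1, "b": 1, "c": 1, "d": 0},
]
minimal = reducts(table, ("a", "b", "c"))
```

Each reduct is a smallest set of condition attributes that still classifies every fault case, so the rules generated from a reduct are as short as possible without losing diagnostic power.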
Data Mining in Fault Diagnosis.
Online since: July 2012
Authors: Yin Long, Ya Nan Lv, Jian Fang Wang, Hui Zhu, Cheng An Tao
Preparation of Graphene
(1) Photochemical-thermal reduction.
X-ray diffraction (XRD) data were obtained using a Rigaku D/MAX2200 diffractometer featuring Cu Kα radiation filtered with Ni (λ = 0.15418 nm).
(a) and (b): graphene oxide prepared via the modified Hummers method; (c): graphene prepared by photochemical-thermal reduction; (d): graphene prepared by high-temperature thermal reduction. Thermal reduction. (1) Photochemical-thermal reduction. After illumination under an infrared lamp for 6 h at nearly 130 °C, the brown graphene oxide solid turned black (Fig. 1c), indicating the removal of the absorbed water and the occurrence of the reduction reaction [11].
Fig. 5. XRD pattern of graphene prepared by high-temperature thermal reduction.
Photochemical-thermal reduction and high-temperature thermal reduction are used to treat the obtained graphene oxide.
Online since: January 2012
Authors: Zhe Ming Zhu, Ru Zhang, Ting Ai, Li Ren, Ling Zhi Xie
In the slope stability analysis, a non-linear shear strength reduction method was applied.
A power function was used to approximate the shear failure envelope which is determined by the non-linear criterion or experimental data, and then the coefficients of that function were determined by using the collocation points.
To implement the shear strength reduction simply and effectively in slope stability calculations, we used a power function to approximate the shear strength failure envelope determined by the nonlinear criterion or experimental data in the normal-shear stress plane (taking the most popular non-linear criterion, the Hoek-Brown criterion, as an example), and then calculated the coefficients of that function using the collocation points.
(12) To obtain the EM-C parameters, we suppose that the cohesion ce has an initial value ce0, and then obtain the friction angle φe0 by the least-squares method (LSM), a procedure for determining the best-fit line to data.
In this paper, we used a power function to approximate the shear strength failure envelope which is determined by the non-linear criterion or experimental data in the normal and shear stress plane, and calculated the coefficients of the function.
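As a simple sketch of fitting a power function to envelope data (an ordinary log-log least-squares fit to synthetic power-law points; the paper's collocation-point procedure and the Hoek-Brown envelope generation are not reproduced, and the coefficients below are invented):

```python
import numpy as np

def fit_power(sigma, tau):
    """Fit tau = A * sigma**B by linear least squares in log-log space:
    log(tau) = log(A) + B * log(sigma)."""
    B, logA = np.polyfit(np.log(sigma), np.log(tau), 1)
    return np.exp(logA), B

# Sampled points on a hypothetical shear-strength failure envelope
sigma = np.linspace(0.5, 10.0, 20)    # normal stress values
tau = 1.7 * sigma ** 0.65             # shear strength, exact power law here
A, B = fit_power(sigma, tau)
```

With the envelope expressed as a single power function, the equivalent cohesion and friction angle at each stress level follow from its value and slope, which is what makes the strength-reduction iteration straightforward.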