Search results
Online since: July 2011
Authors: Cheng Gen Dong, Qi Shan Zhang, Jin Pei Wu
From this point of view, a fingerprint algorithm is a conversion from the original text to a hash value (an integer array), which closely resembles the data validation methods used in telecommunications.
Therefore, this paper uses a data validation algorithm to calculate fingerprints for telecommunications network events.
Cyclic Redundancy Check (CRC) obtains a checksum through a simple binary polynomial division, and is a popular method for ensuring that data (e.g., a file) is not corrupted [4-6].
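As an illustration of the idea (a sketch, not the paper's implementation), a CRC-32 checksum from Python's standard library can serve as such an event fingerprint, so that detecting repeated events reduces to integer comparisons instead of full-text comparisons; the event strings below are invented examples:

```python
import zlib

def event_fingerprint(event_text: str) -> int:
    """Compute a CRC-32 fingerprint (an integer) for one network event."""
    return zlib.crc32(event_text.encode("utf-8"))

def reduce_events(events):
    """Drop repeated events by comparing fingerprints instead of full text."""
    seen = set()
    unique = []
    for e in events:
        fp = event_fingerprint(e)
        if fp not in seen:      # integer set lookup, no string comparison
            seen.add(fp)
            unique.append(e)
    return unique

events = ["LINK_DOWN node=A", "LINK_DOWN node=A", "LINK_UP node=A"]
unique_events = reduce_events(events)
```

CRC-32 is cheaper to compute than MD5 and produces a fixed-size integer, which is what makes the repeated-event reduction step fast once the fingerprints exist.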
A network operator's sample of one hundred thousand actual events was reduced both in text mode and with the fingerprint algorithm, and the costs compared (unit: ms):

Table 3 Performance comparison of the reduction process
Reduction method   Fingerprint calculation   Reduction of non-fault events   Reduction of repeated events   Total time
Text               0                         462                             442                            904
Fingerprint        731                       33                              32                             796

As Table 3 shows, for the reduction of telecommunications network events the fingerprint method improves performance over text mode by 11.95%.
References
[1] J. Stone, R. Stewart, D. Otis, RFC 3309: Stream Control Transmission Protocol (SCTP) Checksum Change, September 2002.
[2] MIT Laboratory for Computer Science and RSA Data Security, Inc., RFC 1321: The MD5 Message-Digest Algorithm.
Online since: December 2014
Authors: Xiao Xue Xing, Wei Wei Shang, Li Min Du
With the growth of data in database systems, attribute reduction becomes more effective than value reduction.
However, simplifying the discernibility function is an NP problem [5], so this algorithm is suitable only for very small data sets.
In rough set theory, the dependence of attributes reflects their influence on decision rules in the current data, but it cannot reflect the prior knowledge of the decision maker.
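To make the dependence notion concrete, the rough-set dependency degree γ(C, D) = |POS_C(D)| / |U| can be computed on a toy decision table (our sketch with invented data, not the paper's algorithm):

```python
from collections import defaultdict

def dependency_degree(rows, cond_attrs, dec_attr):
    """gamma(C, D): fraction of objects whose C-equivalence class is
    consistent on the decision attribute (i.e., lies in the positive region)."""
    classes = defaultdict(list)
    for row in rows:
        key = tuple(row[a] for a in cond_attrs)
        classes[key].append(row[dec_attr])
    positive = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return positive / len(rows)

# Toy decision table: two condition attributes and one decision attribute.
table = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "yes"},
    {"a": 1, "b": 1, "d": "yes"},
    {"a": 0, "b": 0, "d": "yes"},   # conflicts with row 1 on {a, b}
]
gamma = dependency_degree(table, ["a", "b"], "d")
```

Here the two rows with (a, b) = (0, 0) disagree on d, so only 2 of the 4 objects fall in the positive region and γ = 0.5; this is exactly the current-data notion of dependence the text refers to, with no way to encode a decision maker's prior knowledge.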
The experimental system uses Visual C++ 6.0 as the development language and Windows 2000 as the development environment, with six data sets from the UCI standard repository.
After missing-information processing, the attributes are reduced using the above two algorithms respectively.
Online since: December 2013
Authors: Lian Ying Zhang, Si Lu Zhang
This paper presents a practical application of the Eco-costs Model to evaluating different emission reduction measures for construction equipment. We propose a comparison model whose input data are the emission reduction percentage and the Eco-costs of emissions.
Fig. 1 Eco-costs of air pollution on a construction site
Input Data of the Eco-costs Model
(Exchange rates are based on data from October 2012.)

Table 1 Eco-costs of emissions per unit (2012)
Pollutants/Sources   CO      HC      NOx     PM      Diesel
Euro [€/kg]          0.240   3.727   5.290   27.44   /
RMB [¥/kg]           1.965   30.52   43.32   224.7   8.500

Construction of the Comparison Model
Since combustion chamber cleanup techniques require a substantial structural change to the diesel engine, the Eco-costs Model is unfit for them.
ΔP_m = β_m · f_m · I + Σ_{n=1}^{N} Δr_{nm} · q_n · c_n   (1)

f_m = (1 − i_m·H / (h_m·I)) · t, if alternative fuel is used; 0, otherwise   (2)

where
ΔP_m: price reduction of Measure m for one kilowatt-hour of work
f_m: price adjustment coefficient of the alternative fuel in Measure m
H: heat value of diesel, MJ/kg
h_m: heat value of the alternative fuel in Measure m, MJ/kg
I: price of diesel per kilogram
i_m: price of the alternative fuel per kilogram in Measure m
t: diesel equivalent of one kilowatt-hour, equal to 0.084 kg
β_m: proportion of alternative fuel in Measure m
Δr_{nm}: emission reduction percentage of Pollutant n in Measure m
q_n: emission quantity of Pollutant n if no measure is taken, g
c_n: Eco-cost of Pollutant n

Data Collection and Calculation
Six measures are analyzed in this article.
The others use alternative fuel techniques, including Dimethyl Ether (DME), Biodiesel (B100), and Liquefied Natural Gas (LNG). The emission reduction percentages are based on technical testing authorities in China [8, 9], and the emission quantities are taken as the European Union emission standards (Stage IV), which the Chinese government has approved as a long-term strategy. Detailed data and results are listed in Table 2, where the functional currency is the Chinese Yuan.
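Equations (1) and (2) can be sketched directly in code. The fuel prices, heat values, reduction percentages, and emission quantities below are hypothetical placeholders, not the paper's measured values; only the Eco-costs per kilogram come from Table 1:

```python
def price_adjustment(i_m, h_m, H, I, t=0.084, uses_alt_fuel=True):
    """Eq. (2): price adjustment coefficient f_m of the alternative fuel."""
    if not uses_alt_fuel:
        return 0.0
    return (1 - i_m * H / (h_m * I)) * t

def price_reduction(beta_m, f_m, I, dr, q, c):
    """Eq. (1): price reduction of Measure m per kilowatt-hour.
    dr, q, c are per-pollutant reduction fractions, quantities and Eco-costs;
    q is given here in kg/kWh so it matches c in RMB/kg."""
    return beta_m * f_m * I + sum(dr_n * q_n * c_n
                                  for dr_n, q_n, c_n in zip(dr, q, c))

# Hypothetical illustration only -- NOT the paper's measured inputs.
H, I = 42.6, 8.5          # diesel heat value (MJ/kg) and price (RMB/kg, Table 1)
f = price_adjustment(i_m=5.0, h_m=28.8, H=H, I=I)    # a DME-like fuel
dP = price_reduction(beta_m=1.0, f_m=f, I=I,
                     dr=[0.5, 0.4, 0.3, 0.6],         # CO, HC, NOx, PM reductions
                     q=[0.004, 0.001, 0.006, 0.0004], # emission quantities, kg/kWh
                     c=[1.965, 30.52, 43.32, 224.7])  # Eco-costs, RMB/kg (Table 1)
```

A positive ΔP_m means the measure pays for itself in avoided Eco-costs per kilowatt-hour under these assumed numbers; the comparison model then ranks measures by this value.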
Online since: May 2013
Authors: Feng Chang Xue
Applications showed that, starting from the original point cloud, Geomagic can identify the useful points, delete speckles, and, through resampling and noise reduction, improve point cloud quality and reduce data deviation, thus improving data processing efficiency.
Merging cloud data. Merging cloud data refers to merging the scan data into a single point cloud object, which becomes the full point cloud data.
Noise reduction smooths the data, reduces deviation, and makes the point cloud data fit the object's true shape better during modeling.
Encapsulating data. Encapsulating data refers to computing an encapsulation of the point cloud data, converting it into polygon model data.
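Geomagic's own filters are proprietary, but a generic statistical speckle filter of the kind described above can be sketched as follows (our illustration, not Geomagic's algorithm): drop points whose mean distance to their k nearest neighbours is far above the cloud-wide average.

```python
import math
import statistics

def denoise(points, k=3, factor=2.0):
    """Remove speckle points whose mean distance to their k nearest
    neighbours exceeds mean + factor * stdev over the whole cloud."""
    mean_knn = []
    for p in points:
        d = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(d[:k]) / k)       # mean k-NN distance of p
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    return [p for p, m in zip(points, mean_knn) if m <= mu + factor * sigma]

# Dense planar grid around the origin plus one far-away speckle point.
cloud = [(0.1 * i, 0.1 * j, 0.0) for i in range(5) for j in range(5)]
cloud.append((10.0, 10.0, 10.0))              # speckle
clean = denoise(cloud)
```

The brute-force neighbour search is O(n²) and only suitable for small clouds; real packages use spatial indexing, but the statistical criterion is the same idea as the speckle deletion described above.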
Online since: May 2011
Authors: Wan Zheng Ai, Bai Gang Huang
The energy loss coefficient of sudden reduction tube flow is an important index of the sudden reduction tube.
An empirical expression, verified by comparison with other experimental data, was presented to calculate the energy loss coefficient of sudden reduction tube flows.

1 Introduction
Sudden reduction tubes are broadly used in drainage pipes and discharge tunnels, such as orifice plate discharge tunnels and plug discharge tunnels.
Figure 2 is drawn from the data in Table 2 and demonstrates that, when Re is constant, the energy loss coefficient ξ decreases as the contraction ratio β increases; ξ is closely related to La and Lb.
Figure 3 is drawn from the data in Table 2 and demonstrates that the vortices are the important regions of energy dissipation.
Table 1 Variation of ξ with Re (β = 0.50)
Re (×10^5)   0.90    1.80    9.20    18.40   27.60
ξ            10.75   10.92   10.92   10.92   10.92

Table 2 Variation of backflow length and ξ with β (Re = 1.8×10^5)
β     0.4     0.5     0.6     0.7     0.8
La    0.17    0.12    0.09    0.07    0.05
Lb    0.45    0.41    0.39    0.34    0.32
ξ     31.65   10.92   4.11    1.46    0.47

Figure 2 Variation of ξ with β
Figure 3 Variation of ξ with La and Lb

4 Verification
Figure 4 compares the simulation data in this paper with the data presented by Liu [6] (2002), and shows that the conclusions obtained by simulation here are very close to Liu's.
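As a quick consistency check on Table 2 (our illustration, not the paper's empirical expression), a least-squares fit of ln ξ against ln β shows the steep power-law-like decrease of ξ with the contraction ratio:

```python
import math

# Table 2 data (Re = 1.8e5).
beta = [0.4, 0.5, 0.6, 0.7, 0.8]
xi   = [31.65, 10.92, 4.11, 1.46, 0.47]

# Least-squares fit of ln(xi) = a + b * ln(beta).
x = [math.log(b) for b in beta]
y = [math.log(v) for v in xi]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b_slope = sum((xv - xbar) * (yv - ybar) for xv, yv in zip(x, y)) \
          / sum((xv - xbar) ** 2 for xv in x)
a_int = ybar - b_slope * xbar

def xi_fit(beta_value):
    """Fitted power law: xi = exp(a) * beta ** b."""
    return math.exp(a_int + b_slope * math.log(beta_value))
```

The fitted exponent is strongly negative, consistent with the text's observation that ξ falls rapidly as β grows; the real empirical expression in the paper would of course supersede this rough fit.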
Online since: September 2011
Authors: Long Zhen Duan, Zhi Xin Zou, Gui Fen Wang
The Web Classification Based on ROUGH-GA-BP
Long Zhen Duan1,a , Zhi Xin Zou1,band Gui Fen Wang1,c
1Department of Computer Application Technology, Nan Chang University,
Jiang Xi , China
alzhduan@126.com,bzzhxin@163.com, cfenfen353@126.com
Keywords: Text Classification Algorithm, ROUGH-GA-BP, Data Reduction
Abstract.
This algorithm reduces the dimensionality of the text input vector with a data reduction method based on rough set theory, and presents a genetic algorithm approach for feature selection.
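The paper's GA operators are not given in the abstract, so the sketch below is a generic genetic-algorithm feature-selection loop with bitmask chromosomes and an invented toy fitness function (USEFUL marks hypothetical informative features, standing in for a real classifier-accuracy score):

```python
import random

random.seed(0)

N_FEATURES = 12
USEFUL = {0, 3, 7}   # hypothetical: only these features carry signal

def fitness(mask):
    """Toy stand-in for classifier accuracy: reward keeping the useful
    features and penalize the dimension of the kept input vector."""
    kept = {i for i, bit in enumerate(mask) if bit}
    return len(kept & USEFUL) - 0.05 * len(kept)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # truncation selection (elitist)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = max(pop, key=fitness)
```

In the ROUGH-GA-BP pipeline the rough-set reduction first shrinks the candidate feature set, the GA then searches subsets of it, and the surviving features feed the BP network; here the toy fitness replaces the BP classifier's accuracy.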
Introduction
Because of the rapid growth of text data, automatic methods of data management are especially important.
Document [1] uses rough set reduction methods to pre-process the information and remove redundant data, taking the reduced decision tables as the design basis and the training data of the neural network.
Test and Analysis
The corpus used in this study is the Chinese web page data set CWT100g, collected by Peking University's network laboratory.
Online since: January 2011
Authors: Shi An, Xian Ye Ben, Jian Wang, Hai Yang Liu
We propose a novel method for data reduction in gait recognition, called Subblock Complete Two Dimensional Principal Component Analysis (SbC2DPCA).
This translates to data reduction with very minimal loss of information, as demonstrated by the remarkable recognition accuracy when subjects change clothing or have a backpack.
This is the second stage of data reduction in the process.
This balances recognition accuracy and data dimensionality.
C2DPCA is combined with blocking to achieve further dimension reduction, given the large amount of data.
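The subblocking and the "complete" two-directional projection of SbC2DPCA go beyond a snippet, but its building block, plain 2DPCA, can be sketched (our illustration, with random matrices standing in for gait silhouettes): form the image scatter matrix and project each image onto its top eigenvector, found here by power iteration.

```python
import random

random.seed(1)
H, W = 6, 5          # toy "gait silhouette" size

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Toy image set (random grayscale values stand in for silhouettes).
images = [[[random.random() for _ in range(W)] for _ in range(H)]
          for _ in range(8)]

# Image scatter matrix G = mean_i (A_i - M)^T (A_i - M), a W x W matrix.
M = [[sum(img[r][c] for img in images) / len(images) for c in range(W)]
     for r in range(H)]
G = [[0.0] * W for _ in range(W)]
for img in images:
    D = [[img[r][c] - M[r][c] for c in range(W)] for r in range(H)]
    DtD = matmul(transpose(D), D)
    for i in range(W):
        for j in range(W):
            G[i][j] += DtD[i][j] / len(images)

# Top eigenvector of G by power iteration.
v = [1.0] * W
for _ in range(100):
    w = [sum(G[i][j] * v[j] for j in range(W)) for i in range(W)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# Each H x W image projects to an H x 1 feature vector: Y = A v.
features = [[sum(img[r][c] * v[c] for c in range(W)) for r in range(H)]
            for img in images]
```

Each H×W image collapses to an H-vector, which is the first stage of reduction; SbC2DPCA then repeats the idea on subblocks and in both row and column directions to reach the two-stage reduction described above.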
Online since: March 2009
Authors: Evgeny N. Selivanov, R.I. Gulyaeva, A.N. Mansurova
The Netzsch Thermokinetics program was used for the analysis of the experimental data.
The experimental data satisfy a kinetic model of the CaFeSO reduction process by carbon monoxide.
Fig. 5 - Polytherms of Ca2FeCuSO3 sample reduction in a flow of 75 % CO - 25 % Ar (a) and the results of processing the experimental data by the Friedman method (b).
According to the data of [16], they are within the limits of 108.8 - 156.6 kJ/mol.
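The Friedman method used in Fig. 5b estimates the activation energy from the slope of ln(dα/dt) versus 1/T at fixed conversion. A sketch on synthetic data with a known E (illustrative values, not the paper's measurements) recovers it:

```python
import math

R = 8.314            # gas constant, J/(mol*K)
E_true = 120_000.0   # synthetic activation energy, J/mol (illustrative only)

# At fixed conversion alpha: ln(dalpha/dt) = ln(A f(alpha)) - E/(R*T).
temps = [900.0, 950.0, 1000.0, 1050.0]               # K
ln_rates = [math.log(1e6) - E_true / (R * T) for T in temps]

# Least-squares slope of ln(rate) vs 1/T gives -E/R.
x = [1.0 / T for T in temps]
n = len(x)
xbar = sum(x) / n
ybar = sum(ln_rates) / n
slope = sum((xv - xbar) * (yv - ybar) for xv, yv in zip(x, ln_rates)) \
        / sum((xv - xbar) ** 2 for xv in x)
E_est = -slope * R
```

With real thermal-analysis data the same regression is repeated at several conversion levels, which is how ranges such as 108.8 - 156.6 kJ/mol arise.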
Summary
The obtained data on the kinetics of CaFeSO and Ca2FeCuSO3 oxysulfide reduction by carbon monoxide give evidence of differences in the process mechanisms.
Online since: January 2015
Authors: Qing Chao Jiang
The Research of High-dimensional Big Data Dimension Reduction Strategy
Qingchao Jiang
Hebei Software Institute Baoding Hebei China
1358206@sina.com
Keywords: High dimensional data, Data mining, DE, Lasso, Lars.
The huge amount of data is often described as "big data".
(2) Data processing. Because the dimensionality of the data is very large, the data processing uses a grouping strategy: the eigenvectors are divided into T intervals, named Xi.
The purpose of this article is dimension reduction for high-dimensional data, so the LARS algorithm is chosen to compare the non-zero elements contained in Ei(t+1) and Gi(t), which measures the fitness of the mutated individual against its parent.
The experiments use data sets chosen from the UCI database.
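The DE-Lasso-LARS pipeline itself is not reproduced here, but its core mechanism (an L1 penalty that zeroes out coefficients, so the non-zero entries mark the selected dimensions) can be sketched with plain coordinate-descent Lasso on invented data:

```python
import random

random.seed(2)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent Lasso:
    minimize (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding feature j.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # Soft-thresholding step zeroes out weak coordinates.
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

# 10-dimensional synthetic data where only features 0 and 4 matter.
n, p = 60, 10
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [3.0 * row[0] - 2.0 * row[4] + random.gauss(0, 0.1) for row in X]
w = lasso_cd(X, y, lam=0.3)
selected = [j for j, wj in enumerate(w) if abs(wj) > 1e-6]
```

Counting the surviving non-zero coefficients is exactly the kind of comparison between Ei(t+1) and Gi(t) the text describes: fewer non-zero entries at similar fit means a better-reduced individual.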
Online since: August 2013
Authors: Tong Wang, Yan Xia Pang, Yue Ping Wu, Yi Du
Dimension reduction methods are among the best-known machine learning tools.
Few such attempts have been made to classify high-dimensional protein data sets.
The major problem in accurately classifying these massive high-dimensional data sets is the high computing time and classifier complexity.
Data sets
The protein–RNA complexes used in our experiments were retrieved from the PDB.
The accuracy of the low-dimensional representations of the high-dimensional data obtained by the different DR methods was evaluated via the KNN algorithm.
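A minimal version of that evaluation protocol (generic, not the paper's setup) reduces the dimensionality and then scores a nearest-neighbour classifier on the low-dimensional points; here a crude coordinate-keeping projection and invented toy data stand in for the DR methods and the protein features:

```python
import math

def knn_predict(train_X, train_y, x, k=1):
    """Classify x by majority vote among its k nearest training points."""
    ranked = sorted(zip(train_X, train_y), key=lambda t: math.dist(t[0], x))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

def accuracy(train_X, train_y, test_X, test_y, k=1):
    hits = sum(knn_predict(train_X, train_y, x, k) == y
               for x, y in zip(test_X, test_y))
    return hits / len(test_y)

# Crude "dimension reduction": keep only the 2 informative coordinates
# of otherwise 10-dimensional points.
def reduce_dim(points):
    return [p[:2] for p in points]

train = [(i * 0.1, 0.0) + (0.0,) * 8 for i in range(5)] \
      + [(i * 0.1, 5.0) + (0.0,) * 8 for i in range(5)]
labels = ["bound"] * 5 + ["free"] * 5
test = [(0.25, 0.1) + (0.0,) * 8, (0.25, 4.9) + (0.0,) * 8]
acc = accuracy(reduce_dim(train), labels, reduce_dim(test), ["bound", "free"])
```

Swapping `reduce_dim` for each DR method under comparison, while keeping the KNN scorer fixed, gives exactly the accuracy-based comparison the text describes.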