Search results
Online since: January 2011
Authors: Shi An, Xian Ye Ben, Jian Wang, Hai Yang Liu
We propose a novel method for data reduction in gait recognition, called Sub-block Complete Two-Dimensional Principal Component Analysis (SbC2DPCA).
This yields data reduction with minimal loss of information, as demonstrated by the remarkable recognition accuracy even when subjects change clothing or carry a backpack.
This is the second stage of data reduction in the process.
This balances recognition accuracy and data dimensionality.
C2DPCA is combined with blocking to achieve further dimension reduction, which is needed because of the large amount of data.
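The combination of blocking and complete 2DPCA described above can be sketched as follows. This is a simplified illustration of block-wise two-directional 2DPCA, not the authors' exact SbC2DPCA: the block size, the number of retained eigenvectors, and the feature layout are assumptions.

```python
import numpy as np

def top_eigvecs(G, d):
    """Return the d eigenvectors of symmetric G with the largest eigenvalues."""
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, np.argsort(vals)[::-1][:d]]

def c2dpca(blocks, d):
    """Complete 2DPCA: project each block from both the row and column directions."""
    mean = np.mean(blocks, axis=0)
    Gc = sum((A - mean).T @ (A - mean) for A in blocks) / len(blocks)  # column scatter
    Gr = sum((A - mean) @ (A - mean).T for A in blocks) / len(blocks)  # row scatter
    X = top_eigvecs(Gc, d)  # right (column-direction) projection
    Z = top_eigvecs(Gr, d)  # left (row-direction) projection
    return [Z.T @ A @ X for A in blocks]  # each block becomes a (d, d) feature matrix

def subblock_c2dpca(images, block, d):
    """Split each image into non-overlapping sub-blocks; run C2DPCA per block position."""
    h, w = images[0].shape
    features = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            features.append(c2dpca([A[r:r+block, c:c+block] for A in images], d))
    return features  # one list of (d, d) projections per block position

# tiny demo: 10 random 8x8 stand-ins for gait images, 4x4 sub-blocks, d = 2
rng = np.random.default_rng(0)
images = [rng.standard_normal((8, 8)) for _ in range(10)]
features = subblock_c2dpca(images, block=4, d=2)
```

Each 4x4 sub-block shrinks to a 2x2 feature matrix here, which is the second-stage reduction the abstract refers to.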
Online since: October 2013
Authors: Tsair Rong Chen, Wen Chin Lin
The researcher revised the questionnaire, administered the questionnaire survey, analyzed the data, and selected 685 university students by stratified cluster sampling.
The data obtained are analyzed by descriptive statistics, independent-sample t tests, one-way ANOVA, and structural equation modeling.
Data Processing. (1) Cognition of energy saving and carbon reduction: the "cognition of energy saving and carbon reduction" scale contains 17 items.
Path analysis of cognition and attitude toward energy saving and carbon reduction: using the samples collected in the formal test and the SEM program Amos 17.0, this study examines the relationship between students' cognition of and attitude toward energy saving and carbon reduction, in order to validate the fit between the theoretical model and the actual data.
Multivariate Data Analysis (5th ed.), Englewood Cliffs, NJ: Prentice-Hall (1998).
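The independent-sample t test mentioned above can be sketched in a few lines. Welch's unequal-variance form is used here as an assumption, since the snippet does not say which variant was applied, and the two score groups are hypothetical.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's independent-samples t statistic and approximate degrees of freedom."""
    va, vb = variance(a), variance(b)   # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# hypothetical cognition scores from two student groups
t, df = welch_t([5, 6, 7], [1, 2, 3])
```

The t statistic would then be compared against the t distribution with df degrees of freedom, which is what the SPSS/Amos workflow in the paper automates.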
Online since: September 2013
Authors: Wen Jing Zhao, Qiu Na Zhang, Ai Min Yang, Na Ji
Introduction
Z. Pawlak, a Polish mathematician, put forward Rough Set Theory, a new mathematical theory of data analysis, in 1982.
Rough sets are now widely used in machine learning, knowledge discovery, knowledge acquisition, decision analysis, data mining, pattern recognition, and related fields; Rough Set Theory has become a research focus in international artificial intelligence.
In this paper, on the basis of rough set theory for data reduction, granular computing is used for knowledge reduction.
Output: an attribute reduction.
Thus the reduction is.
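A minimal sketch of rough-set attribute reduction via the positive region is shown below. This uses greedy attribute deletion rather than the paper's granular-computing method, and the decision table is hypothetical.

```python
def partition(rows, attrs):
    """Group object indices by their values on attrs (indiscernibility classes)."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(blocks.values())

def positive_region(rows, cond, dec):
    """Objects whose condition class lies entirely inside one decision class."""
    dec_blocks = partition(rows, dec)
    pos = set()
    for block in partition(rows, cond):
        if any(block <= d for d in dec_blocks):
            pos |= block
    return pos

def reduct(rows, cond, dec):
    """Greedily drop attributes whose removal preserves the positive region."""
    full = positive_region(rows, cond, dec)
    red = list(cond)
    for a in list(cond):
        trial = [x for x in red if x != a]
        if trial and positive_region(rows, trial, dec) == full:
            red = trial
    return red

# hypothetical decision table: condition attributes in columns 0-1, decision in column 2;
# the decision depends only on attribute 0, so attribute 1 is dispensable
rows = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
red = reduct(rows, cond=[0, 1], dec=[2])
```

Greedy deletion finds one reduct, not necessarily all of them; exhaustive reduct search is NP-hard in general.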
Online since: July 2011
Authors: Qi Shan Zhang, Cheng Gen Dong, Jin Pei Wu
From this point of view, a fingerprint algorithm is a conversion from the original text to a hash value (an integer array), and it closely resembles data validation in telecommunications applications.
Therefore, this paper uses a data-validation algorithm to compute fingerprints of telecommunication network events.
Cyclic Redundancy Check (CRC) uses simple binary polynomial division to validate data, and it is a popular method for ensuring that data (e.g., a file) are not corrupted [4-6].
Network operators supplied one hundred thousand actual events; comparing the cost of text-mode reduction with that of the fingerprint algorithm gives the following table (unit: ms):
Table 3 Performance comparison of the reduction process
Reduction method | Fingerprint calculation | Reduction of non-fault events | Reduction of repeated events | Total time
Text        | 0   | 462 | 442 | 904
Fingerprint | 731 | 33  | 32  | 796
As Table 3 shows, the fingerprint method improves the performance of telecommunication network event reduction by 11.95% over the text mode.
References
[1] J. Stone, R. Stewart, D. Otis: RFC 3309, Stream Control Transmission Protocol (SCTP) Checksum Change, September 2002.
[2] RFC 1321, The MD5 Message-Digest Algorithm, MIT Laboratory for Computer Science and RSA Data Security, Inc.
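The fingerprint-based reduction can be sketched with the standard library's CRC-32. The paper's exact polynomial and event format are not specified, so this is only an illustration; note that CRC-32 collisions are possible, so a production version might compare full event text when fingerprints match.

```python
import zlib

def fingerprint(event: str) -> int:
    """CRC-32 over the event text, used as a fixed-size fingerprint."""
    return zlib.crc32(event.encode("utf-8"))

def reduce_events(events):
    """Drop repeated events by comparing fingerprints instead of full text."""
    seen, kept = set(), []
    for e in events:
        fp = fingerprint(e)
        if fp not in seen:   # integer comparison, cheaper than full-text comparison
            seen.add(fp)
            kept.append(e)
    return kept

# hypothetical alarm stream with one repeated event
events = ["link down on node A", "link down on node B", "link down on node A"]
kept = reduce_events(events)
```

The one-off fingerprint computation is the up-front cost shown in Table 3; the payoff is that every subsequent duplicate check is an integer set lookup rather than a text comparison.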
Online since: January 2015
Authors: Qing Chao Jiang
The Research of High-dimensional Big Data Dimension Reduction Strategy
Qingchao Jiang
Hebei Software Institute, Baoding, Hebei, China
1358206@sina.com
Keywords: High dimensional data, Data mining, DE, Lasso, Lars.
Huge amounts of data are often described as "big data".
(2) Data processing. Because the dimensionality of the data is very large, the data processing uses a grouping strategy: the eigenvectors are divided into T intervals, named Xi.
The purpose of this article is dimension reduction for high-dimensional data, so the Lars algorithm is chosen to compare the numbers of non-zero elements contained in Ei(t+1) and Gi(t), which measures the fitness of the mutated individual against its parent.
The experiment uses a data set chosen from the UCI database.
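The non-zero-count fitness described above can be illustrated with a lasso fit. A plain coordinate-descent lasso is used here as a stand-in for LARS (both trace the same sparse solutions), and the data, the penalty λ, and the iteration count are hypothetical.

```python
def soft_threshold(rho, lam, z):
    """Lasso coordinate update: soft-threshold the partial correlation rho."""
    if rho < -lam:
        return (rho + lam) / z
    if rho > lam:
        return (rho - lam) / z
    return 0.0

def lasso_cd(X, y, lam, iters=100):
    """Coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam, z)
    return w

# hypothetical data: y depends only on feature 0, feature 1 is noise
X = [[1, 0.1], [2, -0.1], [3, 0.1], [4, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_cd(X, y, lam=1.0)
fitness = sum(1 for wj in w if wj != 0.0)  # number of selected features
```

The noise feature's coefficient is driven exactly to zero, so the non-zero count drops to 1; comparing such counts between a mutated individual and its parent is the fitness test the abstract describes.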
Online since: May 2013
Authors: Feng Chang Xue
Applications showed that Geomagic can, starting from the original point cloud, identify the useful points, delete speckles, and, through resampling and noise-reduction processing, improve point-cloud quality and reduce data deviation, thus improving data-processing efficiency.
Merging cloud data. Merging cloud data refers to merging the scan data into a single point-cloud object, which becomes a complete point cloud.
Noise reduction smooths the data, reduces deviation, and allows the point-cloud data to better represent the true shape of the object during modeling.
Encapsulating data. Encapsulating data refers to computing an encapsulation of the point cloud, converting the point-cloud data into polygon-model data.
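The speckle-deletion step can be illustrated with a generic statistical outlier filter. This is not Geomagic's internal algorithm, and the neighbour count and threshold factor are assumptions: points whose mean distance to their k nearest neighbours is far above the cloud-wide average are treated as speckle noise and dropped.

```python
def mean_knn_dist(points, i, k):
    """Mean Euclidean distance from points[i] to its k nearest neighbours."""
    px, py, pz = points[i]
    dists = sorted(
        ((qx - px) ** 2 + (qy - py) ** 2 + (qz - pz) ** 2) ** 0.5
        for j, (qx, qy, qz) in enumerate(points) if j != i
    )
    return sum(dists[:k]) / k

def remove_speckles(points, k=3, factor=2.0):
    """Drop points whose mean k-NN distance exceeds factor * global average."""
    d = [mean_knn_dist(points, i, k) for i in range(len(points))]
    avg = sum(d) / len(d)
    return [p for i, p in enumerate(points) if d[i] <= factor * avg]

# dense cluster near the origin plus one far-away speckle point
cloud = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0, 0, 0.1), (10, 10, 10)]
clean = remove_speckles(cloud, k=3, factor=2.0)
```

This brute-force version is O(n²); real point-cloud tools use spatial indexing (k-d trees, octrees) for the neighbour search.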
Online since: December 2013
Authors: Lian Ying Zhang, Si Lu Zhang
This paper presents a practical application of the Eco-costs Model in evaluating different emission-reduction measures for construction equipment. We propose a comparison model whose input data are the emission-reduction percentages and the Eco-costs of emissions.
Fig. 1 Eco-costs of air pollution on a construction site. Input data of the Eco-costs Model.
(Exchange rates are based on data from October 2012.)
Table 1 Eco-costs of emissions per unit (2012)
Pollutant/Source | CO    | HC    | NOx   | PM    | Diesel
Euro [€/kg]      | 0.240 | 3.727 | 5.290 | 27.44 | /
RMB [¥/kg]       | 1.965 | 30.52 | 43.32 | 224.7 | 8.500
Construction of the Comparison Model. As techniques of combustion-chamber cleanup require a substantial structural change to the diesel engine, the Eco-costs Model is not applicable to them.
ΔPm = βm·fm·I + Σn Δrnm·qn·cn  (1)
fm = (1 − im·H/(hm·I))·t if an alternative fuel is used, and 0 otherwise  (2)
where
ΔPm: price reduction of Measure m for one kilowatt-hour of work
fm: price-adjustment coefficient of the alternative fuel in Measure m
H: heat value of diesel, MJ/kg
hm: heat value of the alternative fuel in Measure m, MJ/kg
I: price of diesel per kilogram
im: price of the alternative fuel per kilogram in Measure m
t: diesel equivalent of one kilowatt-hour of work, equal to 0.084 kg
βm: proportion of alternative fuel in Measure m
Δrnm: emission-reduction percentage of Pollutant n in Measure m
qn: emission quantity of Pollutant n if no measure is taken, g
cn: Eco-cost of Pollutant n
Data Collection and Calculation. Six measures are analyzed in this article.
The other measures use alternative-fuel techniques, including dimethyl ether (DME), biodiesel (B100), and liquefied natural gas (LNG). The emission-reduction percentages are based on a technical testing authority in China [8, 9], and the emission quantities are taken as the European Union emission standards (Stage IV), which the Chinese government has approved as a long-term strategy. Detailed data and results are listed in Table 2, where the functional currency is the Chinese yuan.
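Eq. (1)-(2) can be checked numerically. The eco-costs below come from Table 1 of the snippet; the fuel prices, heat values, reduction percentages, and emission quantities are hypothetical placeholders (Table 2's actual figures are not reproduced in the snippet), and the grouping in Eq. (2) is read as fm = (1 − im·H/(hm·I))·t.

```python
ECO_COST = {"CO": 1.965, "HC": 30.52, "NOx": 43.32, "PM": 224.7}  # RMB/kg, Table 1

def price_reduction(beta, i_m, h_m, H, I, t, delta_r, q, alt_fuel=True):
    """Eq. (1)-(2): fuel-price term plus summed eco-cost savings; q is in grams."""
    f_m = (1 - i_m * H / (h_m * I)) * t if alt_fuel else 0.0     # Eq. (2)
    savings = sum(delta_r[n] * q[n] / 1000.0 * ECO_COST[n]       # g -> kg
                  for n in delta_r)
    return beta * f_m * I + savings                              # Eq. (1)

# hypothetical alternative-fuel measure (all inputs below are placeholders)
dP = price_reduction(
    beta=1.0, i_m=5.0, h_m=28.8,       # alt-fuel price (RMB/kg), heat value (MJ/kg)
    H=42.5, I=8.5, t=0.084,            # diesel heat value, price, kg per kWh
    delta_r={"CO": 0.5, "NOx": 0.3},   # emission-reduction percentages
    q={"CO": 2.0, "NOx": 4.0},         # grams emitted per kWh if untreated
)
```

A positive ΔPm means the measure pays for itself per kilowatt-hour once the eco-costs of the avoided emissions are counted.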
Online since: May 2011
Authors: Wan Zheng Ai, Bai Gang Huang
The energy-loss coefficient of sudden-reduction tube flow is an important index of a sudden reduction tube.
An empirical expression for the energy-loss coefficient of sudden-reduction tube flows, verified by comparison with experimental data from other studies, is presented. 1 Introduction. Sudden reduction tubes are broadly used in drainage pipes and discharge tunnels, such as orifice-plate discharge tunnels and plug discharge tunnels.
Figure 2, drawn from the data in Table 2, demonstrates that when Re is constant, the energy-loss coefficient ξ decreases as the contraction ratio β increases; ξ is closely related to La and Lb.
Figure 3, also drawn from the data in Table 2, demonstrates that vortices are the important regions of energy dissipation.
Table 1 Variation of ξ with Re (β = 0.50)
Re (×10^5) | 0.90  | 1.80  | 9.20  | 18.40 | 27.60
ξ          | 10.75 | 10.92 | 10.92 | 10.92 | 10.92
Table 2 Variation of the backflow lengths and ξ with β (Re = 1.8×10^5)
β  | 0.4   | 0.5   | 0.6  | 0.7  | 0.8
La | 0.17  | 0.12  | 0.09 | 0.07 | 0.05
Lb | 0.45  | 0.41  | 0.39 | 0.34 | 0.32
ξ  | 31.65 | 10.92 | 4.11 | 1.46 | 0.47
Figure 2 Variation of ξ with β. Figure 3 Variation of ξ with La and Lb.
4 Verification. Figure 4 compares the simulation data in this paper with the data presented by Liu [6] (2002); the comparison implies that the conclusions obtained by simulation here are very close to Liu's.
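Table 2's simulated values can be used for a quick piecewise-linear estimate of ξ at intermediate contraction ratios. The interpolation is an assumption for illustration, not the paper's empirical expression.

```python
# simulated (beta, xi) pairs from Table 2 (Re = 1.8e5)
BETA = [0.4, 0.5, 0.6, 0.7, 0.8]
XI   = [31.65, 10.92, 4.11, 1.46, 0.47]

def xi_of_beta(beta):
    """Piecewise-linear interpolation of the simulated xi(beta) values."""
    if not BETA[0] <= beta <= BETA[-1]:
        raise ValueError("beta outside simulated range")
    for k in range(len(BETA) - 1):
        if BETA[k] <= beta <= BETA[k + 1]:
            frac = (beta - BETA[k]) / (BETA[k + 1] - BETA[k])
            return XI[k] + frac * (XI[k + 1] - XI[k])
```

Linear interpolation is crude given how steeply ξ falls between β = 0.4 and 0.5; the paper's empirical expression would give a smoother fit there.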
Online since: September 2011
Authors: Jing Zhang, Hong Guang Jia, Ling Ding, Wen Hui Dong
Data testing is very important for the development of a new missile.
However, shock isolation of the mechanical structure is important for improving system reliability and rationality, and it directly affects the success of data recovery [1].
Buffer and Vibration Reduction of the Original Structure Design. The original FEA model of the data recorder is shown in Figure 1 (Fig. 1 The FEA model of the data recorder; Fig. 2 The FEA model of target and projectile). Figure 1 shows the 1/2 axially symmetric recorder, where 1 is the case, 2 is the buffer material, and 3 is the storage board.
The buffer and vibration design of the data-recorder structure follows stress-wave theory, with a reasonable allocation of the generalized impedance ratio; taking the reduction of the maximum dynamic stress on the storage chip as the goal, a finite-element model of missile penetration was set up by coupling Isight with LS-DYNA to complete the optimization of the data recorder's buffer and vibration-reduction structure.
Marshall Hammer Experiment. Figure 9 shows the physical composition of the data recorder, and Figure 10 shows the Marshall-hammer overload curve (Fig. 9 The data recorder; Fig. 10 The overload curve of the data recorder). Conclusion. Stress-wave theory successfully solved the impact-resistance problem in a narrow space: the dynamic stress on the protected component was reduced by about 60.9%, and the recorder passed the Marshall hammer test.
Online since: August 2013
Authors: Tong Wang, Yan Xia Pang, Yue Ping Wu, Yi Du
Dimension reduction is one of the best-known machine-learning tools.
Few such attempts have been made for the classification of high-dimensional protein data sets.
The major problem in accurately classifying these massive high-dimensional data sets is the high computing time and classifier complexity.
Data sets. The protein–RNA complexes used in our experiments were retrieved from the PDB.
The accuracy of the low-dimensional representations of the high-dimensional data obtained by the different DR methods was evaluated via the KNN algorithm.
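The KNN evaluation can be sketched as leave-one-out accuracy over the reduced feature vectors. The data below are hypothetical, and the paper's choice of k and distance metric are not stated, so k = 1 with Euclidean distance is an assumption.

```python
def knn_predict(train, labels, x, k=1):
    """Majority label among the k nearest training points (Euclidean distance)."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    top = [labels[i] for i in order[:k]]
    return max(set(top), key=top.count)

def loo_accuracy(data, labels, k=1):
    """Leave-one-out accuracy of KNN on a low-dimensional representation."""
    hits = 0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        lab = labels[:i] + labels[i + 1:]
        hits += knn_predict(train, lab, data[i], k) == labels[i]
    return hits / len(data)

# hypothetical 2-D representations of two protein classes after dimension reduction
data = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (1.0, 1.1), (1.1, 1.0), (0.9, 1.0)]
labels = ["binding", "binding", "binding",
          "non-binding", "non-binding", "non-binding"]
acc = loo_accuracy(data, labels, k=1)
```

A DR method that preserves class structure keeps this accuracy high even in very few dimensions, which is exactly what the evaluation in the abstract measures.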