Search results
Online since: August 2013
Authors: Hong Wu Zhang, Lin Yun Shi, Su Yuan Zhang
In this paper, the authors analyze the CO2 emission characteristics of China's low-carbon pilot provinces on the basis of calculated baseline data, and use the Kaya model to decompose the influencing factors into their sizes and driving forces.
Other cities designated by the National Development and Reform Commission are not included in this list because of a lack of data.
Given the present scarcity of CO2 emissions data in China, the authors use China's energy consumption data to calculate CO2 emissions by province, by sector, and by energy type.
The calculation multiplies the energy consumption of each province and sector by the CO2 emission coefficient of each energy type; the energy consumption data are taken from the …, and the emission coefficients are from the Technology Policy Research Institute [3], drawing on Japanese research results.
Chongqing's CO2 emissions increased by 79 million tonnes. Economic growth was the only factor that increased emissions, contributing 145 million tonnes; the other three factors reduced emissions: energy saving by 47 million tonnes, energy-mix conversion by 17 million tonnes, and population movement, whose driving force was very small, by only 2 million tonnes.
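The accounting described above (energy consumption per fuel type multiplied by that fuel's CO2 emission coefficient, summed over fuels) can be sketched as follows; the coefficients and provincial figures here are illustrative placeholders, not the paper's actual data:

```python
# Sketch of the CO2 accounting described above: provincial CO2 emissions are
# estimated as energy consumption per fuel type times that fuel's CO2
# emission coefficient. All figures below are illustrative, not the paper's.

# Emission coefficients in tonnes CO2 per tonne of coal equivalent
# (hypothetical values for illustration only).
EMISSION_COEF = {"coal": 2.66, "oil": 1.73, "natural_gas": 1.56}

def co2_emissions(energy_by_fuel):
    """energy_by_fuel: {fuel: consumption in tce} -> total CO2 in tonnes."""
    return sum(EMISSION_COEF[fuel] * amount
               for fuel, amount in energy_by_fuel.items())

# Example: a province consuming mostly coal (made-up quantities).
province = {"coal": 20_000_000, "oil": 5_000_000, "natural_gas": 3_000_000}
total = co2_emissions(province)
print(f"{total / 1e6:.1f} million tonnes CO2")
```

Summing this per sector and per province gives the inter-provincial, inter-sector totals the authors describe.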
Online since: July 2013
Authors: Shi Jun He, Shi Ting Zhao, Fan Bai, Jia Wei
A Method for Spatial Data Registration based on PCA-ICP Algorithm
Shi Jun He1, a, Shi Ting Zhao1, b, Fan Bai1, c, Jia Wei1, d
1college of Information Technology Shanghai Ocean University, Shanghai, 201306, China
asjhe@shou.edu.cn, byilantiankong@126.com, cabia321@163.com,
dwei_jia_viga@163.com
Keywords: spatial data, registration.
The technique is non-contact and multi-angle, which allows the data to be acquired accurately.
However, to obtain spatial data for every position on the object's surface, one or more scanning stations must be set up so that the data can be acquired completely; these spatial data are also called point clouds.
When the data sets are small, the algorithm can use SVD to obtain the rotation and translation.
3D laser scanning produces spatial data that are huge and redundant; the captured point clouds of a target object contain a large amount of redundancy. For instance, when acquiring the spatial data of a building we may also capture trees; the removal of such redundant data can rely on professional software.
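The SVD step mentioned above can be sketched for small matched point sets. This is the standard closed-form rigid alignment (Kabsch-style), shown here as an illustration and not necessarily the authors' exact implementation:

```python
# Sketch of SVD-based rigid alignment for small matched point sets: given
# correspondences P_i <-> Q_i, recover the rotation R and translation t
# minimizing ||R @ P_i + t - Q_i||. Standard closed-form solution, not the
# paper's exact code.
import numpy as np

def svd_rigid_transform(P, Q):
    """P, Q: (N, 3) arrays of matched points. Returns (R, t)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cp).T @ (Q - cq)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Usage: transform a small cloud by a known rotation/translation, recover it.
rng = np.random.default_rng(0)
P = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true
R, t = svd_rigid_transform(P, Q)
```

In a full ICP loop this solve alternates with re-matching nearest neighbours until the registration converges.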
Online since: July 2005
Authors: Wen Jiang Ding, Yan Ping Zhu, Yu Fan, Hong Tao Gao, Guo Hua Wu
The iron reduction mechanism was studied thermodynamically.
Therefore, we try to reduce the iron content in magnesium alloy by using purification flux containing some iron reduction agents.
The borides in the purification fluxes are iron reduction agents.
In contrast, both JDMR-1 and JDMR-2 have an excellent effect on iron reduction in the magnesium alloy.
Formation of FeB is considered as the primary reason for iron reduction in magnesium alloy.
Online since: January 2012
Authors: Feng Ming Guo, Yi Gang He, Bing Li
In this paper, to alleviate the influence of noise and inaccurate measurement in complicated environments, a novel kernel-matrix Isomap algorithm is proposed, building on the robustness of PLS in multivariate linear regression and combining it with the nonlinear dimension reduction of manifold learning.
Compared with traditional Isomap and MDS, simulation results indicate that the algorithm has good topology stability, generalization, robustness, and lower computational complexity.
1 Introduction
Isomap is a manifold learning and nonlinear dimensionality reduction algorithm based on the framework of the MDS algorithm, which can recover the meaningful low-dimensional structures hidden in high-dimensional data [1], [2], [3].
In the Isomap algorithm, geodesic distances between data points are extracted instead of simply taking the Euclidean distance, thus a geometric distance graph is constructed and the embedding is obtained from the graph.
Isomap and the KIsomap algorithm solve based on least squares (LS), which is very sensitive to abnormal data points [4], [5], [6], [7].
The edge lengths between data points in the neighborhood graph are set as their Euclidean distances.
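The neighbourhood-graph construction described above can be sketched as follows, using Dijkstra's algorithm from the standard library to approximate geodesic distances by shortest paths; the point set is made up for illustration:

```python
# Sketch of the geodesic-distance step of Isomap: build a k-nearest-neighbour
# graph whose edge lengths are Euclidean distances, then approximate geodesic
# distances by shortest paths in that graph (Dijkstra via heapq).
import heapq
import math

def knn_graph(points, k):
    """Adjacency list: each point links to its k nearest neighbours."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    adj = {i: [] for i in range(n)}
    for i in range(n):
        nbrs = sorted((j for j in range(n) if j != i),
                      key=lambda j: dist(i, j))[:k]
        for j in nbrs:
            adj[i].append((j, dist(i, j)))
            adj[j].append((i, dist(i, j)))   # keep the graph symmetric
    return adj

def geodesic_from(adj, src, n):
    """Dijkstra: shortest graph distances from src to all nodes."""
    d = [math.inf] * n
    d[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue
        for v, w in adj[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

# Points along a line: the graph distance 0 -> 3 is the sum of the hops.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
adj = knn_graph(pts, k=1)
d = geodesic_from(adj, 0, len(pts))
```

The resulting distance matrix is what the MDS-style embedding step then operates on.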
Online since: June 2017
Authors: Duongruitai Nicomrat
The levels of reduction and oxidation depend on the oxygen concentration.
These data indicated that whether conditions favour the SOB or the SRB community depends on the oxygen concentration.
The activity of the mixed SOB cultures was recorded as pH, sulfide reduction, and sulfate increase.
Fig. 1 The reduction in pH in the presence of mixed SOB and pure SOB.
The mixed SOB culture utilized sulfide after day 5, producing increasing sulfate and a high SOB concentration of 10^9-10^11 cells/mL (data not shown).
Online since: July 2015
Authors: Riri Murniati, Sutisna Sutisna, Edy Wibowo, Mamat Rokhmat, Neni Surtiyeni, Elfi Yuliza, Khairurrijal Khairurrijal, Mikrajuddin Abdullah
Samples were characterized with an FT-IR spectrophotometer, and sample absorbance data were obtained for each catchment area.
The reduction of hazardous substances will be examined, including some metals such as nickel, arsenic, cadmium and lead.
Summary. It can be concluded from this study, based on the percentage reduction of absorbance values for the methods of introducing the TiO2 suspension through the cigarette filter tip and base, that the most stable and smallest reduction is present in the C3 and H3 samples.
Xu, Selective reduction of tobacco-specific nitrosamines in cigarette smoke by use of nanostructural titanates, Nanoscale (2013) 1-5.
Wei, Significant reduction of harmful compounds in tobacco smoke by the use of titanate nanosheets and nanotubes, Chem.
Online since: October 2014
Authors: Xiao Rong Fu
By using the data fusion method, the input variable dimension is reduced.
Algorithm f2 realizes the function of fuzzy inference through the reduction factors; it can be called the fuzzy function.
It contains the input and output data that the target system needs.
Here we use the linearized data from the feedback controller as the training data.
Llinas, An introduction to multisensor data fusion, Proc.
Online since: August 2013
Authors: Yan Xue Dong, Fu Hai Huang
Concepts and Features of Rough Set
Rough set theory was proposed by the Polish mathematician Z. Pawlak in the early 1980s. It is a theory of mathematical analysis used to study incomplete data and the expression, learning, and summarization of inaccurate knowledge.
The data set of all the objects analyzed on the basis of rough set theory is called the information system (IS).
Before extracting texture features, we need to regularize the data (Eq. 2), where R is a regularization constant equal to the sum of all the elements in the GLCM.
Reduction of Knowledge
In this paper, MIBARK (Mutual Information Based Algorithm for Reduction of Knowledge) is used for knowledge reduction, and a summarized value reduction algorithm is used for attribute value reduction.
Different discretization and attribute reduction algorithms were tested; the final classification results are shown in Fig. 4.
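The regularization of Eq. (2), dividing each GLCM element by the sum R of all elements so that the matrix becomes a joint probability distribution, can be sketched as:

```python
# Sketch of the GLCM regularization in Eq. (2): each co-occurrence count is
# divided by R, the sum of all elements, turning the matrix into a joint
# probability distribution. The matrix below is a made-up toy example.

def normalize_glcm(glcm):
    """Divide every element of the co-occurrence matrix by the total count R."""
    R = sum(sum(row) for row in glcm)
    return [[v / R for v in row] for row in glcm]

glcm = [[4, 2], [1, 3]]       # toy 2x2 grey-level co-occurrence matrix
P = normalize_glcm(glcm)      # entries now sum to 1
```

Texture features (contrast, energy, entropy, and so on) are then computed from the normalized matrix P.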
Online since: December 2009
Authors: Jun Kuwano, Morihiro Saito, Hidenobu Shiroishi, Kenji Yoshihara, Hideki Kawai, Takayuki Konishi
The onset potential of the ORR current was over 1.0 V vs. RHE, and the efficiency of 4-electron reduction was almost 100%.
INTRODUCTION Platinum is the best-known oxygen reduction reaction (ORR) catalyst for polymer electrolyte fuel cells (PEFC) because of its high ORR activity and electrochemical stability.
The lattice parameters and the crystallite sizes of the pyrochlore PRMn samples were determined by the XRD-pattern-processing program JADE ver. 5.0 (Materials Data, Inc.).
The efficiencies (Eff4) of 4-electron reduction of O2 were evaluated at 0.9 V vs. RHE.
Online since: November 2015
Authors: Miron Zapciu, Alin Posteucă
The purpose of this paper is to present a method for quantifying the costs of potential losses in the production processes of new products, in order to prioritize improvement projects based on the target cost and to provide data and information for feasibility studies of continuous improvement projects.
The proposed method will help develop scenarios for continuous cost reduction after the start of production, through continuous improvement of productivity and of the required quality.
For new products, the originality of our approach lies in translating losses into costs for future production processes, in order to identify cost reduction opportunities and to target future kaizen projects at cost reduction.
As follows:
Target cost reduction/unit = Drifting cost - Allowable cost (first year) (6)
Target cost reduction/unit = 8,314 €/unit - 7,84 €/unit = 0,474 €/unit
Total drifting cost = Drifting cost * Sales quantities (7)
Total drifting cost = 8,314 €/unit * 1.300.000 units = 10.808.200 €
Total target cost reduction = Total drifting cost - Total allowable cost (first year) (8)
Total target cost reduction = 10.808.200 € - 10.192.000 € = 616.200 €
Total target cost reduction allocated to processes (Table 2):
a) from reduction of losses: Total process losses (P1+P2+P3+P4): 26.000 € + 358.800 € + 68.510 € + 0 € = 453.310 € (non-value-added manufacturing costs);
Calculation: P1 = (0,1150 € - 0,0950 €) * 1.300.000 units = 26.000 €; P2 = (0,6210 € - 0,3450 €) * 1.300.000 units = 358.800 €; P3 = (0,1680 € - 0,1153 €) * 1.300.000 units = 68.510 €
Sources of data collection and data collected (monthly average for the last 6 months, plastic injection process - P2):
- production department:
OEE = Value-adding operating time / Loading time (9)
OEE = 10.550 min. / 13.500 min. = 0,78 (78%).
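The target-cost arithmetic of Eqs. (6)-(8) can be restated as a short calculation; the figures are those given in the text, while the variable names are mine:

```python
# The target-cost arithmetic of Eqs. (6)-(8) and the per-process loss
# allocation, using the figures given in the text. Variable names are mine.

drifting_cost = 8.314        # EUR/unit
allowable_cost = 7.84        # EUR/unit (first year)
sales_qty = 1_300_000        # units

target_cost_reduction_unit = drifting_cost - allowable_cost               # Eq. (6)
total_drifting_cost = drifting_cost * sales_qty                           # Eq. (7)
total_allowable_cost = allowable_cost * sales_qty
total_target_cost_reduction = total_drifting_cost - total_allowable_cost  # Eq. (8)

# Non-value-added losses per process: (unit-cost gap in EUR) * sales quantity
gaps = {"P1": 0.1150 - 0.0950,
        "P2": 0.6210 - 0.3450,
        "P3": 0.1680 - 0.1153,
        "P4": 0.0}
losses = {p: g * sales_qty for p, g in gaps.items()}
total_losses = sum(losses.values())
```

Running the numbers reproduces the text: 0,474 €/unit, 616.200 € of total target cost reduction, and 453.310 € of process losses.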