Search results

Online since: December 2010
Authors: Jian Ping Hu, Jia Guo Zhu
An Improved CAL Register File with Dual-Threshold Technique for Leakage Reduction
Jianping Hu and Jiaguo Zhu, Faculty of Information Science and Technology, Ningbo University, Ningbo City, China (nbhjp@yahoo.com.cn, nbuzhu@yahoo.com.cn)
Keywords: Dual-threshold technique, Leakage reduction, Clocked adiabatic logic, Register file, Nanometer circuits.
There are two classes of leakage-reduction techniques: standby leakage reduction and active leakage reduction.
Active leakage reduction techniques, such as transistor stacking, dual-threshold CMOS, and P-type CMOS design, have been proposed mainly to reduce device leakage in the active mode [8].
However, the previously proposed adiabatic register files focus mainly on dynamic energy reduction.
Improved CAL Register Using Dual-Threshold Technique. An improved CAL 32×32 register file is shown in Fig. 5 [12]; it consists of a storage-cell array, address decoders, read/write word-line drivers, sense amplifiers, read bit-line and write data-line drivers, and an auxiliary clock generator that supplies the auxiliary clocks CX and CXb for the whole circuit.
Online since: April 2014
Authors: Hong Wang, Chun Han Wang, Zhi Na Li
Therefore, the Gabor features need dimension reduction.
After sampling, the dimensionality of the data is reduced from 655,360 to 163,840, but this is still very high, so further dimension reduction is needed.
Locally Linear Embedding (LLE) attempts to discover nonlinear structure in high-dimensional data by exploiting the local symmetries of linear reconstructions [7,8]. Supervised LLE (SLLE), proposed by Dick and Robert [9], complements LLE because the original algorithm does not make use of class label information.
Different data structures call for different methods.
The linear dimension-reduction method FastPCA is used first to handle the linear structure of the data, and SLLE is then applied to the remaining nonlinear structure, which yields more effective results.
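A minimal sketch of such a two-stage pipeline is given below, with standard PCA standing in for FastPCA and plain (unsupervised) LLE standing in for SLLE, i.e. the class-label adjustment of neighbour distances used by SLLE is omitted. The scikit-learn estimators, the placeholder data, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

def two_stage_reduction(features, n_pca=100, n_lle=10, n_neighbors=15):
    """Two-stage dimension reduction: a linear PCA step followed by LLE.

    Stands in for the FastPCA + SLLE pipeline described in the snippet;
    all parameter values are illustrative, not taken from the paper.
    """
    # Stage 1: remove the dominant linear structure / redundancy.
    linear = PCA(n_components=n_pca).fit_transform(features)
    # Stage 2: unfold the remaining nonlinear structure with LLE.
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_lle)
    return lle.fit_transform(linear)

# Example: 500 placeholder samples of downsampled Gabor features (2,048 dims).
X = np.random.rand(500, 2048)
X_low = two_stage_reduction(X)
print(X_low.shape)   # (500, 10)
```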
Online since: January 2013
Authors: Zoran Stamenic, Dragan Jovanovic, Zarko Miskovic, Milan Tasic, Radivoje Mitrovic
Data acquisition and processing are also considered, including control, monitoring, and automation of the test stand.
For such measurements, appropriate sensors and devices for general-purpose data acquisition are needed.
The obtained data can be used to define the load spectrum which is simulated in the laboratory.
The data collected enabled the determination of the working characteristics of all types of conveyor idlers, as well as a comparative analysis.
An example of recorded data is shown in Fig. 13.
Online since: August 2016
Authors: S. Sendilvelan, M. Ganesan
It has been found that a 13% reduction in CO light-off time was achieved with the pre-catalyst (40% vol.), a 50% reduction with the pre-catalyst (20% vol.), and a 66% reduction with the hot-air injection system, compared to the TCCS.
Similarly, a 14% reduction in HC light-off time was achieved with the pre-catalyst (40% vol.), a 43% reduction with the pre-catalyst (20% vol.), and a 63% reduction with the hot-air injection system, compared to the TCCS.
The temperatures are measured using K-type thermocouples, stored in a data logger, and sent to the ECS, which actuates the motor shafts to open and close the valves.
The catalyst temperature was measured at the following points using K-type thermocouples and a data logger.
It has been found that there is a 66% reduction in CO light-off time and a 63% reduction in HC light-off time compared to the TCCS without a pre-catalyst at Position 1.
Online since: October 2014
Authors: Dun Nan Liu, Zhong Kang Wei, Yan Ling Du, Yuan Zhuo Li, Yu Jie Xu
First, three carbon emission reduction scenarios are set, including a standard scenario, a moderate-control scenario, and a strict-control scenario, corresponding to different carbon emission reduction requirements.
(2) Scenario B: target constraint. Under Scenario B, each plant determines its own emission reduction schedule based on the further emission reduction targets.
Case study. In this paper, the carbon emission quota and generation capacity in Jibei under the different scenarios are analyzed based on Jibei's generation and electricity data for 2013.
The annual growth rate of electricity consumption is 3%. 4.1 Carbon quota allocation results. Based on the 2013 historical data, the carbon quota and generation of thermal power units in the Jibei region for 2014-2017 are calculated and shown in Table 2.
However, it may force urgent emission reductions at some companies.
Online since: July 2007
Authors: J. Steinbeck, K. Lamprecht, J. Hecht, Konstantin Galanulis, Marion Merklein
With respect to the utilization of digitized tooling data within the finite element analysis, further investigations were performed on the impact of data reduction strategies.
The data can be used, e.g., for surface reconstruction in common CAD systems, for comparing measured real geometries with CAD data or with other measured data, for dimensional inspection, and, last but not least, for generating NC machine data in CAM systems [3].
If STL data of formed parts is now imported from PAM-STAMP 2G into the ATOS software in order to be compared with measured data, an offset must be applied to the data.
Here, the three above-mentioned reduction levels of the measured data were investigated in order to identify the influence of mesh density on the calculation results.
When using digitized data of real tools for finite element analysis, a reasonable data reduction of the generated meshes is required.
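As an illustration of what such a mesh reduction can look like, the sketch below applies a generic vertex-clustering decimation to a triangle mesh; it is not the reduction algorithm of the ATOS software or PAM-STAMP 2G, and the cell size and the placeholder data are assumptions.

```python
import numpy as np

def vertex_clustering_decimation(vertices, faces, cell_size):
    """Coarsen a triangle mesh by merging all vertices that fall into the
    same grid cell of edge length `cell_size` (generic sketch only)."""
    # Assign every vertex to a cell of a regular grid.
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # One representative vertex (the mean position) per occupied cell.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    merged = np.zeros((counts.size, 3))
    for dim in range(3):
        merged[:, dim] = np.bincount(inverse, weights=vertices[:, dim]) / counts
    # Remap the faces and drop triangles that collapsed onto an edge or point.
    new_faces = inverse[faces]
    keep = ((new_faces[:, 0] != new_faces[:, 1])
            & (new_faces[:, 1] != new_faces[:, 2])
            & (new_faces[:, 0] != new_faces[:, 2]))
    return merged, new_faces[keep]

# Placeholder geometry, only to show the call and the reduction in element count.
verts = np.random.rand(1000, 3) * 100.0            # coordinates in mm (assumed)
tris = np.random.randint(0, 1000, size=(2000, 3))  # arbitrary connectivity
v2, f2 = vertex_clustering_decimation(verts, tris, cell_size=10.0)
print(len(verts), "->", len(v2), "vertices;", len(tris), "->", len(f2), "triangles")
```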
Online since: November 2013
Authors: Xue Wu Zhang, Xin Nan Fan, Hong Hai Zhuang, Guo Gao Liu, Zhuo Zhang, Min Li
Harris Feature Points Mosaic Algorithm Based on PCA Dimension Reduction. In this paper, we combine the method with principal component analysis (PCA) [6] and use PCA to reduce the dimensionality of the high-dimensional feature vectors.
PCA is a classical method of information compression and dimension reduction [15], used here to decrease the complexity of the algorithm.
In practical image mosaicking, the image data sets are usually acquired under different conditions and at different times, and there are no benchmark data sets.
Conclusion. To address the poor real-time performance of the Harris feature mosaic algorithm, this paper presents a feature-point mosaic algorithm based on PCA dimension reduction.
The algorithm reduces the computation of the matching process by building feature descriptors and reducing their dimensionality.
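A minimal sketch of that idea follows: descriptors are projected onto their top principal components and then matched by nearest neighbour in the reduced space. The descriptor matrices, the choice of k, and the brute-force matcher are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pca_reduce(descriptors, k=20):
    """Project feature descriptors onto their top-k principal components.

    `descriptors` is an (n, d) array of per-keypoint descriptors; d and
    k=20 are illustrative, not values from the paper.
    """
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    # SVD of the centered data gives the principal axes in the rows of vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k].T                      # (d, k) projection matrix
    return centered @ basis, mean, basis

def match_nearest(desc_a, desc_b):
    """Brute-force nearest-neighbour matching in the reduced space."""
    # Pairwise squared Euclidean distances between the two descriptor sets.
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)              # index of the best match in desc_b

# Usage with random placeholder descriptors (128-dimensional, as an assumption):
a = np.random.rand(200, 128)
b = np.random.rand(180, 128)
ra, mean, basis = pca_reduce(a, k=20)
rb = (b - mean) @ basis                   # project the second set with the same basis
matches = match_nearest(ra, rb)
```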
Online since: May 2014
Authors: Xiang Ming Wen, Zhi Qun Hu, Zhen Hai Zhang, Zhao Ming Lu, Xiao Kui Chen, Yang Chun Li
For operators, voice applications and their own data applications are the main sources of revenue.
The asymmetry between signaling traffic and data traffic is the distinctive feature of always-on applications. As the traditional mobile communication service, voice applications must be guaranteed at all times.
The other applications take up a lot of data traffic and the corresponding signaling resources.
The eNodeB collects the resource usage of each application type over a period of time and then sends the data to the strategy controller.
The QoS negotiation consists of the QoS reduction of the new application and of the existing applications that belong to Type I and Type III.
Online since: August 2013
Authors: Xin Jie Shao, Yun Guang Qi, Jin Hua Liu, Guang Tian
Introduction. Fault diagnosis by pattern recognition [1,2] classifies measured data directly and does not require a mathematical model of the system, so it is called data-based fault diagnosis.
In 1992, the French scholars Dubois and Prade [3] put forward fuzzy-rough sets, which combine the advantages of fuzzy set and rough set theory: fuzzy sets replace the crisp sets, so the reduction is obtained with less information loss while preserving the classification capability of the original data set as far as possible.
Dynamic Clustering and Discretization. The essence of continuous-data discretization is clustering; there are many discretization methods for continuous data, but they do not consider the resolution of the elements within a set.
Thus the condition attribute reduction is obtained, and the result is the same as that of the common rough set reduction method.
Using the above rules, pattern recognition is performed on measured data from four kinds of working conditions. For comparison, pattern recognition with the common rough set method is applied to the same data, as shown in Table 5; the identification precision of the fuzzy-rough set method is much better than that of the common rough set method.
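For orientation, the sketch below shows the crisp (common) rough-set attribute reduction that the fuzzy-rough result is compared against: condition attributes are greedily removed as long as the positive region of the decision is unchanged. The toy decision table and the pandas-based implementation are assumptions for illustration, not the paper's fuzzy-rough algorithm.

```python
import pandas as pd

def positive_region(table, condition_attrs, decision_attr):
    """Objects whose equivalence class under condition_attrs is consistent,
    i.e. all objects in the class share a single decision value."""
    pos = set()
    for _, group in table.groupby(list(condition_attrs)):
        if group[decision_attr].nunique() == 1:
            pos.update(group.index)
    return pos

def greedy_reduct(table, condition_attrs, decision_attr):
    """Greedily drop condition attributes that leave the positive region
    unchanged (classical crisp rough-set reduction, not the fuzzy-rough variant)."""
    full_pos = positive_region(table, condition_attrs, decision_attr)
    reduct = list(condition_attrs)
    for attr in list(condition_attrs):
        trial = [a for a in reduct if a != attr]
        if trial and positive_region(table, trial, decision_attr) == full_pos:
            reduct = trial                # attribute is dispensable, drop it
    return reduct

# Toy decision table: c1..c3 are discretized condition attributes, d is the class.
data = pd.DataFrame({
    "c1": [1, 1, 0, 0, 1],
    "c2": [0, 1, 1, 0, 0],
    "c3": [1, 1, 0, 1, 1],
    "d":  ["A", "B", "B", "A", "A"],
})
print(greedy_reduct(data, ["c1", "c2", "c3"], "d"))   # ['c2'] for this toy table
```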
Online since: October 2010
Authors: Yu Zhu Zhang, Wei Gang Han, Chang Qing Hu
The weight-loss rate was calculated from the data in Fig. 1, and the DTG curves are shown in Fig. 2.
Fig. 4 shows the relationship between 1-(1-x)^(1/3) and reaction time. The value of 1-(1-x)^(1/3) for each reaction time t was calculated using the data in Fig. 3.
Reaction Rate and Activation Energy. Rearranging (5) gives k = [1-(1-x)^(1/3)]/t (6), where k can be calculated by using the data in Fig. 5.
Taking the natural logarithm of both sides of Equation (7) gives ln k = ln A - Ea/(RT) (8). Using the data from the peak part of the DTG curves, which has the fastest weight-loss rate, ln k was plotted versus 1/T in Fig. 5.
Table 2. Activation energy and frequency factor for each reaction.
Atmosphere | Activation Energy Ea (kJ/mol) | Frequency Factor A | Correlation Coefficient r
Ar | 337.590 | 1.13×10^10 | 0.9918
N2 | 439.611 | 7×10^13 | 0.9926
According to the data given in Table 2, the empirical formulas relating the reduction reaction rate to temperature were obtained.
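A short worked example of the fit behind Table 2 is sketched below: since ln k = ln A - Ea/(RT), a linear fit of ln k against 1/T gives Ea from the slope and A from the intercept. The temperature and rate-constant values in the snippet are placeholders, not the paper's measured data.

```python
import numpy as np

# Arrhenius fit: ln k = ln A - Ea/(R*T), so a straight-line fit of ln k
# against 1/T yields Ea from the slope and A from the intercept.
R = 8.314  # gas constant, J/(mol*K)

T = np.array([1350.0, 1400.0, 1450.0, 1500.0])   # temperature, K (assumed values)
k = np.array([2.1e-3, 5.6e-3, 1.4e-2, 3.2e-2])   # rate constants (assumed values)

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy, J/mol
A = np.exp(intercept)    # frequency (pre-exponential) factor

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3e}")
```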