Search results

Online since: August 2013
Authors: Bing Wei
Introduction. Energy saving in data centers has received increasing attention recently; virtualization [1,2], as a powerful tool, improves energy efficiency and offers server consolidation and application isolation.
The clustering data, expressed as [(CPU, I/O, Net), P], thus consists of two parts: a workload part and a performance part. The workload part is used for clustering, while the performance part enforces the service-quality constraint.
To increase the accuracy of the clustering results and accelerate clustering, we define a time interval and cluster only the data within it (t represents the current time). An exponential weight is attached to each data point, so that older data contributes less influence to the clustering while newer data contributes more.
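A minimal sketch of this idea (an assumption on our part, not the paper's implementation): keep only the samples inside a recent time window and give newer samples an exponentially larger weight before clustering the workload part (CPU, I/O, Net) of each record. Window length, decay rate and data are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def weighted_recent_clustering(times, workload, t_now, window, decay, k=3):
    """times: sample timestamps; workload: (n, 3) array of (CPU, I/O, Net)."""
    recent = (times > t_now - window) & (times <= t_now)
    t_r, x_r = times[recent], workload[recent]
    # exponential weight: newest samples get weight near 1, oldest near exp(-decay*window)
    w = np.exp(-decay * (t_now - t_r))
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    km.fit(x_r, sample_weight=w)
    return km

# hypothetical usage with synthetic workload samples
rng = np.random.default_rng(0)
times = rng.uniform(0, 100, 500)
workload = rng.random((500, 3))
model = weighted_recent_clustering(times, workload, t_now=100.0, window=30.0, decay=0.1)
print(model.cluster_centers_)
```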
Model-based Optimization for Data-intensive Application on Virtual Cluster.
Xiong: A Novel Scheduling Policy for Power Reduction in Cluster with Virtual Machines.
Online since: December 2012
Authors: Guang Xue Chen, Hai Wen Wang, Jie Li
It was in charge of presenting the obtained multispectral image data, which included raster objects and vector objects, onto the internal surface.
The raster and vector objects could be processed freely; through spatial presentation they could express spectral and surface characteristics, thereby realizing the presentation of the spectral data at the data level.
It obtained spectral data from the presentation model and combined it into a demonstration image that could be visualized at the data level; the principle and effect were similar to virtual proofing.
It was responsible for demonstrating the spectral data subject assembled by the combination model through the internal procedure, and consequently offered a reference for the subsequent spectral data transformation and spectral color gamut mapping.
It was mainly in charge of appropriately processing the obtained multispectral image data, supporting the work of the presentation model, and providing data and a reference for the spectral data transformation.
Online since: December 2013
Authors: Yi Tian Li, Li Bing Huang
Fig. 1 The study area
Data processing.
The data [12-14] can be divided into four sections according to the period (Table 1).
Table 1 Suspended sediment concentration data
The hydrodynamic conditions in the study area.
Fig. 7 The vertical distribution of the SSC
The effect on the suspended sediment distribution of the reduction of the sediment flux into the sea in the offshore area.
According to the measured data at Datong, the sediment flux from the river into the sea has shown a significant decreasing trend since 1950.
Fig. 8 The variation trend of the SSC
Alternatively, the data from 1959 to 2000 and the data from 2001 to 2010 are averaged and the two periods are compared (Fig. 9).
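The comparison described for Fig. 9 amounts to averaging the series over the two periods and taking the difference. A purely illustrative sketch follows; the yearly values are placeholders, not the measured Datong data.

```python
import numpy as np

years = np.arange(1959, 2011)
ssc = np.linspace(0.60, 0.15, years.size)  # placeholder yearly mean SSC (kg/m^3)

early = ssc[(years >= 1959) & (years <= 2000)].mean()
late = ssc[(years >= 2001) & (years <= 2010)].mean()
print(f"mean SSC 1959-2000: {early:.3f} kg/m^3")
print(f"mean SSC 2001-2010: {late:.3f} kg/m^3")
print(f"relative reduction: {100 * (early - late) / early:.1f}%")
```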
Online since: February 2013
Authors: Jun Fan
According to the complex surface measurement program provided by the measuring data, and through data processing and complex surface shape reconstruction theory, the curved surface modeling module in the reverse engineering system describes the surface geometry with a mathematical model of its characteristics. Then, according to the numerical control equipment, tooling, cutting-tool types and sizes, and the comprehensively selected part-machining factors, the CAM module can determine a reasonable blank size and machining method, program the machining of the surfaces automatically, and generate a cutter location data file to the international APT standard. After post-processing, the NC code is generated [2].
Using the Pro/NC processing module, the computer geometry model (CAD) and the computer-aided manufacturing (CAM) of products are integrated, so that product machining and the manufacturing process can be planned from the manufacturing process data.
The manufacturing technology division uses Pro/NC for manufacturing process planning: from the manufacturing process data and the machining parameters, it calculates the tool path of the cutting tool relative to the machining coordinate system, called the CL DATA (cutter location data) [3]. The post-processor then converts the tool path data into machine code for the working machine tool, which performs the actual machining of the product and part model.
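A toy sketch of the post-processing step (not the Pro/NC post-processor): turning a list of cutter location points into simple 3-axis linear G-code moves. The point list, feed rate and output format are illustrative assumptions.

```python
cl_points = [
    (0.0, 0.0, 5.0),    # approach position
    (0.0, 0.0, -1.0),   # plunge
    (10.0, 0.0, -1.0),  # linear cut
    (10.0, 5.0, -1.0),
]
feed = 200.0  # mm/min, assumed

def post_process(points, feed):
    """Emit one G01 linear move per cutter location point."""
    lines = ["G90 G21"]  # absolute positioning, metric units
    for x, y, z in points:
        lines.append(f"G01 X{x:.3f} Y{y:.3f} Z{z:.3f} F{feed:.0f}")
    lines.append("M30")  # end of program
    return "\n".join(lines)

print(post_process(cl_points, feed))
```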
References
[1] H. Woo, E. Kall, Semyung Wang, et al. A new segmentation method for point cloud data. International Journal of Machine Tools & Manufacture, 42 (2002): 167-168.
[6] K.H. Lee, H. Woo, T. Suk. Data reduction methods for reverse engineering. International Journal of Advanced Manufacturing Technology, 17 (2001): 735-743.
Online since: June 2010
Authors: Fang Li, Qun Xiong Zhu
Generally, LSI can be implemented via singular-value decomposition (SVD) of the original data matrix.
So, LSI is very efficient in reducing the latent semantic dimensions of the original data set.
In the field of text mining, NMF plays an important role in dealing with high dimensional data.
Applying NMF to the document-term matrix obtained from the original document data set yields a term-feature matrix and a feature-document matrix. The term-feature matrix captures the features of the text data set, while the feature-document matrix implies the clustering of the documents in this data set.
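A minimal sketch of both decompositions using scikit-learn, with a made-up corpus. Note that scikit-learn factorizes the document-term matrix as X ≈ W H, so W and H are the transposed counterparts of the feature-document and term-feature matrices mentioned above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, NMF

docs = [
    "latent semantic indexing reduces dimensions",
    "nonnegative matrix factorization clusters documents",
    "text mining handles high dimensional data",
]

# document-term matrix (rows: documents, columns: terms)
X = TfidfVectorizer().fit_transform(docs)

# LSI: truncated SVD of the document-term matrix
lsi = TruncatedSVD(n_components=2, random_state=0)
doc_lsi = lsi.fit_transform(X)   # documents in the latent semantic space

# NMF: X ~ W @ H, with W the document-feature matrix and H the feature-term matrix
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
W = nmf.fit_transform(X)
H = nmf.components_
labels = W.argmax(axis=1)        # implied clustering of the documents
print(labels)
```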
Online since: June 2010
Authors: Mieczyslaw Siemiatkowski, Mariusz Deja
A conceptual scheme for generation of alternative process plans in the form of a network is developed, based on part design data modeling in terms of machining features.
This in turn may yield more efficient utilization of machine resources and corresponding reductions in manufacturing cycle times.
Data model for CAPP.
Subsequent features are established by searching data matrices until the need for a new setup occurs.
Online since: July 2013
Authors: Rosario Ceravolo, Alessandro de Stefano, Antonino Quattrone, E. Matta, L. Zanotti Fragonara
Hence the collected data can be considered as symptoms and used to assess the reliability and the prognosis of the monitored structure.
They are based on the parameters suggested in the literature [3] and form the basis of the assumptions applied to the experimental test data.
The statistical models of the actions and resistances are based on existing literature [7] and on the experimental data collected.
Assuming a bi-linear profile [3, 12], the deterioration rates can be estimated from the experimental data (Eq. 5).
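An illustrative sketch only (the paper's Eq. 5 is not reproduced here): estimating two deterioration rates from monitoring data under an assumed bi-linear profile with a known change point. The data and the change point are made up.

```python
import numpy as np

t = np.arange(0, 41, dtype=float)                  # years of monitoring
resistance = np.where(t < 15, 1.00 - 0.002 * t,    # placeholder "measured" data
                      0.97 - 0.008 * (t - 15))
t_break = 15.0                                     # assumed change point

before = t < t_break
r1 = np.polyfit(t[before], resistance[before], 1)[0]    # rate before the break
r2 = np.polyfit(t[~before], resistance[~before], 1)[0]  # rate after the break
print(f"deterioration rate phase 1: {r1:.4f} per year")
print(f"deterioration rate phase 2: {r2:.4f} per year")
```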
Frangopol, Structural Health Monitoring and Reliability Estimation: Long Span Truss Bridge Application with Environmental Monitoring Data, Journal of Engineering Structures, Elsevier, 30 (2008), 2347-2359
Online since: March 2015
Authors: Marion Merklein, Sebastian Suttner, Martin Rosenschon
The characterisation of the Bauschinger effect and the identification of kinematic hardening models require test setups that are able to generate cyclic stress-strain data.
To obtain accurate strain data, the optical strain measurement system ARAMIS (GOM mbH, Braunschweig, Germany) is installed to quantify the local strain in the 4.0 mm² deformation area of the tension-compression test specimen (fig. 1b) as well as in the 7.5 mm² deformation area of the modified ASTM shear test specimen (fig. 1c).
In contrast to the tension-compression test, this characterisation method needs - resulting from the shear state - an adequate transformation of the raw data for the parameter identification.
Therefore, the kinematic hardening according to Chaboche and Rousselier is identified at two different pre-strains on the basis of data from a tension-compression test and a modified ASTM shear test.
However, the data from the tension-compression test and the shear test lead to hardening parameters of a similar order of magnitude.
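As a minimal uniaxial sketch, not the calibrated model from the paper: a one-term Chaboche kinematic hardening rule integrated over a prescribed cyclic plastic-strain history, which produces the kind of cyclic stress-strain data mentioned above. The parameter values C, gamma and the yield stress are assumptions.

```python
import numpy as np

C, gamma = 20000.0, 150.0        # assumed Chaboche parameters (MPa, -)
sigma_y = 150.0                  # assumed initial yield stress (MPa)

# prescribed cyclic plastic strain history (two full cycles at +/-2%)
eps_p = 0.02 * np.sin(np.linspace(0.0, 4.0 * np.pi, 2000))

X = np.zeros_like(eps_p)         # backstress history
for i in range(1, eps_p.size):
    d_ep = eps_p[i] - eps_p[i - 1]
    # Chaboche evolution: dX = C*d_ep - gamma*X*|d_ep|
    X[i] = X[i - 1] + C * d_ep - gamma * X[i - 1] * abs(d_ep)

# stress on the yield surface while plastically flowing
stress = X + sigma_y * np.sign(np.gradient(eps_p))
print(f"backstress range over the cycles: {X.min():.1f} to {X.max():.1f} MPa")
```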
Online since: March 2007
Authors: Beatriz López, C. Iparraguirre, Ana Isabel Fernández-Calvo, J.M. Rodriguez-Ibabe
This results in a reduction of the available nucleation sites for precipitation as well as a decrease in the driving force for recrystallization along time.
Some of the aforementioned physical parameters were back determined from the comparison between experimental data and model calculations.
The activation volume, Va, is back calculated from experimental data.
As the softening kinetics are obtained from stress measurements, a hardening model incorporating the contributions of precipitation, strain and solid-solution hardening is considered, so that the model predictions can be compared with the experimental softening data.
From the data the following relationship can be written:
Online since: July 2014
Authors: Ramachandran Rajeswari, Kuttiyapillai Dhanasekaran
They expressed parallelism by creating parallel data structures and invoking the parallel operators.
Similarly, we define two documents in the same data set.
After increasing the percentage of data selection, the average accuracy was 96.8%.
The classifier was designed to make use of a held-out development test data set to avoid overfitting, so that our method shows a better result when accuracy is estimated.
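A minimal sketch of that idea (an assumption, not the paper's pipeline): holding out a development set for model selection so accuracy is never estimated on data the classifier was tuned on. The classifier choice and data are illustrative.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# split into train / development / test (60 / 20 / 20)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# choose the regularisation strength on the development set, never on the test set
best_C, best_acc = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    acc = clf.score(X_dev, y_dev)
    if acc > best_acc:
        best_C, best_acc = C, acc

final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print(f"dev-selected C={best_C}, held-out test accuracy: {final.score(X_test, y_test):.3f}")
```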
[9] Y. Jun, Z. Benyu, L. Ning, Y. Shuicheng, C. Qiansheng, F. Weiguo, Y. Qiang, X. Wensi, C. Zheng, Effective and Efficient Dimensionality Reduction for Large-Scale and Streaming Data Preprocessing, IEEE Transactions on Knowledge and Data Engineering.