HPRD: High Performance Tiered Replica Distribution in Cluster Storage System

Abstract:

A cluster storage system consists of a variety of different machines: some nodes offer high performance, others lower performance. Traditional cluster data-distribution schemes spread data and I/O across machines purely to balance load, so high-performance nodes generally cannot realize their full potential while the lower-performance nodes become the bottleneck. To address this issue, we propose High Performance Tiered Replica Distribution (HPRD), a replica data-distribution scheme for cluster storage systems. HPRD allows high-performance nodes to realize their full potential by assigning them a larger share of the high-performance I/O load, so the lower-performance nodes no longer act as a performance bottleneck. HPRD thus improves the performance of the cluster system, raises its resource utilization rate, and reduces inefficient use of resources.
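The preview does not include HPRD's placement algorithm itself, but its core idea, steering a larger replica and I/O share toward faster nodes instead of balancing load evenly, can be sketched as performance-weighted replica placement. The following minimal Python sketch is an illustration under that assumption; the node names, weights, and selection scheme are hypothetical, not the authors' actual method.

import random

# Hypothetical performance weights per node (higher = faster tier).
# Names and values are illustrative assumptions.
NODES = {
    "fast-node-1": 4.0,
    "fast-node-2": 4.0,
    "slow-node-1": 1.0,
    "slow-node-2": 1.0,
    "slow-node-3": 1.0,
}

def place_replicas(num_replicas):
    """Choose distinct nodes with probability proportional to their
    performance weight, so faster nodes absorb more replicas and,
    in turn, a larger share of the I/O load."""
    candidates = dict(NODES)
    chosen = []
    for _ in range(num_replicas):
        names = list(candidates)
        weights = [candidates[n] for n in names]
        pick = random.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del candidates[pick]  # never place two replicas on one node
    return chosen

print(place_replicas(3))  # e.g. ['fast-node-1', 'slow-node-3', 'fast-node-2']

Under uniform balancing each node would hold the same expected number of replicas; with the weights above, a fast node is four times as likely as a slow node to receive any given replica, so the slow tier stops being the throughput bottleneck.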

Info:

Pages: 2375-2378

Online since: August 2013

Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved
