Testing and Performance Analysis of FCA Algorithm Accelerating IBM Platform MPI


Abstract:

MPI is one of the most important standards in high-performance computing, and MPI performance work generally focuses on collective communications. FCA (Fabric Collective Accelerator) is a new method for accelerating collective communications. Based on tests in a high-performance computing environment, this paper analyses the results of FCA accelerating IBM Platform MPI with and without shared memory, FCA's operating principle, and the integration between IBM Platform MPI and FCA. The paper may also serve as a useful reference for high-performance computing with FCA.
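Such testing is usually driven by a collective micro-benchmark run once with FCA enabled and once without. The C sketch below times MPI_Allreduce over MPI_COMM_WORLD; the message size, iteration count, and timing loop are illustrative assumptions rather than the paper's exact configuration, and only standard MPI calls are used, so the same binary can be launched under IBM Platform MPI in both configurations.

/* Minimal MPI_Allreduce micro-benchmark: a sketch of the kind of collective
 * test used to compare runs with and without FCA. Message size and iteration
 * count below are assumed values for illustration. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int count = 1024;   /* doubles per MPI_Allreduce (assumed size) */
    const int iters = 1000;   /* timed repetitions (assumed) */
    int rank;
    double *sendbuf, *recvbuf, t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    sendbuf = malloc(count * sizeof(double));
    recvbuf = malloc(count * sizeof(double));
    for (int i = 0; i < count; i++)
        sendbuf[i] = (double)rank;

    /* Warm up the collective and synchronize before timing. */
    MPI_Allreduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);

    t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("MPI_Allreduce, %d doubles: %.3f us per call\n",
               count, (t1 - t0) / iters * 1e6);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Launched with, for example, mpirun -np 64 ./allreduce_bench, the per-call latency reported by rank 0 can then be compared between the FCA-enabled and FCA-disabled runs.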


Pages: 429-433

Online since: July 2014


Copyright: © 2014 Trans Tech Publications Ltd. All Rights Reserved


