Paper Title:
Parallelizing Network Coding Using CUDA
  Abstract

Network coding has emerged as a promising technique for improving network throughput and bandwidth utilization. However, its high computational complexity has kept its practicality a challenge. At the same time, most GPU-accelerated applications confine the GPU to the role of a coprocessor that merely consumes datasets transferred from the CPU. Therefore, an aggressive parallel network coding scheme is customized for the GPU using CUDA (Compute Unified Device Architecture): datasets are partitioned to exploit both thread-level and data-level parallelism, and CPU-GPU collaboration with the texture cache is introduced into decoding, so that the GPU acts not only as a data consumer but also as a data producer. Moreover, random linear network coding is parallelized on a CUDA-enabled GPU to validate the proposed techniques. Experimental results demonstrate that parallelizing network coding on a GPU-accelerated system with the proposed techniques is effective.
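
The abstract does not include the implementation itself, so the following is a minimal sketch of the encoding side of random linear network coding on a CUDA GPU, assuming coding over GF(2^8) and a one-thread-per-output-byte mapping (one grid row per encoded block for data-level parallelism). The kernel name rlnc_encode, the block/generation sizes, and the reduction polynomial 0x1D are illustrative assumptions and not the paper's actual code; the decoding path with texture-cache collaboration described in the abstract is not shown.

// Minimal sketch of GPU-side random linear network coding (encoding only).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Carry-less multiplication in GF(2^8); 0x1D is one common primitive
// polynomial (the abstract does not specify the field or polynomial).
__device__ unsigned char gf256_mul(unsigned char a, unsigned char b) {
    unsigned char p = 0;
    for (int i = 0; i < 8; ++i) {
        if (b & 1) p ^= a;
        unsigned char carry = a & 0x80;
        a <<= 1;
        if (carry) a ^= 0x1D;
        b >>= 1;
    }
    return p;
}

// encoded[g][j] = XOR over k of coeff[g][k] * data[k][j] in GF(2^8).
// Thread-level parallelism: one thread per output byte (index j).
// Data-level parallelism: one grid row (blockIdx.y) per encoded block (index g).
__global__ void rlnc_encode(const unsigned char *data,   // K original blocks, len bytes each
                            const unsigned char *coeff,  // G x K random coefficients
                            unsigned char *encoded,      // G encoded blocks, len bytes each
                            int K, int len)
{
    int g = blockIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= len) return;

    unsigned char acc = 0;
    for (int k = 0; k < K; ++k)
        acc ^= gf256_mul(coeff[g * K + k], data[k * len + j]);
    encoded[g * len + j] = acc;
}

int main() {
    const int K = 32, G = 32, len = 4096;            // illustrative sizes only
    size_t dataBytes = (size_t)K * len, encBytes = (size_t)G * len;

    unsigned char *hData  = (unsigned char *)malloc(dataBytes);
    unsigned char *hCoeff = (unsigned char *)malloc((size_t)G * K);
    unsigned char *hEnc   = (unsigned char *)malloc(encBytes);
    for (size_t i = 0; i < dataBytes; ++i)      hData[i]  = rand() & 0xFF;
    for (size_t i = 0; i < (size_t)G * K; ++i)  hCoeff[i] = rand() & 0xFF;

    unsigned char *dData, *dCoeff, *dEnc;
    cudaMalloc(&dData, dataBytes);
    cudaMalloc(&dCoeff, (size_t)G * K);
    cudaMalloc(&dEnc, encBytes);
    cudaMemcpy(dData,  hData,  dataBytes,     cudaMemcpyHostToDevice);
    cudaMemcpy(dCoeff, hCoeff, (size_t)G * K, cudaMemcpyHostToDevice);

    dim3 block(256);
    dim3 grid((len + block.x - 1) / block.x, G);
    rlnc_encode<<<grid, block>>>(dData, dCoeff, dEnc, K, len);
    cudaMemcpy(hEnc, dEnc, encBytes, cudaMemcpyDeviceToHost);

    printf("first byte of first encoded block: %u\n", hEnc[0]);
    cudaFree(dData); cudaFree(dCoeff); cudaFree(dEnc);
    free(hData); free(hCoeff); free(hEnc);
    return 0;
}

A usage note on the mapping: each encoded block is an independent linear combination of the K original blocks, so assigning one grid row per encoded block and one thread per byte keeps all memory accesses coalesced along the byte dimension, which matches the thread-level/data-level partitioning the abstract describes.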

  Info
Periodical
Advanced Materials Research (Volume 186)
Edited by
Wenya Tian and Linli Xu
Pages
484-488
DOI
10.4028/www.scientific.net/AMR.186.484
Citation
X. B. Gan, L. Shen, Z. Y. Wang, X. Lai, Q. Zhu, "Parallelizing Network Coding Using CUDA", Advanced Materials Research, Vol. 186, pp. 484-488, 2011
Online since
January 2011
