High-Level Synthesis Pipeline Optimization Method for ANN Backpropagation Algorithm Using FPGA SoC


Abstract:

Research on the Backpropagation Artificial Neural Network (BANN) method still depends on translating the algorithm into a hardware description language (HDL). Converting the algorithm into HDL that can be synthesized and implemented on a Field-Programmable Gate Array (FPGA) System-on-Chip (SoC) is a complex and challenging task. This study examines the results of High-Level Synthesis (HLS) of the BANN algorithm on a Zynq-7000 series XC7Z010CLG400-1 FPGA SoC. Using the Vivado HLS tool, the C simulation and the synthesis results correspond closely, confirming consistency between the software and hardware implementations. The most effective optimization strategy pipelines the feedforward stage, which passes data directly from the input layer to the hidden layer, and the backward stage, which updates the weights connecting the input units to the hidden layer. With pipelining, the computation completes in roughly 5,340 nanoseconds (178 cycles × 30 ns), about 2.2 times faster than the unoptimized design (398 cycles × 30 ns). Latency is reduced by 220 clock cycles, approximately 55.28%. Average device utilization is 32.75%.
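To make the optimization concrete, the sketch below shows how loop pipelining of the two stages named in the abstract might look in Vivado HLS C code. The layer sizes, array names, activation function, and the II=1 pipeline setting are illustrative assumptions and are not taken from the paper.

#include <math.h>

#define N_IN  4   /* assumed number of input units  */
#define N_HID 8   /* assumed number of hidden units */

/* Forward stage: input layer to hidden layer. The PIPELINE pragma lets a
   new multiply-accumulate iteration start every clock cycle. */
void forward_hidden(const float x[N_IN],
                    const float w_ih[N_HID][N_IN],
                    const float b_h[N_HID],
                    float h[N_HID])
{
    for (int j = 0; j < N_HID; j++) {
        float acc = b_h[j];
        for (int i = 0; i < N_IN; i++) {
#pragma HLS PIPELINE II=1
            acc += w_ih[j][i] * x[i];
        }
        h[j] = 1.0f / (1.0f + expf(-acc));  /* sigmoid activation (assumed) */
    }
}

/* Backward stage: update of the weights connecting the input units to the
   hidden layer, also pipelined. */
void update_hidden_weights(const float x[N_IN],
                           const float delta_h[N_HID],
                           float w_ih[N_HID][N_IN],
                           float lr)
{
    for (int j = 0; j < N_HID; j++) {
        for (int i = 0; i < N_IN; i++) {
#pragma HLS PIPELINE II=1
            w_ih[j][i] += lr * delta_h[j] * x[i];
        }
    }
}

Using the abstract's figures, pipelining of this kind brings the loop latency from 398 to 178 clock cycles: a saving of 220 cycles (220/398 ≈ 55.3%), a speed-up of 398/178 ≈ 2.2×, and, at the reported 30 ns clock period, a total latency of 178 × 30 ns ≈ 5,340 ns.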


Info:

Periodical:

Engineering Headway (Volume 27)

Pages:

142-149

Online since:

October 2025

Copyright:

© 2025 Trans Tech Publications Ltd. All Rights Reserved
