High Level Synthesis (HLS) based Hardware Implementation of Tree Tensor Networks (TTN)

GUPTA, PRATEEK
2024/2025

Abstract

This thesis introduces a novel HLS-driven methodology for efficiently mapping Tree Tensor Networks (TTNs) onto FPGA hardware. TTNs provide a scalable representation for high-dimensional data and have proven valuable in quantum simulation and machine learning. An advantage of TTNs in machine-learning applications is their explainability, in contrast to deep learning models that act as black boxes. Conventional implementations in Verilog or VHDL incur lengthy design iterations as network sizes grow. By contrast, our HLS approach dramatically shortens development time, enabling rapid exploration of varying TTN topologies and feature configurations. We leverage loop unrolling and pipelining to optimize tensor contractions and integrate parameterizable processing elements to accommodate different network depths. Experimental results on multiple machine-learning datasets show that the synthesized hardware matches software-level accuracy as long as the quantization precision remains sufficient. The primary application of the TTN model is the classification of high-energy-physics data, which also served to validate the design. Overall, this work demonstrates that HLS not only accelerates design cycles but also produces hardware realizations of TTNs that meet performance and efficiency targets.
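
To illustrate the kind of optimization the abstract refers to, the following is a minimal sketch (not the thesis code) of a single TTN node contraction in Vitis HLS C++: a rank-3 weight tensor contracts the feature vectors of two child nodes, with unrolling and pipelining pragmas of the sort mentioned above. The fixed-point width, bond dimensions, and all names are illustrative assumptions.

// Hypothetical sketch of one TTN node contraction in Vitis HLS C++.
// Bond dimensions and the fixed-point format are assumed, not taken
// from the thesis.
#include <ap_fixed.h>

typedef ap_fixed<16, 6> dtype;  // assumed 16-bit fixed-point quantization

const int IN  = 4;  // assumed child bond dimension
const int OUT = 4;  // assumed parent bond dimension

// out[o] = sum_{i,j} w[o][i][j] * left[i] * right[j]
void ttn_node(const dtype w[OUT][IN][IN],
              const dtype left[IN],
              const dtype right[IN],
              dtype out[OUT]) {
    // Partition arrays so the unrolled inner loops can read all
    // elements in parallel.
#pragma HLS ARRAY_PARTITION variable=w complete dim=2
#pragma HLS ARRAY_PARTITION variable=w complete dim=3
#pragma HLS ARRAY_PARTITION variable=left complete
#pragma HLS ARRAY_PARTITION variable=right complete
Out_loop:
    for (int o = 0; o < OUT; o++) {
#pragma HLS PIPELINE II=1
        dtype acc = 0;
In_loop:
        for (int i = 0; i < IN; i++) {
#pragma HLS UNROLL
            for (int j = 0; j < IN; j++) {
#pragma HLS UNROLL
                acc += w[o][i][j] * left[i] * right[j];
            }
        }
        out[o] = acc;
    }
}

With the inner loops fully unrolled, each pipelined iteration of the output loop computes one contraction result per clock cycle; making IN and OUT compile-time parameters is one way to obtain the parameterizable processing elements the abstract describes.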

Keywords

Tree Tensor Networks
High Level Synthesis
FPGA

Full text: GUPTA_PRATEEK.pdf (8.81 MB, Adobe PDF, open access)

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/87357