Tensor Train Multiplication (TTM)

Alexios A Michailidis, Christian Fenton, Martin Kiffner.

We present the Tensor Train Multiplication (TTM) algorithm for the elementwise multiplication of two tensor trains with bond dimension χ. The computational complexity and memory requirements of the TTM algorithm scale as χ³ and χ², respectively. This is a significant improvement over the conventional approach, whose computational complexity scales as χ⁴ and whose memory requirements scale as χ³. We benchmark the TTM algorithm on flows obtained from artificial turbulence generation and numerically demonstrate its improved runtime and memory scaling relative to the conventional approach. Owing to this dramatic improvement in memory scaling, the TTM algorithm paves the way towards GPU-accelerated tensor network simulations of computational fluid dynamics problems with large bond dimensions.
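
To make concrete what operation the paper accelerates, the sketch below (Python/NumPy; all function names are our own illustrative choices, not the authors' code and not the TTM algorithm itself) implements the naive elementwise (Hadamard) product of two tensor trains, in which each output core is built from Kronecker products of the corresponding input core slices. The bond dimension of the result is the product of the input bond dimensions, which illustrates the growth in cost and memory that motivates TTM.

# Minimal sketch (assumed illustrative code, not the authors' TTM implementation)
# of the naive elementwise (Hadamard) product of two tensor trains.
import numpy as np

def random_tt(dims, chi, seed=0):
    # Random tensor train; core k has shape (r_{k-1}, n_k, r_k).
    rng = np.random.default_rng(seed)
    ranks = [1] + [chi] * (len(dims) - 1) + [1]
    return [rng.standard_normal((ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(dims)]

def tt_to_dense(cores):
    # Contract a tensor train into a dense tensor (for verification only).
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=1)
    return out.reshape(out.shape[1:-1])

def tt_hadamard_naive(a_cores, b_cores):
    # For each physical index i, C_k[:, i, :] = kron(A_k[:, i, :], B_k[:, i, :]),
    # so the output bond dimension is the product of the input bond dimensions.
    c_cores = []
    for A, B in zip(a_cores, b_cores):
        n = A.shape[1]
        C = np.stack([np.kron(A[:, i, :], B[:, i, :]) for i in range(n)], axis=1)
        c_cores.append(C)
    return c_cores

if __name__ == "__main__":
    dims = [4, 4, 4, 4]
    a = random_tt(dims, chi=3, seed=1)
    b = random_tt(dims, chi=3, seed=2)
    c = tt_hadamard_naive(a, b)
    # The naive product is exact, but the bond dimension has grown from 3 to 9.
    assert np.allclose(tt_to_dense(c), tt_to_dense(a) * tt_to_dense(b))
    print([core.shape for core in c])

Running the example confirms that the naive product reproduces the dense Hadamard product exactly, while the bond dimension of the intermediate cores grows from 3 to 9; avoiding this blow-up is the point of the paper's algorithm.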

Cite as (BibTeX):

@misc{michailidis2024tensortrainmultiplication,
  title={Tensor Train Multiplication},
  author={Alexios A Michailidis and Christian Fenton and Martin Kiffner},
  year={2024},
  eprint={2410.19747},
  archivePrefix={arXiv},
  primaryClass={physics.comp-ph},
  url={https://arxiv.org/abs/2410.19747},
}

The QCFD (Quantum Computational Fluid Dynamics) project is funded under the European Union’s Horizon Programme (HORIZON-CL4-2021-DIGITAL-EMERGING-02-10), Grant Agreement 101080085 QCFD.