Virtual Talk: Training quantum neural networks with an unbounded loss function

Maria Kieferova
University of Technology Sydney

Quantum neural networks (QNNs) are a framework for creating quantum algorithms that promises to combine the speedups of quantum computation with the widespread successes of machine learning. A major challenge in QNN development is a concentration-of-measure phenomenon known as a barren plateau, which leads to exponentially small gradients for a range of QNN models. In this work, we examine the assumptions that give rise to barren plateaus and show that an unbounded loss function can circumvent the existing no-go results. We propose a training algorithm that minimizes the maximal Rényi divergence of order two and present techniques for gradient computation. We compute the gradients in closed form for Unitary QNNs and Quantum Boltzmann Machines and provide sufficient conditions for the absence of barren plateaus in these models. We demonstrate our approach in two use cases: thermal state learning and Hamiltonian learning. In our numerical experiments, we observed rapid convergence of the training loss and frequently achieved 99% average fidelity in fewer than 100 epochs.
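For reference, a sketch of the loss function named above, assuming the standard definition of the maximal (geometric) Rényi divergence specialized to order two for density operators \rho and \sigma (the talk's exact conventions may differ):

  \hat{D}_2(\rho \,\|\, \sigma) = \log \mathrm{Tr}\!\left( \rho\, \sigma^{-1} \rho \right)

Because \sigma^{-1} diverges as \sigma approaches a singular state, this quantity is unbounded, which is the property that places it outside the bounded-loss assumptions underlying the barren plateau no-go results.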

