Exponential Concentration in Quantum Generative Modeling and Quantum Kernel Methods

Zoe Holmes
EPFL (Ecole Polytechnique Fédérale de Lausanne)

It is by now well established that standard losses for variational quantum algorithms exponentially concentrate (in the number of qubits) towards a fixed value, leading to an exponential scaling of the number of measurements required for successful training. However, comparatively little attention has been paid to the role of exponential concentration in other quantum machine learning models. Here we discuss the causes and consequences of exponential concentration in quantum generative modeling and quantum kernel methods. A common theme of both accounts is the importance of explicitly considering the effect of shot noise when analysing exponential concentration.
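For concreteness, the phenomenon can be stated as follows (a sketch of the standard definition used in the barren-plateau literature; the notation, with $n$ qubits and parameters $\theta$, is the conventional one and is not taken verbatim from the talk): a quantity $X(\theta)$ is said to exponentially concentrate towards a value $\mu$ if

$\mathrm{Var}_{\theta}[X(\theta)] \leq \beta\,, \qquad \beta \in O(1/b^{n}) \ \text{for some } b > 1\,.$

By Chebyshev's inequality, $\Pr_{\theta}\big[\,|X(\theta)-\mu| \geq \epsilon\,\big] \leq \beta/\epsilon^{2}$, so for randomly chosen $\theta$ the quantity lies exponentially close to $\mu$ with overwhelming probability. Since estimating an expectation value to precision $\epsilon$ in the presence of shot noise requires on the order of $1/\epsilon^{2}$ measurements, resolving such exponentially small deviations demands a number of shots growing exponentially in $n$, which is the measurement cost referred to above.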

