Fault-tolerant quantum computers promise to dramatically improve machine learning through computational speed-ups or improved model scalability. In the near term, however, the benefits of Quantum Machine Learning (QML) are less clear: the expressibility and trainability of quantum models, and of Quantum Neural Networks (QNNs) in particular, require further investigation.
Researchers at ETH Zurich and IBM Research Zurich have used tools from information geometry to define a notion of expressibility for quantum and classical models.
The effective dimension, a quantity that depends on the Fisher information, is used to prove a novel generalisation bound and to establish a robust measure of expressibility. The team showed that QNNs can achieve a significantly higher effective dimension than comparable classical neural networks. To then assess the trainability of quantum models, they connected the Fisher information spectrum to barren plateaus, the problem of vanishing gradients.
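The idea of an effective dimension built from the Fisher information can be illustrated with a short numerical sketch. The estimator below follows the general information-geometric shape of such a definition (a log-determinant of a regularised, trace-normalised Fisher matrix, averaged over parameter samples); the function name, normalisation constant, and Monte Carlo details are illustrative assumptions, not code from the paper:

```python
import numpy as np

def effective_dimension(fishers, n):
    """Monte Carlo estimate of an effective dimension from k sampled
    Fisher information matrices of shape (k, d, d), at data-set size n.
    Sketches the form d_eff ~ 2 * log E[sqrt(det(I + gamma * F_hat))] / log gamma,
    where F_hat is the Fisher matrix normalised to have average trace d.
    """
    k, d, _ = fishers.shape
    # Normalise so the average trace of the Fisher matrices equals d
    f_hat = d * fishers / np.mean(np.trace(fishers, axis1=1, axis2=2))
    gamma = n / (2 * np.pi * np.log(n))  # resolution constant (assumed form)
    # Numerically stable log-determinants: log det(I + gamma * F_hat_i)
    logdets = np.array([np.linalg.slogdet(np.eye(d) + gamma * f)[1]
                        for f in f_hat])
    x = 0.5 * logdets  # log sqrt(det(...))
    # log of the Monte Carlo mean, computed in log-space to avoid overflow
    log_mean = x.max() + np.log(np.mean(np.exp(x - x.max())))
    return 2 * log_mean / np.log(gamma)

# Toy usage: random full-rank Fisher matrices give d_eff close to d = 4
rng = np.random.default_rng(0)
a = rng.normal(size=(50, 4, 4))
fishers = a @ a.transpose(0, 2, 1)  # 50 positive semi-definite 4x4 matrices
d_eff = effective_dimension(fishers, n=10_000)
```

A model whose Fisher spectrum is concentrated in a few directions yields smaller determinants and hence a lower effective dimension, which is how the measure distinguishes more from less expressive models.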
Importantly, certain quantum neural networks can show resilience to this phenomenon and train faster than classical models due to their favourable optimisation landscapes, captured by a more evenly spread Fisher information spectrum.
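One illustrative way to quantify how "evenly spread" a Fisher information spectrum is uses the entropy of its normalised eigenvalues. This score is a hypothetical stand-in for the spectral analysis described above, not the metric used in the paper:

```python
import numpy as np

def spectrum_flatness(fisher):
    """Illustrative flatness score for a Fisher information spectrum:
    entropy of the normalised eigenvalues, divided by its maximum value.
    Returns 1.0 for a perfectly even spectrum; values near 0 indicate
    information concentrated in a few parameter directions (the regime
    associated with barren plateaus)."""
    # Clip tiny negative eigenvalues caused by floating-point noise
    eig = np.clip(np.linalg.eigvalsh(fisher), 0.0, None)
    p = eig / eig.sum()   # eigenvalues as a probability vector
    p = p[p > 0]          # ignore exact zero modes
    return float(-np.sum(p * np.log(p)) / np.log(len(eig)))
```

For example, an identity Fisher matrix scores 1.0, while a matrix with one dominant eigenvalue scores close to 0, mirroring the flat-versus-concentrated distinction drawn above.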
Their work is the first to demonstrate that well-designed quantum neural networks can offer an advantage over classical neural networks through a higher effective dimension and faster training, which the team has verified on real quantum hardware.