Arkaprava Choudhury

Towards realistic quantum advantage in learning tasks - QIP 2022


A series of papers from late 2021, presented at the 25th QIP conference last week, demonstrated a potential quantum advantage over classical algorithms in learning tasks, and showed how the additional quantum processing required for some of these tasks could be implemented on modern noisy devices.


In particular, Hsin-Yuan Huang et al. proved a better bound on the number of training points required for learning various properties of a physical system. In certain tasks, quantum machines can even learn from exponentially fewer experiments than would otherwise be required. This exponential advantage holds for predicting properties of physical systems, performing quantum principal component analysis on noisy states, and learning approximate models of physical dynamics.


An interesting detail to note here is the problem of principal component analysis, which was famously dequantized by Ewin Tang in 2018 and shown to be classically simulable. Jordan Cotler, Hsin-Yuan Huang, and Jarrod McClean remark that this demonstrates an interesting relationship between classical algorithms with sample and query (SQ) access and quantum algorithms, and they discuss the cases in which it makes sense to compare the two approaches.
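For readers unfamiliar with the SQ model, here is a minimal toy sketch (my own illustration, not taken from the papers; the class and method names are hypothetical). SQ access to a vector means being able to read individual entries and to draw indices with probability proportional to their squared magnitudes, a classical stand-in for measuring a quantum state whose amplitudes encode the vector.

import numpy as np

class SQAccess:
    """Toy 'sample and query' (SQ) access to a vector v (illustrative only).

    query(i): read the entry v[i].
    sample(): draw index i with probability |v[i]|^2 / ||v||^2, mimicking a
    measurement of a quantum state whose amplitudes encode v.
    """

    def __init__(self, v):
        self.v = np.asarray(v, dtype=complex)
        self.sq_norm = float(np.sum(np.abs(self.v) ** 2))

    def query(self, i):
        return self.v[i]

    def norm(self):
        return np.sqrt(self.sq_norm)

    def sample(self, rng=np.random.default_rng()):
        probs = np.abs(self.v) ** 2 / self.sq_norm
        return int(rng.choice(len(self.v), p=probs))

# Usage: entries with larger magnitude are sampled more often.
sq = SQAccess([3.0, 0.0, 4.0])
print(sq.query(2), sq.norm())            # (4+0j) 5.0
print([sq.sample() for _ in range(5)])   # only indices 0 and 2, never 1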


In a related work by Sitan Chen et al., with Hsin-Yuan Huang as a co-author, it was shown that there exist properties of quantum states that can be learnt with exponentially fewer measurements once the restrictions on the number of entangled measurements across replicas of the state are relaxed. This establishes a hierarchy of learning tasks based on the number of entangled measurements needed to perform them efficiently.
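A standard illustration of the power of entangled multi-copy measurements (my own example, not drawn from the paper itself) is purity estimation: the SWAP observable measured jointly on two copies of a state ρ has expectation value Tr(ρ²), whereas single-copy strategies essentially have to reconstruct ρ by tomography first. The short check below verifies the identity Tr[SWAP(ρ⊗ρ)] = Tr(ρ²) that underlies this two-copy measurement.

import numpy as np

def random_mixed_state(dim, rank, seed=0):
    """Random density matrix of the given rank (Wishart construction)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(dim, rank)) + 1j * rng.normal(size=(dim, rank))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def swap_operator(dim):
    """SWAP operator on C^dim x C^dim, i.e. SWAP |i>|j> = |j>|i>."""
    swap = np.zeros((dim * dim, dim * dim))
    for i in range(dim):
        for j in range(dim):
            swap[i * dim + j, j * dim + i] = 1.0
    return swap

dim = 4
rho = random_mixed_state(dim, rank=2)

# Expectation of the SWAP observable on two copies of rho ...
two_copy_estimate = np.real(np.trace(swap_operator(dim) @ np.kron(rho, rho)))
# ... equals the purity Tr(rho^2), which a single-copy strategy would
# have to recover from a full tomographic reconstruction of rho.
purity = np.real(np.trace(rho @ rho))

print(two_copy_estimate, purity)  # the two values agree up to numerical error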


Another recent development towards realizing quantum advantage appears in a paper by Matthias C. Caro et al., which suggests potential speed-ups for compiling unitaries into a polynomial number of low-level gates, a task generally believed to require exponential-size training data. They provide a framework for achieving an acceptable level of generalization error, the error incurred when making predictions on a testing data set, despite training on a limited number of data points. Potential applications of this technique to detecting phase transitions and to quantum dynamical simulation are also discussed.
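To give a flavour of the kind of guarantee involved (paraphrased from memory, so the exact form, constants, and logarithmic factors should be checked against the paper), the generalization error of a quantum machine learning model with T trainable gates, trained on N data points, scales roughly as

% hedged sketch of the flavour of the bound; see Caro et al. for the precise statement
\operatorname{gen}(\theta) \;\in\; O\!\left(\sqrt{\frac{T \log T}{N}}\right),

so a training set whose size grows only polynomially with the number of trainable gates already suffices to keep the generalization error small.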


