Learning better models for computer vision, Thomas Pock
Until recently, learning was seldom used in practical applications of computational imaging and machine vision. Recent progress in computing power, together with new algorithmic insights, now makes these techniques feasible and exploitable. According to Bayes' theorem, the posterior distribution of a vision problem is proportional to the product of a prior distribution and a data likelihood. The classical maximum a-posteriori (MAP) estimate is the sample that maximizes the posterior probability, or equivalently minimizes the negative logarithm of the posterior probability. This leads to the minimization of a cost function given by the sum of a regularization term (prior) and a data fidelity term (data likelihood). Rather than relying on handcrafted models for these terms, we use machine learning techniques to learn "better" models. In a first application, we show how to learn a powerful regularization term for high-quality image reconstruction from compressed sensing MRI. Our learned algorithm allows the MRI acquisition time to be sped up by a factor of 4-6. In a second application, we show how to learn the data fidelity term for a stereo algorithm. Our learned stereo algorithm yields state-of-the-art results on a variety of depth estimation benchmarks while running in real time.
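The MAP derivation summarized above can be written out explicitly. Here x denotes the unknown quantity (e.g. an image or depth map) and y the measured data; these symbols are assumptions for illustration, not notation from the abstract:

```latex
p(x \mid y) \;\propto\; p(y \mid x)\, p(x)
\qquad\text{(Bayes' theorem, up to normalization)}

\hat{x}_{\mathrm{MAP}}
  = \arg\max_x \, p(x \mid y)
  = \arg\min_x \;
    \underbrace{-\log p(y \mid x)}_{\text{data fidelity } D(x,y)}
    \;+\;
    \underbrace{-\log p(x)}_{\text{regularizer } R(x)}
```

Minimizing D(x, y) + R(x) is the cost function referred to in the text; the two applications replace the handcrafted R (MRI reconstruction) or the handcrafted D (stereo) with learned models.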
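As a point of reference, the following is a minimal sketch of the handcrafted baseline that the abstract contrasts with learned models: gradient descent on a quadratic data fidelity term plus a quadratic smoothness prior, applied to 1-D signal denoising. The function name, step size, and choice of prior are illustrative assumptions, not the authors' learned algorithms.

```python
import numpy as np

def map_denoise(y, lam=1.0, step=0.1, iters=200):
    """Minimize D(x, y) + lam * R(x) by gradient descent, where
    D(x, y) = 0.5 * ||x - y||^2        (data fidelity / likelihood)
    R(x)    = 0.5 * sum((x[i+1]-x[i])^2)  (handcrafted smoothness prior)
    """
    x = y.copy()
    for _ in range(iters):
        grad_data = x - y                    # gradient of the data term
        dx = np.diff(x)
        grad_reg = np.zeros_like(x)          # gradient of the prior term
        grad_reg[:-1] -= dx                  # -(x[i+1] - x[i]) at index i
        grad_reg[1:] += dx                   # +(x[i] - x[i-1]) at index i
        x -= step * (grad_data + lam * grad_reg)
    return x
```

In the learned setting described in the abstract, the fixed smoothness prior (for MRI reconstruction) or the fixed quadratic data term (for stereo matching) is replaced by a model whose parameters are trained from data.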