Random vs. optimal data for high-dimensional approximation
Language of the talk title:
English
Original abstract:
In this talk we give an overview of some recent and not so recent developments in the area of approximation of functions based on function evaluations.
The emphasis is on information-based complexity and the worst-case setting, i.e., we ask for the minimal amount of information (a.k.a. measurements) needed by any algorithm to achieve a prescribed error for all inputs, essentially ignoring implementation issues.
However, it turns out that in many cases, certain (unregularized) least squares methods based on "random" information, such as function evaluations, can catch up with arbitrary algorithms based on arbitrary linear information, i.e., the best we can do in theory.
After a short introduction to the field, we will discuss the following:
(1) random data for L_2-approximation in Hilbert spaces,
(2) approximation in other norms for general classes of functions, and
(3) "Does random data contain optimal data?" (Spoiler: The answer is often: Yes!)
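The least squares approach from random function evaluations mentioned above can be illustrated with a minimal sketch. All choices here (the target function, a trigonometric basis, the sample and basis sizes) are illustrative assumptions, not part of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [0, 1] (illustrative choice: smooth, periodic)
def f(x):
    return np.exp(np.sin(2 * np.pi * x))

n, m = 200, 15  # n random samples, m basis functions (assumed values)

# "Random" information: function evaluations at i.i.d. uniform points
x = rng.uniform(0.0, 1.0, n)
y = f(x)

def design(x, m):
    """Design matrix for a trigonometric basis (an illustrative basis choice)."""
    cols = [np.ones_like(x)]
    k = 1
    while len(cols) < m:
        cols.append(np.cos(2 * np.pi * k * x))
        cols.append(np.sin(2 * np.pi * k * x))
        k += 1
    return np.column_stack(cols)[:, :m]

# Unregularized least squares: minimize ||A c - y||_2 over coefficients c
A = design(x, m)
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# Estimate the L_2 error of the reconstruction on a fine grid
t = np.linspace(0.0, 1.0, 2001)
err = np.sqrt(np.mean((f(t) - design(t, m) @ c) ** 2))
print(f"estimated L2 error: {err:.2e}")
```

With a smooth periodic target, the error decays rapidly in the number of basis functions, even though the sample points are random rather than optimally placed, which is the phenomenon the talk addresses.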
I will try to introduce all the necessary concepts in detail, so I believe no prior expertise is required to follow the talk.