Much attention in learning theory has been devoted to characterizing what can and cannot be learned. One such direction is the combinatorial notion of sample compression.
The first full equivalence between the two notions was proved by Moran and Yehudayoff (2016).
We give an algorithmically efficient version of this learner-to-compression scheme conversion.
In extending this technique to real-valued function classes, we also obtain, for the first time (regardless of efficiency or boundedness), an efficient regression-to-sample-compression converter that guarantees uniform approximate reconstruction.
Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract real-valued learning algorithms.
The talk assumes no prior knowledge of learning theory.
This is joint work with Prof. Aryeh Kontorovich and Prof. Steve Hanneke.