
Mar. 20, 2018, 12:00-14:00

Room 202, Building 37

How does computational learning change when one cannot store all the examples one sees in memory? This question has seen a burst of interest in the past couple of years, leading to the surprising theorem that there exist simple concepts (parities) that require an extraordinary amount of time to learn unless one has quite a lot of memory. In this work we show that in fact most concepts cannot be learned without sufficient memory. This subsumes the aforementioned theorem and implies similar results for other concepts of interest. The new results follow from a general combinatorial framework that we developed to prove lower bounds for space-bounded learning.
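To make the memory/time trade-off concrete, here is an illustrative sketch (not from the talk) of the parity learning problem: a hidden vector s in {0,1}^n labels each random example x with the inner product <x, s> mod 2. A learner that can afford to store about n examples, roughly Theta(n^2) bits of memory, recovers s efficiently by Gaussian elimination over GF(2); the lower bounds discussed in the talk concern learners denied exactly that much memory. The function names below are hypothetical.

```python
import random


def parity_example(s, n):
    """Draw one labeled example (x, <x, s> mod 2) for the hidden parity s."""
    x = [random.randrange(2) for _ in range(n)]
    label = sum(xi * si for xi, si in zip(x, s)) % 2
    return x, label


def learn_parity(examples, n):
    """Recover the hidden parity by Gaussian elimination over GF(2).

    Storing ~n examples takes Theta(n^2) bits of memory -- the resource
    that the space-bounded lower bounds deny the learner.
    """
    # Each row is an example vector with its label appended (augmented matrix).
    rows = [x + [label] for x, label in examples]
    pivot_row = 0
    for col in range(n):
        # Find a row with a 1 in this column to serve as the pivot.
        for r in range(pivot_row, len(rows)):
            if rows[r][col]:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                break
        else:
            continue  # no pivot in this column; more examples needed
        # Eliminate the column from every other row (XOR is addition in GF(2)).
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    # In reduced row echelon form, a row with its pivot in column i gives s[i].
    s = [0] * n
    for row in rows:
        for i in range(n):
            if row[i]:
                s[i] = row[-1]
                break
    return s


if __name__ == "__main__":
    n = 16
    secret = [random.randrange(2) for _ in range(n)]
    # ~2n random examples make a full-rank system overwhelmingly likely.
    examples = [parity_example(secret, n) for _ in range(2 * n)]
    print(learn_parity(examples, n) == secret)
```

With unbounded memory the problem is easy; the theorem in the abstract says that once memory drops well below what this elimination-based strategy uses, no algorithm can learn parities without an extraordinary number of examples.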

Joint work with Michal Moshkovitz, Hebrew University.

Bio:

Dana Moshkovitz is an associate professor of Computer Science at UT Austin. Her research is in theoretical computer science, much of it focused on the limitations of approximation algorithms and on probabilistic checking of proofs. Dana did her PhD at the Weizmann Institute in Israel. Her thesis was a co-winner of the Nessyahu Prize for the best math PhD thesis in Israel in 2009, and part of this work received the FOCS 2008 Best Paper Award. Dana went on to spend a couple of years at Princeton University and the Institute for Advanced Study before joining MIT as an assistant professor. She is a recipient of MIT EECS's Jerome Saltzer teaching award.