Events

4 Jun 2024
12:00-13:00

Auditorium 202, Building 37

The Friedman awards committee is pleased to announce that this year's winners are:
Doron Cohen and Tom Yaacov

The award ceremony will be held in a festive colloquium on Tuesday, 4 June, at 12:00 in Auditorium 202, Computer Science Building no. 37, during which each of the winners will describe a part of their research.

Doron Cohen: Learning infinitely many coins simultaneously
Presented at COLT 2023, joint work with Aryeh Kontorovich

Abstract:

Inferring the bias of a single coin from independent flips is a well-understood problem, technically known as estimating the Bernoulli parameter p. In particular, how the sample size (number of flips) n, the precision ε, and the confidence δ constrain each other is known within tight upper and lower bounds. When we want to estimate the bias of d coins simultaneously, this problem is well-understood as well, at least in the worst case over the Bernoulli parameters pᵢ. What if we want to estimate infinitely many pᵢ's simultaneously?


A simple argument shows that this is impossible in the "worst case" over the pᵢ's; thus, any result must depend on their actual values. If we define M as the expected maximum deviation between any pᵢ and its estimate, we want to understand for which sequences pᵢ this quantity decays to zero and at what rate. We obtain tight, exhaustive answers to these questions.
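
The behavior of M can be probed empirically. The snippet below is a minimal sketch, not the paper's method: it assumes a hypothetical fast-decaying bias sequence pᵢ = 1/i², truncated at d coins, and reports how the empirical maximum deviation shrinks as the number of flips n grows.

```python
# Minimal illustration (not the paper's algorithm): estimate many Bernoulli
# parameters from n flips each and measure the maximum deviation, for an
# assumed decaying sequence p_i = 1 / i^2 truncated at d coins.
import numpy as np

rng = np.random.default_rng(0)

def max_deviation(p, n):
    """Flip each coin i (bias p[i]) n times and return max_i |p_hat_i - p_i|."""
    heads = rng.binomial(n, p)        # number of heads per coin
    p_hat = heads / n                 # empirical estimates
    return np.max(np.abs(p_hat - p))

d = 10_000                            # truncation of the infinite sequence
p = 1.0 / np.arange(1, d + 1) ** 2    # hypothetical fast-decaying biases

for n in (100, 1_000, 10_000):
    devs = [max_deviation(p, n) for _ in range(20)]
    print(f"n = {n:>6}: mean max deviation ~ {np.mean(devs):.4f}")
```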


Tom Yaacov: Keeping Behavioral Programs Alive: Specifying and Executing Liveness Requirements
Authors: Tom Yaacov, Achiya Elyasaf, Gera Weiss
Abstract:
We present an enhancement to the behavioral programming (BP) modeling language designed to allow modular specifications of reactive systems that are directly aligned with their requirements. In addition to specifying what the system may, must, and must not do, we propose allowing modules to specify what must be completed. We show how this addition is needed to align the specification with common requirement patterns. Specifically, we demonstrate how safety requirements, such as "don't do X after Y," which can be modeled with existing BP idioms, can be complemented with liveness requirements, such as "do X at least three times," which can only be modeled with our new idioms. We provide formal semantics and two execution mechanisms for the new syntax, one based on a translation to Büchi automata and the other based on a Markov decision process (MDP). The latter approach offers the possibility of utilizing deep reinforcement learning (DRL) algorithms, which have the ability to effectively handle large software systems. The evaluation of our approach demonstrates that our idioms enable the direct specification of well-known requirement patterns from the literature. We also provide qualitative and quantitative assessments of the approach's effectiveness using a proof-of-concept tool.
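
To make the two requirement patterns concrete, here is a toy sketch in plain Python, not the paper's BP idioms or tool: it checks a safety pattern ("don't do X after Y") and a finite-trace reading of a liveness pattern ("do X at least three times") over a hypothetical event trace.

```python
# Toy illustration (not the BP idioms from the paper): a safety check and a
# finite-trace liveness check over a hypothetical event trace.
def violates_safety(trace, x="X", y="Y"):
    """Safety: X must never occur after Y has occurred."""
    seen_y = False
    for event in trace:
        if event == y:
            seen_y = True
        elif event == x and seen_y:
            return True        # X occurred after Y -> violation
    return False

def satisfies_liveness(trace, x="X", at_least=3):
    """Liveness (read on a finite trace): X occurs at least `at_least` times."""
    return trace.count(x) >= at_least

trace = ["X", "A", "X", "Y", "B", "X"]
print("safety violated:", violates_safety(trace))        # True: X after Y
print("liveness satisfied:", satisfies_liveness(trace))  # True: three X's
```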


Congratulations to the winners, and best wishes for continued fruitful and outstanding research.