
To receive updates on upcoming department seminars, join our mailing list by sending a blank e-mail to: cs_seminar-join@mlists.cs.bgu.ac.il.
 



Trainable Highly-expressive Activation Functions
Lecturer: Irit Chelly
Host: Oren Freifeld
Nov 19, 2024 
14:00
Bldg 37 room 201
Nonlinear activation functions are pivotal to the success of deep
neural nets, and choosing the appropriate activation function can
significantly affect their performance. Most networks use fixed
activation functions (e.g., ReLU, GELU, etc.), and this choice might
limit their expressiveness. Furthermore, different layers may benefit
from diverse activation functions. Consequently, there has been a
growing interest in trainable activation functions. In this paper, we
introduce DiTAC, a trainable highly-expressive activation function
based on an efficient diffeomorphic transformation (called CPAB).
Despite introducing only a negligible number of trainable parameters,
DiTAC enhances model expressiveness and performance, often yielding
substantial improvements. It also outperforms existing activation
functions (regardless of whether the latter are fixed or trainable) in
tasks such as semantic segmentation, image generation, regression
problems, and image classification. The talk is based on [Chelly et
al., ECCV '24].
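Since the talk centers on trainable activations that add only a few parameters, here is a minimal, hypothetical PyTorch sketch of that general idea. It is not DiTAC (which warps the activation's input axis with a CPAB diffeomorphism); it merely learns a per-layer mixture of fixed nonlinearities:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableActivation(nn.Module):
    """Illustrative trainable activation (not DiTAC): a learned convex
    combination of fixed basis nonlinearities, adding only a handful of
    parameters per layer so different layers can learn different shapes."""
    def __init__(self):
        super().__init__()
        self.bases = (torch.relu, torch.tanh, F.gelu, torch.sin)
        self.logits = nn.Parameter(torch.zeros(len(self.bases)))

    def forward(self, x):
        w = torch.softmax(self.logits, dim=0)  # mixture weights sum to 1
        return sum(wi * b(x) for wi, b in zip(w, self.bases))

# Drop-in usage: replace a fixed nonlinearity inside any network.
layer = nn.Sequential(nn.Linear(16, 16), TrainableActivation())
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```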

VisFocus: Prompt-Guided Vision Encoders for OCR-Free Dense
Document Understanding
Lecturer: Niv Nayman
Host: Aryeh Kontorovich
Nov 19, 2024
12:00
Bldg. 37, room 202
Visual Document Understanding (VDU) aims to extract meaningful
information from digitized documents, such as PDFs or images,
extending beyond the scope of Optical Character Recognition (OCR).
This field encompasses various tasks, including Document Visual
Question Answering (DocVQA), Key-Value Identification in forms, Entity
Extraction and Document Classification. In recent years, notable
advancements have been made in the domain of visual document
understanding, with the prevailing architecture comprising a cascade
of vision and language models. The text component can either be
extracted explicitly with the use of external OCR models in OCR-based
approaches, or alternatively, the vision model can be endowed with
reading capabilities in OCR-free approaches. In this talk we will
review those two approaches, and focus on the latter where, typically,
the queries to the model are input exclusively to the language
component, necessitating the visual features to encompass the entire
document. In contrast, we follow an OCR-free approach and design a
method to better exploit the vision encoder's capacity by coupling it
directly with the language prompt. To do so, we replace the
down-sampling layers with layers that receive the input prompt and
allow highlighting relevant parts of the document, while disregarding
others. We pair the architecture enhancements with a novel
pre-training task, using language masking on a snippet of the document
text fed to the visual encoder in place of the prompt, to empower the
model with focusing capabilities. Consequently, the resulting models
learn to allocate their attention to text patches pertinent to the
provided prompt. Our experiments demonstrate that this prompt-guided
visual encoding approach significantly improves performance, achieving
state-of-the-art results on various benchmarks.
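A rough schematic of the kind of prompt-guided down-sampling described above, with invented names and dimensions (a sketch of the idea, not the VisFocus code): cross-attention compresses the patch tokens, with queries conditioned on the language prompt so prompt-relevant regions are kept.

```python
import torch
import torch.nn as nn

class PromptGuidedDownsample(nn.Module):
    """Hypothetical sketch: compress N patch tokens to M output tokens via
    cross-attention whose queries are conditioned on the prompt, instead of
    using a fixed (prompt-agnostic) down-sampling layer."""
    def __init__(self, dim=256, num_out_tokens=64, num_heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_out_tokens, dim) * 0.02)
        self.prompt_proj = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, patch_tokens, prompt_tokens):
        # patch_tokens: (B, N, dim); prompt_tokens: (B, P, dim)
        B = patch_tokens.size(0)
        # Condition the learned queries on a pooled summary of the prompt.
        prompt_summary = self.prompt_proj(prompt_tokens.mean(dim=1, keepdim=True))
        q = self.queries.unsqueeze(0).expand(B, -1, -1) + prompt_summary
        out, _ = self.attn(q, patch_tokens, patch_tokens)  # (B, M, dim)
        return out
```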

Link to paper: https://arxiv.org/abs/2407.12594

Bio: Niv Nayman is an applied scientist at Amazon Web Services AI Labs
and a PhD candidate at the Technion, working primarily on AutoML
research with applications in computer vision and document analysis.
His work has been integrated in products at scale and published at top
venues (NeurIPS, ICLR, ICML, ECCV, etc.). Before joining Amazon, Niv
served as a senior research scientist at Alibaba DAMO Academy after
completing a long service as an officer in the intelligence
technological unit - 81, where he performed a variety of roles through
the years, from hardware design, to algorithmic research and cyber
security. Niv holds BSc degrees both in Electrical Engineering and in
Physics (Cum Laude, 'Psagot' program) and an MSc in Optimization and
Machine Learning, all from the Technion.
Scalable AI Safety and Computer Science
Lecturer: David Manheim
Host: Aryeh Kontorovich
Nov 12, 2024
12:00
Bldg.37, room 202
Artificial Intelligence is a rapidly growing field, and ensuring the ethical and safe use of these systems is both an unsolved problem and a critical issue. Moreover, a solution that secures current systems is likely to be insufficient if and when more powerful future systems are developed and deployed. In this talk, I will provide basic background on AI safety as it relates to future AI systems, and review relevant areas of computer science research and some open problems.
Topics covered will include cryptographic guarantees for AI systems, mechanistic interpretability, provability and guaranteed safety, cooperative and multi-agent safety, agent foundations, and learning-theoretic AI safety.

Bio:  David Manheim is a policy researcher and visiting lecturer at
the Technion, as well as founder and director of research and policy
for Alter.org.il, which does policy advocacy and research focusing
largely on risks from technology and long term resilience. He is
currently working on projects with UK's ARIA program on Guaranteed
Safe AI, as an expert for ISO JTC 1 SC42 standards on AI Control and
Oversight, as a contributor to NIST's AI Safety Institute Consortium
working groups on reporting and on chem/bio risks from AI systems, and
on projects related to AI auditing and red-teaming for the RAND
Corporation. He's also a happily married father of 4 wonderful
children, and lives in Rehovot.
Attacking and Defending Deep Neural Networks
Lecturer: Raz Lapid
Host: Moshe Sipper
Nov 11, 2024
12:00
Bldg.37, room 201
This seminar presents an in-depth exploration of adversarial attacks,
defenses, and robustness in deep neural networks. The key
contributions include:

  • Evolution of Activation Functions: A coevolutionary algorithm designed for generating new activation functions for image-classification tasks.
  • Evolutionary Gradient-Free Black-Box Attack: A query-efficient algorithm designed for generating adversarial instances in convolutional neural networks without relying on gradients. This method optimizes adversarial attacks in a black-box setting (a minimal sketch appears after this list).
  • Foiling Explanations in Neural Networks: Investigates the vulnerability of neural network explanations to adversarial manipulation, demonstrating how model interpretability can be compromised.
  • Patch of Invisibility: Introduces a naturalistic black-box adversarial attack that successfully bypasses object detectors using visually inconspicuous patches, challenging the robustness of detection models.
  • Gray-Box Attack on Image-to-Text Models: Explores the adversarial weaknesses of multimodal models, specifically targeting image-to-text systems using a gray-box approach.
  • Universal Black-Box Jailbreaking of LLMs: Proposes an attack methodology that universally bypasses the safety mechanisms of large language models, highlighting potential security risks in natural language processing.
  • XAI-Based Detection of Adversarial Attacks: Utilizes explainable AI techniques to detect adversarial attacks on deepfake detectors, reinforcing the defenses of AI models against synthetic media threats.
  • Fortify the Guardian, Not the Treasure: Suggests a novel strategy for adversarial training, focusing on enhancing the resilience of adversarial detectors rather than the primary models, thereby strengthening security in adversarial scenarios.
  • Robustness of Kolmogorov-Arnold Networks: Analyzes the robustness of Kolmogorov-Arnold Networks (KANs) compared to traditional multilayer perceptrons (MLPs) under various white-box adversarial attacks, offering insights into their potential as a more resilient neural network architecture.
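To make the black-box setting concrete, here is a minimal gradient-free attack sketch using plain random search under an L∞ budget. The talk's method is an evolutionary algorithm; `model_confidence` is a hypothetical stand-in for whatever query interface the target model exposes.

```python
import numpy as np

def random_search_attack(x, true_label, model_confidence,
                         eps=8 / 255, queries=1000, rng=None):
    """Query-only (gradient-free) attack sketch: accept random
    L-inf-bounded proposals whenever they lower the model's confidence in
    the true class. `model_confidence(x, y)` is assumed to return the
    probability the black-box model assigns to class y for input x.
    This is plain random search, not the talk's evolutionary method."""
    rng = np.random.default_rng(rng)
    best = x.copy()
    best_score = model_confidence(best, true_label)
    for _ in range(queries):
        delta = rng.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + delta, 0.0, 1.0)  # stay a valid image
        score = model_confidence(candidate, true_label)
        if score < best_score:  # lower true-class confidence = better attack
            best, best_score = candidate, score
    return best, best_score
```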

The thesis was supervised by Prof. Moshe Sipper.
Research Exposure Day for New M.Sc. Students
Lecturer: Master's students special event
Host: Or Sattath


Nov 5, 2024
12:00
The research exposure day for new M.Sc. students is a valuable
opportunity to dive into the department’s research landscape. Several
faculty members will present their ongoing research projects, offering
insights into various fields and methodologies. This event not only
provides an overview of potential research directions but also serves
as an excellent chance for students to network, engage in discussions,
and explore potential collaborations. It’s a great way to connect with
peers and professors, ask questions, and begin building a foundation
for your research journey.
Geometric Analysis and Data Augmentation Techniques for Enhancing Deep Learning Models in Time Series Forecasting and Regression Tasks
Lecturer: Ilya Kaufman
Host: Omri Azencot


Nov 4, 2024
12:00

Geometric properties of deep learning models play a crucial role in
enhancing their performance, particularly in time series forecasting
and regression tasks. By investigating the curvature and intrinsic
dimensions of data manifolds, this work uncovers how these features
correlate with model generalization and performance in trained and
untrained networks. Additionally, two novel domain-independent data
augmentation methods are introduced to improve robustness by
leveraging manifold sampling techniques. These methods demonstrate
significant improvements in both in-distribution and
out-of-distribution generalization, offering new insights and
practical applications.
Advances in Robust Statistical Learning Theory

Lecturer: Idan Attias
Host: Omri Azencot


Oct 31, 2024
16:00
Bldg.37, room 201
We study theoretical questions regarding the development of robust machine learning models that can withstand test-time attacks on their inputs, commonly known as adversarial examples.

Our learning framework extends the traditional PAC learning model to address the challenges posed by adversarial attacks.

Within this framework, we investigate statistical and algorithmic aspects of classification and regression for both supervised and semi-supervised settings.

Along the way, we develop general techniques of potential interest to the statistical learning theory community, including estimates on complexity measures for aggregation rules of function classes and sample compression schemes.

Incorporation of Information Theoretically Secure Primitives into One-Way Function Candidates
By: Paweł Cyprys
Sep 26, 2024
9:00
Meeting ID: 944 158 000 272 0 
Passcode: cqHE3W

The existence of one-way functions—functions that are easy to compute but infeasible to invert—is a central question in complexity theory with profound implications for cryptography. Although proving their existence implies P != NP, which is the most famous open question in computer science, numerous functions have been proposed that are believed to be one-way "in practice." These functions are efficiently computable, but no known polynomial-time algorithms can invert them.

This thesis introduces a novel technique called "self-masking" to further strengthen the difficulty of inverting such functions. By strategically replacing critical parts of a function's output with their XOR, self-masking mitigates known inversion attacks on one-way functions, particularly when applied to problems like the subset sum problem.

The thesis also explores constructing and modifying one-way functions using information-theoretically secure primitives, with the aim of designing lightweight cryptographic systems.

The thesis is supervised by Prof. Shlomi Dolev.


Designing Language Models to Think Like Humans

By: Dr. Chen Shani

July 16, 2024
13:00-14:00
Bldg. 96, room 001
While language models (LMs) show impressive text-manipulation capabilities, they lack commonsense and reasoning abilities and are known to be brittle. In this talk, I will suggest a different LM design paradigm, inspired by how humans understand language. I will present two papers, both shedding light on human-inspired NLP architectures aimed at delving deeper into the meaning beyond words.

The first paper [1] accounts for the lack of commonsense and reasoning abilities by proposing a paradigm shift in language understanding, drawing inspiration from embodied cognitive linguistics (ECL). In this position paper we propose a new architecture that treats language as inherently executable, grounded in embodied interaction, and driven by metaphoric reasoning.

The second paper [2] shows that LMs are brittle and far from human performance in their concept-understanding and abstraction capabilities. We argue this is due to their token-based objectives, and implement a concept-aware post-processing manipulation, showing it matches human intuition better. We then pave the way for more concept-aware training paradigms.

[1] Language (Re)modelling: Towards Embodied Language Understanding Ronen Tamari, Chen Shani, Tom Hope, Miriam R L Petruck, Omri Abend, and Dafna Shahaf. 2020. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 6268–6281, Online. Association for Computational Linguistics.

[2] Towards Concept-Aware Large Language Models Shani, Chen, Jilles Vreeken, and Dafna Shahaf. In Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 13158-13170. 2023.

Bio
Chen Shani is a post-doctoral researcher at Stanford's NLP group, collaborating with Prof. Dan Jurafsky. Previously, she pursued her Ph.D. at the Hebrew University under the guidance of Prof. Dafna Shahaf and worked at Amazon Research. Her focus lies at the intersection of humans and NLP, where she implements insights from human cognition to improve NLP systems.​

Prioritizing patients, resource allocation problems and hyperbolic wave physics

By: Prof. Eitan Bachmat (BGU)



July 9, 2024
12:00-13:00
Bldg.37, room 202
I will discuss the convergence of several lines of research, with an emphasis on the notion of lenses.
On geometric versions of Zarankiewicz's problem

By: Shakhar Smorodinsky



July 2, 2024
12:00-13:00
Bldg.37, room 201
Extremal combinatorics poses a fundamental question: How large can a system be while avoiding certain configurations? A classic instance of this inquiry arises in extremal graph theory: Given a fixed graph H, what is the maximum number ex(n,H) of edges a graph G on n vertices can have if it excludes H as a subgraph? This problem remains widely open for H being a complete bipartite graph and is known as Zarankiewicz's problem.
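For orientation, the classical upper bound for the complete bipartite case is the Kővári–Sós–Turán theorem (standard background, not one of the talk's new results):

```latex
% Kővári–Sós–Turán (1954): for fixed s <= t,
\mathrm{ex}\bigl(n, K_{s,t}\bigr) = O\!\bigl(n^{2 - 1/s}\bigr),
% and matching lower bounds are known only for small s (e.g., s = 2, 3).
```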

Even when considering algebraic constraints on the hosting graph G, such as being the incidence graph of points and bi-variate polynomials of fixed degree, Zarankiewicz's problem remains notoriously challenging. This geometric interpretation of Zarankiewicz's problem has led to the emergence of Incidence Geometry.

In this talk, I will provide an overview of notable results in this domain and will introduce a novel approach to Zarankiewicz's problem.
Based on joint work with Chaya Keller.
The method of hypergraph containers


By: Wojciech Samotij, Tel Aviv University
June 25, 2024
12:00-13:00
Bldg.37, room 202
The method of hypergraph containers is a widely-applicable technique in probabilistic combinatorics. The method enables one to gain control over the independent sets of many 'interesting' hypergraphs by exploiting the fact that these sets exhibit a certain subtle clustering phenomenon. Since many problems studied in extremal and probabilistic combinatorics, Ramsey theory, and discrete geometry can be (naturally) phrased in terms of independent sets in auxiliary hypergraphs, good control over these sets has many 'practical' consequences.

In the first part of this talk, I will provide a gentle introduction to the method, guided and motivated by the following elementary problem in Ramsey theory: Does there exist a graph G without a complete subgraph on four vertices that cannot be partitioned into two triangle-free graphs?

Bio: Wojciech Samotij studied at the University of Wrocław where in 2007 he obtained his Master of Science degrees in mathematics and computer science. He received his PhD in 2011 at University of Illinois at Urbana-Champaign on the basis of his dissertation titled Extremal Problems In Pseudo-random Graphs And Asymptotic Enumeration and written under the supervision of József Balogh. Between 2010 and 2014, he was a fellow of the Trinity College, Cambridge at the University of Cambridge. Currently, he is an associate professor at Tel Aviv University. Wojciech received the 2013 Kuratowski Prize, the 2013 European Prize in Combinatorics, the 2016 George Pólya Prize and the 2022 Erdős Prize. In 2024 he was awarded the Leroy P. Steele Prize for Seminal Contribution to Research jointly with József Balogh and Robert Morris.
Special Colloquium dedicated to Friedman Award Ceremony

By: Doron Cohen and Tom Yaacov
June 4, 2024
12:00-13:00
Bldg.37, room 202

Talks will be given by recipients of the Friedman Awards.

Doron Cohen

Learning infinitely many coins simultaneously

Abstract:

Inferring the bias of a single coin from independent flips is a well-understood problem, technically known as estimating the Bernoulli parameter p. In particular, how the sample size (number of flips) n, the precision ε, and the confidence δ constrain each other is known within tight upper and lower bounds. When we want to estimate the bias of d coins simultaneously, this problem is well-understood as well, at least in the worst case over the Bernoulli parameters pᵢ. What if we want to estimate infinitely many pᵢ's simultaneously?
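For a single coin, the standard quantitative answer comes from Hoeffding's inequality (textbook background for the abstract above):

```latex
% n flips with empirical mean \hat{p}:
\Pr\bigl(|\hat{p} - p| \ge \varepsilon\bigr) \le 2e^{-2n\varepsilon^2}
\quad\Longrightarrow\quad
n = O\!\left(\varepsilon^{-2}\log(1/\delta)\right)
% flips suffice for precision eps with confidence 1 - delta,
% and this is tight up to constants.
```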

A simple argument shows that this is impossible in the "worst case" over the pᵢ's; thus, any result must depend on their actual values. If we define M as the expected maximum deviation between any pᵢ and its estimate, we want to understand for which sequences pᵢ this quantity decays to zero and at what rate. We obtain tight, exhaustive answers to these questions.

Presented at COLT 2023, joint work with Aryeh Kontorovich.

-------------------------------------------------------------------------

Tom Yaacov

Keeping Behavioral Programs Alive: Specifying and Executing Liveness Requirements

Abstract:

We present an enhancement to the behavioral programming (BP) modeling language designed to allow modular specifications of reactive systems that are directly aligned with their requirements. In addition to specifying what the system may, must, and must not do, we propose allowing modules to specify what must be completed. We show how this addition is needed to align the specification with common requirement patterns. Specifically, we demonstrate how safety requirements, such as "don't do X after Y," which can be modeled with existing BP idioms, can be complemented with liveness requirements, such as "do X at least three times," which can only be modeled with our new idioms. We provide formal semantics and two execution mechanisms for the new syntax, one based on a translation to Büchi automata and the other based on a Markov decision process (MDP). The latter approach offers the possibility of utilizing deep reinforcement learning (DRL) algorithms, which have the ability to effectively handle large software systems. The evaluation of our approach demonstrates that our idioms enable the direct specification of well-known requirement patterns from the literature. We also provide qualitative and quantitative assessments of the approach's effectiveness using a proof-of-concept tool.
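For readers unfamiliar with BP's idioms, the sketch below implements the classic request/waitFor/block event-selection loop in Python (illustrative and simplified, not the authors' tool; the talk's contribution adds liveness idioms, e.g. "must be completed", on top of semantics like these):

```python
# Each b-thread is a generator yielding a dict with `request`, `waitFor`,
# and `block` event sets. All names here are illustrative.

def run_bp(bthreads):
    """Repeatedly select an event requested by some thread and blocked by
    none, then wake every thread that requested or waited for it."""
    stmts = {bt: next(bt) for bt in bthreads}  # current sync statements
    trace = []
    while stmts:
        requested = set().union(*(s.get("request", set()) for s in stmts.values()))
        blocked = set().union(*(s.get("block", set()) for s in stmts.values()))
        candidates = sorted(requested - blocked)  # sorted: deterministic tie-break
        if not candidates:
            break  # nothing selectable
        event = candidates[0]
        trace.append(event)
        for bt, s in list(stmts.items()):
            if event in s.get("request", set()) | s.get("waitFor", set()):
                try:
                    stmts[bt] = bt.send(event)
                except StopIteration:
                    del stmts[bt]
    return trace

def add_hot():        # requests "hot" three times
    for _ in range(3):
        yield {"request": {"hot"}}

def add_cold():       # requests "cold" three times
    for _ in range(3):
        yield {"request": {"cold"}}

def interleave():     # safety idiom: alternate "hot" and "cold"
    while True:
        yield {"waitFor": {"hot"}, "block": {"cold"}}
        yield {"waitFor": {"cold"}, "block": {"hot"}}

print(run_bp([add_hot(), add_cold(), interleave()]))
# ['hot', 'cold', 'hot', 'cold', 'hot', 'cold']
```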

Joint work with Achiya Elyasaf and Gera Weiss

Point selections and halving planes
May 28, 2024
12:00-13:00
Bldg.37, room 202


Abstract: In 1982, Boros and Füredi, then undergrad students in Budapest, established the following result, which became known as the First Selection Lemma:
Let P be a set of n points in the Euclidean plane. Then one can always choose a point piercing almost a 2/9-fraction of all triangles that are spanned by P.
The result has been generalized in a number of ways, and underlies the modern notion of "geometric expanders".

We will focus on the so-called Second Selection Lemma -- a particularly powerful statement dealing with arbitrary simplicial hypergraphs in Euclidean spaces of arbitrary albeit fixed dimension d, which was established in 1990 by Alon, Bárány, Füredi and Kleitman. Its most significant by-product is the first non-trivial bound for the famous Halving Plane problem of  Erdős and Lovász, which asks for the maximum number of ways a 2n-point set can be split by a plane into two parts of size n. Answering this question is quintessential for the analysis of many geometric algorithms. 

https://en.wikipedia.org/wiki/K-set_(geometry)

We will discuss the recent dramatic improvement in the exponent of the Second Selection Lemma, which immediately results in a sharper bound for halving planes in all Euclidean spaces of dimension 5 or higher. In contrast to the previous argument of Alon et al., the new bound is more intuitive, and derives from fairly general properties of geometrically defined hypergraphs.

Understanding Generalization via a Bayes Factor

By: Moshe Shenfeld

Host: Aryeh Kontorovich

May 21, 2024
12:00-13:00
Bldg.37, room 202

Learning (as machine learning practitioners call it) or estimating (as statisticians would say) is the act of modeling population characteristics from properties observed in a dataset. Success hinges on avoiding overfitting and instead achieving a small generalization gap, meaning the model's fit to the data extends to similar performance on a broader population. Despite having more parameters than data points and dealing with noisy training data, modern complex models like deep neural networks often demonstrate effective generalization capabilities, far beyond what can be theoretically explained using complexity-based arguments such as VC dimension.

An alternative theoretical approach to understanding generalization uses algorithmic stability, the extent to which a typical change in the input dataset affects the quality of the output model, to bound the generalization gap. Unlike complexity-based arguments which rely on the properties of the model class, stability-based arguments can rely on the specific learning algorithm and the underlying distribution of the data, which allows for a more nuanced analysis.

In this talk, we demonstrate that the algorithmic stability (and by extension, the generalization gap) equals the covariance between a model's loss with respect to the data and a Bayes Factor term reflecting the sample-specific information encoded by the learned model. This observation yields new, straightforward proofs of various information-based techniques for ensuring generalization.

Leveraging this understanding, we provide new generalization guarantees within the adaptive data analysis framework, improving on previous methods that invoke differential privacy (which can be interpreted as a stability notion tailored to worst-case sensitivity). This perspective allows us to shrink the required sample size to be less worst-case-dependent and to depend instead on a standard deviation term. More broadly, our results offer a frame through which to revisit the literature on generalization and to advance our understanding of what it is and how it can be achieved.

 Based on joint work with Katrina Ligett.

Bio: Moshe is a Ph.D. candidate in the Computer Science department of the Hebrew University of Jerusalem, where he also received his B.Sc. in Physics, Math, and Computer Science, and his M.Sc. in Computer Science. He is advised by Katrina Ligett. His research fields are differential privacy and adaptive data analysis and their relation to machine learning.

For his research, Moshe received the Apple Scholars in AI/ML PhD fellowship.

Text to SQL with Semantic Decomposition and Recursive Composition

By: Michael Elhadad, BGU
May 7, 2024
12:00-13:00
Bldg.37, room 202

Text to SQL is a semantic parsing task which consists of translating a natural language question regarding structured data into an executable SQL query.

Progress in Large Language Models (LLMs) has helped improve performance on this challenging task, but dealing with complex questions remains challenging even for very large models.

We propose a method based on the decomposition of complex SQL queries according to their operational semantics. We introduce an intermediary representation for SQL queries which is derived from the hierarchical execution plan of the query that we call Query Plan Language (QPL).  
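A hypothetical illustration of the decomposition idea (the plan syntax below is invented for exposition and is not the paper's actual QPL grammar):

```python
# A complex question, its SQL, and a bottom-up, plan-style decomposition in
# which each numbered step applies one data-retrieval operation to earlier
# steps. All table and column names here are illustrative.
question = "Which departments have more employees than the company average?"

sql = """
SELECT d.name
FROM departments d JOIN employees e ON e.dept_id = d.id
GROUP BY d.name
HAVING COUNT(*) > (SELECT COUNT(*) * 1.0 / COUNT(DISTINCT dept_id)
                   FROM employees)
"""

qpl_like_plan = [
    "#1 = Scan [employees]",
    "#2 = Aggregate [#1] -> employee count per dept_id",
    "#3 = Aggregate [#1] -> average per-department employee count",
    "#4 = Join [#2, departments] on dept_id = id",
    "#5 = Filter [#4] where count > #3 ; output: name",
]
```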

We show that LLMs fine-tuned to learn how to map questions to QPL perform better than when generating SQL directly, especially for complex queries.

In this talk, I will present additional followup ongoing results that exploit the hierarchical nature of QPL programs to better translate questions into complex queries.

In the first experiment, we evaluate an ensemble model learning different syntactic variants for QPL and how LLMs learn to combine these different views.

In the second experiment, we analyze a recursive decomposition-composition strategy for complex questions. We exploit the QPL dataset we constructed to derive decompositions of questions into simpler natural language questions grounded in the semantics of the data retrieval operations. An LLM fine-tuned to perform question decomposition is then combined at inference time with a recursive question to QPL system.

Preliminary results indicate the potential of this modular approach to address complex semantic parsing tasks, with performance on a standard text-to-SQL benchmark moving from ~75% to about 88% with a small 3B-parameter LLM.

I will conclude with considerations about reasoning and LLMs and how they relate to the general principle of

Randomized online learning

By: Prof. Yuval Filmus, The Henry and Marilyn Taub Faculty of Computer Science, Technion
March 12, 2024
12:00-13:00​
Bldg.37, room 202

Online learning is a classical task in which a student is challenged by her teacher to answer a sequence of Yes/No questions. After each reply, the student is told the correct answer, and her goal is to minimize the number of mistakes that she makes. In order to make the task solvable, we assume that the teacher's answers are consistent with some hypothesis class. Littlestone defined in 1988 a combinatorial dimension named after him which captures the optimal mistake bound as a function of the hypothesis class.
His work assumed that the student chooses her answer deterministically.
We extend Littlestone's work to the more realistic scenario in which the student is allowed to toss coins. 
As an application, we determine the optimal randomized mistake bound for the classical problem of prediction with expert advice.
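For background, the classical randomized strategy for prediction with expert advice (whose optimal mistake bounds are what the talk pins down) is textbook randomized weighted majority, sketched below; this is the standard baseline, not the paper's new algorithm:

```python
import random

def randomized_weighted_majority(experts, outcomes, eta=0.5):
    """Maintain multiplicative weights over experts and answer each Yes/No
    question by sampling an expert proportionally to its weight.
    `experts[i][t]` is expert i's prediction at round t; `outcomes[t]` is
    the correct answer. For small eta, expected mistakes are roughly
    (1 + eta) * (best expert's mistakes) + ln(num_experts) / eta."""
    weights = [1.0] * len(experts)
    mistakes = 0
    for t, truth in enumerate(outcomes):
        i = random.choices(range(len(experts)), weights=weights)[0]
        if experts[i][t] != truth:
            mistakes += 1
        # Penalize every expert that erred this round.
        for j in range(len(experts)):
            if experts[j][t] != truth:
                weights[j] *= (1.0 - eta)
    return mistakes
```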

Joint work with Steve Hanneke, Idan Mehalel, and Shay Moran.

Bio:
 
https://yuvalfilmus.cs.technion.ac.il/
Spread regularity and applications

By: Shachar Lovett
March 5, 2024
12:00-13:00​
Bldg.37, room 202

Szemerédi's regularity lemma is a cornerstone in combinatorics. It shows that any graph can be partitioned into a few parts that mostly "look random". It has numerous applications, but its main caveat is the quantitative bounds it gives: to get an error epsilon, the number of parts needs to be a tower of height polynomial in 1/epsilon. This makes results obtained using Szemerédi's regularity lemma qualitative in nature, without any reasonable and effective bounds.
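To fix terminology (standard definitions, stated here for the reader): the density of a pair of vertex sets, and the ε-regularity condition the lemma guarantees for almost all pairs, are

```latex
d(X,Y) = \frac{e(X,Y)}{|X|\,|Y|}, \qquad
(X,Y) \text{ is } \varepsilon\text{-regular if }
|d(X',Y') - d(X,Y)| \le \varepsilon
\text{ for all } X' \subseteq X,\ Y' \subseteq Y
\text{ with } |X'| \ge \varepsilon|X|,\ |Y'| \ge \varepsilon|Y|.
```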
Faced with this challenge, Frieze and Kannan designed a "weak regularity" lemma. Their regularity lemma attains weaker guarantees than those of Szemerédi, but with the benefit that the number of parts needed is only exponential in the error parameter. Their weak regularity lemma is sufficient to get improved quantitative bounds for some applications where Szemerédi's regularity lemma was originally applied.

In this work, we introduce a new type of regularity lemma dubbed "spread regularity". Its main advantage is that the number of parts it requires is even smaller, merely quasi-polynomial in the error parameter. Translating from graphs to their adjacency matrices, we show that while individual spread matrices are not very "random looking", the product of two such matrices is very close to uniform.

We give two applications of our new spread regularity lemma. The first application is in communication complexity, where we construct an explicit function with a near optimal separation between randomized and deterministic protocols in the 3-player Number-On-Forehead model. The second application is in algorithms, where we design a combinatorial algorithm for matrix multiplication with a quasi-polynomial speedup over the naive algorithm.

Based on joint works with Amir Abboud, Nick Fischer, Zander Kelley and Raghu Meka.

Bio: Shachar Lovett is a professor at UC San Diego. He has a broad interest in theoretical CS and combinatorics, with emphasis on structure and randomness, coding theory, algebraic constructions, and additive combinatorics. He is a recipient of an NSF CAREER award, a Sloan fellowship and a Simons investigator award.

DNA-based Storage Systems: Coding Theory and Algorithms

By: Yonatan Yehezkeally
Feb 20, 2024
12:00-13:00​
Bldg.37, room 202

Storage (and retrieval) of information over DNA-based media raises unique challenges in contrast to contemporary digital standards, on many levels: algorithmic, computer/data scientific, information theoretic, (bio- and electrical-)engineering, biological (and chemical). Thus, fresh (and renewed) focus on a number of rarely-studied problems in computation and information theory is necessitated, in coordination with partners from all of these disciplines. In this talk, we take an information-theoretic perspective and outline the general channel model. We will delve into a few of its components, presenting pertinent coding-theoretic problems and the approaches that have been deployed to address them.

Bio: Yonatan Yehezkeally is the Carl Friedrich von Siemens Post-Doctoral Research Fellow of the Alexander von Humboldt Foundation, with the Associate Professorship of Coding and Cryptography (Prof. Wachter-Zeh), School of Computation, Information and Technology, Technical University of Munich. His research interests include Coding Theory and Algorithms, particularly with applications to novel storage media, with a focus on DNA-based storage and nascent sequencing technologies. They further include combinatorial analysis and structures, as well as algebraic structures. Yonatan received the B.Sc. degree (cum laude) in Mathematics and the M.Sc. (summa cum laude) and Ph.D. degrees in Electrical and Computer Engineering from Ben-Gurion University of the Negev, Israel, in 2013, 2017, and 2020 respectively.
Constrained Resource Allocation via Iterative Randomized Rounding

By: Ariel Kulik
Feb 13, 2024
12:00-13:00​
Bldg.37, room 202

>>>Z​oom Link
Constrained resource allocation problems can often be modeled as variants of the classic Bin Packing and Knapsack problems. The study of these problems has had a great impact on the development of algorithmic tools, ranging from simple dynamic programming to involved linear programs and rounding techniques. I will present a new and simple algorithmic approach for obtaining efficient approximations for such problems, based on iterative randomized rounding of Configuration LPs. This simple approach yields the state-of-the-art asymptotic approximation ratio for Vector Bin Packing and matches the best-known ratios for other variants of the Bin Packing problem. The technique can be used also to improve the existing results for Multiple Knapsack variants. While the algorithmic approach is simple and intuitive, its analysis requires sophisticated use of probabilistic tools and deep structural insights about the studied problems. 
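A toy rendition of the approach just described, simplified for illustration (the talk's algorithm and analysis are far more refined): enumerate the configurations of a tiny Bin Packing instance, solve the Configuration LP with scipy, then sample configurations proportionally to their fractional values until every item is covered.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

def iterative_randomized_rounding(sizes, capacity=1.0, rng=None):
    """Toy sketch of Configuration-LP rounding for Bin Packing. A
    'configuration' is any subset of items fitting in one bin; the LP
    minimizes the total fractional number of bins covering all items."""
    rng = np.random.default_rng(rng)
    n = len(sizes)
    configs = [c for r in range(1, n + 1)
               for c in combinations(range(n), r)
               if sum(sizes[i] for i in c) <= capacity]
    # LP: min sum x_C  s.t.  sum_{C containing i} x_C >= 1 for each item i.
    A = np.zeros((n, len(configs)))
    for j, c in enumerate(configs):
        for i in c:
            A[i, j] = 1.0
    res = linprog(c=np.ones(len(configs)), A_ub=-A, b_ub=-np.ones(n),
                  bounds=(0, None))
    x = np.clip(res.x, 0.0, None)  # guard against tiny solver negatives
    # Iterative randomized rounding: sample configurations with probability
    # proportional to their LP value until every item is covered.
    bins, covered = [], set()
    while len(covered) < n:
        j = rng.choice(len(configs), p=x / x.sum())
        bins.append(configs[j])
        covered |= set(configs[j])
    return bins

print(iterative_randomized_rounding([0.6, 0.5, 0.4, 0.3, 0.2], rng=0))
```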
  
Bio: Ariel Kulik is currently a postdoctoral research associate at the Technion, hosted by Roy Schwartz. Formerly, he was a postdoc at CISPA, hosted by Daniel Marx. Ariel completed his PhD studies in Computer Science at the Technion, supervised by Hadas Shachnai. His research focuses on the design and analysis of approximation algorithms for constrained submodular optimization and resource allocation problems. He also works on parameterized-approximation algorithms, with emphasis on tools for the analysis of branching algorithms. 

From Stochastic to Deterministic: learning dynamics of nonconvex models in high dimensions

By: Inbar Seroussi

Feb 6, 2024
12:00-13:00​
Bldg.37, room 202

Stochastic gradient descent (SGD) stands as a cornerstone of optimization and modern machine learning. However, understanding why SGD performs so well remains a major challenge. In this talk, I will present a theory for SGD in high dimensions when the number of samples and problem dimensions are large. We show that the dynamics of one-pass SGD applied to generalized linear models and multi-index problems with data possessing a general covariance matrix become deterministic in the large sample and dimensional limit. In particular, the limiting dynamics are governed by a set of low-dimensional ordinary differential equations (ODEs). Our setup encompasses many optimization problems, including linear regression, logistic regression, and two-layer neural networks. In addition, it unveils the implicit bias inherent in SGD. For each of these problems, the deterministic equivalent of SGD enables us to derive a close approximation of the generalization error (with explicit and vanishing error bounds). Furthermore, we leverage our theorem to establish explicit conditions on the step size, ensuring the convergence and stability of SGD within high-dimensional settings.
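A quick numerical illustration of the concentration phenomenon the abstract describes, under simplifying assumptions (isotropic Gaussian data, plain least squares; a sketch, not the talk's general setting): as d grows, the one-pass SGD error trajectory, indexed by t = k/d, becomes essentially deterministic.

```python
import numpy as np

def one_pass_sgd_error(d, steps_per_dim=4, eta=0.2, seed=0):
    """One-pass SGD for least squares with a fresh isotropic Gaussian
    sample at every step (no sample reuse). The step is scaled by 1/d, so
    t = k/d is the natural time scale. Returns ||theta - theta*||^2 at
    t = 0, 1, 2, ... along the trajectory."""
    rng = np.random.default_rng(seed)
    theta_star = rng.standard_normal(d) / np.sqrt(d)  # ||theta*|| ~ 1
    theta = np.zeros(d)
    errors = []
    for k in range(steps_per_dim * d):
        if k % d == 0:
            errors.append(float(np.sum((theta - theta_star) ** 2)))
        x = rng.standard_normal(d)
        y = x @ theta_star
        theta -= eta * (x @ theta - y) * x / d        # SGD step, scaled by 1/d
    return errors

# Error curves at matching times t = k/d nearly coincide as d grows,
# fluctuating around the deterministic profile exp(-(2*eta - eta**2) * t):
for d in (100, 400, 1600):
    print(d, [round(e, 3) for e in one_pass_sgd_error(d, seed=d)])
```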

 

Bio: Inbar Seroussi is a postdoctoral fellow in the Applied Mathematics department at Tel Aviv University and McGill University. Before that, she was a postdoctoral fellow in the Mathematics department at the Weizmann Institute of Science, hosted by Prof. Ofer Zeitouni. She completed her Ph.D. in the Applied Mathematics department at Tel-Aviv University under the supervision of Prof. Nir Sochen. During her Ph.D., she was a long-term intern at Microsoft Research (MSR). Her research interest is at the interface between machine learning, statistical physics, and high dimensional probability.​


Democratising Natural Language Processing: Overcoming Language and Domain Barriers in Low-Resource Environments

By: Yftah Ziser


Jan 30, 2024
12:00-13:00​
Bldg.37, room 202


Natural language processing (NLP) has been revolutionised in recent years to the point where it is an inseparable part of our daily lives. The transition to transformer-based models allows us to train models on vast amounts of text efficiently, proving that scale plays a crucial role in improving performance. Unfortunately, many people worldwide are marginalised from getting access to high-quality NLP models, as the language they speak and the domains they are interested in count for only a tiny fraction of current state-of-the-art models' training sets.
This talk will address the challenges, approaches, and opportunities for democratising NLP across different languages and domains by developing methods to improve NLP in low-resource scenarios. I will start by discussing how we can ease distribution mismatches to improve performance using representation learning.
However, as NLP models become increasingly present in our lives, improving other crucial aspects beyond their performance, such as their fairness, factuality, and our ability to understand their underlying mechanisms, is essential.
Therefore, I will also discuss using spectral methods to remove information from neural networks to reduce undesired attributes, such as bias, to increase fairness where sensitive data is scarce.
Finally, I will explore future directions for making these models accessible to a broader audience by improving the aspects mentioned above in low-resource scenarios.

Bio: Yftah Ziser is a Postdoctoral Researcher at the School of Informatics at the University of Edinburgh, hosted by Shay Cohen.
He focuses on Deep-Learning methods for dealing with the resource bottleneck, which seriously challenges the worldwide accessibility of NLP technology.
His research develops methods to improve low-resource models' performance, fairness, and factuality while developing analysis methods for deepening our understanding of them.
He co-organised the Domain Adaptation for NLP Workshop at EACL 2021.
Before joining the University of Edinburgh, Yftah worked as a research scientist at Amazon Alexa. Yftah obtained his PhD from the Technion, where Roi Reichart advised him.

What and how in algorithmic fairness

By: Inbal Livny
Jan 23, 2024
12:00-13:00​
Bldg.37, room 202

Machine learning algorithms increasingly influence our lives, making fairness critical to prevent discrimination based on gender, ethnicity, or other factors. In this talk I am going to focus on two important topics in algorithmic fairness: what a fair algorithm is, and how to achieve notions of fairness.

 

There is no universal definition of a fair algorithm that applies in all situations. Instead, there are many different definitions, often contradictory, and the choice of the right definition for each setting is a complex policy question. In this talk I am going to expand on the importance of fairness definitions and talk about an ad auction setting for job ads, where there is more than a single fairness definition. I am going to show how changing the definition can have advantages in reaching the desired outcome.

 

Regarding the question of how to achieve fairness, I am going to talk about omnipredictors with fairness constraints. An omnipredictor is a predictor that can be efficiently post-processed to minimize many different loss functions. I am going to discuss my work that extends the notion of an omnipredictor to optimization problems with constraints. If the constraints or the loss are changed, then only the efficient post-processing needs to be changed. Since fairness constraints are a result of a policy, it is rather likely that they might change over time. This work allows handling changing fairness constraints efficiently.
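For reference, the property described above can be stated as follows (a standard formulation, paraphrased; the talk's extension adds constraints to the optimization):

```latex
% f is an (L, C)-omnipredictor if, for every loss l in the class L, some
% simple post-processing k_l of f competes with the best hypothesis in C:
\forall \ell \in \mathcal{L}:\quad
\mathbb{E}\bigl[\ell\bigl(k_\ell(f(x)),\,y\bigr)\bigr]
\;\le\;
\min_{c \in \mathcal{C}} \mathbb{E}\bigl[\ell\bigl(c(x),\,y\bigr)\bigr]
+ \varepsilon .
```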

 

Additionally, I'll touch on how algorithmic fairness principles apply beyond their usual scope, in complexity theory. I'll also briefly discuss a work on quantum error correction. Concluding the talk, I'll share my research interests and future directions.

Verification of Complex Hyperproperties 

By: Hadar Frenkel
Jan 22, 2024
12:00-13:00​
Bldg.37, room 202

Hyperproperties are system properties that relate multiple execution traces to one another. Hyperproperties are essential to express a wide range of system requirements such as information flow and security policies; epistemic properties like knowledge in multi-agent systems; fairness; and robustness.

With the aim of verifying program correctness, the two major challenges are (1) providing a specification language that can precisely express the desired properties; and (2) providing scalable verification algorithms. In this talk, I will give an overview of my recent work on addressing these two challenges. 

First, I will present the new logic Hyper^2LTL, which uses second-order quantification over sets of executions to express a wide range of complex hyperproperties such as common knowledge in multi-agent systems and asynchronous hyperproperties.
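As a small, concrete point of reference (a classic first-order HyperLTL formula, standard in this literature; Hyper^2LTL's novelty is the added second-order quantification over sets of traces), observational determinism relates pairs of executions:

```latex
% Observational determinism: any two executions agreeing on low-security
% inputs must agree, at every step, on low-security outputs.
\varphi_{\mathrm{OD}} \;=\;
\forall \pi.\, \forall \pi'.\;
\bigl(I^{\mathrm{low}}_{\pi} = I^{\mathrm{low}}_{\pi'}\bigr)
\;\rightarrow\;
\square\,\bigl(O^{\mathrm{low}}_{\pi} = O^{\mathrm{low}}_{\pi'}\bigr)
```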

Second, I will present a (sound but necessarily incomplete) model-checking algorithm for Hyper^2LTL. While the verification of Hyper^2LTL is undecidable, we manage to handle undecidability by characterizing a rich fragment of the logic that allows for sound approximations, providing the first verification algorithm for Hyper^2LTL specifications.

Bio: Hadar Frenkel is a postdoctoral researcher at CISPA Helmholtz Center for Information Security in Saarbrücken, Germany, hosted by Prof. Bernd Finkbeiner. She obtained her PhD from the Technion, Israel Institute of Technology, in 2021, under the supervision of Prof. Orna Grumberg and Dr. Sarai Sheinvald. Hadar studies complex hyperproperties, such as knowledge, causality, and asynchronous hyperproperties, and searches for logical formalisms and verification algorithms for them. She also studies automata learning and its applications in program verification and repair.

Scaling-Up Planning for Long-Lived Autonomous Robots

By: Khen Elimelech

Host: Aryeh Kontorovich
Jan 16, 2024
12:00-13:00​
Bldg.37, room 202

How can a search-and-rescue drone locate a person trapped in a demolished building it has never seen before? How can an autonomous car find its path to a desired destination? How can our new companion robot figure out the steps to clean up the bedroom? Answering all these questions requires long-horizon online planning. Automation of the planning process is the basis for enabling robot autonomy, which can benefit countless applications. Unfortunately, despite its fundamental importance, planning for realistic tasks remains a great challenge. As planning generally requires predictive search, its complexity grows exponentially. This often renders planning algorithms intractable for problems characterized by complex logical specifications, high-degree-of-freedom control, uncertainty, and non-monotonic solutions. Addressing this concern, Dr. Elimelech's research goal is to develop paradigms for robust solution of high-scale robotic problems. His work is characterized by fundamental algorithmic and theoretical contributions, which can be integrated with existing planners to increase their scalability.

This talk will cover two of his recent research projects. The first project contributes to the efficient solution of "active Simultaneous Localization and Mapping (SLAM)," the fundamental problem behind long-duration mobile-robot navigation, which involves decision making under uncertainty in high-dimensional state spaces. As this work suggests, such problems can be automatically simplified, using predictive modification or sparsification of the robot's "belief" (uncertain estimate of the world state). This approach is supported by a theoretical framework allowing us to derive formal guarantees for the potential optimality loss. The second project supports long-lived autonomous robots by providing a theoretical and computational framework for planning experience reuse, allowing them to improve their cognitive abilities throughout their operation, without human intervention. This novel framework establishes techniques for online, automatic, lifelong encoding of individual experiences as generalizable "abstract skills," which can later be adapted for new contexts in real-time. This approach was demonstrated for robot manipulators performing object assembly, formulated as "task and motion planning."

Bio: Khen Elimelech is a postdoctoral researcher and an instructor in the Department of Computer Science at Rice University, where he works with Professors Lydia Kavraki and Moshe Vardi. He also serves as the President of the Rice University Postdoctoral Association (RPA). Before joining Rice, he earned a B.Sc. in Applied Mathematics from Bar-Ilan University, and a Ph.D. from the Robotics and Autonomous Systems Program at the Technion–Israel Institute of Technology, under the supervision of Professor Vadim Indelman. For his Ph.D. thesis he won the national "Outstanding Ph.D. Research Award," on behalf of the Israeli Smart Transportation Research Center (ISTRC). For his postdoctoral work, he was honored as a finalist for the "Outstanding Postdoctoral Research Award," on behalf of Rice's School of Engineering. During his studies he was also recognized amongst the prestigious cohort of "Robotics: Science and Systems (R:SS) Pioneers" and amongst the "Top A.I. Student Ambassadors" by Intel. His research focuses on developing algorithms and theory allowing autonomous robots to robustly and efficiently perform cognitive tasks, such as planning, decision making, generalizing from experiences, and overcoming uncertainty.

Learning from dependent data and its modeling through the Ising model

By: Yuval Dagan

Host: Aryeh Kontorovich
Jan 9, 2024
12:00-13:00
Bldg.37, room 202

Abstract: I will present a theoretical framework for analyzing learning algorithms which rely on dependent, rather than independent, observations. While a common assumption is that the learning algorithm receives independent datapoints, such as unrelated images or texts, this assumption often does not hold. An example is data on opinions across a social network, where opinions of related people are often correlated, for example as a consequence of their interactions. I will present a line of work that models the dependence between such related datapoints using a probabilistic framework in which the observed datapoints are assumed to be sampled from some joint distribution, rather than sampled i.i.d. The joint distribution is modeled via the Ising model, which originated in the theory of Spin Glasses in statistical physics and was used in various research areas. We frame the problem of learning from dependent data as the problem of learning parameters of the Ising model, given a training set that consists of only a single sample from the joint distribution over all datapoints. We then propose using the Pseudo-MLE algorithm, and provide a corresponding analysis, improving upon the prior literature which necessitated multiple samples from this joint distribution. Our proof benefits from sparsifying a model's interaction network, conditioning on subsets of variables that make the dependencies in the resulting conditional distribution sufficiently weak. We use this sparsification technique to prove generic concentration and anti-concentration results for the Ising model, which have found applications beyond the scope of our work.
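For reference, the objects involved (standard definitions): the Ising model over x ∈ {−1,1}ⁿ, and the pseudo-likelihood objective, which replaces the intractable joint likelihood by a product of tractable conditionals:

```latex
% Ising model with interaction matrix A and external field h:
\Pr\nolimits_{A,h}(x) \;\propto\;
\exp\Bigl(\sum_{i<j} A_{ij} x_i x_j + \sum_i h_i x_i\Bigr),
\qquad x \in \{-1,1\}^n.
% Pseudo-likelihood: each conditional is logistic and easy to evaluate,
% where sigma(z) = 1/(1 + e^{-z}):
\mathrm{PL}(A,h;x) \;=\; \prod_{i=1}^{n} \Pr\nolimits_{A,h}\bigl(x_i \mid x_{-i}\bigr),
\qquad
\Pr(x_i = 1 \mid x_{-i}) \;=\;
\sigma\Bigl(2\bigl(\textstyle\sum_{j \ne i} A_{ij} x_j + h_i\bigr)\Bigr).
```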

 Based on joint work with Constantinos Daskalakis, Anthimos Vardis Kandiros, Nishanth Dikkala, Siddhartha Jayanti, Surbhi Goel and Davin Choo.

Bio: Yuval Dagan is a postdoctoral researcher at the Simons Institute for the Theory of Computing at UC Berkeley and at the Foundations of Data Science Institute (FODSI). He received his PhD from the Electrical Engineering and Computer Science Department at MIT, advised by Professor Constantinos Daskalakis (2018-2023). He received his Bachelor's and Master's degrees from the Technion, where he was advised by Professor Yuval Filmus (2011-2017). During his PhD, he received the Meta Research Fellowship in Machine Learning (2021-2022). Further, he was a visitor of the Simons Foundation at the Causality program (2022) and a research intern at Google Mountain View, hosted by Vitaly Feldman (2019). Prior to his PhD, he was a research assistant of Professor Ohad Shamir at Weizmann Institute (2018).

Efficient and Robust Deep Learning Architectures for Real-World Problems

By: Chaim Baskin, Center for Intelligent Systems, Technion, and Czech Technical University in Prague

Host: Aryeh Kontorovich
Jan 2, 2024
12:00-13:00
Bldg.37, room 202
The advancements in deep learning models and their ability to excel in various fields are awe-inspiring, but practical applications still face several challenges. From a data-centric perspective, Deep Neural Networks (DNNs) require a vast amount of precisely labelled data. From a model-centric perspective, DNNs tend to be vulnerable to malicious perturbations, have limited throughput, and struggle to process irregular data. Unfortunately, these limitations restrict the ability of deep learning to solve a wide range of real-world problems in domains such as Biology, Chemistry, Physics, 3D geometry, social networks, and recommendation systems.

In my talk, I will discuss four critical challenges in real-life deep learning.
Firstly, I will discuss a method for reducing the bandwidth of read/write memory interactions during model deployment while taking into account communication complexity constraints. Prominent applications include large-language models, transformer-based foundation models, and large-graph architectures.  
Secondly,  I will introduce innovative approaches that enable learning with noisy and limited annotations. The first approach facilitates self-supervised pre-training to detect noisy samples better. The second approach takes advantage of a small calibration set to train a teacher model in a bi-level optimization framework implicitly. In addition, I will describe how to use a small number of annotated labels while efficiently merging between modalities to handle deep learning's necessity for clean and large amounts of annotated data.

Thirdly,  I will describe adversarial attacks that can efficiently mislead any navigation algorithm. These attacks are a significant safety concern that disables deep learning models from being deployed in real-world platforms, such as autonomous vehicles.
Lastly, I will introduce the geometric deep learning paradigm and focus on learning graph data in the context of various real-world problems. I will delve into the importance of the adversarial robustness of these models and relate to their expressivity.
I will also discuss future directions on combining the presented approaches to design novel deep learning models that will efficiently merge between different modalities under relaxed assumptions on the quality and amount of annotated data, safe for use in real-world platforms, and meet the specifications of modern AI accelerators.

Bio: Chaim Baskin is a Senior Research Associate at the VISTA laboratory within the Center for Intelligent Systems in the Computer Science Department and a Visiting Assistant Professor in the Faculty of Data and Decision Science at Technion. In addition, Chaim holds a Visiting Scholar position at Czech Technical University in Prague. Chaim's research focuses on representation learning, geometric deep learning, and optimization of neural networks for efficiency. He obtained his Ph.D. from the Computer Science Department at Technion in 2021. Chaim held a post-doctoral position at the same department from 2021 to 2022. Moreover, he is a member of Technion's TechAI and TASP research hubs.​

On Implicit Bias and Benign Overfitting in Neural Networks

By: Gal Vardi, TTI Chicago and Hebrew University

Host: Aryeh Kontorovich
Dec 26, 2023
12:00-13:00
Bldg.37, room 202
When training large neural networks, there are typically many solutions that perfectly fit the training data. Nevertheless, gradient-based methods often have a tendency to reach those which generalize well, namely, perform well on test data, and understanding this "implicit bias" has been a subject of extensive research. Surprisingly, trained networks often generalize well even when perfectly fitting noisy training data (i.e., data with label noise), a phenomenon called "benign overfitting". In the first part of the talk, I will discuss the implicit bias and its implications. I will show how the implicit bias can lead to good generalization performance, but also has negative implications in the context of susceptibility to adversarial examples and privacy attacks. In the second part of the talk, I will discuss benign overfitting and the settings in which it occurs in neural networks.

Bio: Gal is a postdoctoral researcher at TTI-Chicago and the Hebrew University, hosted by Nati Srebro and Amit Daniely as part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning. Prior to that, he was a postdoc at the Weizmann Institute, hosted by Ohad Shamir, and a PhD student at the Hebrew University, advised by Orna Kupferman. His research focuses on theoretical machine learning, with an emphasis on deep-learning theory.​
Optimization and Generalization in Nonsmooth Nonconvex Machine Learning

By: Guy Kornowski, Weizmann Institute of Science

Host: Aryeh Kontorovich
June 20, 2023
12:00-13:00
Bldg.37, room 202
The unprecedented empirical success of deep learning poses major theoretical challenges. Modern neural networks (NNs) seem to defy classical wisdom, as their training requires solving large-scale nonconvex optimization problems, while a statistical perspective suggests that they are too complex to be able to generalize in a meaningful manner.

In this talk I'll discuss several recent theoretical results aiming at understanding these issues. In particular, we will discuss the merits and limitations of nonsmooth nonconvex optimization, as well as statistical generalization of nonsmooth models. Finally, we will discuss an issue overlooked by classical learning theory, namely why continuing the training process of NNs even after they fit the training data affects their generalization to unseen data.

Based on joint works with Steve Hanneke, Michael I. Jordan, Aryeh Kontorovich, Tianyi Lin, Ohad Shamir, Gilad Yehudai & Manolis Zampetakis. 

Bio: Guy Kornowski is a PhD student at the Weizmann Institute of Science, advised by Prof. Ohad Shamir. His research focuses on optimization and machine learning theory. He was granted an Azrieli fellowship.​

Recent developments in intersection searching

By: Esther Ezra, BIU

Host: Natan Rubin
May 23, 2023
12:00-13:00
Bldg.37, room 202


Recently, polynomial partitioning became a central tool in solving fundamental algorithmic problems in computational geometry, including eliminating depth cycles, point location, and semi-algebraic range searching.
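For orientation, the basic tool referenced above is the Guth–Katz polynomial partitioning theorem (standard background):

```latex
% Polynomial partitioning (Guth--Katz): for any set P of n points in R^d
% and any degree D >= 1, there is a nonzero polynomial g of degree at most
% D such that every connected component ("cell") of the complement of its
% zero set Z(g) contains few points:
|P \cap \Omega| \;=\; O\!\left(\frac{n}{D^{d}}\right)
\quad \text{for each cell } \Omega \subseteq \mathbb{R}^d \setminus Z(g),
% while the number of cells is O(D^d).
```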

In this talk I will present several recent developments in the study of intersection searching - a family of range searching problems. I will first describe a solution to the ray-shooting problem amid triangles in 3-space, where polynomial partitioning serves as a main tool in order to obtain an efficient data structure supporting ray-shooting queries. Then, I will show an extension of this approach, resulting in efficient data structures for detecting intersections amid flat objects in 3-space and semi-algebraic arcs. These data structures rely on efficient constructions of polynomial partitioning, I will sketch such constructions and discuss their importance. 

 

Bio: Esther Ezra is currently an associate professor at the Computer Science department in Bar-Ilan University. Before joining Bar-Ilan, she was an assistant professor at the School of Math in Georgia Institute of Technology. She completed her PhD at Tel Aviv University, and then continued her postdoctoral research at Duke University and later at NYU. Her research interests lie in the area of discrete and computational geometry; in particular, her current work is focused on algebraic methods and their applications to point location and range searching. Her research also contains aspects from learning theory, probabilistic methods and discrepancy and how to apply them in geometric settings.

​Local Glivenko-Cantelli (or: estimating the mean in infinite dimensions)

By: Aryeh Kontorovich, BGU​
May 16, 2023
12:00-13:00
Bldg.37, room 202

>>>Zoom Link​
If μ is a distribution over the d-dimensional Boolean cube {0,1}^d, our goal is to estimate its mean p ∈ [0,1]^d based on n iid draws from μ. Specifically, we consider the empirical mean estimator p̂_n and study the maximal deviation M = max_{j∈[d]} |p̂_n(j) − p(j)|. In the classical Universal Glivenko-Cantelli setting, we seek distribution-free (i.e., independent of μ) bounds on M. This regime is well-understood: for all μ, we have 𝔼[M] ≲ √(log(d)/n) up to universal constants, and the bound is tight.

Our present work seeks to establish dimension-free (i.e., without an explicit dependence on d) estimates on M, including those that hold for d=∞. As such bounds must necessarily depend on μ, we refer to this regime as Local Glivenko-Cantelli; we are aware of very few previous bounds of this type, and those are quite sub-optimal. Already the special case of product measures μ is quite non-trivial. We give necessary and sufficient conditions on μ for 𝔼[M]→0, and discover a novel sub-Gamma-type maximal inequality for shifted Bernoullis.
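To make the quantities concrete, here is a minimal numpy simulation of the maximal deviation M for a product measure μ (the special case mentioned above); the mean vector and sample size are arbitrary illustrative choices:

import numpy as np

def max_deviation(p, n, rng):
    """Draw n iid samples from the product Bernoulli measure with mean
    vector p, and return M = max_j |p_hat_n(j) - p(j)|."""
    X = rng.random((n, p.shape[0])) < p      # n x d Boolean sample
    p_hat = X.mean(axis=0)                   # empirical mean estimator
    return np.abs(p_hat - p).max()

rng = np.random.default_rng(0)
d, n = 1000, 500
p = rng.random(d)                            # an arbitrary mean vector
print(max_deviation(p, n, rng))              # typically on the order of sqrt(log(d)/n)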

 A number of challenging open problems are posed for future research. Joint work with Doron Cohen. 

https://arxiv.org/abs/2209.04054

Bio: Aryeh Kontorovich received his undergraduate degree in mathematics with a certificate in applied mathematics from Princeton University in 2001. His M.Sc. and Ph.D. are from Carnegie Mellon University, where he graduated in 2007. After a postdoctoral fellowship at the Weizmann Institute of Science, he joined the Computer Science department at Ben-Gurion University of the Negev in 2009, where he is currently a full professor. His research interests are mainly in machine learning, with a focus on probability, statistics, Markov chains, and metric spaces. He served as the director of the Ben-Gurion University Data Science Research Center during 2021-2022.

A solution to Ringel's 1959 circle problem

By: Chaya Keller, Ariel University

Host: Natan Rubin
May 2, 2023
12:00-13:00
Bldg.37, room 202

​In 1959, Gerhard Ringel posed the following problem: What is the maximal number of colors needed for coloring any collection of circles, no three tangent at a point, such that any two tangent circles get different colors? 

The special case where the circles are non-overlapping was shown long ago to be equivalent to the celebrated 4-color theorem. The general case has remained open; it was only known that 4 colors are not sufficient. In this talk we show that no finite number of colors can suffice, by constructing collections of circles whose tangency graphs have an arbitrarily large girth (so in particular, no three are tangent at a point) and an arbitrarily large chromatic number. Our construction, which is one of the first geometric constructions of graphs with a large girth and a large chromatic number, relies on a (multidimensional) version of Gallai's theorem with polynomial constraints, which may be of independent interest.

Joint work with James Davies, Linda Kleist, Shakhar Smorodinsky, and Bartosz Walczak.

Brief bio:

Chaya Keller is a Senior Lecturer at the School of Computer Science, Ariel University. Her research field is discrete and computational geometry, with a focus on convexity and on coloring problems in geometric graphs and hypergraphs. She won a number of awards, including the Arianne de Rothschild fellowship, the Hoffman fellowship and the Noriko Sakurai award.

On the Ringel Problem:

https://gilkalai.wordpress.com/2021/12/11/to-cheer-you-up-in-difficult-times-34-ringel-circle-problem-solved-by-james-davies-chaya-keller-linda-kleist-shakhar-smorodinsky-and-bartosz-walczak/


Everything you always wanted to know about Cardinality Estimation* (*but were afraid to ask)

By: Seth Pettie, University of Michigan, Ann Arbor

Host: Aryeh Kontorovich
April 18, 2023
12:00-13:00
Bldg.37, room 202

>>>Zoom link​
​The Cardinality Estimation/Distinct Elements problem is to approximate the number of distinct elements in a data stream using a small probabilistic data structure called a "sketch".  This problem has been studied for 40 years, has many industrial applications, and is featured prominently in most courses on Big Data algorithmics.  It is therefore a real puzzle to explain why research on this popular and fundamental problem has been unusually slow.


This talk presents a complete history of the Cardinality Estimation problem from Flajolet and Martin's seminal 1983 paper to the present, and includes an account of how the research community became fractured, delaying many natural developments by decades.  I will present our recent efforts to achieve information-theoretically optimal cardinality sketches, which draw on two notions of "information" developed in the 20th century: Fisher information (governing optimal point estimation) and Shannon entropy (governing optimal space/communication).
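For intuition, here is a toy sketch in the spirit of Flajolet and Martin's 1983 estimator (a single-hash variant with high variance; practical sketches average many such estimates, and the talk's optimal constructions are considerably more refined):

import hashlib

def fm_estimate(stream):
    # Flajolet-Martin style: remember only the largest number of trailing
    # zeros R seen among the hashed elements; duplicates cannot change R.
    max_tz = 0
    for x in stream:
        h = int.from_bytes(hashlib.sha1(str(x).encode()).digest()[:4], "big")
        tz = 32 if h == 0 else (h & -h).bit_length() - 1   # trailing zeros
        max_tz = max(max_tz, tz)
    return 2 ** max_tz / 0.77351   # Flajolet-Martin bias-correction constant

print(fm_estimate(range(10000)))   # right order of magnitude, high variance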

Joint work with Dingyu Wang:
(STOC 2021) https://arxiv.org/abs/2007.08051
(ICALP 2021) https://arxiv.org/abs/2008.08739
(PODS 2023) https://arxiv.org/abs/2208.10578

Short Bio: Seth Pettie is a Professor of Computer Science and Engineering at the University of Michigan.  He received his Ph.D. from UT-Austin in 2003, and was an Alexander von Humboldt Fellow at the Max Planck Institute for Computer Science from 2003-2006.  He has held Visiting Professor positions at Aarhus University (2013-14) and Bar-Ilan University (2023).

​The polynomial technique in geometry: Combinatorial past and algorithmic present

By: Micha Sharir, Tel-Aviv University

Host: Natan Rubin​
March 26, 2023
12:00-13:00
Bldg.37, room 202

>>>Zoom link​
We review the "algebraic revolution" in combinatorial and computational geometry, initiated by Guth and Katz, who, in a groundbreaking 2010 paper, obtained an almost complete solution of Erdos' distinct distances problem by introducing the new powerful technique of polynomial partitioning.

This technique changed the landscape of the field, by facilitating a solution of a large number of difficult problems, which, before the revolution, were deemed too difficult to "solve in our lifetime". These problems involve distinct distances, incidences between points and lines, curves or surfaces in three and higher dimensions, and many other hard problems in combinatorial geometry.

Algorithmic applications of the technique were slower to materialize, but major advances on this front took place during the past five years, mainly for range searching with semi-algebraic sets and its diverse applications.

The talk will attempt to highlight the major achievements of the technique to date, both combinatorial and algorithmic.

Short Bio: https://en.m.wikipedia.org/wiki/Micha_Sharir

Micha Sharir received his Ph.D. in Mathematics from Tel Aviv University in 1976, and then switched to Computer Science, doing his postdoctoral studies at the Courant Institute of New York University.

Micha is one of the co-founders of the Minerva Center for Geometry at Tel Aviv University.

His research interests are in computational and combinatorial geometry and their applications. He has pioneered (with Jack Schwartz) the study of algorithmic motion planning in robotics during the early 1980s, and has been involved in many fundamental research studies that have helped to shape the fields of computational and combinatorial geometry. Among his major achievements, in addition to his earlier work on robotics, are the study of Davenport-Schinzel sequences and their numerous geometric applications, the study of geometric arrangements and their applications, efficient algorithms in geometric optimization (including the introduction and analysis of generalized linear programming), and the study of combinatorial problems involving point configurations, including, since 2008, the application of algebraic techniques to problems involving incidences, distinct and repeated distances, and other related problems in combinatorial and computational geometry.

His work won him several prizes, including a Max-Planck research prize (1992, jointly with Emo Welzl),  the Feher Prize (1999), the Mif'al Hapais' Landau Prize (2002), and the EMET Prize (2007).

He is the incumbent of the Nizri Chair in computational geometry and robotics, a Fellow of the Association for Computing Machinery (since 1997), has an honorary doctorate degree from the University of Utrecht (1996), and is a member of the Israeli Academy of Sciences and Humanities (2018). He has supervised 27 Ph.D. students, many of whom are now at various stages of an academic career, in Israel and abroad.

Cryptanalysis, via discrete Fourier analysis

By: Nathan Keller, Bar-Ilan University

Host: Natan Rubin
March 21, 2023
12:00-13:00
Bldg.37, room 202

>>>Zoom link​
Cryptanalysis studies the practical security of the encryption schemes we use. The central object in cryptanalysis is "attack techniques" – which are algorithms that allow an adversary to intercept communications, forge digital signatures, etc. While the discovered attack techniques have been very successful, and have allowed discarding insecure encryption schemes and developing strategies for better designs, their analysis has always been somewhat ad-hoc. For most of the techniques, we don't know whether they are optimal, and for some, we don't even know how to estimate their complexity well.

In this talk we propose a new approach to understanding cryptanalytic attacks, using seemingly unrelated techniques from discrete Fourier analysis. We will show that Fourier-analytic techniques can be helpful in addressing core questions, such as: "Can we prove that certain cryptanalytic attacks are optimal?" and "Is there a need for post-quantum secret-key cryptosystems?".  We will mostly concentrate on open questions, but will also show promising results following the new approach.

Joint work with Itai Dinur and Ohad Klein
Shortest paths over random sampled points and their applications to information theory, entropy, manifold learning and high dimensional data analysis

By: Steven Damelin, U. Michigan

Host: Aryeh Kontorovich​​
​Jan 17, 2023
12:00-13:00
Bldg.37, room 202

>>>Zoom link​
The shortest path problem is of interest both in theory and in applications, since it naturally arises in combinatorial optimization problems such as optimal routing in communication networks, information theory, entropy, and high dimensional data analysis, for example in manifold learning.
Many graph structures over Euclidean sample points have been studied in the context of the Beardwood-Halton-Hammersley (BHH) theorem and its extensions. The BHH theorem states that the law of large numbers (LLN) holds for certain spanning graphs over random samples. Such graph structures include the travelling salesman path (TSP), the minimal spanning tree (MST), and the nearest neighbor graphs.

In this talk, we will discuss shortest paths over random sample points embedded in Euclidean and Riemannian spaces, as well as the use of power weighted shortest path distance functions for clustering high dimensional Euclidean data, under the assumption that the data is drawn from a collection of disjoint low dimensional manifolds. We will discuss connections of our work to information theory, entropy and manifold learning.
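As a toy illustration of a power weighted shortest path distance (the exponent p and the complete-graph construction are illustrative assumptions, not the speakers' implementation):

import heapq
import numpy as np

def pwspd(X, p=2.0):
    """All shortest-path distances over the complete graph on the rows of X,
    with edge weights ||x_i - x_j||**p; for p > 1 this favors paths that hop
    through dense regions of the sample."""
    n = len(X)
    W = np.linalg.norm(X[:, None] - X[None, :], axis=-1) ** p
    D = np.full((n, n), np.inf)
    for s in range(n):                  # Dijkstra from each source
        dist = D[s]
        dist[s] = 0.0
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v in range(n):
                nd = d + W[u, v]
                if nd < dist[v]:
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return D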

This is joint work with Alfred Hero (University of Michigan), Sung Jin Wang (Google) and Daniel McKenzie (Colorado School of Mines).


Bio: Steve Damelin completed his Ph.D. under Doron Lubinsky (now at Georgia Tech), solving several open problems in weighted polynomial approximation related in part to the work of Paul Erdos. He has written 79 research papers and two books, the first published by Cambridge University Press and the second to be published by John Wiley & Sons, and he has edited two volumes. His research interests are both pure and interdisciplinary and include approximation theory, signal processing, manifold learning, vision, shortest paths and information theory, optimal transport, harmonic analysis, and mathematics education. His research papers have appeared in journals at the level of the Annals of Applied Probability.

His honors include a New Directions Professorship (only two awarded internationally per year) at the Institute for Mathematics and its Applications, University of Minnesota. He has held faculty and visiting positions at numerous academic institutions in the United States and abroad, including Penn State University, the American Mathematical Society, Georgia Southern University and the University of Minnesota. His work has been funded by many international funding agencies, including the National Science Foundation, the United States Department of Defense and the British EPSRC. His work constantly promotes diversity, for example in a large Centre for High Performance Computing flagship award for the University of the Witwatersrand, South Africa. He has worked with many research students and with over 30 research collaborators in the United States and abroad, for example at Stanford and in Israel and Germany. He is currently affiliated with the University of Michigan and lives in Ann Arbor, Michigan, United States.

On structure and performance in the era of (really) big data

By: Amit Levi, Huawei Noah's ark lab in Toronto

Host: Jonathan Mosheiff
​Jan 16, 2023
12:00-13:00
Bldg.37, room 202

>>>Zoom link
The influx of data witnessed during the last decade gave rise to groundbreaking applications in data sciences and machine learning. However, due to hardware constraints, the volume of data grows much faster than the available computational resources. This modern setting poses new challenges for algorithm design, as more efficient methods are needed. One way to obtain such methods is by exploiting the underlying structure of the data.
In this talk, we will discuss three examples, ranging from theory to practice, where structure can be leveraged to obtain performance. The first two examples are from sublinear computation, where the combinatorial/geometric structure of the data is used to obtain dramatic time and space savings; the third comes from Graph Neural Networks, where local structure is exploited to obtain superior real-world models.

Bio: Amit Levi is a research scientist at Huawei Noah's ark lab in Toronto. He obtained his PhD from the University of Waterloo, where he was advised by Eric Blais. He is interested in developing a rigorous understanding of the interplay between structured data and performance of algorithms, with a particular focus on sub-linear computation and graph neural networks.

The Interconnection Between Approximation, Optimization and Generalization in Deep Learning Theory.

By: Itay Safran, Purdue University

Host: Aryeh Kontorovich
​Jan 10, 2023
12:00-13:00
Bldg.37, room 202

>>>Zoom link
The modern study of deep learning theory can be crudely partitioned into three major aspects: Approximation, which is concerned with the ability of a given neural network architecture to approximate various objective functions; optimization, which deals with when we can or cannot guarantee that a certain optimization algorithm will converge to a network with small empirical loss; and generalization, which asks how well the trained network is able to generalize to previously unseen examples. Although these aspects have mostly been studied independently, we will demonstrate how considering all of them simultaneously gives rise to end-to-end learnability results, establishing a rich interconnection between the three. This highlights the importance of studying the individual pieces as parts of a whole, to better understand the bigger picture and to improve our theoretical understanding of the unprecedented practical success of deep learning.

Bio: Itay Safran is a Post-Doctoral Research Associate at Purdue University, and a former Postdoctoral Research Fellow at Princeton University. He received his MSc and PhD from the Weizmann Institute of Science, and a dual BSc in mathematics and computer science from Ben-Gurion University of the Negev. Itay's research focuses on theoretical machine learning, with emphasis on the theory of deep learning. For his achievements, he was recognized with the Dan David Scholarship Prize for outstanding doctoral students of exceptional promise in the field of artificial intelligence, and an excellence postdoctoral scholarship awarded by the Council for Higher Education in Israel to highly distinguished PhDs in data science fields.
Complexity-Performance Tradeoffs in Mechanism Design

By: Moshe Babaioff, Microsoft Research

Host: Klim Efremenko
​Jan 3, 2023
12:00-13:00
Bldg.37, room 202

>>>Zoom link​​

​Online computational platforms that directly engage with users must account for the strategic behavior of self-interested individuals. The goal of mechanism design is to optimize an objective, such as efficiency or revenue, in such scenarios, i.e., when the agents that participate in the mechanisms act strategically. In many fundamental computational settings the theoretical optimal mechanisms are highly complex and thus are not practical. In this talk I will discuss the tradeoff between the complexity of mechanisms and their performance, illustrated on the fundamental problem of profit maximization when selling multiple goods. Optimal mechanisms for this problem are highly complex. Nevertheless, I will present a simple deterministic mechanism that guarantees a constant percentage of the optimal revenue. I will then present results regarding the complexity of mechanisms that are almost optimal, and regarding the tradeoff between performance and complexity for this problem.


Bio: Moshe Babaioff is a Senior Principal Researcher at Microsoft Research, located in Israel. His research focuses on establishing rigorous theoretical foundations of solutions to real-world problems in the intersection of Economics and Computation (EC), using tools and approaches from Computer Science, Game Theory, and Microeconomic Theory. Moshe has served in multiple leadership positions in the EC community, including as Program co-Chair of ACM-EC 2017, as General Chair of ACM-EC 2014, and as co-chair of the Economics, Monetization, and Online Markets Track of The Web Conference (2023).

Moshe has been at Microsoft Research (MSR) for 15 years, first at MSR Silicon Valley lab (2007-2014) and then at MSR in Israel (since 2014). Before joining MSR he was a postdoctoral scholar at the University of California, Berkeley. He received his PhD in Computer Science from the Hebrew University in Jerusalem.

​Data Tools for Accelerated Scientific Discoveries

By: Brit Youngmann, MIT

Host: Yuval Moskovitch
​Dec 27, 2022
12:00-13:00
Bldg.37, room 202

>>>Zoom link​
Like general data scientists, scientists conducting empirical research rely on data and analyze it to extract meaningful insights. Yet, scientists have two qualities that distinguish them from general data scientists: (1) they rely on extensive scientific domain knowledge to make scientific discoveries, and (2) they aim to explain and understand the real world, not simply predict it. These two qualities must be reflected in their data analysis tools to assist them and accelerate the process of making real-world discoveries. In this talk, I will present data tools for accelerated scientific discoveries. In particular, I will present tools that assist scientists in investigating their data using scientific knowledge, and that help them acquire the missing data and domain-specific knowledge required to understand real-world mechanisms and draw trustworthy conclusions.


Bio: Brit is a postdoc researcher at CSAIL MIT, working with Prof. Michael Cafarella. She received her Ph.D. at Tel-Aviv University under the supervision of Prof. Tova Milo. Her research is centered around informative and responsible data science and causal analysis. Brit is the recipient of several awards, including the data science fellowship for outstanding Ph.D. students of the planning and budgeting committee of the Israeli council for higher education (VATAT), the Schmidt postdoctoral award for women in mathematical and computing sciences, and the planning and budgeting committee of the Israeli council for higher education (VATAT) postdoctoral scholarship in Data Science. She served on multiple program committees, including at the SIGMOD and ICDE conferences.  ​

Experiment Design and Evaluation of Empirical Models in Natural Language Processing

By: Rotem Dror, UPENN

Host: Michael Elhadad
Dec 20, 2022
12:00-13:00
Bldg.37, room 202​

>>>Zoom link​
The research field of Natural Language Processing (NLP) puts a lot of emphasis on empirical results. It seems that models are reaching state-of-the-art and even "super-human" performance on language understanding tasks on a daily basis, thanks to large datasets and powerful models with billions of parameters. However, the existing evaluation methodologies are lacking in rigor, leaving the field susceptible to erroneous claims. In this talk, I will describe efforts to build a solid framework for evaluation and experiment design for diverse NLP tasks.

To begin, I describe how NLP researchers currently conduct experiments and measure model performance, highlighting some important limitations. I will present our work, in which we propose statistical analyses for comparing models that allow researchers to make credible and statistically valid claims in many experimental settings in NLP, such as experimenting with deep neural network models or reporting model performance across multiple languages.

Then, I will discuss challenges related to evaluating text-generation tasks, such as machine translation or automatic summarization. Evaluation of text-generation models is not straightforward because different texts can convey the same meaning. Therefore, comparing a model's output with a human-written reference may not reflect the true quality of the output. Researchers have designed metrics to automatically evaluate text-generation systems. However, none of them measures all of the aspects we wish to evaluate. Therefore, I will propose methods for estimating the quality of these evaluation metrics and introduce limitations for a certain type of evaluation technique known as reference-free. Finally, I will discuss future research directions and challenges in estimating the true performance of NLP models.

This talk is based on papers published in TACL (2017, 2021), ACL (2018, 2019), NAACL 2022, EMNLP 2022, and a book published by Morgan and Claypool Publishers in 2020.   ​

Bio: Rotem Dror is a postdoctoral researcher at the University of Pennsylvania Cognitive Computation Group, mentored by Prof. Dan Roth. She received her Ph.D. from the Technion - Israel Institute of Technology, where she was advised by Prof. Roi Reichart. With a focus on Natural Language Processing applications, her research involves developing statistically sound methodologies for empirical investigation and evaluation.​

SQL and Incomplete Information

By: Liat Peterfreund, The French National Centre for Scientific Research (CNRS)

Host: Yuval Moskovitch

Dec 19, 2022
12:00-13:00
Bldg.37, room 202

>>>Zoom link​

Handling incomplete information is a subject of key importance in databases, and the way it is handled in commercial database management systems has many well-known deficiencies. A key reason for that is a flaw in the design of SQL, the de-facto standard query language for relational databases. Instead of using the familiar two-valued Boolean logic, SQL reverts to three-valued logic with an additional truth value "unknown" to handle incomplete data. While being viewed as indispensable for SQL, this design decision has been criticized for leading to unintuitive behavior and being a source of programmers' mistakes.
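A small demonstration of the three-valued semantics, using Python's built-in sqlite3 (SQLite follows the standard SQL treatment of NULL here):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

# The row with x = NULL satisfies neither condition: both x = 1 and x <> 1
# evaluate to 'unknown', which is not 'true'.
print(con.execute("SELECT COUNT(*) FROM t WHERE x = 1").fetchone())   # (1,)
print(con.execute("SELECT COUNT(*) FROM t WHERE x <> 1").fetchone())  # (1,), not (2,)

# Classic pitfall: NOT IN over a set containing NULL selects no rows at all.
print(con.execute(
    "SELECT COUNT(*) FROM t WHERE x NOT IN (SELECT x FROM t WHERE x IS NULL OR x = 2)"
).fetchone())                                                          # (0,)
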
In this talk, I will explain that, contrary to the widely held view, SQL could have been designed based on Boolean logic, without any loss of expressiveness and without giving up nulls (representing missing values). This applies to the core of the language including all of its key features (the 1999 SQL standard). I will also explain how this new version of SQL leads to better optimizations and more intuitive query results, which is further confirmed by a user study.  

I will conclude by discussing the possible impact on the design of new query languages and on the interaction with the standards committee, and by presenting a new direction of theoretical research on incomplete information that is better suited to SQL analytical queries.

The talk is partly based on joint work with Leonid Libkin and will be presented in PODS 2023.

Bio: Liat Peterfreund is a permanent CNRS researcher (chargée de recherche) at University Gustave-Eiffel, Paris. Prior to that, she was a postdoctoral scholar at École Normale Supérieure (ENS) Paris and at the University of Edinburgh, hosted by Prof. Leonid Libkin. She obtained her PhD at the Technion in 2019 under the supervision of Prof. Benny Kimelfeld. Her research is mainly on the foundations of data management. During her PhD she worked on algorithms for text-centric databases. Recently, she has been working mainly on the standardization and formal semantics of query languages, and on different aspects of handling incomplete information.

An Algorithmic Perspective to Collective Behavior​​

By: Amos Korman, The French National Centre for Scientific Research (CNRS)

Host: Natan Rubin


Dec 13, 2022
12:00-13:00
Bldg.37, room 202


​In this talk, I will present a new interdisciplinary approach that I have been developing in recent years, aiming to build a bridge between the fields of algorithm theory and collective (animal) behavior. Ideally, an algorithmic perspective on biological phenomena can provide a level of fundamental understanding that is difficult to achieve using typical computational tools employed in this area of research (e.g., differential equations or computer simulations). In turn, this fundamental understanding can provide both qualitative and quantitative predictions that can guide biological research in unconventional directions. I will demonstrate this novel approach by presenting a sequence of works on collective ant navigation (published in the biology journals eLife 2016 and eLife 2020, and the CS venues ESA 2018 and TALG 2021), whose experimental part was done in collaboration with the Feinerman ant lab at the Weizmann Institute. In the second part of the talk, I will present a recent result (published in Science Advances 2021) regarding the search efficiency of common animal movement patterns, addressing a long-standing open problem in the area of foraging. I will conclude the talk by discussing potential avenues to employ an algorithmic perspective in biological contexts. 

Bio: Amos Korman received his Ph.D. in computer science at the Weizmann Institute of Science in May 2006, under the guidance of David Peleg. His Ph.D. thesis won the Dean's prize for Ph.D. students. He then spent two years as a postdoc at the Technion and has been a researcher at CNRS, France, since Nov. 2007. In 2015 Amos received his Habilitation, and in 2017 he was promoted to the rank of "Directeur de Recherche". Until 2014 Amos worked primarily in the areas of distributed computing and graph algorithms. Around that time, he shifted his research focus, aiming to study biological phenomena through an algorithmic perspective. To support this pioneering initiative Amos received an ERC Consolidator Grant that ran during the years 2015-2021. In 2020 Amos received the SIROCCO Prize for Innovation in Distributed Computing. As mentioned in the laudatio, the award was given for "pioneering contributions to distributed computing methods for system biology".

Recovering Data Semantics

By: Roee Shraga, Northeastern University

Host: Gil Einziger
Dec 6, 2022
12:00-13:00​
Bldg. 37, room 202

>>>Zoom link
​In data science, it is increasingly the case that the main challenge is finding, curating, and understanding the data that is available to solve a problem at hand. Furthermore, modern-day data is challenging in that it lacks many forms of semantics ("meaning of data"). Metadata may be incomplete or unreliable, data sources are unknown, and data documentation rarely exists. To address these challenges, the objective of my research is to recover data semantics throughout data discovery, versioning, integration, and quality. In this talk, I will discuss current data science challenges and highlight two specific aspects of my research that assist with such challenges. In particular, I will present ALITE, the first scalable integration solution for tables that may have been discovered in data lakes (repositories of big data). ALITE relaxes previous assumptions that tables share common attribute names (which completely determine the join columns), are complete (without null values), and have acyclic join patterns. I will also introduce Explain-Da-V, a solution that explains dataset versions by generating data transformations that resolve data changes.


Bio: Roee Shraga​ is a Postdoctoral fellow at the Khoury College of Computer Science at Northeastern University in Boston. He received his PhD degree in 2020 from the Technion in the area of Data Science. Roee has published more than a dozen papers in leading journals and conferences on the topics of data integration, human-in-the-loop, machine learning, process mining, and information retrieval. He is a recipient of the Council for Higher Education [VATAT] scholarship for outstanding data science postdocs. He is also a recipient of several PhD fellowships including the Leonard and Diane Sherman Interdisciplinary Fellowship (2017), the Daniel Excellence Scholarship (2019), and the Miriam and Aaron Gutwirth Memorial Fellowship (2020).​

Coresets for Computer Vision with Robotics Applications

By: ​Dan Feldman, The Robotics & Big Data Labs, Computer Science Department, University of Haifa

Host: Natan Rubin
Nov 22, 2022
12:00-13:00
Bldg. 37, room 202

>>>Zoom link

A fundamental problem in robotics is to localize a robot in the real-world using only a monocular 2D RGB camera. Here, localization means 6 degrees of freedom: x,y,z-axes and rotations around them. Our group suggested the first solution with provable guarantees that can be applied in real-time on a very weak and lightweight micro-computer (RPI0, <5 grams, <$6). 

The motivation was to provide the first autonomous nano-drone of weight <100 grams, but we also used it for smart carts and flying ads in the supermarket of Rami-Levi (near the lab). The main theoretical challenge was to provide the first provable approximation for the Perspective-n-Point problem: compute a rotation and translation of a set of n points (in the 3D world) that minimize their sum of distances to n paired lines (2D pixels with missing depth). We do so by proving that there is always a small subset of pairs (called a core-set) that yields such an approximation and can be computed in O(n) time.
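For concreteness, a minimal numpy evaluation of the Perspective-n-Point objective described above (a hypothetical helper for illustration; the papers compute provable approximations of its minimizer, not just the cost):

import numpy as np

def pnp_cost(R, t, points, anchors, dirs):
    """Sum of Euclidean distances from the transformed 3D points R @ p + t to
    their paired lines; each line is given by an anchor point and a unit
    direction (a pixel's back-projected ray)."""
    q = points @ R.T + t                        # transformed points, n x 3
    diff = q - anchors                          # vectors from the line anchors
    along = np.sum(diff * dirs, axis=1, keepdims=True) * dirs
    return np.linalg.norm(diff - along, axis=1).sum()   # perpendicular parts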

Based on papers with my awesome students:

- International Conference on Intelligent Robots and Systems (IROS'22), with Ibrahim Jubran, Fares Fares, Yuval Alfassi, and Firas Ayoub.
- International Conference on Computer Vision (ICCV'21), with Ibrahim Jubran and Alaa Maalouf.

Bio: https://www.rbd-labs.com

Methods for Computer-Assisted Linearizability Verification of Implementations of Concurrent Data Structures

Presenter: Avi Hayoun


Advisors: Uri Abraham and Gera Weiss
Nov 20, 2022
14:00-15:00
Bldg. 37, room 201

>>>Zoom link
Since distributed implementations of data structures are at the core of critical software, verification of these implementations is needed. A central challenge in this is that verifying the correctness of concurrent software is difficult, requiring a mathematical background not usually available to engineers, not even to experts in verification technologies. This talk will focus on the practicality of verifying and validating the correctness of implementations of concurrent objects, presenting two orthogonal approaches. First, practical and efficient techniques for verifying and validating concurrent implementations of the fundamental snapshot object. Second, a structured approach to proving the correctness of concurrent binary-search trees, which do not fit the methodology in our first approach.
Learning to Route Under Demand Uncertainty

By: Michael Schapira, Rachel and Selim Benin School of Computer Science and Engineering, the Hebrew University of Jerusalem

Host: Natan Rubin
Nov 8, 2022
12:00-13:00
Bldg. 37, room 202

>>>Zoom link
Routing in the presence of uncertainty about future traffic is a fact of life in many operational environments. Today, this challenge is typically addressed by attempting to predict future traffic demands and optimizing with respect to these. However, as we explain, this might give rise to both far-from-optimal routing and prohibitively expensive optimization runtimes. We explore an alternate approach to this fundamental challenge: directly learning high-performance routing via stochastic optimization. We exemplify the usefulness of our approach through the important use-case of traffic engineering across the backbone networks of large content providers. We demonstrate that our approach yields both superior quality solutions and significantly faster runtimes.

 Bio: Michael Schapira is professor of Computer Science at Hebrew University. His research focuses on the design and analysis of (Inter)network architectures and protocols and, in particular, on the interface of networking and machine learning. Prior to joining Hebrew U, he worked at Google NYC's Infrastructure Networking Group and was a postdoctoral researcher at UC Berkeley, Yale University, and Princeton University. He is a recipient of the Allon Fellowship, the Wolf Foundation's Krill Prize, an ERC Starting Grant, faculty awards from Microsoft, Google, and Facebook, IETF/IRTF Applied Networking Research Prizes, and the IEEE Communications Society William R. Bennett Prize.

Fear Not, Vote Truthfully: Secure E-Voting Protocols for Score- and Order-Based Rules

By: Tamir Tassa, Dept. of Mathematics and Computer Science, The Open University of Israel

Host: Gil Einziger

Oct 25, 2022
12:00-13:00
Bldg. 37, Room 202

>> ​Zoom Link
Abstract: Electronic voting systems are essential for holding virtual elections, and the need for such systems increases due to the COVID-19 pandemic and the social distancing that it mandates. One of the main challenges in e-voting systems is to secure the voting process: namely, to certify that the computed results are consistent with the cast ballots, and that the privacy of the voters is preserved. We propose secure voting protocols for elections that
are governed by two central families of voting rules: score-based and order-based rules. Our protocols offer perfect ballot secrecy, in the sense that they issue only the required output, while no other information on the cast ballots is revealed. Such perfect secrecy, which is achieved by employing secure multiparty computation tools, may increase the voters’ confidence and, consequently, encourage them to vote according to their true preferences.
The protocols' high level of privacy, and their extremely lightweight nature, make them a most adequate and powerful tool for democracies of any size. If time permits, we will briefly describe a Python open source implementation of our protocol for score-based rules.
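As a toy illustration of the secure-multiparty flavor of such protocols (a generic additive secret-sharing sketch, not the authors' protocol), talliers can sum score-based ballots so that only the aggregate is revealed:

import secrets

P = 2**61 - 1   # a public prime modulus

def share(score, n_talliers=3):
    """Split one ballot score into n additive shares mod P; any strict subset
    of the shares reveals nothing about the score."""
    shares = [secrets.randbelow(P) for _ in range(n_talliers - 1)]
    shares.append((score - sum(shares)) % P)
    return shares

ballots = [5, 3, 4]                              # three voters' scores for one candidate
per_tallier = list(zip(*(share(b) for b in ballots)))
partial_sums = [sum(s) % P for s in per_tallier]  # each tallier sums locally
print(sum(partial_sums) % P)                     # 12: only the total is revealed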

Joint work with Lihi Dery (Ariel U) and Avishay Yanai (VMware Research)

Bio. Professor Tamir Tassa is a faculty member in the Department of Mathematics and Computer Science at the Open University of Israel. Previously, he served as a lecturer and researcher in the School of Mathematical Sciences at Tel Aviv University, and in the Department of Computer Science at Ben Gurion University. During the years 1993-1996 he served as an assistant professor of Computational and Applied Mathematics at the University of California, Los Angeles. He earned his PhD in mathematics from Tel Aviv
University in 1993. His recent research interests include secure multiparty computation, privacy-preserving data publishing and data mining, and secret sharing.

Proactive and Online Distributed Load Balancing

By: Manish Kumar
Host: Shlomi Dolev

Sep 6, 2022
16:00-17:00
Bldg. 37, Room 202

>> ​Zoom Link
In this talk, I will present the main results of my Ph.D. research work.

● The load balancing problem is defined on an undirected network (graph) of computers (nodes). Each node is assigned a non-negative working load, and the nodes would like to balance their loads. We study the classic load balancing problem on dynamic general graphs, where the graph changes arbitrarily between the computational rounds, remaining connected with no permanent cut. (A toy illustration of a single balancing round appears after this list.)

We solve the load balancing problem in the most general setting considered so far and with the best-known efficiency. Our solutions are anytime, monotonic, deterministic distributed algorithms based on a short local deal-agreement communication of proposals/deals in the neighborhood of each node, and we propose synchronous, asynchronous, and self-stabilizing variants. [SSS 2020 and Theory of Computing Systems 2022]

● Another research direction focuses on the diverse solutions for optimization problems, such as finding the multi-criteria k-shortest paths, prioritized multi-criteria 2-disjoint shortest paths, and k disjoint all-criteria-shortest paths. These algorithms help in balancing the load efficiently in the communication network. [CSCML 2021 and submitted to SN Computer Science Journal]

● Additionally, we have initiated a study on testing randomness using randomness, which helps during prediction-based load balancing. [CSCML 2022 and will be submitted to Cryptography and Communication Journal]
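Returning to the first result above, here is a toy single round of synchronous load balancing on a static graph (a generic diffusion-style update for illustration, not the deal-agreement protocol of the papers):

def balance_round(load, adj, alpha=0.25):
    """One synchronous diffusion round: each node moves an alpha fraction of
    the load difference across every incident edge. For convergence, alpha
    should be at most 1 / (max degree + 1). Total load is preserved."""
    return {u: load[u] + alpha * sum(load[v] - load[u] for v in adj[u])
            for u in load}

adj = {0: [1], 1: [0, 2], 2: [1]}      # path graph 0-1-2
load = {0: 9.0, 1: 0.0, 2: 0.0}        # all load starts at node 0
for _ in range(50):
    load = balance_round(load, adj)
print(load)                             # close to 3.0 at every node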

My advisor is Shlomi Dolev. During my Ph.D., I collaborated with Daniel Berend and Yefim Dinitz.

Scalable Bayesian Nonparametric Mixtures for Computer Vision and Deep Learning

By: Or Dinari

(Host: Oren Freifeld)

Jul 5, 2022
12:00-13:00
Bldg. 37, Room 201

>> ​Zoom Link
In the unsupervised-learning task of clustering, Bayesian Nonparametric (BNP) mixtures provide a principled approach for inferring the model complexity (i.e., the number of clusters) together with the other parameters. Inference in such models, however, is more difficult than in the parametric case (namely, when the number of clusters is known). In particular, existing BNP methods usually scale poorly. This is a key obstacle in using such models in real-world problems in general and, more specifically, in Computer Vision, where both the number of samples and the dimensions are usually high. In this talk, I will present a few of our recent works that aim to bridge this practicality gap, via new models, new inference algorithms, and efficient distributed implementations.
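As a flavor of this line of work, here is a compact serial version of the DP-means hard-clustering loop revisited in reference 2 below (a toy sketch; the papers develop fast, parallel, and streaming counterparts):

import numpy as np

def dp_means(X, lam, n_iter=20):
    """DP-means-style clustering: a k-means-like loop in which a point
    farther than lam from every existing center opens a new cluster."""
    centers = [X[0].copy()]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):                      # sequential assignment
            d = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(d))
            if d[j] > lam:                             # too far: new cluster
                centers.append(x.copy())
                labels[i] = len(centers) - 1
            else:
                labels[i] = j
        nonempty = [k for k in range(len(centers)) if np.any(labels == k)]
        centers = [X[labels == k].mean(axis=0) for k in nonempty]
        labels = np.array([nonempty.index(k) for k in labels])
    return np.array(centers), labels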


References:

1) [Or Dinari and Oren Freifeld, AISTATS 2022]: Sampling in Dirichlet Process Mixture Models for Clustering Streaming Data.

2) [Or Dinari and Oren Freifeld, UAI 2022]: Revisiting DP-Means: Fast Scalable Algorithms via Parallelism and Delayed Cluster Creation.

3) [Or Dinari*, Angel Yu, Oren Freifeld, and John Fisher III. HPML Workshop, 2019]: Distributed MCMC Inference in Dirichlet Process Mixture Models Using Julia.

4) [Or Dinari*, Raz Zamir*, John Fisher III, and Oren Freifeld, Currently Under Review]: CPU- and GPU-based Distributed Sampling in Dirichlet Process Mixtures for Large-scale Analysis.


About the speaker:

Or Dinari is a PhD student in the Department of Computer Science at Ben-Gurion University, working in the areas of computer vision and machine learning under the supervision of Dr. Oren Freifeld. His current research focus is on unsupervised/semi-supervised learning, clustering, Bayesian nonparametrics, deep learning, and computer-vision applications.

>> Meeting Recording

A study of privacy and compression in learning theory.

By: Meni Sadigurschi

(Host: Aryeh Kontorovich)

Jun 28, 2022
12:00-13:00
Bldg. 37, Room 201

>> ​Zoom Link
Motivated by the increasing awareness and demand for user privacy, the line of work on private learning aims to construct learning algorithms that provide privacy protections for their training data. It is especially desirable to achieve differential privacy, a privacy notion which has been widely adopted by the academic community, government agencies, and big corporations like Google, Apple, and Microsoft.

In our work we study the task of private learning from various directions. In this talk I will focus on two of these directions. The first is the fundamental problem of learning
Axis-Aligned-Rectangles with differential privacy, for which I will present a novel algorithm with optimal sample complexity. In the second part of the talk I will present our study on privacy-preserving universally Bayes consistent learning, a notion of learning which is arguably more realistic than the classical PAC learning framework.

Implicit bias in machine learning

By: Ohad Shamir

(Host: Aryeh Kontorovich)

Jun 21, 2022
12:00-13:00
Bldg. 37, Room 201

>> ​Zoom Link
Most practical algorithms for supervised machine learning boil down to optimizing the average performance over a training dataset. However, it is increasingly recognized that although the optimization objective is the same, the manner in which it is optimized plays a decisive role in the properties of the resulting predictor. For example, when training large neural networks, there are generally many weight combinations that will perfectly fit the training data. However, gradient-based training methods somehow tend to reach those which, on the one hand, do not overfit, and on the other hand, are very brittle to adversarially crafted examples. Why the dynamics of these methods lead to such "implicit biases" is still far from being fully understood. In this talk, I'll describe several recent theoretical results related to this question, in the context of benign overfitting and adversarial examples.

Based on joint work with Gal Vardi and Gilad Yehudai.

Federated Learning: Collaborative Learning on Private Data

By: Friedman Award - Amit Portnoy

(Host: Danny Hendler)

Jun 14, 2022
12:00-13:00
Bldg. 37, Room 201

>> ​Zoom Link
Federated learning is a distributed machine learning paradigm where clients collaboratively train a model without sending their local data samples. Federated learning opens new opportunities for data partnerships at scale, with applications in healthcare, transportation, IoT, and more.


We introduce federated learning, discuss the communication challenges it poses, and present EDEN, a robust lossy compression technique tailored for its specific communication patterns and constraints. We discuss EDEN's appealing theoretical guarantees and present empirical results demonstrating that it consistently improves over the state-of-the-art. EDEN will be presented at ICML '22. It is a generalization of our previous compression technique, DRIVE, presented at NeurIPS '21.
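EDEN itself relies on random rotations and careful unbiasedness arguments; as a far simpler illustration of unbiased lossy compression for such settings, here is generic 1-bit stochastic quantization (not EDEN):

import numpy as np

def encode(v, rng):
    """Unbiased 1-bit-per-coordinate compression: transmit two scalars
    (lo, hi) plus one bit per coordinate; the decoded vector equals v in
    expectation. A generic baseline, far cruder than EDEN."""
    lo, hi = float(v.min()), float(v.max())
    p = (v - lo) / (hi - lo + 1e-12)      # relative position within [lo, hi]
    bits = rng.random(v.shape) < p        # round up with probability p
    return lo, hi, bits

def decode(lo, hi, bits):
    return np.where(bits, hi, lo)

rng = np.random.default_rng(0)
v = rng.normal(size=5)
avg = np.mean([decode(*encode(v, rng)) for _ in range(20000)], axis=0)
print(np.round(v, 2), np.round(avg, 2))   # the average decodes back to ~v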


This talk is for a general CS audience.


Amit Portnoy is a Ph.D. student in the CS department, advised by Prof. Danny Hendler. This is joint work with Ran Ben Basat, Yaniv Ben-Itzhak, Gal Mendelson, Michael Mitzenmacher, and Shay Vargaftik.

New Directions in Derandomization: Non-Black-Box Techniques, Superfast Algorithms

By: Roei Tell

(Host: Yuval Pinter)

Jun 7, 2022
15:20
zoom talk
Bldg. 37, Room 201

>>Zoom Link

This talk will present two new directions in the study of derandomization, which are related to each other.

The first direction is avoiding pseudorandom generators in favor of *non-black-box techniques*, thereby replacing the classical hardness-vs-randomness framework with a new framework. The non-black-box techniques extract pseudorandomness for a probabilistic machine from the code of that specific machine.

The second direction is trying to minimize the computational overhead when eliminating randomness, hoping for *free lunch results* in derandomization. This is possible both for probabilistic algorithms, and for interactive proof systems.

We will mention results from the last two years by myself and by others, most prominently from joint works of mine with Lijie Chen.
Matching vertices of correlated random graphs
By: Mark Rudelson
(Host: Aryeh Kontorovich)
​​May 24, 2022
12:00-13:00
Bldg. 37, Room 201

>>Zoom Link

Consider two copies of the same graph with a large number of vertices.
In each copy, we independently delete a few edges. Our aim is to match
exactly the vertices of the first and the second copy. This problem
originates in particular in social networks, where you know the
identities of users in one network and want to recover the identities
in another one. This problem is computationally hard in a
deterministic setting, so it is often considered for typical, i.e.,
random graphs. We will discuss a computationally efficient
probabilistic algorithm which allows an exact recovery with high
probability even if one deletes a small constant proportion of edges.

Joint work with Cheng Mao and Konstantin Tikhomirov.

Quantum Pseudorandomness and Classical Complexity
By: William Kretschmer
(Host: Dr. Or Sattath)

May 24, 2022
10:00-11:00
Bldg. 37, Room 201

>>Zoom Link

Pseudorandom quantum states (PRSs) are a recently-introduced primitive
that has found exciting new applications in quantum cryptography and
black hole physics. In this talk, I will explore some connections
between PRSs and structural complexity theory. I will present a
surprising and counterintuitive result: a quantum oracle relative to
which BQP = QMA but PRSs exist. On the other hand, I will show that
PRSs do not exist if BQP = PP, a result which holds relative to all
oracles. I will discuss some implications of these results, including
a new impossibility result for shadow tomography.

Based on
https://arxiv.org/abs/2103.09320

Homepage:
https://www.cs.utexas.edu/~kretsch/

>> Meeting Recording

Access strategies for network caching
By: ​Itamar Cohen
(Host: Dr. Gil Einziger)
Apr. 12, 2022
12:00-13:00
Bldg. 37, Room 201

>> ​Zoom Link

Caching systems commonly advertise the cached content, thus allowing users to select which cache to query for the desired datum. The data structure used to advertise the cached content is called an indicator. Due to bandwidth and energy constraints, a practical indicator is typically not a fresh, accurate list of all the items currently stored in the cache. Instead, the indicator is a stale approximation of the list of the currently cached items. As a result, indicators experience false indications. These false indications significantly increase costs, and may even result in a scenario where using indicators does more harm than good.

Our work introduces the cache selection problem, namely, the problem of selecting which cache to access in the face of such false indications. We formally model this problem as a cost minimization problem and introduce practical algorithms with guaranteed approximation ratios. We further perform an extensive simulation study with multiple real system traces. We show that our algorithms incur a significantly lower access cost than existing approaches or match the cost of these approaches while requiring an order of magnitude fewer resources (e.g., caching capacity or bandwidth).
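For intuition, such indicators are often realized with Bloom-filter-like structures (a concrete choice assumed here for illustration; the work treats indicators abstractly). The filter's false positives are exactly the false indications discussed above:

import hashlib

class BloomIndicator:
    """A tiny Bloom filter advertising cached keys: membership queries may
    return false positives (a 'false indication'), but never false negatives
    for keys inserted since the last rebuild."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _idx(self, key):
        h = hashlib.sha256(str(key).encode()).digest()
        return [int.from_bytes(h[4*i:4*i+4], "big") % self.m for i in range(self.k)]
    def add(self, key):
        for i in self._idx(key):
            self.bits[i] = 1
    def might_contain(self, key):
        return all(self.bits[i] for i in self._idx(key))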

Itamar Cohen received his B.Sc. degree in computer engineering and M.Sc. in electrical engineering from the Technion – I.I.T, and Ph.D. from the Ben-Gurion University of the Negev. He worked as a Chip Designer with EZchip Technologies, as a Test Engineer at Marvell, and as a Research Assistant at REMON—Israel 4G Mobile Consortium. He is currently a Post-Doctoral Fellow with the Politecnico di Torino, Italy. His research interests include network algorithms and resource allocation problems, and specifically issues related to scheduling, cooperative caching, and decision making under uncertainty.


Projection Methods, Superiorization and Applications

By: Aviv Gibali
(Host: Dr. Eran Treister)

Mar. 31, 2022
12:00-13:00
Bldg. 37, Room 201

>> ​Zoom Link

Projection methods are iterative algorithms that use projections onto sets
while relying on the general principle that when a family of sets is present,
then projections onto the given individual sets are easier to perform than
projections onto other sets that are derived from the given individual sets.
Their robustness, low computational effort and their ability to handle
huge-size problems make them very useful for many convex and non-convex real-world problems
such as Image Reconstruction, Intensity-Modulated Radiation Therapy
(IMRT) Treatment Planning as well as Sudoku and 8 Queens Puzzle.
The Superiorization Methodology is a heuristic tool whose goal is to find certain good, or superior, solutions to feasibility and optimization problems. In many scenarios, solving the full problem can be rather demanding from the computational point of view, but solving part of it, say the feasibility part, is often less demanding.
In recent years "superiorized" projection methods and other iterative methods have been developed and applied successfully to feasibility, single- and multi-objective optimization.
In this talk I will provide an overview of the above concepts, present several theoretical and practical results, and point to potential directions for future research.
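A minimal example of the general principle, alternating projections onto two convex sets whose individual projections are easy (a halfspace and a ball):

import numpy as np

def project_ball(x, c, r):
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

def project_halfspace(x, a, b):       # {y : a.y <= b}, with a of unit norm
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a

x = np.array([5.0, 5.0])
a, b = np.array([1.0, 0.0]), 1.0      # halfspace x1 <= 1
c, r = np.zeros(2), 2.0               # ball of radius 2 at the origin
for _ in range(100):                  # von Neumann alternating projections
    x = project_ball(project_halfspace(x, a, b), c, r)
print(x)                              # a point in the intersection of the sets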

Generalization in Overparameterized Machine Learning

By: Yehuda Dar
(Host: Eran Treister)
Mar. 23, 2022
12:00-13:00
Bldg. 37, Room 201

>> ​Zoom Link
Deep neural networks are highly overparameterized models, i.e., they are highly complex with typically many more parameters than the number of training data examples. Such overparameterized models are usually learned to perfectly fit their training data; yet, in sharp contrast to conventional machine learning guidelines, they still generalize extremely well to inputs outside of their training dataset. This practical generalization performance motivates numerous foundational questions that have not yet been addressed even for much simpler models like linear regression.
This talk presents new analyses of the fundamental factors that affect generalization in overparameterized machine learning. Our results characterize the so-called "double descent phenomenon," which extends the classical bias-variance tradeoff, in several overparameterized learning settings. First, we consider a transfer learning process between source and target linear regression problems that are related and overparameterized. Our theory characterizes the generalization performance and hence the cases where transfer learning is beneficial. Second, we consider linear subspace learning problems for dimensionality reduction and data generation. We define semi- and fully supervised extensions to the common unsupervised forms of these subspace learning problems and demonstrate that overparameterization in conjunction with supervision can improve generalization performance.
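The simplest overparameterized setting mentioned above is linear regression with more parameters than training examples; among all interpolating solutions, the minimum-norm one (which gradient descent from zero converges to) is available in closed form. A standard illustrative fact, not the talk's new results:

import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                    # overparameterized: d > n
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w = np.linalg.pinv(X) @ y         # minimum-norm interpolator
print(np.allclose(X @ w, y))      # True: the training data is fit perfectly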

Bio:
Yehuda Dar is a postdoctoral research associate in the Electrical and Computer Engineering Department at Rice University, working on topics in the foundational understanding of modern machine learning. Before that, he was a postdoctoral fellow in the Computer Science Department of the Technion — Israel Institute of Technology where he also received his PhD in 2018. Yehuda earned his MSc in Electrical Engineering and a BSc in Computer Engineering, both also from the Technion. His main research interests are in the fields of machine learning, signal and image processing, optimization, and data compression.

>> Link to Lecture

Information geometry of Markov kernels

By: Shun Watanabe
(Host: Aryeh Kontorovich)
Mar. 22, 2022
12:00-13:00

>> ​Zoom Link
In this talk, we first review basic concepts of information geometry, such as the e-family, m-family, and the Pythagorean identity, in the context of probability distributions. We will also discuss an application of information geometry to the problem of statistical hypothesis testing. Then, we will discuss how the concepts of information geometry are extended to Markov kernels. Finally, we will present our recent result that the set of reversible Markov kernels constitutes both an e-family and an m-family of Markov kernels.
This talk is based on joint work with Geoffrey Wolfer.

>> Link to Lecture

Seeing the Unseen - a Generalized Scheme for Missing Mass Estimation

By: Amichai Painsky
(Host: Aryeh Kontorovich)
Mar. 21, 2022
12:00-13:00
Bldg. 37, Room 201

>> ​Zoom Link
Consider a finite sample from an unknown distribution over a countable alphabet. The missing mass refers to the probability of symbols that do not appear in the sample. Estimating the missing mass is a basic problem in statistics, machine learning and related fields, which dates back to the early work of Laplace, and the more recent seminal contribution of Good and Turing. The missing mass problem has many applications in a variety of domains. For example, given the observed population of Covid mutations, what is the probability that we encounter a new variant that has not been seen before? In this work we introduce a generalized framework for missing mass estimation. Our framework provides novel risk bounds and improves upon currently known estimation schemes. Importantly, it is easy to apply and does not require additional modeling assumptions. This makes it a favorable choice for many practical setups. Furthermore, we show that by utilizing our scheme, we improve the estimation accuracy of large alphabet probability distributions.
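The classical Good-Turing estimator mentioned above estimates the missing mass by the fraction of the sample occupied by symbols seen exactly once; the talk's framework generalizes such schemes. A minimal version:

from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the missing mass: N1 / n, where N1 is the
    number of symbols appearing exactly once in a sample of size n."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

print(good_turing_missing_mass("abracadabra"))   # 2 singletons / 11 symbols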

Bio:
Amichai Painsky is a Senior Lecturer (Assistant Professor) at Tel Aviv University. Amichai received his B.Sc. in Electrical Engineering from Tel Aviv University (2007), his Masters in Electrical Engineering from Princeton University (2009) and his Ph.D. in Statistics from Tel Aviv University (2016). Following his graduation, Amichai joined Massachusetts Institute of Technology (MIT) as a postdoctoral fellow (2019). Amichai's research focuses on statistical inference and learning, and their connection to information theory.

>> Link to Lecture

Rank Aggregation with Proportionate Fairness

By: Baruch Schieber
(Host: Michael Elkin)
Mar. 15, 2022
12:00-13:00
Bldg. 37, Room 202

>> ​Zoom Link
Given multiple individual rank orders over a set of candidates or items, where the candidates belong to multiple (non-binary) protected groups, we study the classical rank aggregation problem under proportionate or p-fairness (RAPF in short), considering Kemeny distance.
We first study the problem of producing a closest p-fair ranking for an individual ranked order (IPF in short) considering the Kendall-Tau distance, and present multiple solutions for IPF. We then present a deterministic algorithm and a randomized algorithm to solve RAPF that leverage the solutions of IPF as a subroutine.
We make several algorithmic contributions: (i) we prove that when the protected group attribute is binary, IPF can be solved exactly using a greedy technique; (ii) when the protected group attribute is multi-valued, solving IPF is non-trivial: we present two different solutions for it, one exact under the assumption that the number of attribute values is constant, and the other admitting a 2-approximation factor; (iii) we present the guiding principle of designing a solution for RAPF with an approximation factor of 2 plus the approximation factor of the IPF solution, resulting in 3- and 4-approximation algorithms.
We run extensive experiments using multiple real-world and large-scale synthetic datasets and compare our proposed solutions against multiple state-of-the-art related works to demonstrate their effectiveness and efficiency.
This is joint work with Senjuti Basu Roy, Mouinul Md Islam, and Dong Wei.
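For reference, the Kendall-Tau distance used for IPF counts the candidate pairs on which two rankings disagree (standard definition; quadratic-time version for clarity):

from itertools import combinations

def kendall_tau(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings,
    each ranking given as a list of candidates from best to worst."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

print(kendall_tau(["a", "b", "c"], ["c", "a", "b"]))   # 2 discordant pairs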

>> Link to Lecture

Sequential and distributed clustering algorithms

By: Tom Hess
(Supervisor: Prof. Sivan Sabato)
Mar. 1, 2022
12:00-13:00

>> ​Zoom Link
This seminar is about sequential and distributed clustering.
First, I introduce and study the setting of no-substitution sequential k-median clustering. In this setting, an i.i.d. sequence of examples is observed. An example can be selected as a center only immediately after it is observed, and it cannot be substituted later.
The goal is to select a set of centers with a good k-median cost on the distribution that generated the sequence.
I provide algorithms for this setting in a general bounded metric space and show that these algorithms obtain a cost guarantee similar to that of the best offline algorithm. In addition, I study k-median clustering under the sequential no-substitution setting without any structural assumption on the metric space. I give the first algorithm for this setting that obtains a constant approximation factor on the optimal cost, an exponential improvement over previous work. Moreover, the number of selected centers is only quasi-linear in k.
Second, I provide a new k-means algorithm for the distributed setting, where the data is distributed across many machines, and a coordinator communicates with these machines to calculate the output clustering. This algorithm guarantees a cost approximation factor and a number of communication rounds that depend only on the computational capacity of the coordinator. Moreover, the algorithm includes a built-in stopping mechanism, which allows it to use fewer communication rounds whenever possible. I show both theoretically and empirically that in many natural cases, indeed 1-4 rounds suffice.
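
For reference, the k-median objective optimized in both parts of the talk charges each point the distance to its nearest selected center. A toy one-dimensional sketch (an illustration only; the talk works in general bounded metric spaces):

```python
def k_median_cost(points, centers, dist):
    """k-median objective: sum over points of the distance to the
    nearest selected center."""
    return sum(min(dist(p, c) for c in centers) for p in points)

# toy 1-D example with absolute-value distance
points = [0.0, 0.1, 0.2, 5.0, 5.1]
print(k_median_cost(points, [0.1, 5.0], lambda a, b: abs(a - b)))  # ~0.3
```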

>> Link to Lecture

Optimal Linear Numeric Planning

By: Dr. Alexander Shleyfman
(Host: Prof. Ronen Brafman​)
Feb. 1, 2022
12:00-13:00

>> Zoom Link
Forward search, such as A*, equipped with an informative heuristic and a strong pruning technique, proved to be one of the most efficient methods to solve planning problems. Most of these heuristics and pruning techniques, however, are currently limited by an assumption that all variables involved have finite domains. While numeric variables play an important, sometimes central, role in many planning problems arising from real world scenarios, most of the currently available heuristic search planners either do not support such variables or impose heavy restrictions on them. In particular, most current planners do not support heuristics and pruning techniques that account for linear numeric effects. The contribution of our work is twofold.
First, we extend symmetry-breaking pruning methods such as DKS and Orbit Space Search, proposed by Domshlak et al. (2012, 2015), to support actions with linear conditions and effects. Our experiments further show that symmetry reduction can yield a substantial reduction in search effort even if the algorithm is equipped with a strong heuristic.
Second, we adapt the well-known classical planning heuristic LM-cut (Helmert and Domshlak, 2009) to account for both additive constant and linear effects, where for linear effects we show a way to efficiently exploit specific variable structures. An empirical comparison shows that the proposed LM-cut heuristic competes favorably with the currently available state-of-the-art heuristics and achieves a significant improvement in coverage.
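
For readers outside planning, the abstract's starting point is heuristic forward search. Below is a textbook A* skeleton (a generic illustration, not the authors' planner; `neighbors` and the admissible heuristic `h` are placeholders):

```python
import heapq

def astar(start, goal, neighbors, h):
    """Generic A*: neighbors(s) yields (successor, cost) pairs; h is an
    admissible heuristic. Returns the cost of a cheapest path, or None."""
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, s = heapq.heappop(open_heap)
        if s == goal:
            return g
        if g > best_g.get(s, float("inf")):
            continue  # stale heap entry
        for t, c in neighbors(s):
            if g + c < best_g.get(t, float("inf")):
                best_g[t] = g + c
                heapq.heappush(open_heap, (g + c + h(t), g + c, t))
    return None

# toy numeric state space: increment or double a counter, unit cost
print(astar(1, 10, lambda s: [(s + 1, 1), (s * 2, 1)], lambda s: 0))  # 4
```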

Bio:
Alex completed his Master's and PhD degrees in the field of Artificial Intelligence at the Faculty of Industrial Engineering and Management at the Technion under the supervision of Prof. Carmel Domshlak. After that, he moved to Canada for a postdoctoral fellowship at the University of Toronto, where he was hosted by Prof. J. Christopher Beck and worked on resource management for cloud services and numeric planning. Today, Alex is back in Israel as a postdoctoral fellow at the Technion, in the Cognitive Robotics lab, hosted by Prof. Erez Karpas.

>> Link to Lecture

On the Role of Data in Algorithm Design

By: Tal Wagner
(Host: Prof. Sivan Sabato)
Jan. 11, 2022
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
Recently, there has been a growing interest in harnessing the power of big datasets and modern machine learning for designing new scalable algorithms. This invites us to rethink the role of data in algorithm design: not just as the input to pre-designed algorithms, but also as a factor that enters the algorithm design process itself, driving it in a strong and possibly automated manner. This talk will show how to leverage data and learning for better algorithm design in two fundamental areas: high-dimensional similarity search and efficient linear algebra.
In particular, I will show the following:
1. Using data-dependent compression, we obtain optimal compressed representations of high-dimensional Euclidean distances. This result marks the first improvement over classical data-oblivious compression schemes, which provably cannot match its performance.
2. Using neural networks, we show how to learn to construct better data structures for high-dimensional similarity search.
3. We then show how those techniques also give rise to fast algorithms for low rank approximation of matrices.
Our algorithms are both proven analytically and implemented and validated empirically, showing improvements over previous methods on standard benchmark datasets.
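
As background for point 1: the classical data-oblivious baseline is a random projection chosen independently of the data, which approximately preserves Euclidean distances (Johnson-Lindenstrauss style). A minimal sketch, for contrast with the data-dependent schemes in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1000, 64
x, y = rng.normal(size=d), rng.normal(size=d)

# data-oblivious compression: a random projection chosen
# without ever looking at the data
P = rng.normal(size=(d, k)) / np.sqrt(k)
print(np.linalg.norm(x - y))          # true distance
print(np.linalg.norm(x @ P - y @ P))  # distance after compression
```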

Bio:
Tal Wagner is a postdoctoral researcher in the Machine Learning Foundations group at Microsoft Research Redmond. His research interests are in designing algorithms for massive high-dimensional datasets and large-scale machine learning. He received his PhD from the EECS department at MIT in September 2020. Previously, he earned his MSc from the Weizmann Institute and BSc from the Technion. During his PhD, he spent time as a research intern in Microsoft, Amazon and VMware.​

Information in Mechanism Design

By: Alon Eden
(Host: Dr. Sigal Oren)
Jan. 4, 2022
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
The modern economy is becoming highly digital, demanding a combination of both economic and computational techniques. The abundance of data and the unique nature of digital markets highlight the special role of information in today's economic systems. In this talk, I will discuss two domains of interest. The first is auction design in the interdependent value (IDV) setting. In IDV settings, buyers hold partial information regarding the quality of the good being sold, but their actual values also depend on information that other buyers have about the good. The second is indirect mechanism design, in which the economic system can only observe partial information regarding the buyers' preferences, if any. In both domains, impossibilities are known for very basic settings. I will show how my research has pushed the boundary of what is possible via a computational approach. For IDV settings, I utilize techniques from approximation algorithms to obtain the first general positive results for combinatorial auctions. For indirect mechanisms, I initiate the use of reinforcement learning for the design of such mechanisms.

>> Link to Lecture

Learning with Less Data and Labels for Language Acquisition and Understanding

By: Elior Sulem
(Host: Prof. Michael Elhadad)
Jan. 2, 2022
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
Natural Language Processing has attracted a lot of interest in recent years and has seen large improvements with the use of contextualized neural language models. However, the progress is limited to specific tasks where large datasets are available, and models are often brittle outside of these datasets. Also, current models are usually pretrained on extremely large unlabeled datasets, which limits our understanding of low-resource scenarios and makes their use infeasible for many in academia and industry. On the other hand, children learn with much less data, which makes the study of child language acquisition an appealing research direction for addressing these issues. In the first part of the talk, I will focus on pretraining on unlabeled data and show what can be learned with an input both quantitatively and qualitatively comparable to that of an average English-speaking six-year-old. In the second part of the talk, I will focus on Natural Language Understanding and show how zero-shot and transfer-learning strategies can be used to go beyond task-specific training. In particular, I will present a zero-shot approach to event extraction, a task usually based on the annotation of large domain-specific corpora, using Textual Entailment (TE) and Question-Answering (QA) tasks. I will show the challenge of missing arguments in this context, presenting new models that identify unanswerable questions by leveraging different QA and TE tasks.
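
As a flavor of the QA-based zero-shot recipe described above, one can pose extraction questions to an off-the-shelf QA model trained to abstain on unanswerable questions. This is a generic illustration only; the model choice is an assumption, not the speaker's system.

```python
from transformers import pipeline

# hypothetical illustration of QA-style argument extraction;
# the model name is an assumption, not the speaker's model
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
context = "BigCo acquired the startup in March for $20 million."
for question in ["Who acquired something?",
                 "When did the acquisition happen?"]:
    result = qa(question=question, context=context)
    print(question, "->", result["answer"], f"(score={result['score']:.2f})")
```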

Bio:
Elior Sulem is a Postdoctoral Researcher at the Department of Computer and Information Science at the University of Pennsylvania, working with Prof. Dan Roth on computational semantics, event extraction and computational psycholinguistics. He completed his Ph.D. in the School of Computer Science and Engineering at the Hebrew University of Jerusalem under the supervision of Prof. Ari Rappoport in 2019, working on the integration of semantic information into text simplification and machine translation. Before that, he graduated with a master's degree (magna cum laude) in Cognitive Science at the Hebrew University of Jerusalem, under the supervision of Prof. Ari Rappoport (Cognitive Sciences Department's prize for outstanding thesis). He previously completed a B.Sc. degree in Mathematics (extended program) and Cognitive Sciences at the Hebrew University. He was a fellow of the Language, Logic and Cognition Center from 2012 to 2016. He received the Best Paper Award Runner-Up at CoNLL 2021.

>> Homepage

Links to related papers:
https://aclanthology.org/2021.conll-1.49/
https://aclanthology.org/2021.findings-emnlp.385.pdf

Informed Data Science

By: Amir Gilad
(Host: Prof. Ehud Gudes)
Dec. 28, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)​​
​Data science has become prevalent in various fields that affect day-to-day lives, such as healthcare, banking, and the job market. The process of developing data science applications usually consists of several automatic systems that manipulate and prepare the data in different manners. Examples of automatic data manipulations and preparations include generating synthetic data, exploring it, repairing the data, and labeling it for machine learning. These systems can be highly complex and even data scientists can find it difficult to understand and verify their output. Moreover, uninformed use of these systems can lead to errors that may affect the quality of the results of such applications.

In the talk, I will highlight prominent challenges in the data science process and present three approaches for addressing them. In particular, I will present a solution that generates natural language explanations for query results, a tool for generating synthetic linked data, and a solution that explains complex queries using abstract database instances.

Bio:

Amir Gilad is a postdoctoral researcher in the Database group at Duke University. He received his Ph.D. in Computer Science from Tel Aviv University under the supervision of Prof. Daniel Deutch. His work focuses on developing tools and algorithms that assist users in understanding and gaining insights into data and the systems that manipulate it. His research relates to classic database tools such as data provenance, as well as natural language processing, causal inference, and privacy.

Amir is the recipient of the VLDB best paper award, the SIGMOD research highlight award, and the Google Ph.D. fellowship for Structured Data and Database Management.

Sublinear-time graph algorithms: motif counting and uniform sampling

By: Talya Eden
(Host: ​​Prof. Michael Elkin)
Dec. 21, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
​In this talk I will survey recent developments in approximate subgraph-counting and subgraph-sampling in sublinear-time. Counting and sampling small subgraphs (aka motifs) are two of the most basic primitives in graph analysis, and have been studied extensively, both in theory and in practice. In my talk, I will present the sublinear-time computational model, where access to the input graph is given only through queries, and will explain some of the concepts that underlie my results. I will then explain how we can get improved bounds for a very natural graph family: the family of bounded arboricity graphs. Finally, I'll present a recent application, where again we take advantage of the unique structure of social networks in order to get improved bounds for a very basic and well-studied problem.

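As a flavor of sampling-based motif counting (a toy illustration only: true sublinear-time algorithms access the graph through queries rather than reading it in full, and the speaker's algorithms differ), wedge sampling estimates the triangle count by checking how many sampled two-edge paths are closed:

```python
import random

def estimate_triangles(adj, samples=20000, seed=0):
    """Wedge-sampling estimate: sample a random wedge (two-edge path),
    check whether it is closed, and rescale."""
    rng = random.Random(seed)
    centers = [v for v in adj if len(adj[v]) >= 2]
    wedges = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in centers]
    closed = 0
    for _ in range(samples):
        v = rng.choices(centers, weights=wedges)[0]
        a, b = rng.sample(sorted(adj[v]), 2)
        closed += b in adj[a]
    # each triangle appears as three closed wedges
    return closed / samples * sum(wedges) / 3

# the complete graph K4 contains exactly 4 triangles
adj = {i: {j for j in range(4) if j != i} for i in range(4)}
print(estimate_triangles(adj))  # ~4.0
```
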
Bio: Talya Eden is a postdoctoral fellow jointly affiliated with MIT and Boston University. She works on sublinear-time graph algorithms and randomized algorithms for huge data sets. Her work focuses on theoretical and applied graph parameter estimation in sublinear-time, as well as graph algorithms in other related areas, such as the streaming model, the massively parallel computation model, and learning-augmented algorithms.

 Talya completed her PhD at the School of Electrical Engineering at Tel Aviv University under the supervision of Prof. Dana Ron, and then continued on to a postdoctoral fellowship with the Foundations of Data Science Institute at MIT.  Talya has won several awards for her work, including the EATCS Distinguished Dissertation Award in Theoretical Computer Science, a Rothschild Postdoctoral Fellowship, a Schmidt Postdoctoral Award, a Ben Gurion University postdoctoral fellowship, a Weinstein Graduate Studies Prize, and the Azrieli Fellows Scholarship.

>> Homepage

>> Link to Lecture

Harnessing Scientific Literature for Boosting Discovery and Innovation

By: Tom Hope
(Host:​ Dr. Yuval Pinter)
Dec. 14, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
With millions of scientific papers coming out every year, researchers are forced to allocate their attention to increasingly narrow areas, creating isolated “research bubbles” that limit knowledge discovery. This fragmentation of science slows down progress and prevents the formation of cross-domain links that catalyze breakthroughs. Toward addressing these large-scale challenges for the future of science, my work develops new paradigms for searching, recommending and discovering scholarly knowledge.
In this talk, I will present methods and systems for recommending research inspirations and creating links across areas and ideas. My approach is based on extracting novel structural representations of scholarly literature, and leveraging them for "matchmaking" between problems and solutions and between scientists. I will also present a new task we introduced for jointly addressing diversity, ambiguity and hierarchy of language in scientific texts. In this new setting, the goal is to learn to construct cross-document coreference clusters along with a hierarchy defined over the clusters. As I will demonstrate, being able to automatically induce this graph structure can help unlock important applications in scientific discovery, but this remains a highly difficult open problem.

Bio:
Tom Hope is a postdoctoral researcher at The Allen Institute for AI (AI2) and The University of Washington, working with Daniel Weld on accelerating scientific discovery and closely collaborating with Eric Horvitz, CSO at Microsoft. Tom completed his PhD with Dafna Shahaf at the Hebrew University of Jerusalem in January 2020. His work has received four best paper awards and appeared in top venues (PNAS, KDD, EMNLP, NAACL, WSDM, CHI, AKBC, IEEE), and his systems for COVID-19 researchers received media attention from Nature and Science. In parallel to his PhD studies, Tom created and led an applied AI research team at Intel that published award-winning work. Tom was selected for the 2021 Global Young Scientists Summit and the 2019 Heidelberg Laureate Forum, and was a member of the KDD 2020 Best Paper Selection Committee.

>> Link to Lecture
Large-scale multi-robot systems: From algorithmic foundations to smart-mobility applications

By: Kiril Solovey
(Host: Prof. Ronen Brafman)​​
Dec. 7, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
​Multi-robot systems are already playing a crucial role in various domains such as manufacturing and warehouse automation. In the future, these systems will be incorporated into our daily lives through drone-delivery services and smart-mobility systems that comprise thousands of autonomous vehicles, to give a few examples. The anticipated benefits of multi-robot systems are numerous, ranging from increased safety and efficiency, to broader societal facets such as sustainability. However, to reap those rewards we must develop algorithms that can adapt rapidly to unexpected changes on a massive scale. Importantly, these algorithms must capture (i) dynamical and collision-avoidance constraints of individual robots, (ii) interactions between multiple robots, and (iii), more broadly, the interaction of those systems with their environment. These considerations give rise to extremely complex and high-dimensional optimization problems that need to be solved in real-time. In this talk, I will present progress on the design of systematic control and decision-making mechanisms to allow the safe, effective, and societally-equitable deployment of multi-robot systems. I will highlight results on fundamental capabilities for multi-robot systems (e.g., motion planning and task allocation), as well as applications in smart-mobility systems. I will also discuss challenges and opportunities for smart mobility in addressing societal issues, including traffic congestion and fairness.

BIO: Kiril Solovey is a roboticist specializing in multi-robot systems and their applications to smart mobility. He is currently a Postdoctoral Scholar in the Faculty of Computer Science at the Technion. Prior to that, he was a postdoc in the Department of Aeronautics and Astronautics, Stanford University. He obtained a PhD in Computer Science from Tel Aviv University, where he was advised by Dan Halperin.

Kiril's research focuses on the design of scalable algorithmic approaches for multi-robot systems. His work draws upon ideas across the disciplines of computer science, engineering, and transportation science, to develop methods with substantial guarantees on solution quality and robustness. He received multiple awards, including the Clore Scholars and Fulbright Postdoctoral Fellowships, best paper awards and nominations (at Robotics: Science and Systems, International Conference on Robotics and Automation, International Symposium on Multi-Robot and Multi-Agent System, and European Control Conference), and teaching awards.

>> Homepage

SELECTED PAPERS:
* Rahul Shome, Kiril Solovey, Andrew Dobson, Dan Halperin, Kostas E. Bekris: dRRT*: Scalable and informed asymptotically-optimal multi-robot motion planning. Auton. Robots 44(3-4): 443-467 (2020)
* Kiril Solovey, Mauro Salazar, Marco Pavone: Scalable and Congestion-Aware Routing for Autonomous Mobility-On-Demand Via Frank-Wolfe Optimization. Robotics: Science and Systems 2019​
* Devansh Jalota, Kiril Solovey, Karthik Gopalakrishnan, Stephen Zoepf, Hamsa Balakrishnan, Marco Pavone: When Efficiency meets Equity in Congestion Pricing and Revenue Refunding Schemes. EAAMO 2021: 9:1-9:11

>> Link to Lecture

Efficient Secure Multi-Party Computation: How Low Can We Go?

By: Ariel Nof
(Host: Prof. Amos Beimel)
Nov. 30, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
In the setting of secure multi-party computation, a set of parties with private inputs wish to compute a joint function of their inputs without revealing anything but the output. This computation is carried out in the presence of an adversary controlling a subset of the parties, who may try to learn private information or disrupt the computation. The problem of constructing efficient protocols for secure computation has gained significant interest in the last decade or so, and progress has been extraordinarily fast, transforming the topic from a notion of purely theoretical interest into a technology that is even being commercialized by multiple companies. A useful metric for measuring the efficiency of secure protocols is the ratio between the communication cost of the protocol and that of the best known protocol with a “minimal” level of security, namely security against semi-honest (passive) adversaries, who act as prescribed by the protocol but try to learn additional information from the messages they receive. In this talk, I will describe recent efforts to minimize this ratio and show, in the honest-majority setting, new results that close (in the amortized sense) the communication gap between protocols secure against adversaries who act in any arbitrary manner and protocols secure against semi-honest adversaries.
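
For readers new to the area, the basic building block behind such protocols is secret sharing, which lets parties compute on data that no single party can see. A toy sketch of additive secret sharing (an illustration, not the talk's protocols):

```python
import secrets

P = 2**61 - 1  # a prime modulus

def share(x, n=3):
    """Additive secret sharing over Z_P: the n shares sum to x mod P,
    and any n-1 of them reveal nothing about x."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

a, b = 12, 30
sa, sb = share(a), share(b)
# addition is "free": each party adds its two shares locally,
# with no communication at all
print(reconstruct([x + y for x, y in zip(sa, sb)]))  # 42
```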

A Theoretical Analysis of Generalization in Graph Convolutional Neural Networks

By: Ron Levie (LMU Munich)
(Host: Dr. Eran Treister)
Nov. 23, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
In recent years, the need to accommodate non-Euclidean structures in data science has brought a boom in deep learning methods on graphs, leading to many practical applications with commercial impact. In this talk, we will review the mathematical foundations of the generalization capabilities of graph convolutional neural networks (GNNs). We will focus mainly on spectral GNNs, where convolution is defined as element-wise multiplication in the frequency domain of the graph.

In machine learning settings where the dataset consists of signals defined on many different graphs, the trained GNN should generalize to graphs outside the training set. A GNN is called transferable if, whenever two graphs represent the same underlying phenomenon, the GNN has similar repercussions on both graphs. Transferability ensures that GNNs generalize if the graphs in the test set represent the same phenomena as the graphs in the training set. We will discuss the different approaches to mathematically model the notions of transferability, and derive corresponding transferability error bounds, proving that GNNs have good generalization capabilities.
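
Concretely, "element-wise multiplication in the frequency domain" means filtering a graph signal in the eigenbasis of the graph Laplacian. A minimal sketch of one spectral convolution (an illustration of the general recipe, not a specific model from the papers below):

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # path graph on 4 nodes
L = np.diag(A.sum(1)) - A                  # combinatorial Laplacian
lam, U = np.linalg.eigh(L)                 # graph Fourier basis

x = np.array([1.0, 0.0, 0.0, 0.0])         # signal on the nodes
g = np.exp(-lam)                           # a spectral filter g(lambda)
y = U @ (g * (U.T @ x))                    # element-wise mult. in frequency
print(y)
```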

Links to related papers:
>> https://arxiv.org/pdf/1907.12972.pdf
>> https://arxiv.org/pdf/1705.07664.pdf

Bio:
Ron Levie received his Ph.D. in applied mathematics in 2018 from Tel Aviv University, Israel. During 2018-2020, he was a postdoctoral researcher with the Research Group Applied Functional Analysis, Institute of Mathematics, TU Berlin, Germany. Since 2021, he has been a researcher in the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence, Department of Mathematics, LMU Munich, Germany. Since 2021, he has also been a consultant on the Huawei project Radio-Map Assisted Pathloss Prediction at the Communications and Information Theory Chair, TU Berlin. He won excellence awards for his M.Sc. and Ph.D. studies, as well as a Minerva postdoctoral fellowship. He is a guest editor at Sampling Theory, Signal Processing, and Data Analysis (SaSiDa), and was a conference chair of the Online International Conference on Computational Harmonic Analysis (Online-ICCHA 2021).
His current research interests are in the theory of deep learning, geometric deep learning, interpretability of deep learning, deep learning in wireless communication, harmonic analysis, signal processing, wavelet theory, uncertainty principles, continuous frames, and randomized methods.
>> Homepage

Towards Revealing Visual Relations Between Fixations: Modeling the Center Bias During Free-Viewing

By: Rotem Mairon
(Adviser: Prof. Ohad Ben Shahar)
Nov. 16, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
Understanding eye movements is essential for comprehending visual selection and has numerous applications in computational vision and human-computer interfaces. Eye-movement research has revealed a variety of behavioral biases that guide the eye regardless of visual content.
However, computational models of eye-movement behavior only partially account for these biases. The most well-known of these biases is the center bias, which refers to the tendency of observers to fixate on the center of the stimulus. In this study, we investigate two aspects of the center bias to help distinguish it from visual content-driven behavior. The first aspect concerns standard procedures for collecting eye-movement data during free viewing, specifically the requirement to fixate on the center of the display prior to the appearance of the stimulus, as well as the presentation of the stimulus at the display's center. Our findings support a fundamental shift in data-collection and analysis procedures, all in the name of obtaining data that reflects more content-driven and less bias-related eye-movement behavior. The second aspect is how the center bias manifests during viewing, as opposed to how it is reflected in the spatial distribution of aggregated fixations, which is used in almost all computational models of eye-movement behavior. To that end, this work proposes an eye-movement data representation based on saccades rather than fixations. This representation not only demonstrates the dynamic nature of the center bias throughout viewing, but also holistically captures other eye-movement phenomena that were previously investigated only in isolation. Finally, we demonstrate that the proposed representation allows for more accurate modeling of eye-movement biases, which is critical for establishing a baseline for computational models. Such a baseline paves the way for researchers to learn how, if at all, visual content guides the human eye from one fixation to the next while freely viewing natural images.

Mind Reading: Decoding Visual Experience from Brain Activity

By: Guy Gaziv (WIS)
(Host: Dr. Eran Treister)
Nov. 9, 2021
12:00-13:00
Bldg. 37, Room 202

Reconstructing and semantically classifying observed natural images from novel (unknown) fMRI brain recordings is a milestone for developing brain-machine interfaces and for the study of consciousness. Unfortunately, acquiring sufficient “paired” training examples (images with their corresponding fMRI recordings) to span the huge space of natural images and their semantic classes is prohibitive, even in the largest image-fMRI datasets. We present a self-supervised deep learning approach that overcomes this barrier, giving rise to better generalizing fMRI-to-image decoders. This is obtained by enriching the scarce paired training data with additional easily accessible “unpaired” data from both domains (i.e., images without fMRI, and fMRI without images). Our approach achieves state-of-the-art results in image reconstruction from fMRI responses, as well as unprecedented large-scale (1000-way) semantic classification of never-before-seen classes.

Bio:
Guy Gaziv is a postdoctoral fellow in the Michal Irani Computer Vision group at The Weizmann Institute of Science. This spring he will be starting his postdoc at the DiCarlo Lab at MIT. His PhD research focused on the intersection between machine and human vision, and specifically on decoding visual experience from brain activity. Guy earned his BSc in Electrical and Computer Engineering from The Hebrew University of Jerusalem, and his MSc in Physics and PhD in Computer Science from The Weizmann Institute of Science.

Research-oriented gathering of students and faculty members​​

By: Various Faculty Members
(Host: Dr. Eran Treister)
Nov. 2, 2021
12:00-13:00
Bldg. 37, Room 202

You are invited to a small research-oriented gathering of students and faculty members on Nov 2nd, at 12:00, room 202/37 (invitation attached). The gathering is mostly intended for new students who have not yet found an adviser, but everyone is welcome.

There will be six faculty members presenting their research briefly, two of whom are new to the department. These faculty members, and others who will also attend the meeting, are looking for students, and this is a good opportunity to talk to them. We will also encourage faculty members to be available for personal meetings with you.

So, if you haven't done so already, we encourage you to visit our department's research page and follow the links according to your field(s) of interest. Try to read about the research fields and visit the web pages of the relevant faculty members. Once you find a faculty member whose work looks relevant, email him/her and try to schedule a meeting (optionally, but not necessarily, on that day).

Scaling laws for deep learning

By: Jonathan Rosenfeld​
(Host: Dr. Eran Treister)
Oct. 26, 2021
12:30-13:30

>> Zoom Link
​Running faster will only get you so far -- it is generally advisable to first understand where the roads lead, then get a car ...

The renaissance of machine learning (ML) and deep learning (DL) over the last decade has been accompanied by an unscalable computational cost, limiting DL's advancement and weighing on the field in practice. In this talk, we take a systematic approach to address the algorithmic and methodological limitations at the root of these costs. We first demonstrate that DL training and pruning are predictable and governed by scaling laws -- for state-of-the-art models and tasks, spanning image classification and language modeling, as well as state-of-the-art model compression via iterative pruning. Predictability, via the establishment of these scaling laws, provides a path for principled design and trade-off reasoning, currently largely lacking in the field. We then analyze the sources of the scaling laws, offering an approximation-theoretic view and showing, through the exploration of a noiseless realizable case, that DL is in fact dominated by error sources very far from the lower error limit. We conclude by building on the gained theoretical understanding of the scaling laws' origins. We present a conjectural path to eliminate one of the current dominant error sources -- through a data bandwidth-limiting hypothesis and the introduction of Nyquist learners -- which can, in principle, reach the generalization error lower limit (e.g., 0 in the noiseless case) at finite dataset size.
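
As an illustration of what "predictable and governed by scaling laws" buys in practice, one can fit an error-versus-dataset-size law and extrapolate. The data below is invented, and the saturating power-law form err(n) ~ a * n^(-b) + c is the commonly used functional family, not necessarily the talk's exact formulation:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Saturating power law: error decays as n^(-b) toward a floor c."""
    return a * n ** (-b) + c

n = np.array([1e3, 1e4, 1e5, 1e6])
err = np.array([0.42, 0.21, 0.12, 0.08])   # made-up error measurements
(a, b, c), _ = curve_fit(power_law, n, err, p0=(1.0, 0.3, 0.05))
print(f"fit: {a:.2f} * n^(-{b:.2f}) + {c:.2f}")
print("predicted error at n=1e7:", power_law(1e7, a, b, c))
```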

Expanding the hub and spoke model

By: Clara Shikhelman (Chaincode Labs)​
(Host: Dr. Or Sattath)
Oct. 19, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
When two parties transact often using banks, Bitcoin, or any other financial device that entails fees, they might choose to turn to second-layer solutions. In these solutions, both parties lock a certain amount as escrow in a channel and transact freely with reduced fees. This gives rise to a network of channels in which any two nodes can transact with each other.

A downside of these solutions, especially if they are decentralized, is the tendency of the network toward the hub and spoke model. Most nodes in this model, the “spokes”, are connected to a single “hub”, a high-degree node that the spokes believe to be well connected. This graph topology is known to be prone to various attacks and privacy issues.

In this work, we present a solution that benefits the spokes, by saving on costs, and the network, by giving it higher connectivity and good expansion properties. We use probabilistic and spectral techniques for the proofs.​
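
To see the fragility of the hub-and-spoke topology concretely, here is a toy sketch (an illustration, unrelated to the paper's proofs): deleting the single hub disconnects a star network, whereas a cycle survives the removal of any one node:

```python
def connected_after_removal(A, v):
    """Is the graph with adjacency matrix A still connected after
    deleting vertex v?  (depth-first search over remaining vertices)"""
    keep = [u for u in range(len(A)) if u != v]
    seen, stack = {keep[0]}, [keep[0]]
    while stack:
        u = stack.pop()
        for w in keep:
            if A[u][w] and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(keep)

n = 6
star = [[1 if (i == 0) != (j == 0) else 0 for j in range(n)] for i in range(n)]
ring = [[1 if abs(i - j) in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
print(connected_after_removal(star, 0))  # False: removing the hub disconnects
print(connected_after_removal(ring, 0))  # True
```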

Short Bio: Clara Shikhelman holds a PhD in mathematics from Tel Aviv University. Following a year at the Simons Institute in Berkeley and at Caltech, she is currently a postdoc at Chaincode Labs.