
For updates about future department seminars, you are welcome to join our mailing list by sending an empty e-mail message to: cs_seminar-join@mlists.cs.bgu.ac.il.




Efficient Secure Multi-Party Computation: How Low Can We Go?

By: Ariel Nof
(Host: Prof. Amos Beimel)
Nov. 30, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
In the setting of secure multi-party computation, a set of parties with private inputs wish to compute a joint function of their inputs, without revealing anything but the output. This computation is carried out in the presence of an adversary controlling a subset of the parties, who may try to learn private information or disrupt the computation. The problem of constructing efficient protocols for secure computation has gained significant interest in the last decade or so, and progress has been extraordinarily fast, transforming the topic from a notion of theoretical interest only into a technology that is even being commercialized by multiple companies. A useful metric for measuring the efficiency of secure protocols is the ratio between the communication cost of the protocol and that of the best known protocol with a “minimal” level of security, namely security against semi-honest (passive) adversaries, who act as prescribed by the protocol but try to learn additional information from the messages they receive. In this talk, I will describe recent efforts to minimize this ratio and show, in the honest-majority setting, new results that close (in the amortized sense) the communication gap between secure protocols against adversaries who act in any arbitrary manner and protocols against semi-honest adversaries.
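To make the semi-honest baseline concrete, here is a minimal illustrative sketch of additive secret sharing, a building block behind many honest-majority protocols (a toy example for exposition, not a protocol from the talk): three parties compute the sum of their private inputs, and no single party's view reveals anything beyond the output.

```python
import random

P = 2**61 - 1  # a large prime; all arithmetic is modulo P

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover a shared value by summing all its shares mod P."""
    return sum(shares) % P

# Each party secret-shares its private input among all three parties.
inputs = [7, 13, 42]
all_shares = [share(x) for x in inputs]

# Party i locally adds the i-th share of every input; each individual
# share is uniformly random, so no single party learns anything.
local_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]

# Publishing only the local sums reveals the output, and nothing more.
assert reconstruct(local_sums) == sum(inputs) % P
print("joint sum:", reconstruct(local_sums))
```

A malicious party could deviate from these steps arbitrarily; the extra communication needed to tolerate such behavior, relative to this passive baseline, is precisely the ratio the talk aims to drive down.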

A Theoretical Analysis of Generalization in Graph Convolutional Neural Networks

By: Ron Levie (LMU Munich)
(Host: Dr. Eran Treister)
Nov. 23, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
In recent years, the need to accommodate non-Euclidean structures in data science has brought a boom in deep learning methods on graphs, leading to many practical applications with commercial impact. In this talk, we will review the mathematical foundations of the generalization capabilities of graph convolutional neural networks (GNNs). We will focus mainly on spectral GNNs, where convolution is defined as element-wise multiplication in the frequency domain of the graph.
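As a concrete toy example of such a spectral convolution (an illustrative sketch on a small undirected graph, assumed here for exposition): a node signal is mapped into the Laplacian eigenbasis, multiplied element-wise by a filter response, and mapped back.

```python
import numpy as np

# A small undirected graph on 4 nodes, given by its adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

# The graph Fourier basis: eigenvectors of the symmetric Laplacian.
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, 0.0, 0.0, 0.0])  # a signal on the nodes
h = np.exp(-0.5 * eigvals)          # a low-pass filter response h(lambda)

# Spectral convolution: element-wise multiplication in the frequency domain.
x_hat = U.T @ x        # graph Fourier transform of the signal
y = U @ (h * x_hat)    # filter, then inverse transform back to the nodes
print(y)
```

Because the filter h acts on Laplacian eigenvalues rather than on one fixed graph, the same filter can be applied to different graphs, which is what makes the transferability question below meaningful.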

In machine learning settings where the dataset consists of signals defined on many different graphs, the trained GNN should generalize to graphs outside the training set. A GNN is called transferable if, whenever two graphs represent the same underlying phenomenon, the GNN has similar repercussions on both graphs. Transferability ensures that GNNs generalize if the graphs in the test set represent the same phenomena as the graphs in the training set. We will discuss the different approaches to mathematically model the notions of transferability, and derive corresponding transferability error bounds, proving that GNNs have good generalization capabilities.

Links to related papers:
>> https://arxiv.org/pdf/1907.12972.pdf
>> https://arxiv.org/pdf/1705.07664.pdf

Bio:
Ron Levie received his Ph.D. in applied mathematics from Tel Aviv University, Israel, in 2018. During 2018-2020, he was a postdoctoral researcher with the Research Group Applied Functional Analysis, Institute of Mathematics, TU Berlin, Germany. Since 2021, he has been a researcher in the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence, Department of Mathematics, LMU Munich, Germany. Since 2021, he has also been a consultant on the Huawei project Radio-Map Assisted Pathloss Prediction at the Communications and Information Theory Chair, TU Berlin. He won excellence awards for his M.Sc. and Ph.D. studies, as well as a Minerva Postdoctoral Fellowship. He is a guest editor at Sampling Theory, Signal Processing, and Data Analysis (SaSiDa), and was a conference chair of the Online International Conference on Computational Harmonic Analysis (Online-ICCHA 2021).
His current research interests are in the theory of deep learning, geometric deep learning, interpretability of deep learning, deep learning in wireless communication, harmonic analysis, signal processing, wavelet theory, uncertainty principles, continuous frames, and randomized methods.
>> Homepage

Towards Revealing Visual Relations Between Fixations: Modeling the Center Bias During Free-Viewing

By: Rotem Mairon
(Adviser: Prof. Ohad Ben Shahar)
Nov. 16, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
(Hybrid)
Understanding eye movements is essential for comprehending visual selection and has numerous applications in computational vision and human-computer interfaces. Eye-movement research has revealed a variety of behavioral biases that guide the eye regardless of visual content.
However, computational models of eye-movement behavior only partially account for these biases. The most well-known of these is the center bias: the tendency of observers to fixate on the center of the stimulus. In this study, we investigate two aspects of the center bias to help distinguish it from visual content-driven behavior. The first concerns standard procedures for collecting eye-movement data during free viewing, specifically the requirement to fixate on the center of the display before the stimulus appears, and the presentation of the stimulus at the display's center. Our findings support a fundamental shift in data collection and analysis procedures, all in the name of obtaining data that reflects more content-driven and less bias-related eye-movement behavior. The second aspect is how the center bias manifests during viewing, as opposed to how it is reflected in the spatial distribution of aggregated fixations, which is used in almost all computational models of eye-movement behavior. To that end, this work proposes an eye-movement data representation based on saccades rather than fixations. This representation not only demonstrates the dynamic nature of the center bias throughout viewing, but also holistically captures other eye-movement phenomena that were previously investigated only in isolation. Finally, we demonstrate that the proposed representation allows for more accurate modeling of eye-movement biases, which is critical for establishing a baseline for computational models. Such a baseline paves the way for researchers to learn how, if at all, visual content guides the human eye from one fixation to the next while freely viewing natural images.
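A saccade-based representation of this kind can be sketched as follows (an illustrative reconstruction with hypothetical coordinates, not the author's code): a temporally ordered fixation sequence is converted into displacement vectors whose amplitudes and directions can then be aggregated across trials.

```python
import numpy as np

# Fixations as (x, y) screen coordinates in temporal order
# (hypothetical values; real data would come from an eye tracker).
fixations = np.array([[512, 384], [430, 300], [610, 320], [580, 450]], float)

# A saccade is the displacement vector between consecutive fixations.
saccades = np.diff(fixations, axis=0)

amplitudes = np.linalg.norm(saccades, axis=1)       # saccade lengths (pixels)
directions = np.degrees(np.arctan2(saccades[:, 1],  # saccade angles (degrees)
                                   saccades[:, 0]))

# Aggregating saccade vectors over many trials (e.g., as a 2D histogram of
# endpoints relative to the current fixation) exposes how biases such as
# the center bias evolve during viewing, which fixation maps average away.
print(amplitudes)
print(directions)
```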

Mind Reading: Decoding Visual Experience from Brain Activity

By: Guy Gaziv (WIS)
(Host: Dr. Eran Treister)
Nov. 9, 2021
12:00-13:00
Bldg. 37, Room 202

Reconstructing and semantically classifying observed natural images from novel (unknown) fMRI brain recordings is a milestone for developing brain-machine interfaces and for the study of consciousness. Unfortunately, acquiring sufficient “paired” training examples (images with their corresponding fMRI recordings) to span the huge space of natural images and their semantic classes is prohibitive, even in the largest image-fMRI datasets. We present a self-supervised deep learning approach that overcomes this barrier, giving rise to better generalizing fMRI-to-image decoders. This is obtained by enriching the scarce paired training data with additional easily accessible “unpaired” data from both domains (i.e., images without fMRI, and fMRI without images). Our approach achieves state-of-the-art results in image reconstruction from fMRI responses, as well as unprecedented large-scale (1000-way) semantic classification of never-before-seen classes.
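The paired-plus-unpaired training idea can be caricatured in a few lines of PyTorch (a schematic sketch in which toy linear networks and random tensors stand in for the real encoder, decoder, and data; this is not the authors' implementation).

```python
import torch
import torch.nn as nn

IMG_DIM, FMRI_DIM = 64, 32  # assumed toy dimensions

enc = nn.Linear(IMG_DIM, FMRI_DIM)   # image -> predicted fMRI response
dec = nn.Linear(FMRI_DIM, IMG_DIM)   # fMRI  -> reconstructed image
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
mse = nn.MSELoss()

# Scarce paired data, plus plentiful unpaired data from each domain.
img_p, fmri_p = torch.randn(8, IMG_DIM), torch.randn(8, FMRI_DIM)
img_u = torch.randn(64, IMG_DIM)    # images without fMRI
fmri_u = torch.randn(64, FMRI_DIM)  # fMRI without images

for step in range(100):
    # Supervised losses on the few paired examples.
    loss = mse(dec(fmri_p), img_p) + mse(enc(img_p), fmri_p)
    # Self-supervised cycle losses on unpaired data: mapping into the
    # other domain and back should return the input.
    loss = loss + mse(dec(enc(img_u)), img_u) + mse(enc(dec(fmri_u)), fmri_u)
    opt.zero_grad()
    loss.backward()
    opt.step()
```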

Bio:
Guy Gaziv is a postdoctoral fellow in the Michal Irani Computer Vision group at The Weizmann Institute of Science. This spring he will be starting his postdoc at the DiCarlo Lab at MIT. His PhD research focused on the intersection between machine and human vision, and specifically on decoding visual experience from brain activity. Guy earned his BSc in Electrical and Computer Engineering from The Hebrew University of Jerusalem, and his MSc in Physics and PhD in Computer Science from The Weizmann Institute of Science.

Research-oriented gathering of students and faculty members

By: Various Faculty Members
(Host: Dr. Eran Treister)
Nov. 2, 2021
12:00-13:00
Bldg. 37, Room 202

You are invited to a small research-oriented gathering of students and faculty members on Nov. 2nd at 12:00, in room 202/37 (invitation attached). The gathering is mostly intended for new students who have not yet found an adviser, but everyone is welcome.

Six faculty members, two of whom are new to the department, will briefly present their research. These faculty members, and others who will also attend the meeting, are looking for students, so this is a good opportunity to talk to them. We will also encourage faculty members to be available for personal meetings with you.

So, if you haven't done so already, we encourage you to visit our department's research page and follow the links according to your field(s) of interest. Try to read about the research fields and visit the web pages of the relevant faculty members. Once you find a faculty member whose work looks relevant, email them and try to schedule a meeting (optionally, but not necessarily, on that day).

Scaling laws for deep learning

By: Jonathan Rosenfeld
(Host: Dr. Eran Treister)
Oct. 26, 2021
12:30-13:30

>> Zoom Link
Running faster will only get you so far -- it is generally advisable to first understand where the roads lead, then get a car ...

The renaissance of machine learning (ML) and deep learning (DL) over the last decade has been accompanied by an unscalable computational cost, limiting advancement and weighing on the field in practice. In this talk, we take a systematic approach to address the algorithmic and methodological limitations at the root of these costs. We first demonstrate that DL training and pruning are predictable and governed by scaling laws -- for state-of-the-art models and tasks, spanning image classification and language modeling, as well as state-of-the-art model compression via iterative pruning. Predictability, via the establishment of these scaling laws, provides a path toward principled design and trade-off reasoning, currently largely lacking in the field. We then analyze the sources of the scaling laws, offering an approximation-theoretic view and showing, through the exploration of a noiseless realizable case, that DL is in fact dominated by error sources very far from the lower error limit. We conclude by building on the gained theoretical understanding of the scaling laws' origins. We present a conjectural path to eliminate one of the currently dominant error sources -- through a data bandwidth limiting hypothesis and the introduction of Nyquist learners -- which can, in principle, reach the generalization error lower limit (e.g., 0 in the noiseless case) at finite dataset size.
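The kind of scaling law in question can be illustrated by fitting a saturating power law, err(n) = a * n^(-b) + c, to error-versus-dataset-size measurements (a sketch with hypothetical data points, not results from the talk):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Saturating power law: error(n) = a * n**(-b) + c."""
    return a * n ** (-b) + c

# Hypothetical (dataset size, test error) measurements.
n = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 3e5])
err = np.array([0.42, 0.31, 0.22, 0.17, 0.135, 0.118])

(a, b, c), _ = curve_fit(power_law, n, err, p0=[1.0, 0.5, 0.1])
print(f"fit: err(n) = {a:.2f} * n^(-{b:.2f}) + {c:.3f}")

# Once fitted, the law supports exactly the trade-off reasoning described
# above: extrapolating the expected error at a budget not yet tried.
print("predicted error at n = 1e6:", power_law(1e6, a, b, c))
```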

Expanding the hub and spoke model

By: Clara Shikhelman (Chaincode Labs)
(Host: Dr. Or Sattath)
Oct. 19, 2021
12:00-13:00
Bldg. 37, Room 202

>> Zoom Link
When two parties frequently transact using banks, Bitcoin, or any other financial instrument that entails fees, they might choose to turn to second-layer solutions. In these solutions, both parties lock a certain amount as escrow in a channel and then transact freely with reduced fees. This gives rise to a network of channels in which any two nodes can transact with each other.
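As a toy illustration of such a channel (a minimal sketch, not an actual second-layer protocol such as Lightning): two parties lock deposits, exchange balance updates off-chain with no per-transaction fee, and settle only the final balance.

```python
class Channel:
    """A toy payment channel: escrowed deposits, off-chain balance updates."""

    def __init__(self, deposit_a, deposit_b):
        # Both parties lock an escrow amount when the channel opens.
        self.balance = {"A": deposit_a, "B": deposit_b}

    def pay(self, sender, receiver, amount):
        # An off-chain update: just a newly agreed split of the escrow.
        if self.balance[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balance[sender] -= amount
        self.balance[receiver] += amount

    def close(self):
        # Only the final balance is settled on-chain.
        return self.balance

ch = Channel(deposit_a=100, deposit_b=50)
ch.pay("A", "B", 30)
ch.pay("B", "A", 10)
print(ch.close())  # {'A': 80, 'B': 70}
```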

A downside of these solutions, especially if they are decentralized, is the tendency of the network toward the hub-and-spoke model. Most nodes in this model, the “spokes”, are connected to a single “hub”, a high-degree node that the spokes believe to be well connected. This graph topology is known to be prone to various attacks and privacy issues.

In this work, we present a solution that benefits the spokes, by saving on costs, and the network, by giving it higher connectivity and good expansion properties. We use probabilistic and spectral techniques for the proofs.
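To see the fragility of the hub-and-spoke topology, and the benefit of good expansion, one can compare what happens when the best-connected node disappears (an illustrative computation, not one from the paper):

```python
import networkx as nx

n = 50
star = nx.star_graph(n - 1)                       # one hub, n - 1 spokes
expander = nx.random_regular_graph(3, n, seed=0)  # every node has 3 channels

for name, g in [("hub-and-spoke", star), ("3-regular", expander)]:
    g = g.copy()
    hub = max(g.nodes, key=g.degree)  # the highest-degree node
    g.remove_node(hub)                # the hub goes offline or misbehaves
    print(name, "->", nx.number_connected_components(g), "components")

# The star shatters into n - 1 isolated spokes, while the sparse random
# regular graph (a typical good expander) almost surely stays connected.
```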

Short Bio: Clara Shikhelman holds a Ph.D. in mathematics from Tel Aviv University. Following a year at the Simons Institute in Berkeley and at Caltech, she is currently a postdoc at Chaincode Labs.