News and Reports

Oct. 02, 2017


The Department of Computer Science is happy to announce that three excellent new faculty members are joining the department, starting October 2017:

Dr. Meirav Zehavi, Dr. Or Sattath, and Dr. Klim Efremenko.
Below are short descriptions of their research and the exciting new classes they will teach this year.


Dr. Meirav Zehavi

From September 2015 until October 2017, I was a postdoctoral research fellow at the University of Bergen (Norway), Simons-Berkeley (US), and I-CORE (TAU). I obtained my PhD and MSc from the Technion in 2015 and 2013, respectively. My research interests lie in the field of Parameterized Complexity. I am an author of a book on Kernelization. During my PhD and postdoctoral studies, I was awarded several prizes and fellowships, including three Best Student Paper awards. For more information about me, please visit https://sites.google.com/site/zehavimeirav/.


My research interests lie in the field of Parameterized Complexity. Specifically, I am interested in (i) the design of parameterized algorithms, (ii) the study of hardness and lower bounds with respect to parameterized algorithms, and (iii) the subfield of Kernelization. In what follows, let me elaborate on one of these interests, namely the subfield of Kernelization, which might be less familiar than the other two.

Preprocessing is an integral part of almost any application, ranging from lossless data compression and navigation systems to microarray data analysis for the classification of cancer types. A natural question in this regard is how to measure the quality of data reductions proposed for a specific problem. Yet, for a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. With the advent of Parameterized Complexity, where every instance comes equipped with a parameter k (say, the size of the solution we seek), a sound paradigm called Kernelization has been developed specifically to analyze the theory behind the rich world of preprocessing. Roughly speaking, a problem admits a kernel if there exists a polynomial-time algorithm that, given any instance of the problem, translates it into an equivalent instance of the same problem of size f(k), for some function f that depends only on the parameter k. In particular, the function f is independent of the input size n! For example, if f(k) = k^2, k = 10, and n is as large as can be desired, then in polynomial time we can find an equivalent instance of size 100, possibly solvable by hand. The central question in Kernelization is which problems admit kernels of size f(k) where f is polynomial in k, called polynomial kernels. When a problem Π is shown to admit a polynomial kernel, the second question is how small this kernel can be.
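To make this concrete, here is a minimal Python sketch of the textbook kernelization for the Vertex Cover problem parameterized by the solution size k. Vertex Cover and this reduction rule (Buss's rule) are not taken from the text above; they are just the standard first example. The rule: a vertex with more than k incident edges must be in every cover of size at most k, and once no such vertex remains, any yes-instance has at most k^2 edges.

    def vertex_cover_kernel(edges, k):
        # Buss's rule: a vertex with more than k incident edges must be in
        # every vertex cover of size <= k, so take it and decrease the budget.
        edges = {frozenset(e) for e in edges}
        changed = True
        while changed and k >= 0:
            changed = False
            degree = {}
            for e in edges:
                for v in e:
                    degree[v] = degree.get(v, 0) + 1
            for v, d in degree.items():
                if d > k:
                    edges = {e for e in edges if v not in e}
                    k -= 1
                    changed = True
                    break
        # Once no vertex has degree > k, a cover of size k touches at most
        # k * k edges, so any larger instance is provably a no-instance.
        if k < 0 or len(edges) > k * k:
            return None                 # no cover of size <= k exists
        return edges, k                 # equivalent instance with <= k^2 edges

    # A triangle plus a star around vertex 3; parameter k = 2.
    print(vertex_cover_kernel([(1, 2), (2, 3), (1, 3), (3, 4), (3, 5), (3, 6)], 2))
    # -> ({frozenset({1, 2})}, 1): vertex 3 was forced into the cover.

Note that the returned instance has at most k^2 edges no matter how large the original graph was, which is exactly the guarantee in the definition above.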

Teaching this year: 

Next semester I will be teaching a class on Parameterized Algorithms. In recent years, the field of Parameterized Complexity has been growing by leaps and bounds. We will learn the basics of the field as well as a few recent advances.


Dr. Or Sattath

Or Sattath did his PhD at the Hebrew University, followed by two postdocs: one at U.C. Berkeley and the second at the Hebrew University and MIT. He received a Clore scholarship for his PhD studies and a Simons fellowship during his postdoc.

His research area is quantum computing. Quantum mechanics is the most established physical theory we have describing nature, especially at the microscopic scale. Quantum computers are computing devices which use quantum mechanics to their advantage. There is a heroic effort these days to build such devices, and proof-of-concept "quantum calculators", i.e. very small quantum computers, have been successfully built in recent years. Building these devices is mainly a task for experimental physicists. Or is interested in understanding which computational tasks quantum computers are good for (such as fast algorithms for certain problems, and cryptographic tasks that are impossible to achieve with a classical computer).

Here is one neat example: there is no law of nature which makes the bills and coins we all have in our pockets hard to forge, as classical information, at least in principle, can be copied. Quantum money is a scheme which has the three main properties we want from money: it is (computationally) easy for the central bank to issue new money, it is (computationally) easy for all users to verify valid money, and it is (computationally) hard to forge money. The security of quantum money schemes is based on a fundamental result in quantum mechanics, known as the no-cloning theorem: it is impossible to copy a quantum state. Bitcoin - a peer-to-peer network which has gained a lot of traction recently - uses a classical approach to solve the same problem, but suffers from an enormous efficiency overhead: in order to keep people from cheating (double spending), all the nodes in the network have to keep track of, and agree on, the set of all the transactions that have ever been made in the Bitcoin network. In other words, when Alice buys a cup of coffee from Bob, Charlie has to be informed about it. For this reason, and unlike quantum money, the Bitcoin network can only process 4 transactions per second.
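As an illustration of the three properties, here is a toy Python sketch in the spirit of Wiesner's private-key quantum money scheme. The scheme is not described in the announcement above, and the code only simulates the one quantum behavior that matters here: measuring a qubit in the wrong basis disturbs it, which is the no-cloning obstacle a forger faces.

    import random

    class Qubit:
        # A BB84-style qubit: a bit prepared in one of two bases (0 or 1).
        # This is a classical simulation, not real quantum hardware.
        def __init__(self, basis, bit):
            self.basis, self.bit = basis, bit

        def measure(self, basis):
            if basis != self.basis:
                # Measuring in the wrong basis yields a random outcome and
                # disturbs the state -- the simulated no-cloning obstacle.
                self.basis, self.bit = basis, random.randint(0, 1)
            return self.bit

    class Bank:
        def __init__(self):
            self.records = {}               # serial -> secret (bases, bits)

        def mint(self, n=64):               # easy for the bank: issue money
            serial = len(self.records)
            bases = [random.randint(0, 1) for _ in range(n)]
            bits = [random.randint(0, 1) for _ in range(n)]
            self.records[serial] = (bases, bits)
            return serial, [Qubit(b, x) for b, x in zip(bases, bits)]

        def verify(self, serial, qubits):   # easy to verify valid money
            bases, bits = self.records[serial]
            return all(q.measure(b) == x
                       for q, b, x in zip(qubits, bases, bits))

    bank = Bank()
    serial, note = bank.mint()
    print(bank.verify(serial, note))        # True: genuine notes always pass

    # Hard to forge: a counterfeiter must guess each basis, and every wrong
    # guess randomizes that qubit, so the copy fails verification.
    serial, note = bank.mint()
    copy = []
    for q in note:
        guess = random.randint(0, 1)
        copy.append(Qubit(guess, q.measure(guess)))
    print(bank.verify(serial, copy))        # False with prob. ~1 - (3/4)^64

Each copied qubit passes the bank's check with probability 3/4 (a correct basis guess, or a lucky random outcome), so a forged note with 64 qubits verifies with probability roughly (3/4)^64, which is negligible.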

Teaching this year: 

This fall Or will teach Complexity of Computations, a mandatory first-semester course in the Master's program. In the spring, he will teach Quantum Computation.


Dr. Klim Efremenko

I did my Ph.D. at Tel-Aviv University, and I was a member of the Institute for Advanced Study in Princeton, a Simons Fellow at the University of Chicago, and a postdoc at the Simons Institute at Berkeley.

My main research focus is Error Correcting Codes, and I have a broad interest in theoretical computer science and mathematics. Most of my recent research is in the field of Interactive Communication.

Classic error correcting codes are designed to encode messages sent from one party to another. They are optimized to correct a large number of errors while still having efficient encoding and decoding algorithms. Most modern communication, however, is interactive: two or more parties communicate and respond to each other's messages. Yet the coding schemes in use are still those that were optimized decades ago for single messages. This creates the opportunity to design better codes for interactive communication. To do so, we analyze the capacity of interactive channels and the limitations of coding in interactive settings.
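For contrast, here is a minimal Python sketch of the classic one-way setting this paragraph starts from, using a simple repetition code (the research above concerns far more refined codes; this is only the baseline illustration): every bit is sent r times, and the receiver takes a majority vote per block.

    def encode(bits, r=3):
        # Classic one-way error correction: send each bit r times.
        return [b for bit in bits for b in [bit] * r]

    def decode(received, r=3):
        # A majority vote per block corrects up to r // 2 flips per bit.
        blocks = [received[i:i + r] for i in range(0, len(received), r)]
        return [1 if sum(block) > r // 2 else 0 for block in blocks]

    msg = [1, 0, 1, 1]
    sent = encode(msg)           # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
    sent[1] ^= 1                 # the channel flips one symbol...
    sent[4] ^= 1                 # ...and another, in a different block
    assert decode(sent) == msg   # the majority vote recovers the message

In an interactive protocol this per-message approach falls short: each party's next message depends on what it just received, so an uncorrected error in one round corrupts everything that follows, which is one reason dedicated interactive coding schemes are needed.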

Teaching this year: 

This fall I will teach Information Theory. In this class, I hope to emphasize how one can define information for protocols, and how this notion is used.