The Friedman Award is granted by the department each year, in the name of Dr. Yossi Friedman, a graduate of the department and one of its significant donors, to PhD students who have demonstrated outstanding research achievements.
This year's winner is Moshe Eliasof, a PhD student and M.Sc. graduate of the department.
The award ceremony will be held in a festive colloquium on Tuesday, 1 June, at 12:00, during which Moshe will describe some of his research.
Congratulations to Moshe and to his research advisor, Dr. Eran Treister, with best wishes for continued fruitful and impressive research.
Moshe's research abstract, under the supervision of Dr. Eran Treister:

My research interest is the representation of data in deep learning frameworks - from geometric data such as point clouds and meshes to biological structures such as proteins, as well as the learned feature space. Specifically, I study graph neural networks and lightweight convolutional neural networks, harnessing ideas from partial differential equations and multigrid methods to obtain more efficient and explainable learning frameworks.

In my talk I will present the following work:

MGIC: Multigrid-in-Channels Neural Network Architectures

Convolutional neural networks (CNNs) are renowned for their success in a wide range of fields and applications. However, applying CNNs typically comes at the price of computationally expensive systems that consume large amounts of energy and time. This cost is undesirable, especially on edge devices such as smartphones and vehicles. To this end, CNN architectures like MobileNets and ShuffleNets were proposed as lightweight alternatives to the standard CNN blocks, reducing the computational cost while retaining similar accuracy. However, the frequent use of 1x1 convolutions in such networks still imposes quadratic growth of the number of parameters with respect to the number of channels (the width) of the network.

In this work, we address the redundancy in CNNs and the quadratic scaling problem by introducing a multigrid-in-channels (MGIC) approach. Our MGIC architectures replace each CNN block with an MGIC counterpart that utilizes a hierarchy of nested grouped convolutions of small group size to address the problems above.
We show that our proposed architectures scale linearly with respect to the network's width while retaining full coupling of the channels, as in standard CNNs. Our extensive experiments on image classification, segmentation, and point cloud classification show that applying this strategy to different architectures, such as ResNet and MobileNetV3, reduces the number of parameters while obtaining similar or better accuracy.

This is joint work with Jonathan Ephrath, Lars Ruthotto, and Eran Treister. I presented this work at the SIAM Conference on Multigrid Methods 2021, where it was nominated for the best student paper award.
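The scaling argument in the abstract can be sketched with a small parameter-count exercise: a standard 1x1 convolution over c channels has c x c weights (quadratic in the width), while splitting the channels into groups of a fixed small size keeps the count linear in c. The sketch below is purely illustrative; the group size of 8 is an arbitrary choice for the example, not a value taken from the MGIC paper.

```python
def dense_1x1_params(channels):
    # A standard 1x1 convolution fully couples all channels:
    # its weight matrix has shape (channels, channels).
    return channels * channels

def grouped_1x1_params(channels, group_size):
    # A grouped 1x1 convolution splits the channels into
    # channels // group_size independent groups, each with its own
    # (group_size x group_size) weight block.
    assert channels % group_size == 0
    return (channels // group_size) * group_size * group_size  # = channels * group_size

# Doubling the width quadruples the dense count but only doubles the grouped one.
for c in (64, 128, 256):
    print(f"width {c:4d}: dense {dense_1x1_params(c):6d}, "
          f"grouped (g=8) {grouped_1x1_params(c, 8):5d}")
```

The trade-off this illustrates is the one MGIC targets: plain grouped convolutions lose the full channel coupling of the dense case, which the paper's hierarchy of nested grouped convolutions is designed to restore.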
>> More information about the Friedman Prize