Lewiner Institute for Theoretical Physics, room 407
Research interests: I am interested in condensed matter theory, in particular topological phases, as well as applications of deep learning and self-supervised learning in physics. I also have a Telegram channel where I post links to research I find interesting.
Previously: I completed an M.Sc. in Computer Science at the Technion, advised by Alex Bronstein, Avi Mendelson, and Chaim Baskin, where I studied reduced supervision in computer vision (in particular, self-supervised and semi-supervised learning). In Summer 2020, I was a research intern on the Creative Vision team at Snap Research, working on 3D reconstruction trained on single 2D views. Before that, I was part of the Rothschild Technion Program for Excellence and received a double B.Sc. (CS and Physics+Math, cum laude) from the Technion.
Jan 13, 2022: Our paper “Weakly Supervised Recovery of Semantic Attributes” was accepted to CLeaR 2022.
Oct 4, 2021: Our paper “Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels” (C2D) was accepted to WACV 2022.
Aug 26, 2021: Our paper “CAT: Compression-Aware Training for Bandwidth Reduction” was accepted to JMLR.
Jan 10, 2021: Our paper “Early-stage neural network hardware performance analysis” was accepted to the Special Issue “Energy-Efficient Computing Systems for Deep Learning” of the Sustainability journal.
Oct 31, 2020: Our paper “Self-Supervised Learning for Large-Scale Unsupervised Image Clustering” was accepted to the Self-Supervised Learning – Theory and Practice workshop at NeurIPS 2020.
End-to-End Referring Video Object Segmentation with Multimodal Transformers. arXiv pre-print, Nov 2021.
Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Jan 2022.
Self-Supervised Learning for Large-Scale Unsupervised Image Clustering. NeurIPS Self-Supervised Learning Workshop, Aug 2020.