publications
List of my publications, grouped by year of publication and sorted by first appearance. Generated by jekyll-scholar.
2025
- arXiv
Humanity’s Last Exam
Long Phan, Alice Gatti, Ziwen Han, Nathaniel Li, Josephina Hu, Hugh Zhang, Sean Shi, Michael Choi, Anish Agrawal, Arnav Chopra, Adam Khoja, Ryan Kim, Jason Hausenloy, Oliver Zhang, Mantas Mazeika, and 647 more authors
Feb 2025
@misc{phan2025hle, title = {Humanity's Last Exam}, author = {Phan, Long and Gatti, Alice and Han, Ziwen and Li, Nathaniel and Hu, Josephina and Zhang, Hugh and Shi, Sean and Choi, Michael and Agrawal, Anish and Chopra, Arnav and Khoja, Adam and Kim, Ryan and Hausenloy, Jason and Zhang, Oliver and Mazeika, Mantas and others}, year = {2025}, month = feb, journal = {arXiv}, }
2024
- Identifying the topological order of quantized half-filled Landau levels through their daughter states
Evgenii Zheltonozhskii, Ady Stern, and Netanel H. Lindner
Physical Review B, Dec 2024
Editor’s suggestion
Fractional quantum Hall states at a half-filled Landau level are believed to carry an integer number C of chiral Majorana edge modes, reflected in their thermal Hall conductivity. We show that this number determines the primary series of Abelian fractional quantum Hall states that emerge above and below the half-filling point. On a particular side of half-filling each series may originate from two consecutive values of C, but the combination of the series above and below half-filling uniquely identifies C. We analyze these states both by a hierarchy approach and by a composite fermion approach. In the latter, we map electrons near a half-filled Landau level to composite fermions at a weak magnetic field and show that a bosonic integer quantum Hall state is formed by pairs of composite fermions and plays a crucial role in the state’s Hall conductivity.
@article{zheltonozhskii2024identifying, title = {Identifying the topological order of quantized half-filled Landau levels through their daughter states}, author = {Zheltonozhskii, Evgenii and Stern, Ady and Lindner, Netanel H.}, year = {2024}, month = dec, journal = {Physical Review B}, publisher = {American Physical Society}, volume = {110}, pages = {245140}, issue = {24}, numpages = {7}, doi = {10.1103/PhysRevB.110.245140}, url = {https://link.aps.org/doi/10.1103/PhysRevB.110.245140}, eprint = {2405.03780}, archiveprefix = {arXiv}, primaryclass = {cond-mat.mes-hall}, }
- arXiv
StarCoder 2 and The Stack v2: The Next Generation
Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, and 51 more authors
Feb 2024
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2-15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder-33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data.
@misc{lozhkov2024starcoder, title = {StarCoder 2 and The Stack v2: The Next Generation}, author = {Lozhkov, Anton and Li, Raymond and Allal, Loubna Ben and Cassano, Federico and Lamy-Poirier, Joel and Tazi, Nouamane and Tang, Ao and Pykhtar, Dmytro and Liu, Jiawei and Wei, Yuxiang and Liu, Tianyang and Tian, Max and Kocetkov, Denis and Zucker, Arthur and Belkada, Younes and Wang, Zijian and Liu, Qian and Abulkhanov, Dmitry and Paul, Indraneil and Li, Zhuang and Li, Wen-Ding and Risdal, Megan and Li, Jia and Zhu, Jian and Zhuo, Terry Yue and Zheltonozhskii, Evgenii and Dade, Nii Osae Osae and Yu, Wenhao and Krauß, Lucas and Jain, Naman and Su, Yixuan and He, Xuanli and Dey, Manan and Abati, Edoardo and Chai, Yekun and Muennighoff, Niklas and Tang, Xiangru and Oblokulov, Muhtasham and Akiki, Christopher and Marone, Marc and Mou, Chenghao and Mishra, Mayank and Gu, Alex and Hui, Binyuan and Dao, Tri and Zebaze, Armel and Dehaene, Olivier and Patry, Nicolas and Xu, Canwen and McAuley, Julian and Hu, Han and Scholak, Torsten and Paquet, Sebastien and Robinson, Jennifer and Anderson, Carolyn Jane and Chapados, Nicolas and Patwary, Mostofa and Tajbakhsh, Nima and Jernite, Yacine and Ferrandis, Carlos Muñoz and Zhang, Lingming and Hughes, Sean and Wolf, Thomas and Guha, Arjun and von Werra, Leandro and de Vries, Harm}, year = {2024}, month = feb, journal = {arXiv pre-print}, url = {https://arxiv.org/abs/2402.19173}, eprint = {2402.19173}, archiveprefix = {arXiv}, primaryclass = {cs.SE}, }
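As a quick illustration of how the released checkpoints are typically consumed, here is a minimal generation sketch using Hugging Face transformers. The checkpoint id bigcode/starcoder2-15b, the dtype, and the generation settings are my assumptions, not something prescribed by the paper; check the official model card before relying on them.

```python
# Minimal sketch (not from the paper): greedy code completion with a StarCoder2
# checkpoint via Hugging Face transformers. The checkpoint id is assumed from
# the BigCode organization on the Hub; 3B and 7B variants also exist.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-15b"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```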
- Semi-Supervised Semantic Segmentation via Marginal Contextual Information
Moshe Kimhi, Shai Kimhi, Evgenii Zheltonozhskii, Or Litany, and Chaim Baskin
Transactions on Machine Learning Research, May 2024
We present a novel confidence refinement scheme that enhances pseudo-labels in semi-supervised semantic segmentation. Unlike current leading methods, which filter pixels with low-confidence predictions in isolation, our approach leverages the spatial correlation of labels in segmentation maps by grouping neighboring pixels and considering their pseudo-labels collectively. With this contextual information, our method, named S4MC, increases the amount of unlabeled data used during training while maintaining the quality of the pseudo-labels, all with negligible computational overhead. Through extensive experiments on standard benchmarks, we demonstrate that S4MC outperforms existing state-of-the-art semi-supervised learning approaches, offering a promising solution for reducing the cost of acquiring dense annotations. For example, S4MC achieves a 1.29 mIoU improvement over the prior state-of-the-art method on PASCAL VOC 12 with 366 annotated images. The code to reproduce our experiments is available at this https URL
@article{kimhi2024semisupervised, title = {Semi-Supervised Semantic Segmentation via Marginal Contextual Information}, author = {Kimhi, Moshe and Kimhi, Shai and Zheltonozhskii, Evgenii and Litany, Or and Baskin, Chaim}, year = {2024}, month = may, journal = {Transactions on Machine Learning Research}, issn = {2835-8856}, url = {https://openreview.net/forum?id=i5yKW1pmjW}, eprint = {2308.13900}, archiveprefix = {arXiv}, primaryclass = {cs.CV}, }
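The abstract above describes refining per-pixel pseudo-label confidence by looking at neighboring pixels rather than filtering each pixel in isolation. The snippet below is a hedged illustration of that idea only; the exact S4MC refinement rule in the paper differs.

```python
# Illustrative sketch of neighborhood-aware pseudo-label filtering, in the
# spirit of the abstract above (not the authors' exact S4MC rule).
# probs: (B, C, H, W) softmax outputs of the teacher on unlabeled images.
import torch
import torch.nn.functional as F

def contextual_pseudo_labels(probs: torch.Tensor, tau: float = 0.95):
    # Maximum probability of each class within a 3x3 neighborhood.
    neighbor_max = F.max_pool2d(probs, kernel_size=3, stride=1, padding=1)
    # Blend per-pixel confidence with its strongest neighbor as a simple proxy
    # for "considering neighboring pseudo-labels collectively".
    refined = 0.5 * (probs + neighbor_max)
    conf, labels = refined.max(dim=1)   # (B, H, W)
    mask = conf >= tau                  # pixels kept for the unsupervised loss
    return labels, mask
```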
2023
- StarCoder: may the source be with you!
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, and 52 more authors
Transactions on Machine Learning Research, May 2023
Reproducibility Certification
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
@article{li2023starcoder, title = {{StarCoder:} may the source be with you!}, author = {Li, Raymond and Allal, Loubna Ben and Zi, Yangtian and Muennighoff, Niklas and Kocetkov, Denis and Mou, Chenghao and Marone, Marc and Akiki, Christopher and Li, Jia and Chim, Jenny and Liu, Qian and Zheltonozhskii, Evgenii and Zhuo, Terry Yue and Wang, Thomas and Dehaene, Olivier and Davaadorj, Mishig and Lamy-Poirier, Joel and Monteiro, João and Shliazhko, Oleh and Gontier, Nicolas and Meade, Nicholas and Zebaze, Armel and Yee, Ming-Ho and Umapathi, Logesh Kumar and Zhu, Jian and Lipkin, Benjamin and Oblokulov, Muhtasham and Wang, Zhiruo and Murthy, Rudra and Stillerman, Jason and Patel, Siva Sankalp and Abulkhanov, Dmitry and Zocca, Marco and Dey, Manan and Zhang, Zhihan and Fahmy, Nour and Bhattacharyya, Urvashi and Yu, Wenhao and Singh, Swayam and Luccioni, Sasha and Villegas, Paulo and Kunakov, Maxim and Zhdanov, Fedor and Romero, Manuel and Lee, Tony and Timor, Nadav and Ding, Jennifer and Schlesinger, Claire and Schoelkopf, Hailey and Ebert, Jan and Dao, Tri and Mishra, Mayank and Gu, Alex and Robinson, Jennifer and Anderson, Carolyn Jane and Dolan-Gavitt, Brendan and Contractor, Danish and Reddy, Siva and Fried, Daniel and Bahdanau, Dzmitry and Jernite, Yacine and Ferrandis, Carlos Muñoz and Hughes, Sean and Wolf, Thomas and Guha, Arjun and von Werra, Leandro and de Vries, Harm}, year = {2023}, month = may, journal = {Transactions on Machine Learning Research}, issn = {2835-8856}, url = {https://openreview.net/forum?id=KoFOg41haE}, eprint = {2305.06161}, archiveprefix = {arXiv}, primaryclass = {cs.CL}, note = {Reproducibility Certification}, }
- Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, and 435 more authors
Transactions on Machine Learning Research, Apr 2023
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI’s GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
@article{bigbench2022, title = {Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models}, author = {Srivastava, Aarohi and Rastogi, Abhinav and Rao, Abhishek and Shoeb, Abu Awal Md and Abid, Abubakar and Fisch, Adam and Brown, Adam R. and Santoro, Adam and Gupta, Aditya and Garriga-Alonso, Adri{\`a} and Kluska, Agnieszka and Lewkowycz, Aitor and Agarwal, Akshat and Power, Alethea and Ray, Alex and Warstadt, Alex and Kocurek, Alexander W. and Safaya, Ali and Tazarv, Ali and Xiang, Alice and Parrish, Alicia and Nie, Allen and Hussain, Aman and Askell, Amanda and Dsouza, Amanda and Slone, Ambrose and Rahane, Ameet and Iyer, Anantharaman S. and Andreassen, Anders Johan and Madotto, Andrea and Santilli, Andrea and Stuhlm{\"u}ller, Andreas and Dai, Andrew M. and La, Andrew and Lampinen, Andrew and Zou, Andy and Jiang, Angela and Chen, Angelica and Vuong, Anh and Gupta, Animesh and Gottardi, Anna and Norelli, Antonio and Venkatesh, Anu and Gholamidavoodi, Arash and Tabassum, Arfa and Menezes, Arul and Kirubarajan, Arun and Mullokandov, Asher and Sabharwal, Ashish and Herrick, Austin and Efrat, Avia and Erdem, Aykut and Karaka{\c{s}}, Ayla and Roberts, B. Ryan and Loe, Bao Sheng and Zoph, Barret and Bojanowski, Bart{\l}omiej and {\"O}zyurt, Batuhan and Hedayatnia, Behnam and Neyshabur, Behnam and Inden, Benjamin and Stein, Benno and Ekmekci, Berk and Lin, Bill Yuchen and Howald, Blake and Orinion, Bryan and Diao, Cameron and Dour, Cameron and Stinson, Catherine and Argueta, Cedrick and Ferri, Cesar and Singh, Chandan and Rathkopf, Charles and Meng, Chenlin and Baral, Chitta and Wu, Chiyu and Callison-Burch, Chris and Waites, Christopher and Voigt, Christian and Manning, Christopher D and Potts, Christopher and Ramirez, Cindy and Rivera, Clara E. and Siro, Clemencia and Raffel, Colin and Ashcraft, Courtney and Garbacea, Cristina and Sileo, Damien and Garrette, Dan and Hendrycks, Dan and Kilman, Dan and Roth, Dan and Freeman, C. 
Daniel and Khashabi, Daniel and Levy, Daniel and Gonz{\'a}lez, Daniel Mosegu{\'\i} and Perszyk, Danielle and Hernandez, Danny and Chen, Danqi and Ippolito, Daphne and Gilboa, Dar and Dohan, David and Drakard, David and Jurgens, David and Datta, Debajyoti and Ganguli, Deep and Emelin, Denis and Kleyko, Denis and Yuret, Deniz and Chen, Derek and Tam, Derek and Hupkes, Dieuwke and Misra, Diganta and Buzan, Dilyar and Mollo, Dimitri Coelho and Yang, Diyi and Lee, Dong-Ho and Schrader, Dylan and Shutova, Ekaterina and Cubuk, Ekin Dogus and Segal, Elad and Hagerman, Eleanor and Barnes, Elizabeth and Donoway, Elizabeth and Pavlick, Ellie and Rodol{\`a}, Emanuele and Lam, Emma and Chu, Eric and Tang, Eric and Erdem, Erkut and Chang, Ernie and Chi, Ethan A and Dyer, Ethan and Jerzak, Ethan and Kim, Ethan and Manyasi, Eunice Engefu and Zheltonozhskii, Evgenii and Xia, Fanyue and Siar, Fatemeh and Mart{\'\i}nez-Plumed, Fernando and Happ{\'e}, Francesca and Chollet, Francois and Rong, Frieda and Mishra, Gaurav and Winata, Genta Indra and de Melo, Gerard and Kruszewski, Germ{\'a}n and Parascandolo, Giambattista and Mariani, Giorgio and Wang, Gloria Xinyue and Jaimovitch-Lopez, Gonzalo and Betz, Gregor and Gur-Ari, Guy and Galijasevic, Hana and Kim, Hannah and Rashkin, Hannah and Hajishirzi, Hannaneh and Mehta, Harsh and Bogar, Hayden and Shevlin, Henry Francis Anthony and Schuetze, Hinrich and Yakura, Hiromu and Zhang, Hongming and Wong, Hugh Mee and Ng, Ian and Noble, Isaac and Jumelet, Jaap and Geissinger, Jack and Kernion, Jackson and Hilton, Jacob and Lee, Jaehoon and Fisac, Jaime Fern{\'a}ndez and Simon, James B and Koppel, James and Zheng, James and Zou, James and Kocon, Jan and Thompson, Jana and Wingfield, Janelle and Kaplan, Jared and Radom, Jarema and Sohl-Dickstein, Jascha and Phang, Jason and Wei, Jason and Yosinski, Jason and Novikova, Jekaterina and Bosscher, Jelle and Marsh, Jennifer and Kim, Jeremy and Taal, Jeroen and Engel, Jesse and Alabi, Jesujoba and Xu, Jiacheng and Song, Jiaming and Tang, Jillian and Waweru, Joan and Burden, John and Miller, John and Balis, John U. and Batchelder, Jonathan and Berant, Jonathan and Frohberg, J{\"o}rg and Rozen, Jos and Hernandez-Orallo, Jose and Boudeman, Joseph and Guerr, Joseph and Jones, Joseph and Tenenbaum, Joshua B. and Rule, Joshua S. 
and Chua, Joyce and Kanclerz, Kamil and Livescu, Karen and Krauth, Karl and Gopalakrishnan, Karthik and Ignatyeva, Katerina and Markert, Katja and Dhole, Kaustubh and Gimpel, Kevin and Omondi, Kevin and Mathewson, Kory Wallace and Chiafullo, Kristen and Shkaruta, Ksenia and Shridhar, Kumar and McDonell, Kyle and Richardson, Kyle and Reynolds, Laria and Gao, Leo and Zhang, Li and Dugan, Liam and Qin, Lianhui and Contreras-Ochando, Lidia and Morency, Louis-Philippe and Moschella, Luca and Lam, Lucas and Noble, Lucy and Schmidt, Ludwig and He, Luheng and Oliveros-Col{\'o}n, Luis and Metz, Luke and Senel, L{\"u}tfi Kerem and Bosma, Maarten and Sap, Maarten and Hoeve, Maartje Ter and Farooqi, Maheen and Faruqui, Manaal and Mazeika, Mantas and Baturan, Marco and Marelli, Marco and Maru, Marco and Ramirez-Quintana, Maria Jose and Tolkiehn, Marie and Giulianelli, Mario and Lewis, Martha and Potthast, Martin and Leavitt, Matthew L and Hagen, Matthias and Schubert, M{\'a}ty{\'a}s and Baitemirova, Medina Orduna and Arnaud, Melody and McElrath, Melvin and Yee, Michael Andrew and Cohen, Michael and Gu, Michael and Ivanitskiy, Michael and Starritt, Michael and Strube, Michael and Sw{\k{e}}drowski, Micha{\l} and Bevilacqua, Michele and Yasunaga, Michihiro and Kale, Mihir and Cain, Mike and Xu, Mimee and Suzgun, Mirac and Walker, Mitch and Tiwari, Mo and Bansal, Mohit and Aminnaseri, Moin and Geva, Mor and Gheini, Mozhdeh and T, Mukund Varma and Peng, Nanyun and Chi, Nathan Andrew and Lee, Nayeon and Krakover, Neta Gur-Ari and Cameron, Nicholas and Roberts, Nicholas and Doiron, Nick and Martinez, Nicole and Nangia, Nikita and Deckers, Niklas and Muennighoff, Niklas and Keskar, Nitish Shirish and Iyer, Niveditha S. and Constant, Noah and Fiedel, Noah and Wen, Nuan and Zhang, Oliver and Agha, Omar and Elbaghdadi, Omar and Levy, Omer and Evans, Owain and Casares, Pablo Antonio Moreno and Doshi, Parth and Fung, Pascale and Liang, Paul Pu and Vicol, Paul and Alipoormolabashi, Pegah and Liao, Peiyuan and Liang, Percy and Chang, Peter W and Eckersley, Peter and Htut, Phu Mon and Hwang, Pinyu and Mi{\l}kowski, Piotr and Patil, Piyush and Pezeshkpour, Pouya and Oli, Priti and Mei, Qiaozhu and Lyu, Qing and Chen, Qinlang and Banjade, Rabin and Rudolph, Rachel Etta and Gabriel, Raefer and Habacker, Rahel and Risco, Ramon and Milli{\`e}re, Rapha{\"e}l and Garg, Rhythm and Barnes, Richard and Saurous, Rif A. and Arakawa, Riku and Raymaekers, Robbe and Frank, Robert and Sikand, Rohan and Novak, Roman and Sitelew, Roman and Bras, Ronan Le and Liu, Rosanne and Jacobs, Rowan and Zhang, Rui and Salakhutdinov, Russ and Chi, Ryan Andrew and Lee, Seungjae Ryan and Stovall, Ryan and Teehan, Ryan and Yang, Rylan and Singh, Sahib and Mohammad, Saif M. and Anand, Sajant and Dillavou, Sam and Shleifer, Sam and Wiseman, Sam and Gruetter, Samuel and Bowman, Samuel R. and Schoenholz, Samuel Stern and Han, Sanghyun and Kwatra, Sanjeev and Rous, Sarah A. 
and Ghazarian, Sarik and Ghosh, Sayan and Casey, Sean and Bischoff, Sebastian and Gehrmann, Sebastian and Schuster, Sebastian and Sadeghi, Sepideh and Hamdan, Shadi and Zhou, Sharon and Srivastava, Shashank and Shi, Sherry and Singh, Shikhar and Asaadi, Shima and Gu, Shixiang Shane and Pachchigar, Shubh and Toshniwal, Shubham and Upadhyay, Shyam and Debnath, Shyamolima Shammie and Shakeri, Siamak and Thormeyer, Simon and Melzi, Simone and Reddy, Siva and Makini, Sneha Priscilla and Lee, Soo-Hwan and Torene, Spencer and Hatwar, Sriharsha and Dehaene, Stanislas and Divic, Stefan and Ermon, Stefano and Biderman, Stella and Lin, Stephanie and Prasad, Stephen and Piantadosi, Steven and Shieber, Stuart and Misherghi, Summer and Kiritchenko, Svetlana and Mishra, Swaroop and Linzen, Tal and Schuster, Tal and Li, Tao and Yu, Tao and Ali, Tariq and Hashimoto, Tatsunori and Wu, Te-Lin and Desbordes, Th{\'e}o and Rothschild, Theodore and Phan, Thomas and Wang, Tianle and Nkinyili, Tiberius and Schick, Timo and Kornev, Timofei and Tunduny, Titus and Gerstenberg, Tobias and Chang, Trenton and Neeraj, Trishala and Khot, Tushar and Shultz, Tyler and Shaham, Uri and Misra, Vedant and Demberg, Vera and Nyamai, Victoria and Raunak, Vikas and Ramasesh, Vinay Venkatesh and vinay uday prabhu and Padmakumar, Vishakh and Srikumar, Vivek and Fedus, William and Saunders, William and Zhang, William and Vossen, Wout and Ren, Xiang and Tong, Xiaoyu and Zhao, Xinran and Wu, Xinyi and Shen, Xudong and Yaghoobzadeh, Yadollah and Lakretz, Yair and Song, Yangqiu and Bahri, Yasaman and Choi, Yejin and Yang, Yichi and Hao, Yiding and Chen, Yifu and Belinkov, Yonatan and Hou, Yu and Hou, Yufang and Bai, Yuntao and Seid, Zachary and Zhao, Zhuoye and Wang, Zijian and Wang, Zijie J. and Wang, Zirui and Wu, Ziyi}, year = {2023}, month = apr, journal = {Transactions on Machine Learning Research}, issn = {2835-8856}, url = {https://openreview.net/forum?id=uyTL5Bvosj}, }
2022
- GoToNet: Fast Monocular Scene Exposure and Exploration
Tom Avrech, Evgenii Zheltonozhskii, Chaim Baskin, and Ehud Rivlin
Journal of Intelligent & Robotic Systems, Jul 2022
Autonomous scene exposure and exploration, especially in localization- or communication-denied areas, useful for finding targets in unknown scenes, remains a challenging problem in computer navigation. In this work, we present a novel method for real-time environment exploration, whose only requirements are a visually similar dataset for pre-training, enough lighting in the scene, and an on-board forward-looking RGB camera for environmental sensing. As opposed to existing methods, our method requires only one look (image) to make a good tactical decision, and therefore works at a non-growing, constant time. Two direction predictions, characterized by pixels dubbed the Goto and Lookat pixels, comprise the core of our method. These pixels encode the recommended flight instructions in the following way: the Goto pixel defines the direction in which the agent should move by one distance unit, and the Lookat pixel defines the direction in which the camera should be pointing in the next step. These flying-instruction pixels are optimized to expose the largest amount of currently unexplored areas. Our method presents a novel deep learning-based navigation approach that is able to solve this problem and demonstrate its ability in an even more complicated setup, i.e., when computational power is limited. In addition, we propose a way to generate a navigation-oriented dataset, enabling efficient training of our method using RGB and depth images. Tests conducted in a simulator, evaluating both the sparse pixels’ coordinate inference process and 2D and 3D test flights aimed to unveil areas and decrease distances to targets, achieve promising results. Comparison against a state-of-the-art algorithm shows that our method outperforms it on the new-voxels-per-camera-pose, minimum-distance-to-target, percentage-of-surface-voxels-seen, and compute-time metrics.
@article{avrech2022gotonet, title = {{GoToNet:} Fast Monocular Scene Exposure and Exploration}, author = {Avrech, Tom and Zheltonozhskii, Evgenii and Baskin, Chaim and Rivlin, Ehud}, year = {2022}, month = jul, journal = {Journal of Intelligent \& Robotic Systems}, volume = {105}, number = {3}, pages = {65}, doi = {10.1007/s10846-022-01646-9}, isbn = {1573-0409}, url = {https://doi.org/10.1007/s10846-022-01646-9}, }
- arXiv
On Recoverability of Graph Neural Network Representations
Maxim Fishman, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, and Avi Mendelson
Jan 2022
Despite their growing popularity, graph neural networks (GNNs) still have multiple unsolved problems, including finding more expressive aggregation methods, propagation of information to distant nodes, and training on large-scale graphs. Understanding and solving such problems require developing analytic tools and techniques. In this work, we propose the notion of recoverability, which is tightly related to information aggregation in GNNs, and based on this concept, develop the method for GNN embedding analysis. We define recoverability theoretically and propose a method for its efficient empirical estimation. We demonstrate, through extensive experimental results on various datasets and different GNN architectures, that estimated recoverability correlates with aggregation method expressivity and graph sparsification quality. Therefore, we believe that the proposed method could provide an essential tool for understanding the roots of the aforementioned problems, and potentially lead to a GNN design that overcomes them. The code to reproduce our experiments is available at this https URL
@misc{fishman2022recoverability, title = {On Recoverability of Graph Neural Network Representations}, author = {Fishman, Maxim and Baskin, Chaim and Zheltonozhskii, Evgenii and Banner, Ron and Mendelson, Avi}, year = {2022}, month = jan, journal = {arXiv pre-print}, url = {https://arxiv.org/abs/2201.12843}, }
- End-to-End Referring Video Object Segmentation with Multimodal Transformers
Adam Botach, Evgenii Zheltonozhskii, and Chaim Baskin
In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2022
The referring video object segmentation task (RVOS) involves segmentation of a text-referred object instance in the frames of a given video. Due to the complex nature of this multimodal task, which combines text reasoning, video understanding, instance segmentation and tracking, existing approaches typically rely on sophisticated pipelines in order to tackle it. In this paper, we propose a simple Transformer-based approach to RVOS. Our framework, termed Multimodal Tracking Transformer (MTTR), models the RVOS task as a sequence prediction problem. Following recent advancements in computer vision and natural language processing, MTTR is based on the realization that video and text can both be processed together effectively and elegantly by a single multimodal Transformer model. MTTR is end-to-end trainable, free of text-related inductive bias components and requires no additional mask-refinement post-processing steps. As such, it simplifies the RVOS pipeline considerably compared to existing methods. Evaluation on standard benchmarks reveals that MTTR significantly outperforms previous art across multiple metrics. In particular, MTTR shows impressive +5.7 and +5.0 mAP gains on the A2D-Sentences and JHMDB-Sentences datasets respectively, while processing 76 frames per second. In addition, we report strong results on the public validation set of Refer-YouTube-VOS, a more challenging RVOS dataset that has yet to receive the attention of researchers. The code to reproduce our experiments is available at this https URL
@inproceedings{botach2021mttr, title = {End-to-End Referring Video Object Segmentation with Multimodal Transformers}, author = {Botach, Adam and Zheltonozhskii, Evgenii and Baskin, Chaim}, year = {2022}, month = jun, booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, url = {https://openaccess.thecvf.com/content/CVPR2022/html/Botach_End-to-End_Referring_Video_Object_Segmentation_With_Multimodal_Transformers_CVPR_2022_paper.html}, }
- Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels
Evgenii Zheltonozhskii, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, and Or Litany
In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Jan 2022
The success of learning with noisy labels (LNL) methods relies heavily on the success of a warm-up stage where standard supervised training is performed using the full (noisy) training set. In this paper, we identify a "warm-up obstacle": the inability of standard warm-up stages to train high quality feature extractors and avert memorization of noisy labels. We propose "Contrast to Divide" (C2D), a simple framework that solves this problem by pre-training the feature extractor in a self-supervised fashion. Using self-supervised pre-training boosts the performance of existing LNL approaches by drastically reducing the warm-up stage’s susceptibility to noise level, shortening its duration, and increasing extracted feature quality. C2D works out of the box with existing methods and demonstrates markedly improved performance, especially in the high noise regime, where we get a boost of more than 27% for CIFAR-100 with 90% noise over the previous state of the art. In real-life noise settings, C2D trained on mini-WebVision outperforms previous works both in WebVision and ImageNet validation sets by 3% top-1 accuracy. We perform an in-depth analysis of the framework, including investigating the performance of different pre-training approaches and estimating the effective upper bound of the LNL performance with semi-supervised learning. Code for reproducing our experiments is available at this https URL
@inproceedings{zheltonozhskii2021c2d, title = {Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels}, author = {Zheltonozhskii, Evgenii and Baskin, Chaim and Mendelson, Avi and Bronstein, Alex M. and Litany, Or}, year = {2022}, month = jan, booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, pages = {1657--1667}, url = {https://openaccess.thecvf.com/content/WACV2022/html/Zheltonozhskii_Contrast_To_Divide_Self-Supervised_Pre-Training_for_Learning_With_Noisy_Labels_WACV_2022_paper.html}, }
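In practice, the C2D recipe described above amounts to replacing the random initialization of the warm-up backbone with self-supervised weights. Below is a minimal sketch of that substitution; the checkpoint path, the key layout of the saved state dict, and the downstream LNL trainer are placeholders rather than the released implementation.

```python
# Sketch of the C2D idea: start the LNL warm-up from a self-supervised
# (e.g. SimCLR-style) checkpoint instead of random init. The checkpoint path
# and the lnl_train() call are hypothetical placeholders.
import torch
from torchvision.models import resnet50

model = resnet50(num_classes=100)  # CIFAR-100 example from the abstract

# Load self-supervised encoder weights; strict=False skips the classifier head,
# which stays randomly initialized. Depending on how the checkpoint was saved,
# its keys may need prefix-stripping first.
state = torch.load("simclr_pretrained_resnet50.pth", map_location="cpu")
missing, unexpected = model.load_state_dict(state, strict=False)
print("skipped keys:", missing, unexpected)

# Warm-up and any existing LNL method (e.g. DivideMix) then proceed unchanged,
# e.g. lnl_train(model, noisy_loader)   # placeholder
```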
- Weakly Supervised Recovery of Semantic Attributes
Ameen Ali, Tomer Galanti, Evgenii Zheltonozhskii, Chaim Baskin, and Lior Wolf
In First Conference on Causal Learning and Reasoning, Apr 2022
We consider the problem of extracting semantic attributes, using only classification labels for supervision. For example, when learning to classify images of birds into species, we would like to observe the emergence of features used by zoologists to classify birds. To tackle this problem, we propose training a neural network with discrete features in the last layer, followed by two heads: a multi-layered perceptron (MLP) and a decision tree. The decision tree utilizes simple binary decision stumps, thus encouraging features to have semantic meaning. We present a theoretical analysis, as well as a practical method for learning in the intersection of two hypothesis classes. Compared with various benchmarks, our results show an improved ability to extract a set of features highly correlated with a ground truth set of unseen attributes.
@inproceedings{ali2021semantic, title = {Weakly Supervised Recovery of Semantic Attributes}, author = {Ali, Ameen and Galanti, Tomer and Zheltonozhskii, Evgenii and Baskin, Chaim and Wolf, Lior}, year = {2022}, month = apr, booktitle = {First Conference on Causal Learning and Reasoning}, url = {https://openreview.net/forum?id=GdAzRedTV7J}, }
- Single-node attacks for fooling graph neural networks
Ben Finkelshtein, Chaim Baskin, Evgenii Zheltonozhskii, and Uri Alon
Neurocomputing, Nov 2022
Graph neural networks (GNNs) have shown broad applicability in a variety of domains. These domains, e.g., social networks and product recommendations, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable to the extremely limited (and thus quite realistic) scenario of a single-node adversarial attack, where the perturbed node cannot be chosen by the attacker. That is, an attacker can force the GNN to classify any target node to a chosen label by only slightly perturbing the features or the neighbors list of another single arbitrary node in the graph, even when not being able to select that specific attacker node. When the adversary is allowed to select the attacker node, these attacks are even more effective. We demonstrate empirically that our attack is effective across various common GNN types (e.g., GCN, GraphSAGE, GAT, GIN) and robustly optimized GNNs (e.g., Robust GCN, SM GCN, GAL, LAT-GCN), outperforming previous attacks across different real-world datasets in both targeted and non-targeted settings. Our code is available at https://github.com/gnnattack/SINGLE.
@article{finkelshtein2020singlenode, title = {Single-node attacks for fooling graph neural networks}, author = {Finkelshtein, Ben and Baskin, Chaim and Zheltonozhskii, Evgenii and Alon, Uri}, year = {2022}, month = nov, journal = {Neurocomputing}, volume = {513}, pages = {1--12}, doi = {https://doi.org/10.1016/j.neucom.2022.09.115}, issn = {0925-2312}, url = {https://www.sciencedirect.com/science/article/pii/S0925231222012012}, keywords = {Graph neural networks, Adversarial robustness, Node classification}, }
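As a rough illustration of the threat model in the abstract above (perturbing a single attacker node's features so that a different target node flips to a chosen label), here is a generic PGD-style sketch. The gnn callable, the node indices, and the budget are assumptions on my part; the paper's actual attack (including edge perturbations and attacker-node selection) is more involved.

```python
# Generic sketch of a single-node feature attack on a GNN, illustrating the
# setting described above (not the paper's exact algorithm). `gnn(x, edge_index)`
# is assumed to return per-node logits; indices and the budget are placeholders.
import torch
import torch.nn.functional as F

def single_node_feature_attack(gnn, x, edge_index, attacker, target, y_target,
                               epsilon=0.1, steps=50, lr=0.01):
    delta = torch.zeros_like(x[attacker], requires_grad=True)
    for _ in range(steps):
        x_adv = x.clone()
        x_adv[attacker] = x[attacker] + delta          # only one node is touched
        logits = gnn(x_adv, edge_index)
        # Push the *target* node toward the chosen label y_target.
        loss = F.cross_entropy(logits[target].unsqueeze(0), y_target.view(1))
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()            # descend toward y_target
            delta.clamp_(-epsilon, epsilon)            # keep perturbation small
            delta.grad.zero_()
    x_adv = x.clone()
    x_adv[attacker] = x[attacker] + delta.detach()
    return x_adv
```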
- Adversarial robustness via noise injection in smoothed models
Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Alex M. Bronstein, and Avi Mendelson
Applied Intelligence, Aug 2022
Deep neural networks are known to be vulnerable to malicious perturbations. Current methods for improving adversarial robustness make use of either implicit or explicit regularization, with the latter usually based on adversarial training. Randomized smoothing, the averaging of the classifier outputs over a random distribution centered in the sample, has been shown to guarantee a classifier’s performance subject to bounded perturbations of the input. In this work, we study the application of randomized smoothing to improve performance on unperturbed data and increase robustness to adversarial attacks. We propose to combine smoothing along with adversarial training and randomization approaches, and find that doing so significantly improves the resilience compared to the baseline. We examine our method’s performance on common white-box (FGSM, PGD) and black-box (transferable attack and NAttack) attacks on CIFAR-10 and CIFAR-100, and determine that for a low number of iterations, smoothing provides a significant performance boost that persists even for perturbations with a high attack norm ε. For example, under a PGD-10 attack on CIFAR-10 using Wide-ResNet28-4, we achieve 60.3% accuracy for infinity norm ε∞ = 8/255 and 13.1% accuracy for ε∞ = 35/255, outperforming previous art by 3% and 6%, respectively. We achieve nearly twice the accuracy at ε∞ = 35/255 and even more so for perturbations with higher infinity norm. A reference implementation of the proposed method is provided.
@article{nemcovsky2019smoothed, title = {Adversarial robustness via noise injection in smoothed models}, author = {Nemcovsky, Yaniv and Zheltonozhskii, Evgenii and Baskin, Chaim and Chmiel, Brian and Bronstein, Alex M. and Mendelson, Avi}, year = {2022}, month = aug, journal = {Applied Intelligence}, doi = {10.1007/s10489-022-03423-5}, isbn = {1573-7497}, url = {https://doi.org/10.1007/s10489-022-03423-5}, }
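The smoothed classifier referred to in the abstract is simply an average of the network's output over random perturbations of the input. A minimal Monte-Carlo sketch of that inference step is below; the noise level and sample count are illustrative, not the paper's settings.

```python
# Minimal sketch of randomized-smoothing inference: average class probabilities
# over Gaussian perturbations of the input (sigma and n_samples are illustrative).
import torch
import torch.nn.functional as F

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=32):
    # x: (B, C, H, W) batch of inputs; model returns logits.
    probs = torch.zeros(x.size(0), model(x).size(1), device=x.device)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        probs += F.softmax(model(noisy), dim=1)
    return (probs / n_samples).argmax(dim=1)
```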
2021
- Early-Stage Neural Network Hardware Performance Analysis
Alex Karbachevsky, Chaim Baskin, Evgenii Zheltonozhskii, Yevgeny Yermolin, Freddy Gabbay, Alex M. Bronstein, and Avi Mendelson
Sustainability, Jan 2021
The demand for running NNs in embedded environments has increased significantly in recent years due to the significant success of convolutional neural network (CNN) approaches in various tasks, including image recognition and generation. The task of achieving high accuracy on resource-restricted devices, however, is still considered to be challenging, which is mainly due to the vast number of design parameters that need to be balanced. While the quantization of CNN parameters leads to a reduction of power and area, it can also generate unexpected changes in the balance between communication and computation. This change is hard to evaluate, and the lack of balance may lead to lower utilization of either memory bandwidth or computational resources, thereby reducing performance. This paper introduces a hardware performance analysis framework for identifying bottlenecks in the early stages of CNN hardware design. We demonstrate how the proposed method can help in evaluating different architecture alternatives of resource-restricted CNN accelerators (e.g., part of real-time embedded systems) early in design stages and, thus, prevent making design mistakes.
@article{karbachevsky2021earlystage, title = {Early-Stage Neural Network Hardware Performance Analysis}, author = {Karbachevsky, Alex and Baskin, Chaim and Zheltonozhskii, Evgenii and Yermolin, Yevgeny and Gabbay, Freddy and Bronstein, Alex M. and Mendelson, Avi}, year = {2021}, month = jan, journal = {Sustainability}, publisher = {MDPI AG}, volume = {13}, number = {2}, pages = {717}, doi = {10.3390/su13020717}, issn = {2071-1050}, url = {http://dx.doi.org/10.3390/su13020717}, issuetitle = {Energy-Efficient Computing Systems for Deep Learning}, editor = {Cano, José and Abellán, José L. and Kaeli, David}, }
- Loss Aware Post-Training Quantization
Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, and Avi Mendelson
Machine Learning, Oct 2021
Neural network quantization enables the deployment of large models on resource-constrained devices. Current post-training quantization methods fall short in terms of accuracy for INT4 (or lower) but provide reasonable accuracy for INT8 (or above). In this work, we study the effect of quantization on the structure of the loss landscape. We show that the structure is flat and separable for mild quantization, enabling straightforward post-training quantization methods to achieve good results. We show that with more aggressive quantization, the loss landscape becomes highly non-separable with steep curvature, making the selection of quantization parameters more challenging. Armed with this understanding, we design a method that quantizes the layer parameters jointly, enabling significant accuracy improvement over current post-training quantization methods. Reference implementation is available at https://github.com/ynahshan/nn-quantization-pytorch/tree/master/lapq.
@article{nahshan2019lapq, title = {Loss Aware Post-Training Quantization}, author = {Nahshan, Yury and Chmiel, Brian and Baskin, Chaim and Zheltonozhskii, Evgenii and Banner, Ron and Bronstein, Alex M. and Mendelson, Avi}, year = {2021}, month = oct, journal = {Machine Learning}, doi = {10.1007/s10994-021-06053-z}, issn = {1573-0565}, url = {https://link.springer.com/article/10.1007/s10994-021-06053-z}, }
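To make the "selecting quantization parameters" point concrete, here is a toy sketch that quantizes each layer's weights uniformly and runs a simple coordinate search over per-layer step sizes to minimize a calibration loss. This only illustrates the idea; the actual LAPQ procedure (loss-aware, joint optimization with quadratic interpolation) is more sophisticated, and eval_loss is an assumed helper returning a scalar float.

```python
# Toy sketch of loss-aware post-training quantization: a uniform quantizer plus
# a coordinate search over per-layer step sizes that minimizes a calibration
# loss. Not the LAPQ algorithm itself; `eval_loss(model, calib_loader)` is an
# assumed helper.
import torch

def quantize_uniform(w: torch.Tensor, step: float, n_bits: int = 4) -> torch.Tensor:
    qmax = 2 ** (n_bits - 1) - 1
    return torch.clamp(torch.round(w / step), -qmax - 1, qmax) * step

def search_steps(model, layers, calib_loader, eval_loss, candidates):
    originals = {name: layers[name].weight.data.clone() for name in layers}
    steps = {name: candidates[0] for name in layers}
    for name in layers:                       # coordinate descent over layers
        best = (float("inf"), steps[name])
        for s in candidates:
            layers[name].weight.data = quantize_uniform(originals[name], s)
            best = min(best, (eval_loss(model, calib_loader), s))
        steps[name] = best[1]
        layers[name].weight.data = quantize_uniform(originals[name], steps[name])
    return steps
```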
- CAT: Compression-Aware Training for Bandwidth Reduction
Chaim Baskin, Brian Chmiel, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, and Avi Mendelson
Journal of Machine Learning Research, Aug 2021
One major obstacle hindering the ubiquitous use of CNNs for inference is their relatively high memory bandwidth requirements, which can be the primary energy consumer and throughput bottleneck in hardware accelerators. Inspired by quantization-aware training approaches, we propose a compression-aware training (CAT) method that involves training the model to allow better compression of weights and feature maps during neural network deployment. Our method trains the model to achieve low-entropy feature maps, enabling efficient compression at inference time using classical transform coding methods. CAT significantly improves the state-of-the-art results reported for quantization evaluated on various vision and NLP tasks, such as image classification (ImageNet), image detection (Pascal VOC), sentiment analysis (CoLa), and textual entailment (MNLI). For example, on ResNet-18, we achieve near baseline ImageNet accuracy with an average representation of only 1.5 bits per value with 5-bit quantization. Moreover, we show that entropy reduction of weights and activations can be applied together, further improving bandwidth reduction. Reference implementation is available.
@article{baskin2019cat, title = {{CAT}: Compression-Aware Training for Bandwidth Reduction}, author = {Baskin, Chaim and Chmiel, Brian and Zheltonozhskii, Evgenii and Banner, Ron and Bronstein, Alex M. and Mendelson, Avi}, year = {2021}, month = aug, journal = {Journal of Machine Learning Research}, volume = {22}, number = {269}, pages = {1--20}, url = {http://jmlr.org/papers/v22/20-1374.html}, }
- NICE: Noise Injection and Clamping Estimation for Neural Network Quantization
Chaim Baskin, Evgenii Zheltonozhskii, Tal Rozen, Natan Liss, Yoav Chai, Eli Schwartz, Raja Giryes, Alexander M. Bronstein, and Avi Mendelson
Mathematics, Sep 2021
Convolutional Neural Networks (CNNs) are very popular in many fields including computer vision, speech recognition, natural language processing, etc. Though deep learning leads to groundbreaking performance in those domains, the networks used are very computationally demanding and are far from being able to perform in real-time applications even on a GPU, which is not power efficient and therefore does not suit low power systems such as mobile devices. To overcome this challenge, some solutions have been proposed for quantizing the weights and activations of these networks, which accelerate the runtime significantly. Yet, this acceleration comes at the cost of a larger error unless spatial adjustments are carried out. The method proposed in this work trains quantized neural networks by noise injection and a learned clamping, which improve accuracy. This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18/34/50 with as low as 3-bit weights and activations. We implement the proposed solution on an FPGA to demonstrate its applicability for low-power real-time applications. The quantization code will become publicly available upon acceptance.
@article{baskin2018nice, title = {{NICE}: Noise Injection and Clamping Estimation for Neural Network Quantization}, author = {Baskin, Chaim and Zheltonozhskii, Evgenii and Rozen, Tal and Liss, Natan and Chai, Yoav and Schwartz, Eli and Giryes, Raja and Bronstein, Alexander M. and Mendelson, Avi}, year = {2021}, month = sep, journal = {Mathematics}, publisher = {MDPI AG}, volume = {9}, number = {17}, doi = {10.3390/math9172144}, issn = {2227-7390}, url = {https://www.mdpi.com/2227-7390/9/17/2144}, issuetitle = {Computational Optimizations for Machine Learning}, editor = {Gabbay, Freddy}, }
- UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks
Chaim Baskin, Natan Liss, Eli Schwartz, Evgenii Zheltonozhskii, Raja Giryes, Alex M. Bronstein, and Avi Mendelson
ACM Transactions on Computer Systems, Mar 2021
We present a novel method for neural network quantization. Our method, named UNIQ, emulates a non-uniform k-quantile quantizer and adapts the model to perform well with quantized weights by injecting noise to the weights at training time. As a by-product of injecting noise to weights, we find that activations can also be quantized to as low as 8-bit with only a minor accuracy degradation. Our non-uniform quantization approach provides a novel alternative to the existing uniform quantization techniques for neural networks. We further propose a novel complexity metric of number of bit operations performed (BOPs), and we show that this metric has a linear relation with logic utilization and power. We suggest evaluating the trade-off of accuracy vs. complexity (BOPs). The proposed method, when evaluated on ResNet18/34/50 and MobileNet on ImageNet, outperforms the prior state of the art both in the low-complexity regime and the high accuracy regime. We demonstrate the practical applicability of this approach, by implementing our non-uniformly quantized CNN on FPGA.
@article{baskin2018uniq, title = {{UNIQ:} Uniform Noise Injection for Non-Uniform Quantization of Neural Networks}, author = {Baskin, Chaim and Liss, Natan and Schwartz, Eli and Zheltonozhskii, Evgenii and Giryes, Raja and Bronstein, Alex M. and Mendelson, Avi}, year = {2021}, month = mar, journal = {ACM Transactions on Computer Systems}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {37}, number = {1–4}, numpages = {15}, doi = {10.1145/3444943}, issn = {0734-2071}, url = {https://arxiv.org/abs/1804.10969}, issue_date = {March 2021}, articleno = {4}, }
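As a rough illustration of the BOPs idea mentioned in the abstract, the snippet below approximates the bit-operation count of a single convolution layer; the accumulator-width accounting here is my own assumption, so consult the paper for the exact definition.

```python
# Rough, hypothetical approximation in the spirit of the BOPs complexity metric.
import math

def conv_bops(n_in: int, n_out: int, k: int, h_out: int, w_out: int,
              b_w: int, b_a: int) -> float:
    """Approximate bit operations of a k x k convolution layer.

    Each output value needs n_in * k^2 multiply-accumulates; a b_w x b_a
    multiplication costs about b_w * b_a bit ops, and the accumulator needs
    roughly b_w + b_a + log2(n_in * k^2) bits (an assumed simplification).
    """
    macs_per_output = n_in * k * k
    acc_bits = b_w + b_a + math.log2(macs_per_output)
    bops_per_output = macs_per_output * (b_w * b_a + acc_bits)
    return n_out * h_out * w_out * bops_per_output

# Example: a 3x3 layer with 64 channels, 4-bit weights, 8-bit activations.
print(f"{conv_bops(64, 64, 3, 56, 56, b_w=4, b_a=8):.3e} BOPs")
```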
2020
- Self-Supervised Learning for Large-Scale Unsupervised Image Clustering. Evgenii Zheltonozhskii, Chaim Baskin, Alex M. Bronstein, and Avi Mendelson. NeurIPS Self-Supervised Learning Workshop, Aug 2020
Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid an expensive and complicated process of labeling the data. However, unsupervised learning of complex data is challenging, and even the best approaches show much weaker performance than their supervised counterparts. Self-supervised deep learning has become a strong instrument for representation learning in computer vision. However, those methods have not been evaluated in a fully unsupervised setting. In this paper, we propose a simple scheme for unsupervised classification based on self-supervised representations. We evaluate the proposed approach with several recent self-supervised methods showing that it achieves competitive results for ImageNet classification (39% accuracy on ImageNet with 1000 clusters and 46% with overclustering). We suggest adding the unsupervised evaluation to a set of standard benchmarks for self-supervised learning. The code is available at this https URL
@misc{zheltonozhskii2020selfsupervised, title = {Self-Supervised Learning for Large-Scale Unsupervised Image Clustering}, author = {Zheltonozhskii, Evgenii and Baskin, Chaim and Bronstein, Alex M. and Mendelson, Avi}, year = {2020}, month = aug, journal = {NeurIPS Self-Supervised Learning Workshop}, url = {https://arxiv.org/abs/2008.10312}, }
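The evaluation scheme described above can be approximated in a few lines: cluster frozen self-supervised embeddings with k-means and score the clustering against ground-truth labels via Hungarian matching. The snippet below is a hypothetical sketch with placeholder random features, not the paper's pipeline.

```python
# Hypothetical sketch: k-means over (placeholder) embeddings + Hungarian-matched accuracy.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def cluster_accuracy(labels_true: np.ndarray, labels_pred: np.ndarray) -> float:
    """Best one-to-one matching between predicted clusters and classes."""
    n = max(labels_true.max(), labels_pred.max()) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(labels_true, labels_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(-cost)   # maximize matched samples
    return cost[rows, cols].sum() / len(labels_true)

# `embeddings` would come from a frozen self-supervised encoder; random here.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 128))       # placeholder features
labels = rng.integers(0, 10, size=1000)         # placeholder ground truth
pred = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(embeddings)
print("cluster accuracy:", cluster_accuracy(labels, pred))
```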
- arXiv
Colored Noise Injection for Training Adversarially Robust Neural Networks. Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, Avi Mendelson, and Alex M. Bronstein. Mar 2020
Even though deep learning has shown unmatched performance on various tasks, neural networks have been shown to be vulnerable to small adversarial perturbations of the input that lead to significant performance degradation. In this work we extend the idea of adding white Gaussian noise to the network weights and activations during adversarial training (PNI) to the injection of colored noise for defense against common white-box and black-box attacks. We show that our approach outperforms PNI and various previous approaches in terms of adversarial accuracy on CIFAR-10 and CIFAR-100 datasets. In addition, we provide an extensive ablation study of the proposed method justifying the chosen configurations.
@misc{zheltonozhskii2020colored, title = {Colored Noise Injection for Training Adversarially Robust Neural Networks}, author = {Zheltonozhskii, Evgenii and Baskin, Chaim and Nemcovsky, Yaniv and Chmiel, Brian and Mendelson, Avi and Bronstein, Alex M.}, year = {2020}, month = mar, journal = {arXiv pre-print}, url = {https://arxiv.org/abs/2003.02188}, }
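To make the white-vs-colored distinction concrete, here is a hypothetical PyTorch layer that perturbs its weights with correlated ("colored") Gaussian noise while training; the learned mixing matrix and noise scale are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical illustration only: correlated weight noise during training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ColoredNoiseLinear(nn.Linear):
    """Linear layer that adds correlated Gaussian noise to its weights in training mode."""
    def __init__(self, in_features: int, out_features: int, noise_scale: float = 0.1):
        super().__init__(in_features, out_features)
        # Learned mixing matrix: white noise eps -> colored noise (mix @ eps)
        self.mix = nn.Parameter(torch.eye(out_features) * noise_scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            eps = torch.randn_like(self.weight)   # white noise, one sample per weight
            colored = self.mix @ eps              # correlate noise across output units
            return F.linear(x, self.weight + colored, self.bias)
        return super().forward(x)
```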
- Feature Map Transform Coding for Energy-Efficient CNN Inference. Brian Chmiel, Chaim Baskin, Ron Banner, Evgenii Zheltonozhskii, Yevgeny Yermolin, Alex Karbachevsky, Alex M. Bronstein, and Avi Mendelson. In International Joint Conference on Neural Networks (IJCNN), Jul 2020
Oral
Convolutional neural networks (CNNs) achieve state-of-the-art accuracy in a variety of tasks in computer vision and beyond. One of the major obstacles hindering the ubiquitous use of CNNs for inference on low-power edge devices is their high computational complexity and memory bandwidth requirements. The latter often dominates the energy footprint on modern hardware. In this paper, we introduce a lossy transform coding approach, inspired by image and video compression, designed to reduce the memory bandwidth due to the storage of intermediate activation calculation results. Our method does not require fine-tuning the network weights and halves the data transfer volume to the main memory by compressing feature maps, which are highly correlated, with variable-length coding. Our method outperforms the previous approach in terms of the number of bits per value, with minor accuracy degradation on ResNet-34 and MobileNetV2. We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach results in a reduction of around 40% in the memory energy footprint, compared to a quantized network, with negligible impact on accuracy. When an accuracy degradation of up to 2% is allowed, a reduction of 60% is achieved. A reference implementation accompanies the paper.
@inproceedings{chmiel2020transformcoding, title = {Feature Map Transform Coding for Energy-Efficient CNN Inference}, author = {Chmiel, Brian and Baskin, Chaim and Banner, Ron and Zheltonozhskii, Evgenii and Yermolin, Yevgeny and Karbachevsky, Alex and Bronstein, Alex M. and Mendelson, Avi}, year = {2020}, month = jul, booktitle = {International Joint Conference on Neural Networks (IJCNN)}, pages = {1--9}, doi = {10.1109/IJCNN48605.2020.9206968}, url = {https://arxiv.org/abs/1905.10830}, }
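As a toy illustration of the transform-coding pipeline sketched in the abstract (transform, quantize, variable-length code), the snippet below applies a DCT to a random feature map, quantizes the coefficients, and estimates the coded size by their empirical entropy; the paper's actual codec differs in its transform, quantizer, and entropy coder.

```python
# Toy, hypothetical transform coding of a feature map; not the paper's codec.
import numpy as np
from scipy.fft import dctn, idctn

def code_feature_map(fmap: np.ndarray, step: float = 0.5):
    """Return the reconstructed map and an estimated bit cost per value."""
    coeffs = dctn(fmap, norm="ortho")              # decorrelating transform
    q = np.round(coeffs / step).astype(np.int64)   # uniform quantization
    # Variable-length coding is approximated by the empirical symbol entropy.
    _, counts = np.unique(q, return_counts=True)
    p = counts / q.size
    bits_per_value = float(-(p * np.log2(p)).sum())
    recon = idctn(q * step, norm="ortho")          # decoder side
    return recon, bits_per_value

fmap = np.random.default_rng(0).normal(size=(64, 56, 56)).astype(np.float32)
recon, bpv = code_feature_map(fmap)
print(f"~{bpv:.2f} bits/value, MSE {np.mean((fmap - recon) ** 2):.4f}")
```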
2019
- Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks. Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, and Avi Mendelson. ICML AutoML Workshop, Apr 2019
Recently, deep learning has become a de facto standard in machine learning, with convolutional neural networks (CNNs) demonstrating spectacular success on a wide variety of tasks. However, CNNs are typically very demanding computationally at inference time. One of the ways to alleviate this burden on certain hardware platforms is quantization, relying on the use of low-precision arithmetic representation for the weights and the activations. Another popular method is the pruning of the number of filters in each layer. While mainstream deep learning methods train the neural network weights while keeping the network architecture fixed, the emerging neural architecture search (NAS) techniques make the latter also amenable to training. In this paper, we formulate optimal arithmetic bit length allocation and neural network pruning as a NAS problem, searching for the configurations satisfying a computational complexity budget while maximizing the accuracy. We use a differentiable search method based on the continuous relaxation of the search space proposed by Liu et al. (arXiv:1806.09055). We show, by grid search, that heterogeneous quantized networks suffer from a high variance which renders the benefit of the search questionable. For pruning, improvement over homogeneous cases is possible, but it is still challenging to find those configurations with the proposed method. The code is publicly available at this https URL and this https URL
@misc{zur2019filterlevel, title = {Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks}, author = {Zur, Yochai and Baskin, Chaim and Zheltonozhskii, Evgenii and Chmiel, Brian and Evron, Itay and Bronstein, Alex M. and Mendelson, Avi}, year = {2019}, month = apr, journal = {ICML AutoML Workshop}, url = {https://arxiv.org/abs/1904.09872}, }
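A minimal sketch of the continuous-relaxation idea behind this search: keep several candidate bit-widths per layer and mix their fake-quantized weights with softmax-weighted architecture parameters. The candidate bit-widths and the quantizer below are illustrative assumptions, not the paper's exact search space.

```python
# Hypothetical DARTS-style relaxation over per-layer bit-widths; a sketch only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quant(w: torch.Tensor, bits: int) -> torch.Tensor:
    scale = w.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
    w_q = torch.round(w / scale) * scale
    return w + (w_q - w).detach()          # straight-through estimator

class MixedPrecisionConv(nn.Conv2d):
    def __init__(self, *args, candidate_bits=(2, 4, 8), **kwargs):
        super().__init__(*args, **kwargs)
        self.candidate_bits = candidate_bits
        self.arch_logits = nn.Parameter(torch.zeros(len(candidate_bits)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(self.arch_logits, dim=0)   # relaxed architecture choice
        w = sum(p * fake_quant(self.weight, b)
                for p, b in zip(probs, self.candidate_bits))
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Usage: gradients flow to both weights and arch_logits.
layer = MixedPrecisionConv(16, 32, 3, padding=1)
y = layer(torch.randn(1, 16, 8, 8))
```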
2018
- Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform. Chaim Baskin, Natan Liss, Evgenii Zheltonozhskii, Alex M. Bronstein, and Avi Mendelson. In IEEE International Parallel and Distributed Processing Symposium Workshops, May 2018
Deep neural networks (DNNs) are used by different applications that are executed on a range of computer architectures, from IoT devices to supercomputers. The footprint of these networks is huge, as are their computational and communication needs. In order to ease the pressure on resources, research indicates that in many cases a low-precision representation (1-2 bits per parameter) of weights and other parameters can achieve similar accuracy while requiring fewer resources. Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well suited to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representations of numbers. This paper presents a new streaming architecture for running QNNs on FPGAs. The proposed architecture scales out better than the alternatives, allowing us to take advantage of systems with multiple FPGAs. We also include support for skip connections, which are used in state-of-the-art NNs, and show that our architecture allows adding those connections almost for free. All this allowed us to implement an 18-layer ResNet for 224x224 image classification, achieving 57.5% top-1 accuracy. In addition, we implemented a full-sized quantized AlexNet. In contrast to previous works, we use 2-bit activations instead of 1-bit ones, which improves AlexNet's top-1 accuracy from 41.8% to 51.03% for ImageNet classification. Both AlexNet and ResNet can handle 1000-class real-time classification on an FPGA. Our implementation of ResNet-18 consumes 5x less power and is 4x slower for ImageNet when compared to the same NN on the latest Nvidia GPUs. Smaller NNs that fit a single FPGA run faster than on GPUs on small (32x32) inputs, while consuming up to 20x less energy and power.
@inproceedings{baskin2018streaming, title = {Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform}, author = {Baskin, Chaim and Liss, Natan and Zheltonozhskii, Evgenii and Bronstein, Alex M. and Mendelson, Avi}, year = {2018}, month = may, booktitle = {IEEE International Parallel and Distributed Processing Symposium Workshops}, pages = {162--169}, doi = {10.1109/IPDPSW.2018.00032}, url = {https://arxiv.org/abs/1708.00052}, }
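Finally, as a toy example of the 2-bit activations mentioned above, the snippet below maps activations to four levels so each value can be packed into two bits; the clipping range is an assumed hyperparameter, not a detail taken from the paper.

```python
# Toy, hypothetical 2-bit activation quantizer (4 levels, packable at 2 bits/value).
import numpy as np

def quantize_activations_2bit(x: np.ndarray, clip: float = 2.0):
    """Clip to [0, clip], map to codes {0, 1, 2, 3}, and return codes plus dequantized values."""
    levels = 3                                   # 2 bits -> 4 levels: 0..3
    step = clip / levels
    codes = np.clip(np.round(np.clip(x, 0.0, clip) / step), 0, levels).astype(np.uint8)
    return codes, codes * step                   # codes for hardware, values for checking

x = np.random.default_rng(0).normal(size=8).astype(np.float32)
codes, deq = quantize_activations_2bit(x)
print(codes, deq)
```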