Alexander Kolesnikov

Staff Research Engineer at Google DeepMind.


I am a machine learning researcher with extensive experience in computer vision and natural language processing.

My research projects span topics such as neural architectures, large-scale representation learning and modeling, transfer learning, generative modeling, reinforcement learning, and beyond. I strive to simplify things and make them end-to-end learnable, as opposed to introducing new components. See my papers below for examples of projects with this flavour.

I also spend a significant chunk of my time designing and coding flexible yet powerful research infrastructure. I firmly believe that good infrastructure is a necessary condition for sustainable, high-quality research output.

selected publications

  1. Tuning computer vision models with task rewards
    André Susano Pinto, Alexander Kolesnikov, Yuge Shi, Lucas Beyer, and Xiaohua Zhai
    arXiv preprint arXiv:2302.08242, 2023
  2. UViM: A unified modeling approach for vision with learned guiding codes
    Alexander Kolesnikov, André Susano Pinto, Lucas Beyer, Xiaohua Zhai, Jeremiah Harmsen, and Neil Houlsby
    Advances in Neural Information Processing Systems (NeurIPS), 2022
  3. Scaling vision transformers
    Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer
    Conference on Computer Vision and Pattern Recognition (CVPR), 2022
  4. MLP-Mixer: An all-MLP architecture for vision
    Ilya O. Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, and others
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  5. An image is worth 16x16 words: Transformers for image recognition at scale
    Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, and others
    International Conference on Learning Representations (ICLR), 2021
  6. Big transfer (BiT): General visual representation learning
    Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby
    European Conference on Computer Vision (ECCV), 2020
  7. Revisiting self-supervised visual representation learning
    Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer
    Conference on Computer Vision and Pattern Recognition (CVPR), 2019
  8. iCaRL: Incremental Classifier and Representation Learning
    Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert
    Conference on Computer Vision and Pattern Recognition (CVPR), 2017
  9. Seed, expand and constrain: Three principles for weakly-supervised image segmentation
    Alexander Kolesnikov and Christoph H. Lampert
    European Conference on Computer Vision (ECCV), 2016