Lars Lien Ankile

Robot Learning @ MIT


51 Vassar St

Cambridge, MA 02139

I’m a visiting researcher at the Improbable AI Lab at MIT CSAIL working on robot learning with Prof. Pulkit Agrawal.

Before this, I completed an M.Eng. in Data Science at Harvard University, where I did my thesis work in the Improbable AI group on sample-efficient imitation learning. I also spent a year in the Data to Actionable Knowledge Lab at Harvard, working with Profs. Weiwei Pan and Finale Doshi-Velez on applying RL and Bayesian inference to model human decision-making for frictionful tasks in healthcare settings. Before that, I spent a summer and fall interning with Prof. David Parkes and Matheus Ferreira in the EconCS Lab at Harvard, working on detecting manipulation in multi-agent settings.

I did my undergrad at the Norwegian University of Science and Technology (NTNU), where I wrote my thesis on applying deep learning to econometric forecasting of complex, multivariate time series, supervised by Prof. Sjur Westgaard.

Research Interests

My research goal is to enable machines to learn in a human-like manner, primarily through observation of and interaction with their environment. To that end, I develop methods that merge imitation and reinforcement learning with robust policy representations. Through this work, I aim to build truly adaptive and flexible robots that can reliably learn and execute complex manipulation tasks in the physical world.

news

Nov 4, 2024 Our work on scalable data platforms for robot learning, DexHub and DART, led by Younghyo Park, is now available on arXiv! It introduces DART, an AR-based teleoperation system that enables scalable data collection without physical robots, and DexHub, a platform for easy sharing of and collaboration on robot learning datasets.
Sep 1, 2024 The paper led by Allen Ren at Princeton, Diffusion Policy Policy Optimization, is now available on arXiv! In it, we introduce DPPO, an algorithmic framework and set of best practices for directly fine-tuning diffusion-based policies (as opposed to fine-tuning a residual model, as in ResIP).
Jul 23, 2024 Our latest work, From Imitation to Refinement, is now available on arXiv! Building on the lessons from JUICER, we show the limitations of imitation learning for tasks requiring precise control and propose a simple yet effective way to fine-tune pre-trained diffusion policies for such tasks using residual models.
Jun 30, 2024 Data-Efficient Imitation Learning for Robotic Assembly was accepted to IROS 2024, held October 14-18! Please reach out if you are attending and want to chat about imitation learning, robotics, or anything else!
Apr 10, 2024 Our recent work on Data-Efficient Imitation Learning for Robotic Assembly is now available on arXiv! In this work, we show how to learn long-horizon assembly tasks (~2,500 timesteps) from fewer than 50 demonstrations using diffusion policies and data augmentation strategies.

selected publications

  1. Robot Learning with Super-Linear Scaling
    Marcel Torne, Arhan Jain, Jiayi Yuan, and 5 more authors
    2024
  2. DexHub and DART: Towards Internet Scale Robot Data Collection
    Younghyo Park, Jagdeep Singh Bhatia, Lars Ankile, and 1 more author
    2024
  3. From Imitation to Refinement–Residual RL for Precise Visual Assembly
    Lars Ankile, Anthony Simeonov, Idan Shenfeld, and 2 more authors
    2024
  4. Diffusion Policy Policy Optimization
    Allen Z Ren, Justin Lidard, Lars L Ankile, and 6 more authors
    2024
  5. JUICER: Data-Efficient Imitation Learning for Robotic Assembly
    Lars Ankile, Anthony Simeonov, Idan Shenfeld, and 1 more author
    2024