Lars Lien Ankile

ML Research @ MIT


51 Vassar St

Cambridge, MA 02139

I’m a visiting researcher at the Improbable AI Lab at MIT CSAIL working on robot learning with Prof. Pulkit Agrawal.

Before this, I completed an M.Eng. in Data Science at Harvard University, where I did my thesis work in the Improbable AI group on sample-efficient imitation learning. I also spent a year in the Data to Actionable Knowledge Lab at Harvard, working with Profs. Weiwei Pan and Finale Doshi-Velez on applying RL and Bayesian inference to model human decision-making for frictionful tasks in healthcare settings. Finally, I spent a summer and fall interning with Prof. David Parkes and Matheus Ferreira in the EconCS Lab at Harvard, working on detecting manipulation in multi-agent settings.

I did my undergrad at the Norwegian University of Science and Technology (NTNU), where my thesis applied deep learning to econometric forecasting of complex, multivariate time series, supervised by Prof. Sjur Westgaard.

Research Interests

My research goal is to enable machines to learn in a human-like manner, primarily through observation of and interaction with the environment. To that end, I develop methods that merge imitation and reinforcement learning with robust policy representations, aiming to bridge the current divide between general policies that address simple tasks and more specialized policies tailored for complex tasks.

news

Jul 23, 2024 Our latest work, From Imitation to Refinement, is now available on arXiv! Building on lessons from JUICER, we show the limitations of imitation learning for tasks requiring precise control and propose a simple yet effective way to fine-tune pre-trained diffusion policies for such tasks using residual models.
Jul 23, 2024 The paper led by Allen Ren at Princeton, Diffusion Policy Policy Optimization, is now available on arXiv! Here, we introduce DPPO, an algorithmic framework and set of best practices for directly fine-tuning diffusion-based policies (as opposed to fine-tuning a residual model as in ResIP).
Jun 30, 2024 Data-Efficient Imitation Learning for Robotic Assembly was accepted to IROS 2024 (October 14-18)! Please reach out if you are going and want to chat about imitation learning, robotics, or anything else!
Apr 10, 2024 Our recent work on Data-Efficient Imitation Learning for Robotic Assembly is available on arXiv! In this work, we show how to learn long assembly tasks (~2,500 timesteps) from fewer than 50 demonstrations using diffusion policies and data-augmentation strategies.

selected publications

  1. From Imitation to Refinement: Residual RL for Precise Visual Assembly
    Lars Ankile, Anthony Simeonov, Idan Shenfeld, and 2 more authors
    2024
  2. Diffusion Policy Policy Optimization
    Allen Z Ren, Justin Lidard, Lars L Ankile, and 6 more authors
    2024
  3. Scaling Robot-Learning by Crowdsourcing Simulation Environments
    Marcel Torne Villasevil, Arhan Jain, Vidyaaranya Macha, and 5 more authors
    In RSS 2024 Workshop: Data Generation for Robotics 2024
  4. JUICER: Data-Efficient Imitation Learning for Robotic Assembly
    Lars Ankile, Anthony Simeonov, Idan Shenfeld, and 1 more author
    2024
  5. I See You! Robust Measurement of Adversarial Behavior
    Lars Ankile, Matheus XV Ferreira, and David Parkes
    In Multi-Agent Security Workshop @ NeurIPS, 2023
  6. Discovering User Types: Mapping User Traits by Task-Specific Behaviors in Reinforcement Learning
    Lars Lien Ankile, Brian Ham, Kevin Mao, and 4 more authors
    In First Workshop on Theory of Mind in Communicating Agents @ ICML, 2023
  7. M.Sc. Thesis: Exploration of Forecasting Paradigms and a Generalized Forecasting Framework
    Lars Lien Ankile and Kjartan Krange
    2022