Lars Lien Ankile
Robot Learning @ MIT
51 Vassar St
Cambridge, MA 02139
I’m a visiting researcher at the Improbable AI Lab at MIT CSAIL, working on robot learning with Prof. Pulkit Agrawal.
Before this, I completed an M.Eng. in Data Science at Harvard University, where I did my thesis work in the Improbable AI group on sample-efficient imitation learning. I also spent a year in the Data to Actionable Knowledge Lab at Harvard, working with Profs. Weiwei Pan and Finale Doshi-Velez on applying RL and Bayesian inference to model human decision-making for frictionful tasks in healthcare settings. Finally, I spent a summer and fall interning with Prof. David Parkes and Matheus Ferreira in the EconCS Lab at Harvard, working on detecting manipulation in multi-agent settings.
I did my undergrad at the Norwegian University of Science and Technology (NTNU), where I wrote my thesis on applying deep learning to econometric forecasting of complex, multivariate time series, supervised by Prof. Sjur Westgaard.
Research Interests
My research goal is to enable machines to learn in a human-like manner, primarily through observation of and interaction with their environment. To this end, I develop methods that merge imitation and reinforcement learning with robust policy representations. Through this work, I aim to build truly adaptive and flexible robots that can reliably learn and execute complex manipulation tasks in the physical world.
News
Nov 4, 2024 | Our work on scalable data platforms for robot learning, Dexhub and DART, led by Younghyo Park, is now available on arXiv! It introduces DART, an AR-based teleoperation system that enables scalable data collection without robots, and the Dexhub platform, which enables easy sharing of and collaboration on robot learning datasets. |
---|---|
Sep 1, 2024 | The paper led by Allen Ren at Princeton, Diffusion Policy Policy Optimization, is now available on arXiv! Here, we introduce DPPO, an algorithmic framework and set of best practices for directly fine-tuning diffusion-based policies (as opposed to fine-tuning a residual model as in ResIP). |
Jul 23, 2024 | Our latest work, From Imitation to Refinement, is now available on arXiv! In this work, we take the lessons from Juicer, show the limitations of imitation learning for tasks requiring precise control, and propose a simple yet effective way to fine-tune pre-trained diffusion models for such tasks using residual models. |
Jun 30, 2024 | Data-Efficient Imitation Learning for Robotic Assembly was accepted to IROS 2024 (October 14-18)! Please reach out if you are going and want to chat about imitation learning, robotics, or anything else! |
Apr 10, 2024 | Our recent work on Data-Efficient Imitation Learning for Robotic Assembly is now available on arXiv! In this work, we show how to learn long-horizon assembly tasks (~2,500 timesteps) from fewer than 50 demonstrations using diffusion policies and data augmentation strategies. |