Taesung Park, Sergey Levine. Inverse Optimal Control for Humanoid Locomotion. Robotics: Science and Systems (RSS) Workshop on Inverse Optimal Control & Robotic Learning from Demonstration. 2013. (pdf)
Abstract—In this paper, we present a method for learning the reward function for humanoid locomotion from motion-captured demonstrations of human running. We show how an approximate, local inverse optimal control algorithm can be used to learn the reward function for this high dimensional domain, and demonstrate how trajectory optimization can then be used to recreate dynamic, naturalistic running behaviors in new environments. Results are presented in simulation on a 29-DoF humanoid model, and include running on flat ground, rough terrain, and under strong lateral perturbation.
Taesung Park. Automatic 3D Character Animation Using Inverse Reinforcement Learning. Master’s research report, Stanford University Department of Computer Science. 2013. (pdf)
Abstract—This report presents a framework for learning 3D character animation in the Markov Decision Process (MDP) setting, using a reward function learned with Inverse Reinforcement Learning (IRL). Casting 3D character control as an optimization problem in an MDP and solving it with reinforcement learning is attractive because it automatically generates the details of the motion and is portable across different environments. However, this approach has been infeasible due to two obstacles: the curse of dimensionality and the subtlety of the reward function. This report addresses the dimensionality problem with an iterative LQG method that makes local approximations, and uses IRL to learn the precise reward function needed to generate the desired motion. The framework was evaluated on two models, a 2-DoF snake and a 6-DoF bipedal walker. Both models successfully verified that optimal control combined with IRL can drive characters toward a desired high-level goal. The reward function was also shown to be portable across domains by producing a walking motion under reduced gravity.
Taesung Park. Synthetic Panning Shot. Poster at Stanford CURIS Poster Session. 2010. (pdf)
In this project, advised by Prof. Marc Levoy, panning shots, which normally require a skilled photographer, were synthesized from two or three consecutive short-exposure photographs taken with a smartphone camera. The algorithm used corner detection and RANSAC to estimate the transform of the foreground object.
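The report does not spell out the exact RANSAC pipeline, but the core step can be sketched as follows. This is a minimal illustration assuming a translation-only motion model between frames (the actual project may have used a richer transform and OpenCV-style corner detection); `ransac_translation` and its parameters are hypothetical names for this sketch.

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, thresh=2.0, rng=None):
    """Estimate a 2D translation mapping src -> dst with RANSAC.

    src, dst: (N, 2) arrays of matched corner locations in two frames.
    A single correspondence fixes a translation hypothesis; the one with
    the most inliers wins, then the translation is refit on its inliers.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]                   # hypothesized translation
        resid = np.linalg.norm(src + t - dst, axis=1)
        inliers = resid < thresh              # matches consistent with t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit of the translation on the best inlier set
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

With the foreground transform estimated this way, mismatched corners on the blurred background are rejected as outliers, so the foreground motion can be recovered even when many matches are wrong.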