In 2020, I received my B.S. in Information Security from Shanghai Jiao Tong University. After graduation, I took a gap year and worked as a Research Assistant at Shanghai Qi Zhi Institute, advised by Prof. Yi Wu.
In this work, we propose to learn a generative model of tool-use trajectories as a sequence of point clouds, which generalizes to different tool shapes. We train a single model for four challenging deformable object manipulation tasks: cutting, rolling, large scooping, and small scooping.
In this work, we introduce spatially-grounded parameterized motion primitives to improve policy generalization for robotic manipulation tasks. By grounding the primitives on a spatial location in the environment, our proposed method is able to effectively generalize across object shape and pose variations.
In this work, we introduce DROID (Distributed Robot Interaction Dataset), a diverse robot manipulation dataset with 76k demonstration trajectories or 350h of interaction data, collected across 564 scenes and 86 tasks by 50 data collectors in North America, Asia, and Europe over the course of 12 months. We demonstrate that training with DROID leads to policies with higher performance, greater robustness, and improved generalization ability.
We present a system for bimanual manipulation that coordinates its two arms by assigning them distinct roles: a stabilizing arm holds an object stationary while an acting arm operates in the simplified environment the stabilizing arm creates.
We build a semi-autonomous robotic system that performs in-mouth transfer of food safely and comfortably for people with disabilities. The system is composed of a force-reactive controller that safely accommodates the user's motions throughout the transfer, a novel dexterous wrist-like end effector that reduces discomfort, and a visual sensor that identifies the user's mouth.
We propose a general bimanual scooping primitive and an adaptive stabilization strategy that enable successful acquisition of foods with diverse geometries and physical properties using closed-loop visual feedback.
With task reduction and self-imitation, our RL agent progressively tackles challenging sparse-reward continuous-control tasks with high efficiency.
We train an RL agent to acquire rope-spreading and cloth-spreading skills without any human demonstrations; the method transfers to real robots after domain adaptation.
Carnegie Mellon University Doctor of Philosophy in Robotics
Aug '23 - Now
Research:
Working as a Research Assistant at CMU Robots Perceiving and Doing Lab (RPAD)
Stanford University Master of Science in Computer Science
Sept '21 - Jun '23
Research:
Worked as a Research Assistant at Stanford ILIAD Lab
Worked on Assistive Feeding Project
Teaching Experience:
CA for CS 148 Introduction to Computer Graphics and Imaging | Fall 2022
CA for CS 221: Artificial Intelligence: Principles and Techniques | Spring 2023, Spring 2022 and Fall 2021
CA for CS 182 Ethics, Public Policy, and Technological Change | Winter 2023, Winter 2022
Shanghai Jiao Tong University Bachelor of Science in Information Security
Sept '16 - Jun '20
Awards:
Graduated with honors: Outstanding Graduate of Shanghai | 2020
Hongyi Scholarship (Top 10 Summer Research among Undergraduates) | 2019
Academic Excellence Scholarship (Second Class) of SJTU | 2017 & 2018
National Scholarship (<1%) | 2017
University of California, Berkeley Exchange Student in Computer Science
Jan '19 - Sept '19
Research:
Worked as a Research Assistant at UC Berkeley BAIR Lab
Worked on Deformable Object Manipulation Project
This template is a modification of Jon Barron's website and Rishab Khincha's website. Find the source code to my version here. Feel free to clone it for your own use while attributing the original author, Jon Barron.