Offline Imitation Learning (IL) methods such as Behavior Cloning (BC) are a simple and effective way to acquire complex robotic manipulation skills. However, existing IL-trained policies are confined to executing a task at the speed of the demonstration. This limits the task throughput of a robotic system, a critical requirement for applications such as industrial automation. We propose SAIL (Speed-Adaptive Imitation Learning), a framework that enables faster-than-demonstration execution of IL policies by addressing key technical challenges in robot dynamics and state-action distribution shift.
SAIL features four tightly connected components: high-gain control to enable high-fidelity tracking of IL policy trajectories, consistency-preserving trajectory generation to ensure smooth robot motion, adaptive speed modulation that dynamically adjusts execution speed based on motion complexity, and action scheduling to handle real-world system latencies.
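To make the adaptive speed modulation idea concrete, the following is a minimal sketch, not the paper's actual method: it assumes motion complexity is approximated by a finite-difference curvature proxy over a waypoint trajectory, and maps low-complexity segments to a higher speedup (capped at `max_speedup`) while complex segments stay near demonstration speed. The function name and the complexity measure are illustrative assumptions, not taken from the source.

```python
import numpy as np

def adaptive_speed_schedule(traj, max_speedup=4.0, eps=1e-8):
    """Assign a per-segment speedup factor over a waypoint trajectory.

    Simple, complexity proxy: ratio of second to first finite
    differences (a discrete curvature estimate). Straight, simple
    segments get up to max_speedup; the most complex segment stays
    near 1x (demonstration speed).

    traj: (T, D) array of waypoints. Returns a (T-2,) array of factors.
    Hypothetical stand-in for SAIL's adaptive speed modulation.
    """
    traj = np.asarray(traj, dtype=float)
    vel = np.diff(traj, axis=0)   # (T-1, D) first differences (velocity proxy)
    acc = np.diff(vel, axis=0)    # (T-2, D) second differences (curvature proxy)
    complexity = np.linalg.norm(acc, axis=1) / (np.linalg.norm(vel[:-1], axis=1) + eps)
    # Normalize to [0, 1]; complexity 0 -> max_speedup, max complexity -> ~1x.
    c = complexity / (complexity.max() + eps)
    return 1.0 + (max_speedup - 1.0) * (1.0 - c)
```

For example, a trajectory with a sharp 90-degree corner yields the lowest speedup factor at the corner and the full speedup on the straight segments before and after it.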
Experimental validation on six robotic manipulation tasks shows that SAIL achieves up to a 4&times; speedup over demonstration speed in simulation and up to a 3.2&times; speedup on a physical robot.