Human-Object Interaction with Vision-Language Model Guided Relative Movement Dynamics

1ShanghaiTech University, 2Astribot. *Corresponding authors

Our Relative Movement Dynamics (RMD) architecture enables dynamic object interaction and long-horizon multi-task completion under VLM guidance, and automatically constructs a unified reward function applicable to a variety of reinforcement learning interaction tasks.
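The paper's concrete reward design is not reproduced on this page; the sketch below is only a minimal illustration, assuming the unified reward penalizes deviation of a body-part/object-part pair's relative position and velocity from the RMD targets planned for the current sub-step (the function name, arguments, and weights are all hypothetical).

```python
import numpy as np

def rmd_reward(body_pos, body_vel, obj_pos, obj_vel,
               target_rel_pos, target_rel_vel,
               w_pos=1.0, w_vel=0.5):
    """Hypothetical RMD-style reward: penalize deviation of the current
    body-part/object-part relative position and velocity from the targets
    planned for this sub-step."""
    rel_pos = body_pos - obj_pos          # current relative position
    rel_vel = body_vel - obj_vel          # current relative velocity
    pos_err = np.linalg.norm(rel_pos - target_rel_pos)
    vel_err = np.linalg.norm(rel_vel - target_rel_vel)
    # Exponential shaping keeps the per-pair reward bounded in (0, 1].
    return np.exp(-(w_pos * pos_err + w_vel * vel_err))
```

Because each per-pair term is bounded, rewards from several body-object pairs could be summed or averaged without further normalization; whether the actual framework does this is not stated here.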

Abstract

Human-Object Interaction (HOI) is vital for advancing simulation, animation, and robotics, enabling the generation of long-term, physically plausible motions in 3D environments. However, existing methods often fall short of achieving physical realism and supporting diverse types of interactions. To address these challenges, this paper introduces a Human-Object Interaction framework that provides unified control over interactions with static scenes and dynamic objects using language commands. Interactions between human body parts and object parts can always be described as continuous, stable Relative Movement Dynamics (RMD) between the two. By leveraging the world knowledge and scene perception capabilities of Vision-Language Models (VLMs), we translate language commands into RMD diagrams, which guide goal-conditioned reinforcement learning for sequential interaction with objects. Our framework supports long-horizon interactions among dynamic, articulated, and static objects. To support the training and evaluation of our framework, we present a new dataset named Interplay, which includes multi-round task plans generated by VLMs and covers both static and dynamic HOI tasks. Extensive experiments demonstrate that our framework effectively handles a wide range of HOI tasks, showcasing its ability to maintain long-term, multi-round transitions.

Video

Overview

The Relative Movement Dynamics (RMD) characterize the spatiotemporal relationships between human body parts and object components within each sub-sequence. To represent RMD, we adopt a bipartite graph in which each node representing a human body part is uniquely connected to a corresponding node representing an object part, and the edges explicitly encode the relative movement dynamics between the connected body-object pairs. We employ GPT-4V as our planning module to decompose high-level tasks into multiple sub-sequences, which are then executed sequentially by the control policy.
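As a concrete illustration of this structure (not the authors' implementation), one RMD sub-sequence can be held in a small bipartite data structure; the field names below, such as target_rel_pos and contact, are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RMDEdge:
    """Illustrative edge of the RMD bipartite graph: links one human body
    part to one object part and stores their planned relative movement
    over the sub-sequence (all fields are assumptions)."""
    body_part: str            # e.g. "right_hand"
    object_part: str          # e.g. "door_handle"
    target_rel_pos: tuple     # desired relative position (x, y, z)
    target_rel_vel: tuple     # desired relative velocity (x, y, z)
    contact: bool = False     # whether contact should be maintained

@dataclass
class RMDGraph:
    """One sub-sequence of the plan: a set of body-part/object-part edges,
    with each body part appearing in at most one edge."""
    edges: list = field(default_factory=list)

    def add(self, edge: RMDEdge):
        assert all(e.body_part != edge.body_part for e in self.edges), \
            "each body part connects to at most one object part"
        self.edges.append(edge)

# Example: a "sit on the chair" sub-sequence.
sit = RMDGraph()
sit.add(RMDEdge("pelvis", "chair_seat",
                target_rel_pos=(0.0, 0.0, 0.1),
                target_rel_vel=(0.0, 0.0, 0.0),
                contact=True))
```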


Detailed RMD planner


Given a top-view image of the surrounding scene and the textual instruction, the RMD Planner outputs sequential sub-step plans in structured JSON, which can be processed directly by Python scripts. To ensure the RMD Planner functions as intended, the prompt is divided into sections, each designed to support a specific functionality.
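For illustration only, a plan in this format might look and be parsed as follows; the schema (keys such as sub_step, edges, and relation) is an assumption, since the exact JSON layout emitted by the planner is not reproduced here.

```python
import json

# Hypothetical sub-step plan as the RMD Planner might emit it; the keys
# are illustrative, not the schema used in the paper.
plan_json = """
[
  {"sub_step": 1, "description": "walk to the door",
   "edges": [{"body_part": "pelvis", "object_part": "door",
              "relation": "approach"}]},
  {"sub_step": 2, "description": "open the door",
   "edges": [{"body_part": "right_hand", "object_part": "door_handle",
              "relation": "grasp_and_pull"}]}
]
"""

# Each sub-step becomes one RMD graph that the control policy executes in turn.
for step in json.loads(plan_json):
    print(f"Sub-step {step['sub_step']}: {step['description']}")
    for edge in step["edges"]:
        print(f"  {edge['body_part']} -> {edge['object_part']} ({edge['relation']})")
```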

Qualitative Results in a Single-Task Scenario

We compare our approach with UniHSI [Xiao 2024] and InterPhys [Hassan 2023]. UniHSI struggles to control humanoid get-up motions, producing noticeable high-frequency jittering. This occurs because it models human-object interaction as a sequence of discrete, instantaneous, and independently solved spatial-reaching steps, neglecting the temporal dynamics required to coordinate movements across body parts. Similarly, InterPhys generates unnatural motions during task completion, such as kicking doors or forcing them open with the entire body, primarily due to insufficient fine-grained spatiotemporal guidance. In contrast, our method enables stable interactions with objects and supports smooth, natural transitions between interaction stages.

Video comparisons: Ours (Open) vs. InterPhys (Open); Ours (Sit) vs. UniHSI (Sit).

Qualitative Results in a Multi-Task Scenario