
Generating realistic human-human interactions is a challenging task that requires not only high-quality individual body and hand motions, but also coherent coordination among all interactants. Due to limited available data and the increased learning complexity, previous methods tend to ignore hand motions, limiting the realism and expressivity of the interactions. Additionally, current diffusion-based approaches generate entire motion sequences simultaneously, which restricts their ability to capture the reactive and adaptive nature of human interactions. To address these limitations, we introduce Interact2Ar, the first end-to-end text-conditioned autoregressive diffusion model for generating full-body human-human interactions. Interact2Ar incorporates detailed hand kinematics through dedicated parallel branches, enabling high-fidelity full-body generation. Furthermore, we introduce an autoregressive pipeline coupled with a novel memory technique that adapts to the inherent variability of human interactions using efficient large context windows. This adaptability enables a series of downstream applications, including temporal motion composition, real-time adaptation to disturbances, and extension beyond dyadic to multi-person scenarios. To validate the generated motions, we introduce a set of robust evaluators and extended metrics designed specifically for assessing full-body interactions. Through quantitative and qualitative experiments, we demonstrate the state-of-the-art performance of Interact2Ar.
Interact2Ar is the first text-conditioned autoregressive diffusion model for generating full-body human-human interactions with detailed hand motions. Through a novel memory strategy, it improves the quality of the generated interactions and enables adaptive behaviors, including temporal motion composition, real-time adaptation to disturbances, and extension to multi-person scenarios.

Standard short-term memory $\mathcal{M}^s$ often leads to repetitive artifacts in long interactions due to insufficient context. To address this, we introduce Mixed Memory, which augments the immediate history with a long-term component $\mathcal{M}^l$. This component retains a temporally downsampled history at intervals of $\delta$, enabling the model to access a substantially longer context window without excessive computational cost. By conditioning on the combined memory: $$\mathcal{M}_k = \{\mathcal{M}_k^l, \mathcal{M}_k^s\}$$ our model leverages full-framerate immediate context for seamless transitions while utilizing long-range temporal information to avoid action repetition.
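As a minimal sketch of how such a mixed memory could be assembled, the PyTorch snippet below keeps the most recent frames at full framerate and downsamples the older history at intervals of $\delta$. All names and defaults (`build_mixed_memory`, `short_len`, `delta`) are illustrative assumptions rather than the authors' implementation.

```python
import torch


def build_mixed_memory(history: torch.Tensor,
                       short_len: int = 30,
                       delta: int = 8) -> torch.Tensor:
    """Hypothetical sketch: combine short- and long-term memory.

    history:   (T, D) tensor of all previously generated motion frames.
    short_len: frames kept at full framerate (the short-term memory M^s).
    delta:     downsampling interval of the long-term memory M^l.
    """
    # Short-term memory M^s: the most recent frames, untouched,
    # providing full-framerate context for seamless transitions.
    m_short = history[-short_len:]
    # Long-term memory M^l: every delta-th frame of the older history,
    # giving a much longer context window at a fraction of the cost.
    m_long = history[:-short_len][::delta]
    # Combined memory M_k = {M^l_k, M^s_k}, concatenated along time.
    return torch.cat([m_long, m_short], dim=0)
```

With these example values ($\delta = 8$, a 30-frame short-term window), conditioning on 60 memory frames covers roughly 270 frames of history rather than 60, which is what allows the model to detect and avoid action repetition.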
Quantitatively, Interact2Ar establishes a new state of the art on the Inter-X dataset. To ensure rigorous assessment, we propose retrained, robust evaluators that introduce isolated body and hand evaluations, overcoming the limitations of the original benchmarks, which fail to penalize significant degradations. Qualitatively, our method generates high-fidelity interactions with realistic hand motions and precise alignment. We validate these capabilities through extensive experimentation and a user study.
Thanks to the proposed autoregressive pipeline with its Mixed Memory strategy, Interact2Ar exhibits a range of adaptive capabilities that enable several downstream interaction applications, as sketched below.
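To make the role of the memory in these applications concrete, the following sketch shows a generic autoregressive rollout that reuses `build_mixed_memory` from above. The `model.sample` interface, the zero-pose seed, and the window and feature sizes are hypothetical placeholders, not the actual Interact2Ar API.

```python
import torch


def generate(model, text_emb: torch.Tensor, n_windows: int,
             window: int = 30, dim: int = 262) -> torch.Tensor:
    """Hypothetical autoregressive rollout conditioned on mixed memory."""
    frames = torch.zeros(window, dim)  # seed context, e.g. a rest pose
    for _ in range(n_windows):
        memory = build_mixed_memory(frames)
        # The diffusion model denoises only the next short window,
        # conditioned on the text embedding and the memory M_k.
        next_window = model.sample(text_emb, memory)  # (window, dim)
        frames = torch.cat([frames, next_window], dim=0)
    return frames
```

Because the conditioning context is rebuilt at every window, externally perturbed frames or a mid-sequence change of text prompt simply enter the memory and steer subsequent windows; under these assumptions, temporal composition and real-time adaptation to disturbances fall out of the same loop.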
@misc{ruizponce2025interact2arfullbodyhumanhumaninteraction,
  title={Interact2Ar: Full-Body Human-Human Interaction Generation via Autoregressive Diffusion Models},
  author={Pablo Ruiz-Ponce and Sergio Escalera and José García-Rodríguez and Jiankang Deng and Rolandos Alexandros Potamias},
  year={2025},
  eprint={2512.19692},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.19692},
}