MotionCLR
library · paper
Motion generation and training-free editing via understanding attention mechanisms. Self-attention measures frame-wise sequential similarity within a motion, while cross-attention finds word-sequence correspondence between the text prompt and motion frames. Enables training-free motion editing via attention map manipulation, including de-emphasizing, replacement, and example-based generation.
Outputs: 2
MotionCLR
library
Attention-based motion diffusion model with training-free editing capabilities via attention map manipulation.
MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms
paper
Proposes an attention-based motion diffusion model that explicitly models the roles of self-attention and cross-attention, enabling training-free motion editing.
arXiv: 2410.18977
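The attention-map manipulation described above can be illustrated with a minimal sketch. This is not MotionCLR's actual implementation; the function name, shapes, and the per-word reweighting scheme are assumptions chosen to show the general idea: scale the cross-attention columns of selected prompt words, then renormalize, so the corresponding words contribute less (de-emphasis) or more (emphasis) to each generated frame.

```python
import numpy as np

def cross_attention(q, k, v, word_weights=None):
    """Toy cross-attention: q is (frames, d), k and v are (words, d).

    word_weights (hypothetical editing knob, not from the paper's API)
    scales each word's attention column before renormalization.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Numerically stable softmax over the word axis.
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    if word_weights is not None:
        attn = attn * word_weights                 # scale per-word columns
        attn /= attn.sum(axis=-1, keepdims=True)   # rows sum to 1 again
    return attn @ v, attn

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 motion frames
k = rng.normal(size=(3, 8))   # 3 prompt words
v = rng.normal(size=(3, 8))

_, attn_base = cross_attention(q, k, v)
# De-emphasize word 1 by down-weighting its attention column.
_, attn_edit = cross_attention(q, k, v, word_weights=np.array([1.0, 0.1, 1.0]))
```

After the edit, every frame attends less to word 1 while the attention rows still form valid distributions; replacement-style edits would instead swap attention maps between two generation passes.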