
LM Pretrain #99

Open
@palmex

Description


Did you ever run a forward pass on the LM with just motion-to-motion or text-to-text, or is the code below stage 1 of training as described in the paper?

From Paper:
"To generalize to various downstream tasks like [7, 37, 38, 28], we follow [38] to design an objective, where a
certain percentage (15%) of tokens in the input tokens Xs are randomly replaced with a special
sentinel token. On the other side, the corresponding target sequence is constructed by extracting
the dropped-out spans of tokens, delimited by the same sentinel tokens used in the input sequence,
along with an additional sentinel token to indicate the end of the target sequence. 2) We then learn
the motion-language relation by the supervision of paired text-motion datasets [11, 33]. We train
MotionGPT on the supervised motion-language translation, where the input is either a human motion
or a text description. After unsupervised and supervised training processes, we aim to equip our model
with the understanding of text and motion relationships. "

From Code:
mgpt_lm.py

# 'supervised' appears three times in the list, so it is drawn 3 times out of 5
condition = random.choice(
    ['text', 'motion', 'supervised', 'supervised', 'supervised'])
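For what it's worth, a quick sketch (not from the repo) confirms what that line implies: because `'supervised'` is listed three times out of five entries, `random.choice` picks the supervised text-motion translation condition about 60% of the time, and each of the unsupervised `'text'` / `'motion'` span-corruption conditions about 20% of the time:

```python
import random
from collections import Counter

random.seed(0)

# Draw the training condition many times, exactly as mgpt_lm.py does,
# and tally how often each condition comes up.
counts = Counter(
    random.choice(['text', 'motion', 'supervised', 'supervised', 'supervised'])
    for _ in range(10_000)
)

print(counts['supervised'] / 10_000)  # roughly 0.6
print(counts['text'] / 10_000)        # roughly 0.2
print(counts['motion'] / 10_000)      # roughly 0.2
```

So if this is the mixed stage described in the paper, the unsupervised objectives are still sampled during this stage, just with lower probability than the supervised one.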
