[BUG] model(**input) cannot be used under ZeRO stage 3 #6949

Open
MarkDeng1 opened this issue Jan 14, 2025 · 0 comments
Labels: bug, training

MarkDeng1 commented Jan 14, 2025

Describe the bug
I would like to train a LLaVA model with RL. After the model is loaded via:

    model = LlavaLlamaForCausalLM.from_pretrained(...)

I also want a second, reference model, ref0_model, created as a deep copy:

    ref0_model = copy.deepcopy(model)

Then I build the trainer and call trainer.train():

    trainer = LLaVATrainer(model=model,
                           ref_model=ref0_model,
                           rl_mode=True,
                           tokenizer=tokenizer,
                           args=training_args,
                           **data_module)
    trainer.train()

Inside trainer.train() I need the output from self.model(**batch); this works. But I also need output_ref from self.ref0_model(**batch), and this fails.

The reported error is that the dimensions are incorrect in forward()...
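
A minimal diagnostic sketch, assuming the dimension error comes from ZeRO-3 parameter partitioning: under stage 3, DeepSpeed replaces each parameter's storage with a size-0 placeholder and records the full shape in the parameter's `ds_shape` attribute, and a plain `copy.deepcopy` copies those placeholders without the engine that gathers them.

```python
def report_partitioned_params(m, limit=5):
    # List parameters whose live shape differs from their ZeRO-3 full shape.
    shown = 0
    for name, p in m.named_parameters():
        ds_shape = getattr(p, "ds_shape", None)  # attached by DeepSpeed ZeRO-3
        if ds_shape is not None and tuple(p.shape) != tuple(ds_shape):
            print(f"{name}: live shape {tuple(p.shape)}, full shape {tuple(ds_shape)}")
            shown += 1
            if shown >= limit:
                break
    if shown == 0:
        print("no partitioned placeholder parameters found")

# If ref0_model's parameters report a live shape of (0,), forward() will see
# wrong dimensions: nothing gathers the partitioned weights before the matmuls.
report_partitioned_params(ref0_model)
```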

However, ref0_model is a deep copy of the very same model, so both should accept the same inputs. How do I solve this under ZeRO stage 3?
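
Not a confirmed fix for this exact trainer, but a common pattern in RLHF-style setups (e.g. DeepSpeed-Chat and TRL) is to avoid deepcopying the ZeRO-3 model and instead give the reference model its own DeepSpeed engine, so its partitioned parameters are gathered on each forward pass. A rough sketch under that assumption; the config values and the name `ref_engine` are placeholders:

```python
import os
import deepspeed

# Load the reference model independently (same checkpoint as `model`),
# instead of deep-copying the already-partitioned training model.
# LlavaLlamaForCausalLM is imported as in the snippet above.
ref0_model = LlavaLlamaForCausalLM.from_pretrained(...)
ref0_model.eval()

# Assumed eval-only ZeRO-3 config; the batch-size values are illustrative
# and must match micro_batch * grad_accum * world_size in your setup.
world_size = int(os.environ.get("WORLD_SIZE", "1"))
ref_ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "train_batch_size": 1 * world_size,
    "zero_optimization": {"stage": 3},
    "bf16": {"enabled": True},
}

# Giving the reference model its own engine lets DeepSpeed gather the
# partitioned parameters whenever the engine's forward is called.
ref_engine, *_ = deepspeed.initialize(model=ref0_model, config=ref_ds_config)

# In the trainer, run the reference pass through the engine:
# with torch.no_grad():
#     output_ref = ref_engine(**batch)
```

The trainer would then hold ref_engine (for example as self.ref0_model) and call it in place of the raw deep copy.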
