@Tess314, DeepSpeed does not control the logging of model statistics such as training loss or accuracy. The tutorial you referenced logs training loss, so it may be better to ask the author or dig through the LLaVA source code.
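Since LLaVA's `train_mem.py` builds on the Hugging Face `Trainer`, the usual place to add an accuracy metric is a `compute_metrics` callback. Below is a minimal sketch; the label-masking convention (`-100` for ignored tokens) follows the Hugging Face default and has not been checked against LLaVA's own preprocessing, so treat it as an assumption:

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred is the (predictions, labels) pair the HF Trainer passes in.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Assumption: padded/prompt positions are marked with -100, the HF default.
    mask = labels != -100
    accuracy = (preds[mask] == labels[mask]).mean()
    return {"accuracy": float(accuracy)}
```

If the training script exposed such a hook, running with `--evaluation_strategy "epoch"` and `--report_to wandb` would make the Trainer forward the returned dict to W&B after each epoch.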
Hi there,
I want to log my model's accuracy after each epoch, and its final accuracy at the end, but I cannot find a simple way to do this.
I am following this tutorial.
My deepspeed/wandb code is as follows:
```
import wandb
wandb.login()

!deepspeed LLaVA/llava/train/train_mem.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed LLaVA/scripts/zero3.json \
    --model_name_or_path liuhaotian/llava-v1.5-13b \
    --version v1 \
    --data_path ./dataset/train/dataset.json \
    --image_folder ./dataset/images \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-task-lora \
    --num_train_epochs 10 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
```
I have been advised to use DeepSpeed's Monitor, but I do not understand how it could be used to log accuracy.
Any help or advice would be appreciated.
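From the DeepSpeed docs, Monitor appears to be enabled through the DeepSpeed config file rather than the training script, by adding a block like the following to zero3.json (the project name here is a placeholder). As far as I can tell, though, it only reports DeepSpeed's own runtime metrics, not model accuracy:

```json
{
  "wandb": {
    "enabled": true,
    "project": "my_llava_finetune"
  }
}
```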