maestro is a streamlined tool to accelerate the fine-tuning of multimodal models. By encapsulating best practices from our core modules, maestro handles configuration, data loading, reproducibility, and training loop setup. It currently offers ready-to-use recipes for popular vision-language models such as Florence-2, PaliGemma 2, and Qwen2.5-VL.
To begin, install the model-specific dependencies. Because some models have conflicting dependency requirements, we recommend creating a dedicated Python environment for each model.
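For example, a dedicated environment can be created with Python's built-in `venv` module (the environment name below is just an illustration, not a convention required by maestro):

```shell
# Create an isolated environment for this model (the name is arbitrary)
python3 -m venv paligemma2-env

# Activate it (on Windows: paligemma2-env\Scripts\activate)
source paligemma2-env/bin/activate
```

With the environment active, the `pip install` command below installs maestro and its model-specific extras into it rather than into your global Python installation.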
```bash
pip install "maestro[paligemma_2]"
```
Kick off fine-tuning with our command-line interface, which leverages the configuration and training routines defined in each model’s core module. Simply specify key parameters such as the dataset location, number of epochs, batch size, optimization strategy, and metrics.
```bash
maestro paligemma_2 train \
  --dataset "dataset/location" \
  --epochs 10 \
  --batch-size 4 \
  --optimization_strategy "qlora" \
  --metrics "edit_distance"
```
For greater control, use the Python API to fine-tune your models. Import the train function from the corresponding module and define your configuration in a dictionary. The core modules take care of reproducibility, data preparation, and training setup.
```python
from maestro.trainer.models.paligemma_2.core import train

config = {
    "dataset": "dataset/location",
    "epochs": 10,
    "batch_size": 4,
    "optimization_strategy": "qlora",
    "metrics": ["edit_distance"],
}

train(config)
```
Looking for a place to start? Try our cookbooks to learn how to fine-tune different VLMs on various vision tasks with maestro.
| Description | Open in Colab |
|:---|:---:|
| Fine-tune Florence-2 for object detection with LoRA | |
| Fine-tune PaliGemma 2 for JSON data extraction with LoRA | |
| Fine-tune Qwen2.5-VL for JSON data extraction with QLoRA | |
We would love your help in making this repository even better! We are especially looking for contributors with experience in fine-tuning vision-language models (VLMs). If you notice any bugs or have suggestions for improvement, feel free to open an issue or submit a pull request.