KEP-2401: Kubeflow LLM Trainer V2 #2410

Open
wants to merge 41 commits into base: master

Conversation

Electronic-Waste
Member

@Electronic-Waste Electronic-Waste commented Feb 1, 2025

This is the Kubeflow Enhancement Proposal for Kubeflow LLM Trainer V2: http://bit.ly/4gp8JGd
Related: #2401 #2170

We are collecting the final community feedback and any suggestions are welcome!

Open Questions

  • We need to pass arguments to the tune run CLI to enable distributed training, instead of passing distributed parameters prefixed with PET_ as environment variables (see the sketch below). Do you prefer reusing the torch runtime plugin or creating a new one?
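
For context, a rough sketch of the two options (the exact flags, variable names, and recipe/config values below are illustrative, not a final contract):

```sh
# Today: the torch runtime plugin injects PET_-prefixed environment variables that torchrun reads
PET_NNODES=2 PET_NPROC_PER_NODE=8 torchrun train.py

# Proposed: pass the same distributed parameters as arguments to the tune run CLI
tune run --nnodes 2 --nproc_per_node 8 lora_finetune_distributed --config llama3/8B_lora
```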

/cc @kubeflow/wg-training-leads @deepanker13 @saileshd1402 @seanlaii @helenxie-bit @astefanutti @varshaprasad96 @franciscojavierarceo @thesuperzapper @rimolive @juliusvonkohout @jbottum @varodrig @Doris-xm @truc0


@Electronic-Waste: GitHub didn't allow me to request PR reviews from the following users: saileshd1402, varshaprasad96, truc0, astefanutti, seanlaii.

Note that only kubeflow members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

This is the Kubeflow Enhancement Proposal for Kubeflow LLM Trainer V2: http://bit.ly/4gp8JGd
Related: #2401 #2170

We will collect the final community feedback by 2.12 and start the implementation after that.

Open Questions

  1. Since we adopt torchrun as the launcher for LLM Trainer, do we need to support more launchers like torchtune and accelerate in the future?
  2. Do we want to support Adapter Prompt Tuning and Prefix Tuning?

/cc @kubeflow/wg-training-leads @deepanker13 @saileshd1402 @seanlaii @helenxie-bit @astefanutti @varshaprasad96 @franciscojavierarceo @thesuperzapper @rimolive @juliusvonkohout @jbottum @varodrig @Doris-xm @truc0

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@coveralls

coveralls commented Feb 1, 2025

Pull Request Test Coverage Report for Build 13089269276

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall first build on doc/KEP-2401 at 100.0%

Totals Coverage Status:
  • Change from base Build 13016586638: 100.0%
  • Covered Lines: 85
  • Relevant Lines: 85

💛 - Coveralls

@juliusvonkohout
Member

Should security, i.e. hard multi-tenancy, Istio support, and the restricted Pod Security Standards profile, be part of the KEP?

@Electronic-Waste
Member Author

Electronic-Waste commented Feb 12, 2025

@juliusvonkohout We haven't considered it yet. Our initial goal is to introduce simple approaches to see how users will use this feature, and to make it as easy as possible to use. Maybe we could add them as tasks for the next stage.

WDYT @franciscojavierarceo @kubeflow/wg-training-leads

@franciscojavierarceo
Contributor

@juliusvonkohout We haven't considered it yet. Our initial goal is to introduce simple approaches to see how users will use this feature, and to make it as easy as possible to use. Maybe we could add them as tasks for the next stage.

I would probably leave that out of scope. Not to say that it's not important of course.


[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign andreyvelich for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@andreyvelich
Member

Hi Folks, just a friendly reminder that this Wednesday at 5pm UTC, we will discuss the torchtune usage for Kubeflow Trainer LLM blueprints. Please join if you are available.

cc @kubeflow/wg-training-leads @Electronic-Waste @franciscojavierarceo @joecummings @astefanutti @akshaychitneni @shravan-achar @janeyx99 @bigsur0

Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
@google-oss-prow google-oss-prow bot added size/XL and removed size/L labels Feb 19, 2025
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
Signed-off-by: Electronic-Waste <[email protected]>
@Electronic-Waste Electronic-Waste force-pushed the doc/KEP-2401 branch 2 times, most recently from 39b82ef to ffa3b94 Compare February 25, 2025 14:05
Signed-off-by: Electronic-Waste <[email protected]>
Member

@andreyvelich andreyvelich left a comment

Thanks for the updates @Electronic-Waste!

To hide the complex Kubernetes configurations from users, we will provide a simple yet flexible Python SDK wrapping all specifications of models, datasets, training runtimes, and fine-tuning configs. Like this:

```python
job_id = TrainingClient().train(
```
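
For illustration, a hedged sketch of what a complete call might look like (the parameter names follow the train() signature quoted later in this thread; the storage_uri fields and all values are assumptions made for the example):

```python
# Hypothetical end-to-end call, assuming the TorchTuneConfig and HuggingFace*Config
# types proposed in this KEP; every value below is a placeholder, not a final API.
job_id = TrainingClient().train(
    runtime_ref="torchtune-llm-finetuning",
    fine_tuning_config=TorchTuneConfig(),  # fall back to the runtime's default recipe/config
    model_config=HuggingFaceModelInputConfig(
        storage_uri="hf://meta-llama/Llama-3.2-1B-Instruct",  # assumed field name
    ),
    dataset_config=HuggingFaceDatasetConfig(
        storage_uri="hf://tatsu-lab/alpaca",  # assumed field name
    ),
)
```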
Member Author

Thanks for pointing this out!

spec:
  containers:
    - name: trainer
      image: <pytorch+cuda+torchtune image>
Member

Do we need to maintain this image by ourselves under:
/cmd/trainers/torchtune/Dockerfile

Member Author

SGTM.

Comment on lines +299 to +300
recipe: str
config: str
Member

As we discussed in Slack, we might want to create a Runtime for each Model/Recipe,
which means the user can select the appropriate runtime based on it.

Member

However, as you mentioned in this KEP, that will increase the number of ClusterTrainingRuntimes we have to deploy.
Any thoughts on the UX here @kubeflow/wg-training-leads @franciscojavierarceo @astefanutti ?
Should we have an API that returns the torchtune configs we support to the user?

Member Author

@Electronic-Waste Electronic-Waste Feb 25, 2025

However, as you mentioned in this KEP, that will increase the number of ClusterTrainingRuntimes we have to deploy.

It will increase the number of ClusterTrainingRuntimes to 100 or so, which I think would be unmaintainable for us.

If we remove the recipe and config parameters in TorchtuneConfig, we would need to create exactly the same number of TrainingRuntimes, one for each recipe-config tuple. That is because we can't merge the config files for a model into one file and mutate it on demand to fit every scenario: we can't guarantee that all config files share the same default configurations (e.g. profiler, tokenizer), which are quite complex.

Member

@andreyvelich andreyvelich Feb 25, 2025

But if we know which config we need to fetch, can we mutate its values based on the user's configuration?
Should we have a list of supported configs for every supported LLM in the Kubeflow SDK?
I want to design a UX where the user doesn't need to know the exact recipe and config they should use to fine-tune an LLM.

Member

@deepanker13 @saileshd1402 @johnugeorge do you have any ideas here ?

- name: trainer
  image: <pytorch+cuda+torchtune image>
  command:
    - tune ls
Member

@andreyvelich andreyvelich Feb 25, 2025

I think, by default, our runtime should be able to fine-tune a model without any additional configuration from users.
So the user can do something like this:

TrainerClient().train(
  runtime_ref="torchtune-llama-3.3-70b"
)

In that case, we will just use the default settings that are configured under the runtime.

However, that will depend on this: #2410 (comment).
Let's continue the discussion in the other thread.

- name: trainer
  image: <pytorch+cuda+torchtune image>
  command:
    - tune ls
Member

Can we wrap the default torchtune config under the TrainingRuntime args, and override them with the user's values via tune run arguments?
So the experience would be (see the sketch below):

  1. The ClusterTrainingRuntime contains the default torchtune config to fine-tune the model.
  2. The user can override values using the TorchTuneConfig() class.
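
A rough sketch of what that override flow could look like, using tune run's key=value overrides (the recipe name, config name, and override keys below are placeholders, not a final contract):

```sh
# Runtime default baked into the ClusterTrainingRuntime (illustrative recipe/config):
tune run lora_finetune_distributed --config llama3_3/70B_lora

# Same command with user-supplied overrides appended from TorchTuneConfig():
tune run lora_finetune_distributed --config llama3_3/70B_lora \
  batch_size=8 epochs=3 optimizer.lr=2e-5
```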

Member Author

The default torchtune configs are defined in the config file, which torchtune will download automatically into the training container. So, maybe we do not need to wrap the default torchtune config here :)


**How to Determine Default Resources**

Currently, `torchtune` has limited support for multi-node training (but it is coming soon). So, I would propose that we use 1 PyTorch node and 1 GPU by default. Users can specify `num_nodes` and `resource_per_node` in the `Trainer` field to increase the number of PyTorch nodes and GPUs.
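
For illustration, a hedged sketch of how a user might raise those defaults (the field names follow the KEP draft and the train() signature quoted later in this thread; exact types and values are placeholders):

```python
# Hypothetical override of the default 1 node / 1 GPU; names follow the KEP draft, not a final API.
TrainerClient().train(
    runtime_ref="torchtune-llm-finetuning",
    trainer=CustomTrainer(
        num_nodes=2,                              # number of PyTorch nodes
        resource_per_node={"nvidia.com/gpu": 4},  # resources requested per node
    ),
)
```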
Member

We already have multi-node support in torchtune, don't we @joecummings ?
Or do you mean that it is not supported for all LLMs?

Member

Is there something stopping us from using --nnodes=2 for other configs?

Comment on lines +254 to +260
def train(
    trainer: Optional[CustomTrainer],
    fine_tuning_config: Optional[Union[TorchTuneConfig]],
    dataset_config: Optional[types.HuggingFaceDatasetConfig] = None,
    model_config: Optional[types.HuggingFaceModelInputConfig] = None,
    runtime_ref: Optional[str] = "torchtune-llm-finetuning",
) -> str:
Member

@astefanutti @kubeflow/wg-training-leads @Electronic-Waste @deepanker13 @saileshd1402 @franciscojavierarceo @seanlaii What do you think about this API design ?

Member

Maybe it gives us an opportunity in the future to support more framework-specific trainers like TorchTrainer, TorchXLATrainer, DeepSpeedTrainer, etc.,
which will help users appropriately configure the model and dataset (assign devices and configure the distributed backend).

Member Author

What do you think about this API design ?

LGTM.

| | | |-- kustomization.yaml
| | | |-- mpi_distributed.yaml # MPI Distributed Runtime
| | | |-- torch_distributed.yaml # PyTorch Distributed Runtime
| | |-- posttraining/
Member

This needs to be re-considered according to #2430.
@Electronic-Waste What do you think about this folder structure:

manifests/
|-- base/
|   |-- runtimes/
|   |   |-- kustomization.yaml
|   |   |-- mpi_distributed.yaml                    # MPI Distributed Runtime
|   |   |-- torch_distributed.yaml                  # PyTorch Distributed Runtime
|   |   |-- torchtune/
|   |   |   |-- kustomization.yaml
|   |   |   |-- torchtune_llm_finetuning.yaml           # Torchtune LLM Fine-tuning Runtime
|   |-- crds/

Maybe, as labels to apply to these runtimes, we should add these:

trainer.runtime.kubeflow.org/type: custom-trainer
trainer.runtime.kubeflow.org/phase: any

trainer.runtime.kubeflow.org/type: torchtune
trainer.runtime.kubeflow.org/phase: post-training

WDYT @Electronic-Waste @astefanutti @kubeflow/wg-training-leads @franciscojavierarceo ?

Member Author

@Electronic-Waste Electronic-Waste Feb 25, 2025

Maybe:

manifests/
|-- base/
|   |-- runtimes/
|   |   |-- kustomization.yaml
|   |   |-- mpi_distributed.yaml                    # MPI Distributed Runtime
|   |   |-- torch_distributed.yaml                  # PyTorch Distributed Runtime
|   |   |-- torchtune_llm_finetuning.yaml           # Torchtune LLM Fine-tuning Runtime
|   |-- crds/

would be better? Since we only use torchtune for LLM fine-tuning, it might be unnecessary to create a new directory for it.


### Support some common PEFT mechanisms

We need to support some common PEFT mechanisms like LoRA, QLoRA, and DoRA to allow users to optimize memory usage when fine-tuning LLMs. This is crucial for users who have limited resources and want to fine-tune their model at minimum cost.
Member

Should we split LoRA, QLoRA, and DoRA into different configs?

Member Author

They are coupled in torchtune:

https://pytorch.org/torchtune/main/tutorials/memory_optimizations.html#parameter-efficient-fine-tuning-peft

They share most of the parameters. Compared to LoRA, QLoRA only adds quant_base. Compared to QLoRA, DoRA only adds use_dora. It might be a good choice to implement them together.
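
To make that concrete, a minimal sketch of a single shared config (the field names mirror torchtune's LoRA builder parameters such as lora_rank, lora_alpha, quantize_base, and use_dora; the class itself is hypothetical, not a final SDK type):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical LoraConfig covering LoRA, QLoRA, and DoRA in one place.
@dataclass
class LoraConfig:
    lora_attn_modules: List[str] = field(default_factory=lambda: ["q_proj", "v_proj"])
    apply_lora_to_mlp: bool = False
    lora_rank: int = 8
    lora_alpha: float = 16
    lora_dropout: float = 0.0
    quantize_base: bool = False  # QLoRA: additionally quantize the frozen base weights
    use_dora: bool = False       # DoRA: additionally use weight-decomposed low-rank adaptation
```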

| epochs | Optional[int] | The number of complete passes through the training dataset during fine-tuning. |
| loss | Optional[str] | The loss algorithm we use to fine-tune the LLM, e.g. `torchtune.modules.loss.CEWithChunkedOutputLoss` |
| peft_config | Optional[Union[LoraConfig]] | Configuration for the PEFT(Parameter-Efficient Fine-Tuning), including LoRA/QLoRA/DoRA, etc. |
| dataset_preprocess_config | Optional[Union[InstructDataset, ChatDataset, MultimodalDataset]] | Configuration for dataset preprocessing. |
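
To make the quoted table rows concrete, a hedged example of setting these fields together (TorchTuneConfig and the nested types follow the KEP draft; the values are placeholders):

```python
# Hypothetical instantiation of the proposed TorchTuneConfig; not a final API.
fine_tuning_config = TorchTuneConfig(
    epochs=3,
    loss="torchtune.modules.loss.CEWithChunkedOutputLoss",
    peft_config=LoraConfig(lora_rank=8, lora_alpha=16),
    dataset_preprocess_config=InstructDataset(),  # torchtune instruct-style preprocessing; arguments omitted
)
```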
Member

Since we also have dataset_config as part of the train() API, how do we distinguish between the two?

Member

Is there a better UX that would make it clear to users when they should use which?

Maybe we could create a new dataset config, TorchTuneDatasetConfig, where we can define dataset properties that are specific to torchtune.
We can always add the storage_uri parameter there, which allows users to configure the location of the dataset:

hf://....
s3://...
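
A hedged sketch of what such a config might look like (TorchTuneDatasetConfig and its fields are an illustrative proposal, not an existing API):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical torchtune-specific dataset config; names are illustrative only.
@dataclass
class TorchTuneDatasetConfig:
    storage_uri: str                # e.g. "hf://tatsu-lab/alpaca" or "s3://bucket/prefix"
    split: Optional[str] = "train"  # which split to fine-tune on
```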

@joecummings Do we have any capabilities in torchtune to pre-process the dataset outside of the main tune run loop?
For example, in a multi-node environment the dataset could be pre-processed on a CPU-based machine before the data is passed to the Training Nodes.

cc @akshaychitneni @shravan-achar @bigsur0

Member

To make the UX clear, we can put dataset_config under TorchTuneConfig as well.

Member Author

Since we also have dataset_config as part of the train() API, how do we distinguish between the two?

  1. dataset_config is global; it is in charge of downloading the dataset.
  2. dataset_preprocess_config is limited to torchtune only; it performs the preprocessing work with the help of torchtune's built-in dataset class support.

It might not be a good idea to put dataset_config into TorchtuneConfig since their scopes are different.

Member

Eventually, shouldn't we perform dataset preprocessing on CPU-based Kubernetes pods using the dataset-initializer container, to offload this work from GPU nodes?
