From 405b6d5e33d28797e363253d933982e73858b2c9 Mon Sep 17 00:00:00 2001
From: Roger Feng
Date: Thu, 29 Aug 2024 00:55:33 +0800
Subject: [PATCH] Add the accelerator setup guide link in Getting Started page
 (#6452)

Add a link to https://www.deepspeed.ai/tutorials/accelerator-setup-guide/
in the installation section of the Getting Started page so that users can
easily find the doc.

Signed-off-by: roger feng
---
 docs/_tutorials/getting-started.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/_tutorials/getting-started.md b/docs/_tutorials/getting-started.md
index 9f44c7c6e740..ce9e3ee9a892 100644
--- a/docs/_tutorials/getting-started.md
+++ b/docs/_tutorials/getting-started.md
@@ -11,6 +11,7 @@ tags: getting-started
 * To get started with DeepSpeed on AzureML, please see the [AzureML Examples GitHub](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/deepspeed)
 * DeepSpeed has direct integrations with [HuggingFace Transformers](https://github.com/huggingface/transformers) and [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning). HuggingFace Transformers users can now easily accelerate their models with DeepSpeed through a simple ``--deepspeed`` flag + config file [See more details](https://huggingface.co/docs/transformers/main_classes/deepspeed). PyTorch Lightning provides easy access to DeepSpeed through the Lightning Trainer [See more details](https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html?highlight=deepspeed#deepspeed).
 * DeepSpeed on AMD can be used via our [ROCm images](https://hub.docker.com/r/deepspeed/rocm501/tags), e.g., `docker pull deepspeed/rocm501:ds060_pytorch110`.
+* DeepSpeed also supports Intel Xeon CPU, Intel Data Center Max Series XPU, Intel Gaudi HPU, Huawei Ascend NPU, etc.; please refer to the [accelerator setup guide](/tutorials/accelerator-setup-guide/)