
Fine-Tuning Llama Models on Amazon SageMaker

Welcome to this repository dedicated to fine-tuning Llama 2 and Llama 3 models on Amazon SageMaker using Transformer Reinforcement Learning (TRL) methods and custom configurations for building recommendation systems.

Overview

This repository provides resources and guidelines for training Llama 2 and Llama 3 models with advanced TRL fine-tuning techniques. Our goal is to enhance the capabilities of these models for specific use cases, such as recommendation systems, leveraging the latest in machine learning to provide highly relevant and accurate recommendations.

Prerequisites

Before you begin, ensure you have the following prerequisites installed:

  • Python 3.8 or later
  • PyTorch 1.8 or later
  • Transformers library
  • Datasets library
  • A suitable CUDA-enabled GPU
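The software prerequisites above can be verified with a short script before launching any training job. This is an illustrative, stdlib-only sketch: the helper name `check_prerequisites` is hypothetical, and the package list simply mirrors the bullets.

```python
import sys
from importlib.util import find_spec

# Hypothetical helper: verify the Python version and the presence of the
# libraries listed in the Prerequisites section (names are import names).
def check_prerequisites(min_python=(3, 8),
                        packages=("torch", "transformers", "datasets")):
    python_ok = sys.version_info[:2] >= min_python
    # find_spec returns None when a package is not importable.
    missing = [pkg for pkg in packages if find_spec(pkg) is None]
    return python_ok, missing

if __name__ == "__main__":
    python_ok, missing = check_prerequisites()
    print(f"Python >= 3.8: {python_ok}")
    print(f"Missing packages: {missing or 'none'}")
```

GPU availability is not checked here; once PyTorch is installed, `torch.cuda.is_available()` covers that.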

Installation

Clone this repository to your local machine using:

git clone https://github.com/mccartni-aws/llama-training.git

Current usage

  1. Run the llama3_orpo_sft.ipynb notebook, accompanied by the Medium blog post "Fine-tune Llama3 with ORPO in Amazon SageMaker Studio".
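For orientation, the ORPO objective that the notebook applies combines the ordinary supervised fine-tuning loss with an odds-ratio penalty that favors the chosen response over the rejected one. Below is a minimal numeric sketch in plain Python, assuming the per-sequence average log-probabilities of the chosen and rejected responses have already been computed; the function names and the λ value are illustrative, not taken from the notebook.

```python
import math

def odds(logp):
    # Convert a log-probability in (-inf, 0) to odds p / (1 - p).
    p = math.exp(logp)
    return p / (1.0 - p)

def orpo_loss(nll_chosen, logp_chosen, logp_rejected, lam=0.1):
    # Odds-ratio term: reward higher odds for the chosen response
    # relative to the rejected response.
    log_odds_ratio = math.log(odds(logp_chosen)) - math.log(odds(logp_rejected))
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))  # -log(sigmoid)
    # Total loss: SFT negative log-likelihood plus the weighted OR penalty.
    return nll_chosen + lam * l_or

# Example: chosen response is more likely than the rejected one,
# so the penalty term is small.
print(orpo_loss(nll_chosen=1.2, logp_chosen=-1.0, logp_rejected=-2.0))
```

In practice the notebook relies on TRL's ORPO training support rather than a hand-rolled loss; this sketch only makes the objective's shape concrete.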
