Replies: 1 comment
For more context, see also issue #2310.
I've been thinking about fine-tuning techniques for conversational AI and trying to figure out which ones are best for specific use cases. Here's a list:
LoRA
AdaLoRA
Bone
VeRA
X-LoRA
LN Tuning
VB-LoRA
HRA (Householder Reflection Adaptation)
IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations)
Llama-Adapter
CPT (Context-aware Prompt Tuning)
etc.
Anyone have experience with these? Curious about how they compare, what they’re good at, or any tips. Would be great to hear some thoughts or ideas!
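Not a full comparison, but one thing that helps when sorting through this list: several of the methods above (LoRA, AdaLoRA, VeRA, X-LoRA, VB-LoRA) are variations on the same core idea — freeze the pretrained weight matrix W and train a small low-rank update on the side. Here's a minimal NumPy sketch of that shared mechanism (the shapes, rank, and scaling below are illustrative choices, not tied to any specific library's defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4            # rank r << d is the low-rank bottleneck
alpha = 8                              # scaling hyperparameter (illustrative)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable "down" projection
B = np.zeros((d_out, r))               # trainable "up" projection, zero-init

def lora_forward(x):
    # frozen base path + trainable low-rank update, scaled by alpha / r
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))

# Because B starts at zero, the adapter is an exact no-op at initialization,
# so training starts from the pretrained model's behavior:
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameter count: full fine-tune vs. the low-rank adapter
print(W.size, A.size + B.size)  # 4096 vs 512 trainable parameters
```

The variants then mostly differ in how they spend that adapter budget: AdaLoRA reallocates rank across layers during training, VeRA shares frozen random A/B across layers and trains only small scaling vectors, and so on. The prompt-based methods (CPT, Llama-Adapter) work differently — they prepend learned tokens/prefixes rather than modifying weights.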