Fine-Tuning LLMs

Fine-tuning is the process of adapting a pre-trained large language model (LLM) to a specific task or dataset. It involves further training the model on a smaller, specialized dataset to tailor its responses to be more relevant to particular domains or applications.

The key benefits of fine-tuning LLMs include:

  • Improving performance on specific tasks by making the model more domain-specific
  • Reducing the amount of data required to train the model effectively
  • Enhancing the model’s efficiency and making it more suitable for production use cases

Standard Fine-tuning vs. PEFT

Standard fine-tuning of a large language model (LLM) updates all of the pre-trained model's parameters on a new, task-specific dataset. PEFT, or Parameter-Efficient Fine-Tuning, instead adapts the model to a downstream task by training only a small number of added or selected parameters while keeping most of the pre-trained weights frozen.

  • Retraining: Standard fine-tuning retrains the entire pre-trained model on new data, while PEFT fine-tunes only a small set of parameters (e.g., higher layers or added adapters) and freezes the rest
  • Parameters: Standard fine-tuning updates all model parameters, while PEFT updates only a very small subset of them
  • Resource requirements: Standard fine-tuning requires large compute resources and significant task-specific data, while PEFT requires less data, computing power, and time
  • Risk of overfitting: Standard fine-tuning carries a higher risk of overfitting because all parameters are updated on small datasets, while PEFT carries a lower risk since most parameters remain fixed
Standard Fine-tuning vs. PEFT
An illustration of regular fine-tuning (left) and parameter-efficient fine-tuning (right)

In summary, PEFT selectively updates only a small fraction of the model's parameters, such as added adapter modules or the higher layers most relevant to the specific task. PEFT is well-suited for scenarios where resources are limited or when the target task differs significantly from the pre-training data.

PEFT methods: LoRA, QLoRA and DoRA

Low-Rank Adaptation (LoRA)

The core concept behind LoRA is to update the model’s parameters through a low-rank decomposition, implemented by adding two small linear projection matrices. LoRA keeps the pre-trained weights of the large language model frozen and inserts a pair of trainable low-rank matrices into selected layers of the model.


Suppose W represents the weight matrix in a given neural network layer and ΔW represents the update to W learned during fine-tuning. Instead of learning ΔW directly, LoRA approximates it as the product of two much smaller matrices, ΔW ≈ AB. In practice, LoRA proceeds as follows:

  1. Introduce two lower-rank matrices A and B for each weight matrix being adapted, so that their product AB approximates the weight update ΔW. Because A and B have a small rank r, they contain far fewer parameters than the full weight matrix.
  2. The original weights W of the pre-trained model are kept frozen and not updated during fine-tuning.
  3. Instead of updating the full weight matrices, LoRA fine-tunes only the smaller matrices A and B on the target task. This is much more efficient as the number of trainable parameters is significantly reduced.
  4. After fine-tuning A and B, the updated weights W’ are reconstructed by computing their product W’ = W + AB. This allows the model to be adapted to the target task while maintaining the performance of the original pre-trained model.

The key advantage of LoRA is that it enables efficient fine-tuning of large models by focusing the adaptation on a low-rank subspace of the weights. This significantly reduces the memory footprint and speeds up the fine-tuning process compared to traditional methods.
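
To make the mechanics concrete, here is a minimal, self-contained sketch of a LoRA-style linear layer in PyTorch. It is illustrative rather than a reference implementation: the class name, shapes, and initialization choices are my own, but it follows the recipe above (frozen W, trainable A and B, update scaled by alpha / r).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A linear layer with a frozen base weight and a trainable low-rank update."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Stand-in for the pre-trained weight W; frozen during fine-tuning.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)
        # Trainable low-rank factors: A maps input -> r, B maps r -> output.
        # B starts at zero so the layer initially behaves exactly like the base model.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T                      # frozen pre-trained path
        update = (x @ self.lora_A.T) @ self.lora_B.T  # low-rank adaptation path
        return base + self.scaling * update           # effective weight: W + (alpha/r) * BA

layer = LoRALinear(768, 768, r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")  # only A and B are trainable
```

Merging after training simply amounts to replacing the stored weight with W plus the scaled product of B and A, which removes the extra matrix multiplication at inference time.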

Quantized Low-Rank Adaptation (QLoRA)

QLoRA (Quantized Low-Rank Adaptation) combines two key ideas:

  1. Quantization: The pre-trained weights of the LLM are quantized to a lower bit precision (e.g. 4-bit) to reduce memory usage. This makes the model more compact and the operations faster.
  2. Low-Rank Adaptation (LoRA): Instead of updating all the model parameters during fine-tuning, LoRA adds a low-rank decomposition to the weights[1][2][4]. It learns a small number of trainable parameters (adapters) while keeping the original weights frozen.

By applying quantization to the base model and LoRA to the adapters, QLoRA can fine-tune massive LLMs with billions of parameters on relatively small GPUs. This democratizes access to sophisticated fine-tuning methods that were previously limited to large compute clusters.

QLoRA introduces several innovations to make this combination work effectively:

  • 4-bit NormalFloat (NF4) quantization, a data type that is information-theoretically optimal for normally distributed weights
  • Double quantization, which quantizes the quantization constants themselves to save additional memory
  • Paged optimizers, which use unified memory to page optimizer states between GPU and CPU memory and avoid out-of-memory spikes

These techniques allow QLoRA to fine-tune a 65B-parameter LLM on a single 48GB GPU, while achieving comparable performance to full-precision fine-tuning[3][4].

In summary, QLoRA is a breakthrough method that makes it feasible to fine-tune state-of-the-art LLMs on modest hardware, opening up new possibilities for both researchers and practitioners[1][2][3].


Quantized Low Rank Adaptation (QLoRA) is an efficient technique for fine-tuning large language models by combining low-rank adaptation with quantization. Here’s a simple step-by-step explanation of how QLoRA works:

  1. Start with a pre-trained language model
  2. Quantize the pre-trained weights into a low-bit format such as 4-bit NormalFloat (NF4) to reduce memory usage
  3. Add a pair of trainable lower-rank matrices A and B (a LoRA adapter) to each layer being adapted, keeping these adapters in higher precision
  4. Fine-tune only the low-rank matrices A and B on the target task, keeping the original quantized weights frozen
  5. During fine-tuning, dequantize the pre-trained weights to match the precision of the low-rank matrices to enable efficient computation
  6. After fine-tuning, merge the low-rank matrices with the quantized weights to obtain the final quantized fine-tuned model

The key advantages of QLoRA are:

  • Significant reduction in memory usage during fine-tuning by freezing the base model and quantizing the weights
  • Efficient inference with the final quantized model
  • Competitive performance compared to full fine-tuning while being more memory-efficient

However, a challenge with QLoRA is finding the optimal low-rank adaptation size. To address this, an extension called QDyLoRA (Quantized Dynamic Low-Rank Adaptation) was proposed, which enables efficiently fine-tuning on a range of low-rank sizes in a single pass.

In summary, QLoRA combines the memory savings of low-rank adaptation with the inference efficiency of quantization to enable fine-tuning large language models on resource-constrained devices.
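
As a concrete illustration, the sketch below sets up QLoRA-style fine-tuning with the Hugging Face transformers, peft, and bitsandbytes libraries. Treat it as a starting point under stated assumptions: the model id, target modules, and hyperparameter values are placeholders, and exact argument names may vary across library versions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization with double quantization, as described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # weights are dequantized to bf16 for compute
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # illustrative model id
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Only the LoRA adapters (A and B) are trained; the 4-bit base model stays frozen.
lora_config = LoraConfig(r=16, lora_alpha=32,
                         target_modules=["q_proj", "v_proj"],  # model-dependent
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```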

Weight-Decomposed Low-Rank Adaptation (DoRA)

Ref: https://magazine.sebastianraschka.com/p/lora-and-dora-from-scratch?utm_source=publication-search

DoRA (Weight-Decomposed Low-Rank Adaptation) is a novel technique for efficiently fine-tuning large language models (LLMs) on specific tasks[3][4][5]. It builds upon the popular LoRA (Low-Rank Adaptation) method by introducing a weight decomposition approach that provides several key advantages:

  1. DoRA consistently outperforms LoRA on various downstream tasks like commonsense reasoning, visual question answering, and natural language inference[3][4]. The performance gap widens as the rank of the adaptation matrices decreases.

  2. DoRA is more robust to hyperparameter changes compared to LoRA[4][5]. It is less sensitive to the choice of rank, allowing for more flexibility in hyperparameter tuning.

  3. DoRA learns faster and requires fewer training examples to achieve the same performance as LoRA[4][5]. This makes it more sample-efficient.

  4. DoRA is more parameter-efficient than LoRA[3][4]. It can achieve the same or better performance as LoRA while using fewer trainable parameters in the adaptation matrices.

  5. DoRA further closes the performance gap with full fine-tuning compared to LoRA[4][5]. It can match or even slightly exceed the accuracy of full fine-tuning while being much more memory-efficient.

The key idea behind DoRA is to decompose the pre-trained weights into magnitude and directional components[3][4][5]. It applies LoRA only to the directional component, while the magnitude is kept as a separate, directly trained vector. This allows more flexibility in the weight updates compared to standard LoRA[3][4].

In summary, DoRA provides better performance, faster learning, and more robustness compared to LoRA, while being more parameter-efficient[3][4][5]. It represents a significant advancement in the field of parameter-efficient fine-tuning of large language models.


Here is a simple step-by-step explanation of Weight-Decomposed Low-Rank Adaptation (DoRA):

  1. Start with a pre-trained model and decompose each of its weight matrices into a magnitude vector m (the column-wise norms) and a directional component (the columns normalized to unit length)

  2. Initialize the directional component from the pre-trained weights; the magnitude vector m is kept as a small, directly trainable parameter

  3. Apply Low-Rank Adaptation (LoRA) to the directional component only, learning two low-rank matrices A and B to update the direction

  4. The final weight in DoRA is computed as:

    W’ = m · (W + AB) / ||W + AB||

    Where W are the original pre-trained weights, A and B are the low-rank LoRA matrices, ||·|| denotes the column-wise vector norm, and m is the trainable magnitude vector

  5. After fine-tuning, merge the updated weights W’ back into a single matrix for efficient inference, without any additional latency

The key ideas behind DoRA are:

  • Decomposing the weights into magnitude and direction allows DoRA to mimic the learning pattern of full fine-tuning more closely
  • Limiting LoRA to only update the directional component can improve learning capacity compared to standard LoRA
  • The weight decomposition also stabilizes the optimization process

Empirically, DoRA has been shown to outperform LoRA and other variants, especially when using a lower rank for the LoRA matrices. It provides a good balance of performance and efficiency for fine-tuning large pre-trained models.
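
The following PyTorch sketch shows this decomposition on a single weight matrix. It is a simplified illustration, not an official implementation: the class name is mine, and the column-wise normalization convention follows the Raschka article linked above. The magnitude vector m is trained directly, while the direction is updated through LoRA and renormalized.

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """Illustrative DoRA-style layer: trainable magnitude + LoRA-updated direction."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Stand-in for the frozen pre-trained weight W (out_features x in_features).
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)
        # Magnitude vector m: one trainable scale per column of W.
        self.m = nn.Parameter(self.weight.norm(p=2, dim=0, keepdim=True).detach().clone())
        # LoRA factors that update the directional component.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Directional component: pre-trained weight plus low-rank update, renormalized column-wise.
        adapted = self.weight + self.scaling * (self.lora_B @ self.lora_A)
        direction = adapted / adapted.norm(p=2, dim=0, keepdim=True)
        # Recombine the trainable magnitude with the updated direction: W' = m * direction.
        w_prime = self.m * direction
        return x @ w_prime.T
```

After training, the combined weight W’ can be computed once and stored, so inference incurs no extra latency compared to the base model.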

Effect of Hyperparameters on Fine-Tuning

Hyperparameters play a crucial role in PEFT by determining the trade-off between model performance and computational efficiency. The key hyperparameters in PEFT of LLMs are:

  1. Adapter size: The size of the adapter modules added to the pre-trained model. Larger adapters can capture more task-specific knowledge but require more parameters to be fine-tuned.
  2. Prefix length: The length of the prefix tokens added to the input in prefix-tuning methods. Longer prefixes provide more flexibility but increase the number of trainable parameters.
  3. Rank of low-rank decomposition: In LoRA, this determines the rank of the low-rank matrices used to update the model weights. Higher ranks allow more flexibility but require more parameters.
  4. Learning rate: Controls the step size for updating the trainable parameters during fine-tuning. A higher learning rate leads to faster convergence but can cause instability.
  5. Batch size: The number of examples used in each training iteration. Larger batches are more computationally efficient but smaller batches can provide more frequent updates.
  6. Number of epochs: The number of passes through the entire training dataset. More epochs allow the model to learn more but can lead to overfitting.

Tuning these hyperparameters is essential for adapting LLMs to new tasks while minimizing computational costs. For example, increasing the adapter size or prefix length allows the model to capture more task-specific knowledge but requires fine-tuning more parameters. Choosing the right learning rate and batch size is important for stable and efficient training. The number of epochs must be selected to avoid overfitting while still allowing the model to learn the task.
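
For the learning rate, batch size, and epoch settings above, a hedged starting point using the Hugging Face TrainingArguments might look like the following (the values and output path are illustrative, not recommendations for any particular task):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./peft-finetune",       # hypothetical output path
    learning_rate=2e-4,                 # adapters often tolerate higher rates than full fine-tuning
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,      # effective batch size of 16 per device
    num_train_epochs=3,                 # monitor validation loss to avoid overfitting
    logging_steps=10,
)
```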

The hyperparameters of LoRA, QLoRA and DoRA can significantly impact their effectiveness in adapting pre-trained language models to new tasks. Here’s how the key hyperparameters affect each method:

LoRA

  • The most important LoRA hyperparameter is “r”, which determines the rank or dimension of the low-rank adaptation matrices
  • Increasing “r” generally improves performance but also increases the number of trainable parameters
  • The scaling factor “alpha” controls the magnitude of the LoRA weight updates
  • Finding the optimal settings for “r” and “alpha” can be challenging and task-dependent
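
For reference, here is how "r" and "alpha" are typically exposed when configuring LoRA with the Hugging Face peft library (the values and target module names are illustrative and model-dependent):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank matrices A and B
    lora_alpha=32,                         # scaling factor; effective scale is alpha / r
    lora_dropout=0.05,                     # dropout applied to the LoRA path
    target_modules=["q_proj", "v_proj"],   # which layers receive adapters (model-dependent)
    task_type="CAUSAL_LM",
)
```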

QLoRA

  • QLoRA inherits the LoRA hyperparameters like “r” and “alpha”
  • Additionally, the quantization hyperparameters like the bit-width can impact QLoRA’s performance
  • Using lower bit-widths (e.g. 4-bit) reduces memory usage but may hurt accuracy
  • The choice of quantization algorithm (e.g. bitsandbytes, GPTQ, AWQ) also affects the final model quality

DoRA

  • DoRA decomposes the weights into magnitude and directional components
  • The rank “r” of the LoRA matrices applied to the directional component is a key hyperparameter
  • DoRA is more robust to the choice of “r” and can achieve good performance with lower ranks compared to LoRA
  • The scaling factor “alpha” also affects DoRA’s learning dynamics
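
Recent releases of the peft library expose DoRA through the same configuration object as LoRA; the sketch below assumes such a version is installed (the use_dora flag and the values shown are assumptions about your setup, not guaranteed API):

```python
from peft import LoraConfig

# DoRA configuration sketch: identical to LoRA except for the decomposition flag.
dora_config = LoraConfig(
    r=8,                                   # DoRA tends to tolerate lower ranks than plain LoRA
    lora_alpha=16,
    use_dora=True,                         # decompose weights into magnitude and direction
    target_modules=["q_proj", "v_proj"],   # model-dependent
    task_type="CAUSAL_LM",
)
```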

In general, DoRA is more robust to hyperparameter changes and can achieve better performance with fewer trainable parameters compared to LoRA. QLoRA trades off some performance for reduced memory usage during fine-tuning.

The optimal hyperparameters depend on the specific task, dataset, and hardware constraints. Careful tuning of “r”, “alpha”, and quantization settings can help achieve the best results for each method.

Which method to use when

Here are some bullet points on which PEFT method (LoRA, QLoRA, DoRA) to choose based on various constraints:

If you prioritize:

Performance:

  • DoRA consistently outperforms LoRA and can match or exceed full fine-tuning performance
  • QDoRA combines the parameter efficiency of QLoRA with the performance of DoRA

Memory efficiency during fine-tuning:

  • QLoRA and QDoRA significantly reduce memory usage by quantizing the base model
  • DoRA is more parameter-efficient than LoRA, requiring fewer trainable parameters

Robustness to hyperparameters:

  • DoRA is more robust to hyperparameter changes like the rank “r” compared to LoRA
  • DoRA can achieve good performance with lower ranks than LoRA

Inference speed:

  • Merging the adapter into the base model after fine-tuning yields faster inference
  • QLoRA and QDoRA have similar inference speeds when the adapter is loaded on top

If you have limited training data:

  • DoRA learns faster and requires fewer training examples to achieve the same performance as LoRA

If you have hardware constraints:

  • QLoRA and QDoRA are good choices to fine-tune on resource-constrained devices
  • NOLA can compress LoRA up to 20x while maintaining performance

In summary, DoRA provides the best performance, robustness and parameter efficiency, QLoRA and QDoRA prioritize memory savings, while LoRA and its variants offer a good balance across different constraints.

References

  1. Getting started with LLM fine-tuning. https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning
  2. Practical tips for finetuning LLMs. https://magazine.sebastianraschka.com/p/practical-tips-for-finetuning-llms
  3. A guide to LLM hyperparameters. https://symbl.ai/developers/blog/a-guide-to-llm-hyperparameters/