Improving the Generate Fine-tuning Results

There are several things to take into account to get the best results from a fine-tuned generative model:

Refining Data Quality

If your fine-tuned model is not learning well, try these steps to improve your training data:

  • Add more specific examples: If the model struggles with certain tasks, include examples in your data file that clearly demonstrate how to do those tasks.
  • Check your data for errors: If the model makes grammar or logic mistakes, it might be because your data has similar errors. If it incorrectly says "I will schedules this meeting," for example, check if your data mistakenly taught it to say such things.
  • Balance your data: Make sure your data reflects how you'll use the model. If your data has many examples of a response you rarely need, the model might use that response too often.
  • Ensure your data contains complete information: Include all necessary information in your examples. If the model needs to respond based on certain information, ensure that information is in your training data (the sketch after this list shows one way to automate checks like this).
  • Ensure data consistency: If different people helped prepare your data, make sure they all followed the same guidelines. Inconsistent data can limit how well your model learns.
  • Keep a standard format: All your training examples should be in the format you plan to use when you actually use the model.
  • Include real data: If you have actual user data or human-created examples, use that rather than synthetic LLM-generated data, so as to capture the nuances of human interaction and improve the model beyond what it’s already capable of generating.
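
Several of these checks can be automated before you launch a fine-tune. The sketch below assumes JSONL training data with "prompt" and "completion" fields; the file name, field names, and the imbalance threshold are illustrative assumptions, not part of any official tooling:

```python
import json
from collections import Counter

def audit_training_data(path):
    """Scan a JSONL training file for missing fields and a skewed
    completion distribution. Adjust the field names to your own format."""
    completions = Counter()
    problems = []

    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            try:
                example = json.loads(line)
            except json.JSONDecodeError:
                problems.append(f"line {line_no}: not valid JSON")
                continue

            # Completeness: both fields present and non-empty.
            for field in ("prompt", "completion"):
                if not (example.get(field) or "").strip():
                    problems.append(f"line {line_no}: empty or missing '{field}'")

            completions[(example.get("completion") or "").strip()] += 1

    # Balance: warn when a single completion dominates the dataset.
    total = sum(completions.values())
    if total:
        top, count = completions.most_common(1)[0]
        if count / total > 0.5:  # illustrative threshold
            problems.append(
                f"completion '{top[:40]}' appears in {count}/{total} examples; "
                "the model may overuse it"
            )
    return problems

for issue in audit_training_data("train.jsonl"):  # illustrative file name
    print(issue)
```

Running a check like this before every fine-tune is usually cheaper than discovering the problem after training.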

Iterating on Hyperparameters

We allow you to specify the following hyperparameters:

  • epochs
  • learning rate
  • batch size

We suggest starting your training without setting any specific parameters. That said, here are some guidelines for resolving certain issues you might encounter (a sketch of how you might apply them follows this list):

  • If the model outputs are too similar or lack diversity, reduce the number of epochs by 1 or 2.
  • If the model does not appear to be converging, increase the learning rate.
  • If you want to change your batch size, the supported values are 8, 24, and 32.
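
The exact way you set these depends on your platform's API; as an illustrative sketch, the guidelines above can be encoded in a small helper. The symptom labels, default epoch count, and example learning rate here are assumptions for the sketch, not values recommended by the platform:

```python
def suggest_overrides(symptom, current_epochs=4):
    """Map an observed training issue to a hyperparameter override,
    leaving everything else at the platform defaults."""
    if symptom == "outputs_too_similar":
        # Fewer passes over the data make the model less prone to
        # parroting its most frequent training responses.
        return {"epochs": max(1, current_epochs - 2)}
    if symptom == "not_converging":
        # A higher learning rate lets training take larger steps.
        return {"learning_rate": 0.05}  # illustrative value
    if symptom == "change_batch_size":
        return {"batch_size": 24}  # supported values: 8, 24, or 32
    return {}  # no issue observed: keep the defaults

print(suggest_overrides("outputs_too_similar"))  # -> {'epochs': 2}
```

Whatever values a helper like this suggests, pass them to your platform's fine-tuning call and compare the new run against the default-parameter baseline.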

Troubleshooting

We have a dedicated guide for troubleshooting fine-tuned models that applies across all model types and endpoints. Check it out here.