Methods for tuning foundation models
Last updated: Mar 04, 2025

Learn more about different tuning methods.

Foundation models can be tuned in the following ways:

  • Full fine-tuning: Uses the base model’s pretrained knowledge as a starting point and adjusts all of the model’s parameter weights by training it on a smaller, task-specific dataset. Because every weight that was set during prior training can change, full fine-tuning customizes the whole model for the task. A minimal code sketch of this method follows the list.

    Note: You currently cannot fine-tune foundation models in watsonx.ai, but you can prompt-tune them.
  • Prompt tuning: Adjusts the content of the prompt that is passed to the model to guide the model to generate output that matches a pattern you specify. The underlying foundation model and its parameter weights are not changed. Only the prompt input is altered.

    Although prompt tuning produces a new tuned model asset, that asset only adds a layer of function that runs before the input is processed by the underlying foundation model. Because the underlying foundation model itself is not changed, it can be used to address different business needs without being retrained each time, which reduces computational needs and inference costs. See Prompt tuning. A minimal code sketch of this mechanism also follows the list.
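
The following sketch illustrates full fine-tuning in PyTorch with the Hugging Face Transformers library. It is a minimal illustration under assumptions, not a watsonx.ai workflow: the model name (gpt2), the example texts, and the hyperparameters are placeholders. The point to notice is that the optimizer updates every parameter of the model.

```python
# Minimal full fine-tuning sketch (PyTorch + Hugging Face Transformers).
# The model, data, and hyperparameters are placeholders for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder base model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()

# Hypothetical task-specific examples; a real run uses a full dataset.
texts = [
    "Review: great service. Sentiment: positive",
    "Review: slow response. Sentiment: negative",
]
batch = tokenizer(texts, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

# Every parameter weight set during prior training is trainable here.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for _ in range(3):  # toy loop; real fine-tuning runs many steps
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()  # gradients flow to all model weights
    optimizer.step()
    optimizer.zero_grad()
```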
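By contrast, the next sketch shows the core mechanism behind prompt tuning: the base model’s weights are frozen, and only a small set of soft-prompt embedding vectors, prepended to the input embeddings, is trained. This is a hand-rolled illustration under assumptions (the model name, the prompt length n_virtual, and the toy training objective are all placeholders), not the watsonx.ai implementation.

```python
# Minimal prompt-tuning sketch (PyTorch + Hugging Face Transformers).
# Only the soft prompt is trained; the base model weights stay frozen.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder base model
model = AutoModelForCausalLM.from_pretrained("gpt2")
for param in model.parameters():
    param.requires_grad = False  # the underlying model is never changed

n_virtual = 8  # number of learnable soft-prompt vectors (an assumption)
embed = model.get_input_embeddings()
soft_prompt = torch.nn.Parameter(
    torch.randn(n_virtual, embed.embedding_dim) * 0.01)

ids = tokenizer("Review: great service. Sentiment: positive",
                return_tensors="pt").input_ids
inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), embed(ids)], dim=1)

# Ignore the soft-prompt positions in the loss; the text tokens serve as
# targets (a toy objective standing in for real task supervision).
labels = torch.cat([torch.full((1, n_virtual), -100), ids], dim=1)

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)  # prompt only
outputs = model(inputs_embeds=inputs_embeds, labels=labels)
outputs.loss.backward()  # gradients reach only the soft-prompt vectors
optimizer.step()
```

Because the base model is untouched, many tuned prompts can share one deployed foundation model, which is the source of the reduced computational needs and inference costs that this topic describes.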

Parent topic: Tuning foundation models