Python library
You can inference and tune foundation models in IBM watsonx.ai programmatically by using the Python library.
See Foundation models Python library.
You can also work with watsonx.ai foundation models from third-party tools.
Using the Python library
The ibm-watsonx-ai Python library is available on PyPI at https://pypi.org/project/ibm-watsonx-ai/.
You can install the ibm-watsonx-ai Python library in your development environment by using the following command:
pip install ibm-watsonx-ai
If you already have the library installed, include the -U parameter to pick up any updates and work with the latest version of the library:
pip install -U ibm-watsonx-ai
Learn from available sample notebooks
Sample notebooks are available that you can use as guides when you create your own notebooks for common tasks, such as inferencing or tuning a foundation model.
To find available notebooks, search the Resource hub. You can add notebooks that you open from the Resource hub to your project, and then run them.
You can also access notebooks from the Python sample notebooks GitHub repository.
Prerequisites
To get started, you first need credentials and a project ID or deployment space ID. For more information, see Authenticating for programmatic access to a project or space.
Learn more
- Getting foundation model information
- Inferencing a foundation model
- Inferencing a foundation model by using a prompt template
- Tuning a foundation model
- Converting text to text embeddings
Parent topic: Coding generative AI solutions