Importing models to a deployment space

Import machine learning models trained outside of IBM watsonx.ai Runtime so that you can deploy and test the models. Review the model frameworks that are available for importing models.

In this context, importing a trained model means:

  1. Store the trained model in your watsonx.ai Runtime repository
  2. Optional: Deploy the stored model in your watsonx.ai Runtime service

The repository is backed by a Cloud Object Storage bucket. For more information, see Creating deployment spaces.

You can import a model in several ways, depending on its framework. For details, see Available ways to import models, per framework type.

For an example of how to add a model programmatically by using the Python client, refer to the sample notebook.

For an example of how to add a model programmatically by using the REST API, refer to the sample notebook.

Available ways to import models, per framework type

This table lists the available ways to import models to watsonx.ai Runtime, per framework type.

Import options for models, per framework type
    Import option                                      Spark MLlib   Scikit-learn   XGBoost   TensorFlow   PyTorch
    Importing a model object                           ✓             ✓              ✓
    Importing a model by using a path to a file                      ✓              ✓         ✓            ✓
    Importing a model by using a path to a directory                 ✓              ✓         ✓            ✓

Adding a model by using the UI

Note:

If you want to import a model in the PMML format, you can directly import the model .xml file.

To import a model by using the UI:

  1. From the Assets tab of your space in watsonx.ai Runtime, click Import assets.
  2. Go to Local file and select Model.
  3. Select the model file that you want to import and click Import.

The importing mechanism automatically selects a matching model type and software specification based on the version string in the .xml file.
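The version string that drives this matching is the version attribute on the root PMML element of the .xml file. A minimal standard-library sketch that reads it (the file name is a placeholder):

```python
import xml.etree.ElementTree as ET

def pmml_version(path):
    """Return the version attribute of the root PMML element, or None."""
    root = ET.parse(path).getroot()
    # The version attribute is not namespaced, so a plain get() works
    # even when the file declares a PMML xmlns.
    return root.get("version")
```

Checking this value before importing can help you confirm that the file will map to the model type you expect.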

Importing a model object

Note:

This import method is supported by a limited number of ML frameworks. For more information, see Available ways to import models, per framework type.

To import a model object:

  1. If your model is located in a remote location, follow Downloading a model stored in a remote location.
  2. Store the model object in your watsonx.ai Runtime repository. For more information, see Storing a model in your watsonx.ai Runtime repository.

Importing a model by using a path to a file

Note:

This import method is supported by a limited number of ML frameworks. For more information, see Available ways to import models, per framework type.

To import a model by using a path to a file:

  1. If your model is located in a remote location, follow Downloading a model stored in a remote location to download it.

  2. If your model is located locally, place it in a specific directory:

      !cp <saved model> <target directory>
      !cd <target directory>
    
  3. For scikit-learn, XGBoost, TensorFlow, and PyTorch models, if the downloaded file is not a .tar.gz archive, create one:

      !tar -zcvf <saved model>.tar.gz <saved model>
    

    The model file must be at the top level of the archive, for example:

    assets/
    <saved model>
    variables/
    variables/variables.data-00000-of-00001
    variables/variables.index
    
  4. Use the path to the saved file to store the model file in your watsonx.ai Runtime repository. For more information, see Storing a model in your watsonx.ai Runtime repository.
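The archiving step above can also be scripted with the standard library's tarfile module instead of the shell tar command. This sketch (with placeholder file names) keeps the model file at the top level of the archive, as the repository expects:

```python
import os
import tarfile

def archive_model(model_path, archive_path):
    """Pack a saved model file into a .tar.gz archive."""
    with tarfile.open(archive_path, "w:gz") as tar:
        # arcname strips the directory prefix so the model file sits
        # at the top level of the archive rather than under a subfolder.
        tar.add(model_path, arcname=os.path.basename(model_path))
    return archive_path
```

For example, archive_model("model.h5", "model.h5.tar.gz") produces an archive whose only top-level member is model.h5.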

Importing a model by using a path to a directory

Note:

This import method is supported by a limited number of ML frameworks. For more information, see Available ways to import models, per framework type.

To import a model by using a path to a directory:

  1. If your model is located in a remote location, refer to Downloading a model stored in a remote location.

  2. If your model is located locally, place it in a specific directory:

    !cp <saved model> <target directory>
    !cd <target directory>
    

    For scikit-learn, XGBoost, TensorFlow, and PyTorch models, the model file must be at the top-level folder of the directory, for example:

    assets/
    <saved model>
    variables/
    variables/variables.data-00000-of-00001
    variables/variables.index
    
  3. Use the directory path to store the model file in your watsonx.ai Runtime repository. For more information, see Storing a model in your watsonx.ai Runtime repository.

Downloading a model stored in a remote location

Follow this sample code to download your model from a remote location:

import os
from wget import download

target_dir = '<target directory name>'
if not os.path.isdir(target_dir):
    os.mkdir(target_dir)
filename = os.path.join(target_dir, '<model name>')
if not os.path.isfile(filename):
    filename = download('<url to model>', out = target_dir)
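If the wget package is not installed in your environment, the standard library's urllib.request achieves the same result (the URL and names are placeholders):

```python
import os
import urllib.request

def download_model(url, target_dir, model_name):
    """Download a model file into target_dir unless it already exists."""
    os.makedirs(target_dir, exist_ok=True)
    filename = os.path.join(target_dir, model_name)
    if not os.path.isfile(filename):
        # urlretrieve streams the remote file to the local path
        urllib.request.urlretrieve(url, filename)
    return filename
```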

Things to consider when you import models

To learn more about importing a specific model type, see:

To learn more about frameworks that you can use with watsonx.ai Runtime, see Supported frameworks.

Models saved in PMML format

  • The only available deployment type for models that are imported from PMML is online deployment.
  • The PMML file must have the .xml file extension.
  • PMML models cannot be used in an SPSS stream flow.
  • The PMML file must not contain a prolog.

Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default, like in this example:

::::::::::::::
spark-mllib-lr-model-pmml.xml
::::::::::::::

You must remove that prolog before you can import the PMML file to watsonx.ai Runtime.
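One way to remove such a prolog is to discard everything that precedes the first XML character in the file. This is a hedged sketch, not an official utility; verify that the result is valid PMML before you import it:

```python
def strip_prolog(path):
    """Delete any non-XML prolog text before the first '<' in the file."""
    with open(path, "r", encoding="utf-8") as f:
        content = f.read()
    start = content.find("<")
    if start > 0:
        # Rewrite the file so that it begins directly with the XML content.
        with open(path, "w", encoding="utf-8") as f:
            f.write(content[start:])
```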

Spark MLlib models

  • Only classification and regression models are available.
  • Custom transformers, user-defined functions, and classes are not available.

Scikit-learn models

  • .pkl and .pickle are the available import formats.
  • To serialize or pickle the model, use the joblib package.
  • Only classification and regression models are available.
  • The pandas DataFrame input type is not available for the predict() API.
  • The only available deployment type for scikit-learn models is online deployment.

XGBoost models

  • .pkl and .pickle are the available import formats.
  • To serialize or pickle the model, use the joblib package.
  • Only classification and regression models are available.
  • The pandas DataFrame input type is not available for the predict() API.
  • The only available deployment type for XGBoost models is online deployment.

TensorFlow models

  • .pb, .h5, and .hdf5 are the available import formats.
  • To save or serialize a TensorFlow model, use the tf.saved_model.save() method.
  • tf.estimator is not available.
  • The only available deployment types for TensorFlow models are: online deployment and batch deployment.

PyTorch models

  • The only available deployment type for PyTorch models is online deployment.

  • For a PyTorch model to be importable to watsonx.ai Runtime, it must first be exported to the .onnx format. Refer to this code sample:

    torch.onnx.export(<model object>,
                      <prediction/training input data>,
                      "<serialized model>.onnx",
                      verbose=True,
                      input_names=<input tensor names>,
                      output_names=<output tensor names>)
    

Storing a model in your watsonx.ai Runtime repository

Use this code to store your model in your watsonx.ai Runtime repository:

from ibm_watson_machine_learning import APIClient

client = APIClient(<your credentials>)
sw_spec_uid = client.software_specifications.get_uid_by_name("<software specification name>")

meta_props = {
    client.repository.ModelMetaNames.NAME: "<your model name>",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
    client.repository.ModelMetaNames.TYPE: "<model type>"}

client.repository.store_model(model=<your model>, meta_props=meta_props)

Notes:

  • Depending on the model framework used, <your model> can be the actual model object, a full path to a saved model file, or a path to a directory where the model file is located. For more information, see Available ways to import models, per framework type.

  • For a list of available software specifications to use as <software specification name>, use the client.software_specifications.list() method.

  • For a list of available model types to use as <model type>, refer to Software specifications and hardware specifications for deployments.

  • When you export a PyTorch model to the .onnx format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9. watsonx.ai Runtime deployments use the Caffe2 ONNX runtime, which doesn't support opset versions higher than 9.

    torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9)
    

Parent topic: Assets in deployment spaces
