Custom machine learning frameworks
You can use your custom machine learning framework to complete payload logging and feedback logging, and to measure performance accuracy, runtime bias detection, explainability, drift detection, and the auto-debias function for model evaluations. The custom machine learning framework must be equivalent to IBM watsonx.ai Runtime.
The following custom machine learning frameworks support model evaluations:
| Framework | Problem type | Data type |
|---|---|---|
| Equivalent to IBM watsonx.ai Runtime | Classification | Structured |
| Equivalent to IBM watsonx.ai Runtime | Regression | Structured |
For a model that is not equivalent to IBM watsonx.ai Runtime, you must create a wrapper for the custom model that exposes the required REST API endpoints. You must also bridge the input and output between Watson OpenScale and the actual custom machine learning engine.
When is a custom machine learning engine the best choice for me?
A custom machine learning engine is the best choice when the following situations are true:
- You are not using any immediately available products to serve your machine learning models. You have a system to serve your models and no direct support exists for that function for model evaluations.
- The serving engine that you use from a third-party supplier is not yet supported for model evaluations. In this case, consider developing a custom machine learning engine as a wrapper for your original or native deployments.
How it works
The following image shows the custom environment support:
You can also reference the following links:
- Watson OpenScale payload logging API

Input criteria for model to support monitors
In the following example, your model takes a feature vector, which is essentially a collection of named fields and their values, as an input.
{ "fields": [ "name", "age", "position" ], "values": [ [ "john", 33, "engineer" ], [ "mike", 23, "student" ] ]
The `age` field can be evaluated for fairness. If the input is a tensor or matrix that is transformed from the input feature space, that model cannot be evaluated. By extension, deep learning models with text or image inputs cannot be handled for bias detection and mitigation.
Additionally, training data must be loaded to support Explainability.
For explainability on text, the full text should be one of the features. Explainability on images for a Custom model is not supported in the current release.
Output criteria for model to support monitors
Your model must output the input feature vector along with the prediction probabilities of the various classes in that model.
{ "fields": [ "name", "age", "position", "prediction", "probability" ], "labels": [ "personal", "camping" ], "values": [ [ "john", 33, "engineer", "personal", [ 0.6744664422398081, 0.3255335577601919 ] ], [ "mike", 23, "student" "camping", [ 0.2794765664946941, 0.7205234335053059 ] ] ] }
In this example, `personal` and `camping` are the possible classes, and the scores in each scoring output are assigned to both classes. If the prediction probabilities are missing, bias detection works, but auto-debias does not.

You can access the scoring output from a live scoring endpoint that you can call with the REST API for model evaluations. For CUSTOMML, Amazon SageMaker, and IBM watsonx.ai Runtime, Watson OpenScale directly connects to the native scoring endpoints.
Custom machine learning engine
A custom machine learning engine provides the infrastructure and hosting capabilities for machine learning models and web applications. Custom machine learning engines that are supported for model evaluations must conform to the following requirements:
- Expose two types of REST API endpoints, as shown in the example that follows this list:
  - a discovery endpoint (GET list of deployments and details)
  - scoring endpoints (online and real-time scoring)
- All endpoints must be compatible with the Swagger specification to be supported.
- The input payload to and the output from the deployment must comply with the JSON file format that is described in the specification.
- Watson OpenScale supports only the `BasicAuth`, `none`, or `apiKey` authentication formats.
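For example, the following sketch shows how a client might exercise both endpoint types over `BasicAuth`. It is not part of the specification: the host, credentials, deployment ID, and the `/v1/deployments` discovery path are placeholder assumptions, and the scoring payload follows the request format that is shown later in this topic.

```python
import requests

# Placeholder connection details for a hypothetical custom engine
BASE_URL = "https://custom-serve-engine.example.net:8443"
AUTH = ("username", "password")  # BasicAuth credentials

# Discovery endpoint: GET the list of deployments and their details
# (the /v1/deployments path is an assumption for this sketch)
deployments = requests.get(f"{BASE_URL}/v1/deployments", auth=AUTH).json()

# Scoring endpoint: POST an online, real-time scoring request for one deployment
deployment_id = "product_line_deployment"  # hypothetical deployment ID
scoring_payload = {
    "input_data": [{
        "fields": ["name", "age", "position"],
        "values": [["john", 33, "engineer"], ["mike", 23, "student"]]
    }]
}
scores = requests.post(
    f"{BASE_URL}/v1/deployments/{deployment_id}/online",
    json=scoring_payload,
    auth=AUTH,
).json()
print(scores["predictions"][0]["fields"])
```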
For the REST API endpoints specification, see the REST API.
Adding a custom machine learning engine
You can configure model evaluations to work with a custom machine learning provider by using one of the following methods:
- You can use the configuration interface to add your first custom machine learning provider. For more information, see Specifying a custom machine learning instance.
- You can also add your machine learning provider by using the Python SDK. You must use this method if you want to have more than one provider. For more information, see Add your custom machine learning engine.
Explore further
You can use the custom machine learning monitor to create a way to interact with other services.
Specifying a Custom ML service instance
Your first step to configure model evaluations is to specify a service instance. Your service instance is where you store your AI models and deployments.
Connect your Custom service instance
AI models and deployments are connected in a service instance for model evaluations. You can connect a custom service. To connect your service, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name, a description, and the Pre-production or Production environment type, you must provide the following information that is specific to this type of service instance:
- Username
- Password
- API endpoint that uses the format `https://host:port`, such as `https://custom-serve-engine.example.net:8443`
Choose whether to connect to your deployments by requesting a list or by entering individual scoring endpoints.
Requesting the list of deployments
If you selected the Request the list of deployments tile, enter your credentials and API Endpoint, then save your configuration.
Providing individual scoring endpoints
If you selected the Enter individual scoring endpoints tile, enter your credentials for the API Endpoint, then save your configuration.
You are now ready to select deployed models and configure your monitors. Your deployed models are listed on the Insights dashboard where you can click Add to dashboard. Select the deployments that you want to monitor and click Configure.
For more information, see Configure monitors.
Custom machine learning engine examples
Use the following ideas to set up your own custom machine learning engine.
Python and Flask
You can use Python and Flask to serve a scikit-learn model.
To generate the drift detection model, you must use scikit-learn version 0.20.2 in the notebook.
The app can be deployed locally for testing purposes and as an application on IBM Cloud.
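The following sketch shows the general shape of such a wrapper, assuming a serialized scikit-learn pipeline and the feature and label names that are used in the examples in this topic. The model file name and deployment metadata are placeholders, and the discovery response is reduced to a minimal illustration.

```python
# Minimal sketch of a Flask wrapper for a scikit-learn model (illustrative; names are assumptions)
import joblib
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")       # hypothetical serialized scikit-learn pipeline
LABELS = ["personal", "camping"]          # assumed class labels

@app.route("/v1/deployments", methods=["GET"])
def list_deployments():
    # Discovery endpoint: list the deployments that this wrapper serves
    return jsonify({"resources": [{"metadata": {"guid": "product_line_deployment"},
                                   "entity": {"name": "Product line model"}}]})

@app.route("/v1/deployments/<deployment_id>/online", methods=["POST"])
def score(deployment_id):
    # Scoring endpoint: echo the input fields and append prediction and probability
    payload = request.get_json()["input_data"][0]
    frame = pd.DataFrame(payload["values"], columns=payload["fields"])
    probabilities = model.predict_proba(frame).tolist()
    predictions = [LABELS[p.index(max(p))] for p in probabilities]
    values = [row + [pred, prob]
              for row, pred, prob in zip(payload["values"], predictions, probabilities)]
    return jsonify({"predictions": [{"fields": payload["fields"] + ["prediction", "probability"],
                                     "labels": LABELS,
                                     "values": values}]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8443)
```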
Node.js
An example of a custom machine learning engine that is written in Node.js is also available.
End-to-end code pattern
A code pattern shows an end-to-end example of custom engine deployment and integration with model evaluations.
Payload logging with the Custom machine learning engine
To configure payload logging for a non-IBM watsonx.ai Runtime or custom machine learning engine, you must bind the ML engine as custom.
Add your Custom machine learning engine
A non-IBM watsonx.ai Runtime engine is added as custom, which means that it is defined by metadata only; no direct integration with the non-IBM watsonx.ai Runtime service exists. You can add more than one machine learning engine for model evaluations by using the `wos_client.service_providers.add` method.
```python
# Credentials for the custom machine learning engine
CUSTOM_ENGINE_CREDENTIALS = {
    "url": "***",
    "username": "***",
    "password": "***",
}

# Bind the custom engine as a service provider
wos_client.service_providers.add(
    name=SERVICE_PROVIDER_NAME,
    description=SERVICE_PROVIDER_DESCRIPTION,
    service_type=ServiceTypes.CUSTOM_MACHINE_LEARNING,
    credentials=CustomCredentials(
        url=CUSTOM_ENGINE_CREDENTIALS['url'],
        username=CUSTOM_ENGINE_CREDENTIALS['username'],
        password=CUSTOM_ENGINE_CREDENTIALS['password'],
    ),
    background_mode=False
).result
```
You can see your service provider with the following command:
```python
wos_client.service_providers.get(service_provider_id).result.to_dict()
```
Configure security with an API key
To configure security for your custom machine learning engine, you can use IBM Cloud and IBM Cloud Pak for Data as authentication providers for your model evaluations. You can use the `https://iam.cloud.ibm.com/identity/token` URL to generate an IAM token for IBM Cloud, and the `https://<$hostname>/icp4d-api/v1/authorize` URL to generate a token for Cloud Pak for Data.
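For example, the following sketch exchanges an IBM Cloud API key for an IAM bearer token; the API key value is a placeholder. The Cloud Pak for Data authorization URL works similarly but takes your Cloud Pak for Data credentials instead.

```python
import requests

# Exchange an IBM Cloud API key for an IAM bearer token (API key is a placeholder)
response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": "<your-api-key>",
    },
)
iam_token = response.json()["access_token"]

# Send the token as a bearer token on subsequent API requests
headers = {"Authorization": f"Bearer {iam_token}"}
```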
You can use the `POST /v1/deployments/{deployment_id}/online` request to implement your scoring API in the following formats:
Request
```json
{
"input_data": [{
"fields": [
"name",
"age",
"position"
],
"values": [
[
"john",
33,
"engineer"
],
[
"mike",
23,
"student"
]
]
}]
}
```
Response
```json
{
"predictions": [{
"fields": [
"name",
"age",
"position",
"prediction",
"probability"
],
"labels": [
"personal",
"camping"
],
"values": [
[
"john",
33,
"engineer",
"personal",
[
0.6744664422398081,
0.32553355776019194
]
],
[
"mike",
23,
"student",
"camping",
[
0.2794765664946941,
0.7205234335053059
]
]
]
}]
}
```
Add Custom subscription
To add a custom subscription, run the following command:
```python
# Define the asset, deployment, and asset properties for the custom subscription
custom_asset = Asset(
    asset_id=asset['entity']['asset']['asset_id'],
    name=asset['entity']['asset']['name'],
    url="dummy_url",
    asset_type=asset['entity']['asset']['asset_type'] if 'asset_type' in asset['entity']['asset'] else 'model',
    problem_type=ProblemType.MULTICLASS_CLASSIFICATION,
    input_data_type=InputDataType.STRUCTURED,
)
deployment = AssetDeploymentRequest(
    deployment_id=asset['metadata']['guid'],
    url=asset['metadata']['url'],
    name=asset['entity']['name'],
    deployment_type=asset['entity']['type'],
    scoring_endpoint=scoring_endpoint
)
asset_properties = AssetPropertiesRequest(
    prediction_field='predicted_label',
    probability_fields=["probability"],
    training_data_reference=None,
    training_data_schema=None,
    input_data_schema=None,
    output_data_schema=output_schema,
)

# Create the subscription
subscription_details = wos_client.subscriptions.add(
    data_mart_id=data_mart_id,
    service_provider_id=service_provider_id,
    asset=custom_asset,
    deployment=deployment,
    asset_properties=asset_properties,
    background_mode=False
).result
```
To get the subscription ID and details, run the following commands:
```python
subscription_id = subscription_details.metadata.id

# Retrieve the subscription details
wos_client.subscriptions.get(subscription_id).result.to_dict()
```
Enable payload logging
To enable payload logging in the subscription, first build the scoring request that you want to log in the following format:
```python
request_data = {'fields': feature_columns,
                'values': [[payload_values]]}
```
To capture the logging details, build the scoring response in the following format:
```python
response_data = {'fields': list(result['predictions'][0]),
                 'values': [list(x.values()) for x in result['predictions']]}
```
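The exact shape of `result` depends on your custom engine. The comprehension above assumes that `result['predictions']` is a list of flat dictionaries, one per scored record; the following hypothetical example shows the fields and values that it produces.

```python
# Hypothetical scoring result with one flat dictionary per record (illustrative only)
result = {
    "predictions": [
        {"name": "john", "age": 33, "position": "engineer",
         "prediction": "personal", "probability": [0.67, 0.33]},
        {"name": "mike", "age": 23, "position": "student",
         "prediction": "camping", "probability": [0.28, 0.72]},
    ]
}

response_data = {'fields': list(result['predictions'][0]),
                 'values': [list(x.values()) for x in result['predictions']]}
# response_data['fields'] -> ['name', 'age', 'position', 'prediction', 'probability']
# response_data['values'] -> one row of values per scored record
```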
Scoring and payload logging
- Score your model.
- Store the request and response in the payload logging table:

```python
records_list = [
    PayloadRecord(request=request_data, response=response_data, response_time=response_time),
    PayloadRecord(request=request_data, response=response_data, response_time=response_time)
]
subscription.payload_logging.store(records=records_list)
```
For languages other than Python, you can also log the payload by using the REST API.
Parent topic: Supported machine learning engines, frameworks, and models