Microsoft Azure ML Studio frameworks
When you evaluate deployed models, you can use Microsoft Azure ML Studio to log payload or feedback data and to measure performance accuracy, runtime bias detection, explainability, and auto-debias results.
The following Microsoft Azure Machine Learning Studio frameworks are fully supported for model evaluations:
Table 1. Framework support details
| Framework | Problem type | Data type |
| --- | --- | --- |
| Native | Classification | Structured |
| Native | Regression | Structured |
Azure designer container instance endpoints are supported for model evaluations.
Adding Microsoft Azure ML Studio
You can configure model evaluations to work with Microsoft Azure ML Studio by using one of the following methods:
- The first time that you add a machine learning provider, you can use the configuration interface. For more information, see Specifying a Microsoft Azure ML Studio instance.
- You can also add your machine learning provider by using the Python SDK. You must use this method if you want to add more than one provider. For more information about adding the provider programmatically, see Add your Microsoft Azure machine learning engine; a minimal client setup sketch follows this list.
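The Python SDK samples in this topic assume a Watson OpenScale client named wos_client. The following is a minimal setup sketch; the IAM API key placeholder is an assumption for illustration and is not part of this documentation:
# Minimal sketch, assuming an IBM Cloud IAM API key (placeholder value).
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

authenticator = IAMAuthenticator(apikey="YOUR_IBM_CLOUD_API_KEY")
wos_client = APIClient(authenticator=authenticator)
wos_client.version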
Sample Notebooks
The Working with Azure Machine Learning Studio Engine notebook shows how to work with Microsoft Azure ML Studio.
Explore further
Consume an Azure Machine Learning model that is deployed as a web service
Specifying a Microsoft Azure ML Studio instance
Your first step in the Watson OpenScale tool is to specify a Microsoft Azure ML Studio instance. Your Azure ML Studio instance is where you store your AI models and deployments.
You can also add your machine learning provider by using the Python SDK. For more information, see Add your Microsoft Azure machine learning engine.
Connect your Azure ML Studio instance
You can connect to AI models and deployments in an Azure ML Studio instance for model evaluations. To connect your service, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name and description, and whether the environment is Pre-production or Production, you must provide the following information:
- Client ID: The string value of your client ID, which verifies who you are and authorizes the calls that you make to Azure ML Studio.
- Client Secret: The string value of the client secret, which authenticates the calls that you make to Azure ML Studio.
- Tenant: Your tenant ID corresponds to your organization and is a dedicated instance of Azure AD. To find the tenant ID, hover over your account name to get the directory and tenant ID, or select Azure Active Directory > Properties > Directory ID in the Azure portal.
- Subscription ID: Subscription credentials that uniquely identify your Microsoft Azure subscription. The subscription ID forms part of the URI for every service call. For instructions about how to get your Microsoft Azure credentials, see How to: Use the portal to create an Azure AD application and service principal that can access resources.
Payload logging with the Microsoft Azure Machine Learning Studio engine
Add your Microsoft Azure machine learning engine
A non-IBM watsonx.ai Runtime engine is bound as Custom, which means that the binding consists of metadata only; there is no direct integration with the non-IBM watsonx.ai Runtime service. Add your Microsoft Azure machine learning engine by adapting the following code sample:
# Import paths follow the IBM Watson OpenScale sample notebooks; they also cover
# the classes that are used in the later code samples in this topic.
from ibm_watson_openscale.supporting_classes.enums import *
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import *
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

# Credentials for the Azure service principal that is described in the previous section.
AZURE_ENGINE_CREDENTIALS = {
    "client_id": "",
    "client_secret": "",
    "subscription_id": "",
    "tenant": ""
}

# Keep the result so that you can look up the provider ID later
# (added_service_provider_result is an illustrative variable name).
added_service_provider_result = wos_client.service_providers.add(
    name=SERVICE_PROVIDER_NAME,
    description=SERVICE_PROVIDER_DESCRIPTION,
    service_type=ServiceTypes.AZURE_MACHINE_LEARNING,
    #deployment_space_id = WML_SPACE_ID,
    #operational_space_id = "production",
    credentials=AzureCredentials(
        subscription_id=AZURE_ENGINE_CREDENTIALS['subscription_id'],
        client_id=AZURE_ENGINE_CREDENTIALS['client_id'],
        client_secret=AZURE_ENGINE_CREDENTIALS['client_secret'],
        tenant=AZURE_ENGINE_CREDENTIALS['tenant']
    ),
    background_mode=False
).result
To list your machine learning providers, run the following command:
wos_client.service_providers.list()
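The subscription sample in the next section refers to a service_provider_id. Assuming that you kept the result of the add call as added_service_provider_result (an illustrative name, as noted in the comment above), a minimal sketch for deriving it is:
# Derive the provider ID from the result of the add call above.
service_provider_id = added_service_provider_result.metadata.id
service_provider_id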
Add Microsoft Azure ML Studio subscription
Add a subscription by adapting the following code sample:
asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id).result
asset_deployment_details

# Set deployment_id to the GUID of the Azure deployment that you want to monitor.
deployment_id=''
for model_asset_details in asset_deployment_details['resources']:
    if model_asset_details['metadata']['guid']==deployment_id:
        break

azure_asset = Asset(
    asset_id=model_asset_details["entity"]["asset"]["asset_id"],
    name=model_asset_details["entity"]["asset"]["name"],
    url=model_asset_details["entity"]["asset"]["url"],
    asset_type=model_asset_details['entity']['asset']['asset_type'] if 'asset_type' in model_asset_details['entity']['asset'] else 'model',
    input_data_type=InputDataType.STRUCTURED,
    problem_type=ProblemType.BINARY_CLASSIFICATION
)

deployment_scoring_endpoint = model_asset_details['entity']['scoring_endpoint']
scoring_endpoint = ScoringEndpointRequest(
    url=model_asset_details['entity']['scoring_endpoint']['url'],
    request_headers=model_asset_details['entity']['scoring_endpoint']['request_headers'],
    credentials=None
)

deployment = AssetDeploymentRequest(
    deployment_id=model_asset_details['metadata']['guid'],
    url=model_asset_details['metadata']['url'],
    name=model_asset_details['entity']['name'],
    description=model_asset_details['entity']['description'],
    deployment_type=model_asset_details['entity']['type'],
    scoring_endpoint=scoring_endpoint
)
asset_properties = AssetPropertiesRequest(
    label_column="Risk",
    prediction_field='Scored Labels',
    probability_fields=['Scored Probabilities'],
    # training_data_reference points to your training data and is set up earlier in the notebook.
    training_data_reference=training_data_reference,
    training_data_schema=None,
    input_data_schema=None,
    output_data_schema=None
)
subscription_details = wos_client.subscriptions.add(
    data_mart_id=data_mart_id,
    service_provider_id=service_provider_id,
    asset=azure_asset,
    deployment=deployment,
    asset_properties=asset_properties,
    background_mode=False
).result
To get the subscription ID and details, run the following code:
subscription_id = subscription_details.metadata.id
subscription_id

details = wos_client.subscriptions.get(subscription_id).result.to_dict()
details
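To see all of the subscriptions in your data mart, the sample notebooks use a show helper on the same client; a minimal sketch, assuming the wos_client from the earlier samples:
wos_client.subscriptions.show()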
Enable payload logging
To enable payload logging, run the following code:
payload_data_set_id = None
payload_data_set_id = wos_client.data_sets.list(
    type=DataSetTypes.PAYLOAD_LOGGING,
    target_target_id=subscription_id,
    target_target_type=TargetTypes.SUBSCRIPTION
).result.data_sets[0].metadata.id
To store the payload records, run the following code:
import uuid

# request_data and response_data are the scoring request and response payloads
# from your Azure ML Studio deployment.
wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
    scoring_id=str(uuid.uuid4()),
    request=request_data,
    response=response_data,
    response_time=460
)])
To get the logging details, run the following command:
subscription.payload_logging.get_details()
Scoring and payload logging
Score your model. For a full example, see the Working with Azure Machine Learning Studio Engine Notebook.
To store the request and response in the payload logging table, use the following code:
records_list = [PayloadRecord(request=request_data, response=response_data, response_time=response_time),
                PayloadRecord(request=request_data, response=response_data, response_time=response_time)]

for i in range(1, 10):
    records_list.append(PayloadRecord(request=request_data, response=response_data, response_time=response_time))

subscription.payload_logging.store(records=records_list)
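To confirm that the stored records reached the payload logging table, you can check the record count with the V2 Python SDK client from the earlier samples; get_records_count is assumed here as it appears in the OpenScale sample notebooks:
# Count the records in the payload logging data set created earlier.
wos_client.data_sets.get_records_count(payload_data_set_id)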
For languages other than Python, you can also log payload by using a REST API.
Parent topic: Supported machine learning engines, frameworks, and models