Configuring endpoint evaluation

Last updated: Dec 08, 2022

Model evaluations use endpoints to log scoring requests. Fairness and drift evaluations use the payload logging endpoint, and quality evaluation uses the feedback endpoint. You can generate code snippets for the payload logging and feedback endpoints and for debiased transactions, and integrate them into your application.

Steps

  1. On the Evaluations window, click Configure monitors.
  2. In the navigation panel, click Endpoints.
  3. In the Information panel, click Endpoints.
  4. From the Endpoint list, choose the type of endpoint: Payload logging, Feedback logging, or Debiased transactions.
  5. From the Code language list, choose the type of code: cURL, Java, or Python.
  6. To copy the code snippet, click the Copy to clipboard icon.
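The generated Python snippet typically sends scoring records to the payload logging endpoint as JSON. The sketch below is illustrative only: the field names, values, and the `PAYLOAD_LOGGING_URL` and `TOKEN` placeholders are assumptions, not values from your deployment; use the snippet copied from the Endpoints panel for the exact request format.

```python
import json

# Hypothetical scoring request/response pair; field names are examples only.
scoring_request = {
    "fields": ["age", "income"],
    "values": [[42, 52000.0]],
}
scoring_response = {
    "fields": ["prediction", "probability"],
    "values": [["approved", 0.83]],
}

# A payload logging record pairs each scoring request with its response.
payload_record = {
    "request": scoring_request,
    "response": scoring_response,
}

# The endpoint accepts a list of records in the request body.
body = json.dumps([payload_record])

# To send it, POST the body to the payload logging endpoint copied from the
# Endpoints panel (URL and token below are placeholders):
#
# import requests
# requests.post(PAYLOAD_LOGGING_URL, data=body,
#               headers={"Authorization": f"Bearer {TOKEN}",
#                        "Content-Type": "application/json"})
```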

You can also click Upload feedback data to upload feedback data from a CSV file. For production models, you can click Upload payload data to upload payload data from a CSV file.
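A feedback CSV pairs model input fields with the observed outcome so that quality evaluation can compare predictions against known labels. A minimal sketch of preparing such a file, assuming hypothetical column names (`age`, `income`, `label`) rather than your model's actual schema:

```python
import csv
import io

# Hypothetical feedback rows: model features plus the observed label.
rows = [
    {"age": 42, "income": 52000.0, "label": "approved"},
    {"age": 29, "income": 31000.0, "label": "denied"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["age", "income", "label"])
writer.writeheader()
writer.writerows(rows)

# Save csv_text to a file and upload it with Upload feedback data.
csv_text = buf.getvalue()
```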

Next steps

Configuring model monitors

Parent topic: Managing data for model evaluations in Watson OpenScale