Evaluating AI models
Last updated: Jan 21, 2025
You can track and measure outcomes from your AI assets to help ensure that they are compliant with business processes, no matter where your models are built or running.
You can use model evaluations as part of your AI governance strategies to ensure that models in deployment environments meet established compliance standards regardless of the tools and frameworks that are used to build and run the models. This approach ensures that models are free from bias, can be easily explained and understood by business users, and are auditable in business transactions.
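One of the checks described above is evaluating a deployed model for bias. As a minimal sketch of how such a fairness check can work (this is an illustrative example, not the watsonx.governance API; the function name, inputs, and the 0.8 threshold rule of thumb are assumptions for the sketch), the disparate impact ratio compares favorable-outcome rates between groups:

```python
def disparate_impact(outcomes, groups, favorable=1,
                     privileged="privileged", unprivileged="unprivileged"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    Values near 1.0 suggest similar treatment across groups; a common
    rule of thumb flags ratios below 0.8 as potentially biased.
    """
    def rate(group):
        group_outcomes = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in group_outcomes if o == favorable) / len(group_outcomes)

    return rate(unprivileged) / rate(privileged)

# Toy example: 4 predictions per group (hypothetical data).
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["privileged"] * 4 + ["unprivileged"] * 4
print(round(disparate_impact(outcomes, groups), 2))  # prints 0.33
```

In this toy data the privileged group receives the favorable outcome 75% of the time versus 25% for the unprivileged group, so the ratio of 0.33 would be flagged for review.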
- Required service
- watsonx.ai Runtime
- Training data format
- Relational: Tables in relational data sources
- Tabular: Excel files (.xls or .xlsx), CSV files
- Textual: In the supported relational tables or files
- Connected data
- Cloud Object Storage (infrastructure)
- Db2
- Data size
- Any
Watch this short video to learn more about model evaluations.
The video presents a visual overview of the concepts and tasks in this documentation.
Try a tutorial
The Evaluate a machine learning model tutorial provides hands-on experience with configuring evaluations to monitor fairness, quality, and explainability.
Learn more
Parent topic: Governing AI assets