Calculating fairness
Understand the concepts that are used to calculate fairness evaluations
- How bias is computed
- Balanced data and perfect equality
- Calculating perfect equality
- Converting the data type of a prediction column
- Interpreting a fairness score that is greater than 100 percent
How bias is computed
The algorithm for the fairness monitor computes bias on an hourly basis by using the last N records in the payload logging table, where N is the value that you specify when you configure the fairness monitor.
The algorithm applies a method called perturbation to evaluate differences in expected outcomes in the data.
The perturbation changes the value of the monitored feature from the reference group to the monitored group, or vice versa. The perturbed data is then sent to the model to evaluate its behavior. The algorithm uses both the last N records in the payload table and the behavior of the model on the perturbed data to decide whether the model results indicate the presence of bias.
A model is biased if the percentage of favorable outcomes for the monitored group falls below the percentage of favorable outcomes for the reference group by more than a threshold value that you specify when you configure the fairness monitor.
Note that fairness values can be more than 100%. A value that is greater than 100% means that the monitored group received more favorable outcomes than the reference group. In addition, if no new scoring requests are sent, the fairness value remains constant.
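A minimal sketch of this ratio-based check is shown below. It is an illustration, not the monitor's implementation: the column names (`SEX`, `prediction`), the group values, the favorable value, and the 80 percent threshold are hypothetical, and the perturbation step is omitted. The sketch simply compares favorable-outcome rates over a sample that stands in for the last N payload records.

```python
import pandas as pd

def fairness_value(records: pd.DataFrame,
                   feature: str = "SEX",        # hypothetical monitored feature
                   monitored: str = "FEMALE",   # hypothetical monitored group
                   reference: str = "MALE",     # hypothetical reference group
                   favorable: float = 1.0,      # hypothetical favorable prediction
                   threshold_pct: float = 80.0) -> dict:
    """Compare favorable-outcome rates for the monitored and reference groups.

    `records` stands in for the last N rows of the payload logging table and is
    expected to contain the monitored feature column and a `prediction` column.
    """
    mon = records[records[feature] == monitored]
    ref = records[records[feature] == reference]

    mon_rate = (mon["prediction"] == favorable).mean() * 100
    ref_rate = (ref["prediction"] == favorable).mean() * 100

    # Ratio of favorable-outcome rates, expressed as a percentage.
    # Values above 100 mean the monitored group received more favorable outcomes.
    score = mon_rate / ref_rate * 100

    return {
        "monitored_favorable_pct": mon_rate,
        "reference_favorable_pct": ref_rate,
        "fairness_pct": score,
        "biased": score < threshold_pct,
    }

# Tiny in-memory payload sample (hypothetical data):
sample = pd.DataFrame({"SEX": ["FEMALE", "FEMALE", "MALE", "MALE"],
                       "prediction": [0.0, 1.0, 1.0, 1.0]})
print(fairness_value(sample))  # fairness_pct = 50.0, flagged as biased at an 80% threshold
```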
Balanced data and perfect equality
For balanced data sets, the following concepts apply:
- To determine the perfect equality value, reference group transactions are synthesized by changing the monitored feature value of every monitored group transaction to all reference group values. These new synthesized transactions are added to the set of reference group transactions and evaluated by the model. For example, if the monitored feature is `SEX` and the monitored group is `FEMALE`, all `FEMALE` transactions are duplicated as `MALE` transactions. Other feature values remain unchanged. These new synthesized `MALE` transactions are added to the set of original `MALE` reference group transactions.
- The percentage of favorable outcomes is determined from the new reference group. This percentage represents perfect fairness for the monitored group.
- The monitored group transactions are also synthesized by changing the reference feature value of every reference group transaction to the monitored group value. These new synthesized transactions are added to the set of monitored group transactions and evaluated by the model. If the monitored feature is `SEX` and the monitored group is `FEMALE`, all `MALE` transactions are duplicated as `FEMALE` transactions. Other feature values remain unchanged. These new synthesized `FEMALE` transactions are added to the set of original `FEMALE` monitored group transactions. The synthesis step is illustrated in the sketch after this list.
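The following sketch shows the synthesis step that is described above, under hypothetical assumptions: transactions held in a pandas DataFrame with a `SEX` column. It illustrates the idea rather than reproducing the monitor's implementation.

```python
import pandas as pd

def synthesize_reference_set(transactions: pd.DataFrame,
                             feature: str = "SEX",
                             monitored: str = "FEMALE",
                             reference: str = "MALE") -> pd.DataFrame:
    """Build the expanded reference set used for the perfect equality value.

    Every monitored group transaction is copied with its monitored feature value
    flipped to the reference value; all other feature values stay unchanged. The
    copies are appended to the original reference group transactions.
    """
    original_reference = transactions[transactions[feature] == reference]

    synthesized = transactions[transactions[feature] == monitored].copy()
    synthesized[feature] = reference  # FEMALE rows duplicated as MALE rows

    return pd.concat([original_reference, synthesized], ignore_index=True)
```

Swapping the monitored and reference values in the same function produces the expanded monitored group set that is described in the second synthesis step.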
Calculating perfect equality
The following mathematical formula is used for calculating perfect equality:
Perfect equality = Percentage of favorable outcomes for all reference transactions,
including the synthesized transactions from the monitored group
For example, if the monitored feature is `SEX` and the monitored group is `FEMALE`, the following formula shows the equation for perfect equality:
Perfect equality for `SEX` = Percentage of favorable outcomes for `MALE` transactions,
including the synthesized transactions that were initially `FEMALE` but changed to `MALE`
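As a worked illustration with hypothetical numbers: suppose the expanded reference set contains 100 original `MALE` transactions, 80 of which receive the favorable outcome, and 100 synthesized `MALE` transactions (originally `FEMALE`), 60 of which receive the favorable outcome. The perfect equality value is then:
Perfect equality for `SEX` = (80 + 60) / (100 + 100) = 70%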
When you configure fairness evaluations, you can generate a set of metrics to evaluate the fairness of your model. You can use the fairness metrics to determine if your model produces biased outcomes.
Converting the data type of a prediction column
For fairness monitoring, the prediction column accepts only integer numeric values, even though the prediction label is categorical. You can convert the data type of the prediction column.
For example, the training data might have class labels such as "Loan Denied" and "Loan Granted", while the prediction value that is returned by the IBM watsonx.ai Runtime scoring endpoint has values such as 0.0 and 1.0. The scoring endpoint also has an optional column that contains the text representation of the prediction. For example, if `prediction=1.0`, the `predictionLabel` column might have the value "Loan Granted". If such a column is available, specify the string values "Loan Granted" and "Loan Denied" when you configure the favorable and unfavorable outcomes for the model. If such a column is not available, you must specify the double values 1.0 and 0.0 for the favorable and unfavorable classes.
IBM watsonx.ai Runtime has a concept of an output schema, which defines the schema of the output of the IBM watsonx.ai Runtime scoring endpoint and the role of the different columns. The roles are used to identify which column contains the prediction value, which column contains the prediction probability, which column contains the class label value, and so on. The output schema is automatically set for models that are created by using model builder. It can also be set by using the IBM watsonx.ai Runtime Python client. You can use the output schema to define a column that contains the string representation of the prediction by setting the `modeling_role` for that column to `decoded-target`. The documentation for the IBM watsonx.ai Runtime Python client is available at https://ibm.github.io/watsonx-ai-python-sdk/core_api.html#repository. Search for `OUTPUT_DATA_SCHEMA` to understand the output schema.
The API call to use is the `store_model` call, which accepts `OUTPUT_DATA_SCHEMA` as a parameter.
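The following sketch outlines how such a schema might be supplied when a model is stored with the Python client. The metadata property names, model type, software specification, credentials, and schema layout shown here are assumptions for illustration; confirm the exact names against the SDK reference linked above.

```python
from ibm_watsonx_ai import APIClient, Credentials
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in for your trained model (hypothetical training data).
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

# Hypothetical credentials and deployment space.
client = APIClient(Credentials(url="https://us-south.ml.cloud.ibm.com",
                               api_key="<api_key>"))
client.set.default_space("<space_id>")

# Output schema: the predictionLabel column carries the string form of the
# prediction, so its modeling_role is set to 'decoded-target'.
# The exact schema layout is an assumption; search the SDK docs for OUTPUT_DATA_SCHEMA.
output_data_schema = {
    "id": "output_schema",
    "fields": [
        {"name": "prediction", "type": "double",
         "metadata": {"modeling_role": "prediction"}},
        {"name": "probability", "type": "array",
         "metadata": {"modeling_role": "probability"}},
        {"name": "predictionLabel", "type": "string",
         "metadata": {"modeling_role": "decoded-target"}},
    ],
}

meta_props = {
    client.repository.ModelMetaNames.NAME: "loan-approval-model",
    client.repository.ModelMetaNames.TYPE: "scikit-learn_1.3",  # assumed model type
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID:
        client.software_specifications.get_id_by_name("runtime-24.1-py3.11"),
    client.repository.ModelMetaNames.OUTPUT_DATA_SCHEMA: output_data_schema,
}

# store_model accepts the output schema as part of the model metadata.
stored_model = client.repository.store_model(model=model, meta_props=meta_props)
```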
Interpreting a fairness score that is greater than 100 percent
Depending on your fairness configuration, your fairness score can exceed 100 percent. A score that is greater than 100 percent means that the monitored group receives favorable outcomes at a higher rate than the reference group. Technically, it means that the model is unfair in the opposite direction.
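For example, if 72 percent of monitored group transactions and 60 percent of reference group transactions received the favorable outcome (hypothetical figures), a ratio-based fairness value would be 72 / 60 = 120 percent.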
Learn more
Configuring the Fairness monitor for indirect bias
Parent topic: Configuring the Fairness monitor