Explaining Complex Models to Business Stakeholders


Business stakeholders are starting to recognise the value machine learning models bring to their operations and are gaining a deeper understanding of their benefits and drawbacks. At the same time, there is a rising demand for more accurate and faster machine learning models.

A challenge emerges as these models advance rapidly, attaining greater accuracy but becoming more complex and less explainable (referred to as "black-box" models). Consequently, it becomes increasingly difficult for data scientists to:

  • explain the methodology and outcomes to stakeholders, which hinders model adoption,
  • evaluate how alterations in features impact model performance,
  • gain insights into how adjustments in model hyper-parameters influence its structure,
  • ensure model fairness, particularly in compliance with regulations like GDPR (which prohibits the use of personal data in ways that may harm or mislead customers), [1]
  • identify vulnerabilities within the model.

Global and local explainability

LightGBM, a tree-based boosting model, delivers precise outcomes but poses challenges in comprehension due to its inherent complexity.

We'll construct a LightGBM model and delve into its internal mechanisms. Initially, we'll preprocess the diabetes dataset sourced from scikit-learn.

from sklearn.datasets import load_diabetes
import pandas as pd

# Load the diabetes regression dataset
diabetes = load_diabetes()
X_raw, y_raw = diabetes.data, diabetes.target
X = pd.DataFrame(X_raw, columns=diabetes.feature_names)
y = pd.Series(y_raw, name="progression")

pdf = pd.concat([X, y], axis=1)

# Rename columns to human-readable names
pdf = pdf.rename(columns={"bp": "blood_pressure",
                          "s1": "total_cholesterol",
                          "s2": "LDL",
                          "s3": "HDL",
                          "s4": "total_cholesterol/HDL",
                          "s5": "triglycerides",
                          "s6": "blood_sugar"})

The dataset has already been scaled, and the target is a continuous value representing the progression of diabetes, so we treat this as a regression problem. The features encompass various patient characteristics along with blood level measurements.

The diabetes dataset is provided by scikit-learn [3].
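A quick look at the summary statistics (a minimal sanity check, not part of the original pipeline) confirms that the features are centred and scaled while the target stays on its original scale:

# Sanity check: features should be centred around 0, the target on its raw scale
print(pdf.describe().round(3).T[["mean", "std", "min", "max"]])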

A reasonable assumption is that higher blood sugar levels are associated with faster progression of diabetes:

# Plot blood_sugar vs progression with a regression line
import seaborn as sns
import matplotlib.pyplot as plt

sns.lmplot(x="blood_sugar", y="progression", data=pdf)
plt.show()
Relationship between blood sugar and diabetes progression

The same assumption exists for a higher BMI index.

# Same for BMI
sns.lmplot(x="bmi", y="progression", data=pdf)
plt.show()
Relationship between BMI and diabetes progression
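A correlation table gives the same picture numerically; this is a small addition to the original walkthrough, using pandas' built-in Pearson correlation:

# Pearson correlation of each feature with the target
print(pdf.corr()["progression"].drop("progression").sort_values(ascending=False))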

As we can see, the scaled features are already challenging to interpret and explain to stakeholders. A LightGBM model is fit as follows:

from sklearn.model_selection import train_test_split
X = pdf.drop("progression", axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

import lightgbm
import shap

def fit_lightgbm(x_train, y_train, x_test, y_test):
    params = {
        "task": "train",
        "boosting_type": "gbdt",
        "objective": "rmse",
        "metric": ["l2", "rmse"],
        "learning_rate": 0.005,
        "num_leaves": 128,
        "max_bin": 512,
    } # basic parameters as a starting point

    model = lightgbm.sklearn.LGBMRegressor(**params)
    fitted_model = model.fit(x_train, y_train)
    y_pred = pd.Series(fitted_model.predict(x_test))

    return y_train, y_test, y_pred, fitted_model

y_train, y_test, y_pred, fitted_model = fit_lightgbm(
    X_train, y_train, X_test, y_test
)
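Before explaining the model, it is worth quantifying how well it predicts on the held-out set. A minimal check (not part of the original snippet) using scikit-learn's metrics could look like this:

# Quantify predictive performance on the held-out set
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)
print(f"RMSE: {rmse:.2f}, R^2: {r2:.3f}")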

At a high level, a data scientist must grasp the model's inner workings, ascertain if it captures the most pertinent features aligned with business insights, and identify any crucial features omitted from the model. At this point, it is challenging to explain the fitted model.

Global explainability

Assessing the global explainability of a LightGBM model entails calculating the feature importances or the mean absolute Shapley values.

Feature importance

The feature importance of the model is calculated as follows:

# Calculate feature importance of the fitted LightGBM model using plot_importance
lightgbm.plot_importance(fitted_model, importance_type="gain",
                         figsize=(20, 10), grid=False, color="grey",
                         precision=2)
plt.show()
Feature importance of the LightGBM model

The BMI index is the feature that improves the model's predictions the most, producing the largest gain from its splits. The levels of triglycerides, blood sugar, and blood pressure are also crucial features of the model. These findings are all reasonable, as a person's BMI, blood sugar, and triglyceride levels are highly likely contributors to diabetes progression.
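If stakeholders prefer a table over a chart, the same gain-based importances can be pulled directly from the fitted booster; a short sketch (reusing fitted_model from above) is:

# Gain-based feature importances as a sorted table
importance_df = pd.DataFrame({
    "feature": fitted_model.booster_.feature_name(),
    "gain": fitted_model.booster_.feature_importance(importance_type="gain"),
}).sort_values("gain", ascending=False)
print(importance_df)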

Shapley values

The mean absolute Shapley values should give a very similar picture to the feature importance plot above:

# Compute SHAP values on the training set and plot the mean absolute contribution per feature
shap_values = shap.TreeExplainer(fitted_model).shap_values(X_train)
shap.summary_plot(shap_values, X_train, plot_type="bar", color="grey")
Mean absolute marginal contribution of each feature to the LightGBM model's predictions

Indeed, the results are very similar when using either of these techniques.
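The bar plot above shows the mean absolute SHAP value per feature; computing it directly (reusing shap_values from the previous snippet) makes the comparison with the gain-based ranking explicit:

# Mean absolute SHAP value per feature, to compare with the gain-based ranking
mean_abs_shap = pd.Series(abs(shap_values).mean(axis=0), index=X_train.columns)
print(mean_abs_shap.sort_values(ascending=False))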

Local explainability

These tools are invaluable for understanding a model's behaviour at the global level. With Shapley values, it is also possible to drill down into the marginal contribution of each feature to the prediction for an individual data point.

Marginal contribution within a non-linear model. Source: [2]

For instance, we can analyse the impact of each feature on the progression of diabetes for a particular patient:

# plot shap values for a specific data point / patient
shap.initjs()
shap.force_plot(
    shap.TreeExplainer(fitted_model).expected_value,
    shap_values[0],
    X_train.iloc[0],
    matplotlib=True,
)
The marginal contribution of each feature to the prediction of the LightGBM model for one patient

This perspective would empower us to offer tailored advice to that patient, suggesting a focus on optimising their BMI index.
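For written reports, a waterfall view of the same patient is often easier to read than the force plot. One way to produce it with SHAP's Explanation API (an alternative presentation, not part of the original example) is:

# Waterfall view of the same patient using the Explanation API
explainer = shap.TreeExplainer(fitted_model)
explanation = explainer(X_train)
shap.plots.waterfall(explanation[0])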

Summary

In summary, while machine learning models offer significant advantages, their increasing complexity poses challenges regarding explainability, interpretability, and compliance, impacting their adoption and effectiveness. Techniques such as SHAP and feature importance empower data scientists to understand their models better, which in turn makes it easier to explain the predictions to the business.

Resources

*Unless otherwise noted, all images have been generated by the author

[1] https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/lawfulness-fairness-and-transparency/

[2] https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An%20introduction%20to%20explainable%20AI%20with%20Shapley%20values.html © Copyright 2018, Scott Lundberg. Revision dffc346f

[3] https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html © 2007–2024, scikit-learn developers (BSD License)

LightGBM documentation: https://lightgbm.readthedocs.io/en/stable/

SHAP documentation: https://shap.readthedocs.io/en/latest/index.html

