Promoting Transparency and Trust through Model Understanding and Explanation (XAI)

This use case article details how we secured the adoption of a machine learning model at DAT Group. Our client's business teams were initially very reluctant to use a new and unfamiliar solution, so our XAI experts supported them in understanding and adopting a data-driven solution for estimating the prices of used cars.

Key Challenges

Our client, DAT Group, is an international company operating as a trust in the automotive industry. For over 90 years, they have provided data products and services in the automotive sector, focused on enabling a digital vehicle lifecycle.

One of their key products is the provision of price estimates for used cars, used by a wide range of customers, from insurance companies to original equipment manufacturers. These estimates drew on both domain expertise and market data, but the workflows for processing and analyzing the data were largely manual, which made it impossible to scale, accelerate, or automate the information retrieval process.

As part of the AI roadmap we supported DAT with, we automated these manual data processes and developed a machine learning solution that allowed for a data-driven estimation of used car prices. This enabled the team to make real-time, data-driven decisions.

As the project progressed, we noticed very low stakeholder buy-in. It turned out that team members were very reluctant to use the model’s predictions and incorporate them into their workflows. This led to a situation as simple as it was critical: the model and its insights were barely used by the team members. To improve this situation, we increased our focus on change management.

“I found instances where the model did not meet my expectations. As long as I do not understand the decision-making process, I do not trust the estimates.”

Team member at DAT Group

Our Approach

To address our client’s need to ensure model adoption, we:

1. Assessed the acceptance issues by the business team

We started by conducting group interviews with all the stakeholders (domain experts and anyone working with them). The objective was to understand what was holding them back from using the model’s estimates, and to explore potential solutions together.

Among other issues, the recurring questions were: “Why do we obtain this estimate?”, “What are the factors influencing the price?”, and “Why should I use the data-driven solution?”.

Overall, it became apparent that the team distrusted the quality of the model’s predictions. This feedback prompted us to put a stronger focus on change management, with an emphasis on the explainability of the model’s predictions and closer communication with the team.

2. Developed a dashboard with self-explanatory visualizations

A key element of our change management strategy was, on the one hand, to leverage explainable AI techniques and, on the other hand, to visualize the results in a dynamic and comprehensible dashboard. In particular, we used the SHAP library, which allows us to efficiently compute Shapley values¹. For a given sample and its prediction, this method quantifies the marginal contribution of each individual feature.

For the domain experts, it was very insightful to see which features had the largest Shapley values and thus the largest impact on the predictions. It was also very interesting for them to see how the Shapley values were distributed for features with little influence, such as certain special equipment.
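
To make this concrete, here is a minimal sketch in Python of how such an analysis can be set up. The dataset, feature names, and model choice are illustrative assumptions, not the client’s actual setup.

```python
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical used-car data; the feature names are illustrative only.
X = pd.DataFrame({
    "age_years":      [3, 7, 1, 5, 10, 2],
    "mileage_km":     [45_000, 120_000, 8_000, 90_000, 180_000, 25_000],
    "engine_kw":      [110, 85, 140, 95, 66, 120],
    "has_navigation": [1, 0, 1, 0, 0, 1],
})
y = [18_500, 7_200, 31_000, 11_800, 3_900, 24_500]  # sale prices in EUR

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# Global view: how each feature's contribution is distributed
# across all cars in the dataset.
shap.plots.beeswarm(shap_values)
```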

The main advantage of the SHAP library is that it provides local explanations, i.e., explanations of individual predictions rather than only of the model’s global behavior. This allowed domain experts to examine individual car price estimates and check whether the explanations matched their expectations.
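
Continuing the sketch above, such a local explanation can be rendered as a waterfall plot, which starts at the model’s average prediction and adds each feature’s Shapley value until it reaches the final estimate for that car.

```python
# Local view: why was this particular car priced the way it was?
# Reuses `shap_values` from the sketch above; index 0 is the first car.
shap.plots.waterfall(shap_values[0])
```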

With the introduction of these methodologies, the models lost their black-box character. They also became increasingly accessible to the domain experts, even those without any knowledge of machine learning.

Finally, we provided all stakeholders with a dashboard that summarized the most important and self-explanatory visualizations. This gave them the opportunity to directly submit samples (cars) and observe the results and details of the calculations.
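
The article does not name the dashboard technology, so purely as an illustration, here is a minimal sketch of such a submit-a-car-and-see-the-explained-estimate workflow using Streamlit (an assumption); `load_model_and_explainer` is a hypothetical helper standing in for however the trained model and explainer are restored.

```python
import matplotlib.pyplot as plt
import pandas as pd
import shap
import streamlit as st

# Hypothetical helper: restore the trained regressor and its fitted
# shap.TreeExplainer (e.g., from a model registry).
model, explainer = load_model_and_explainer()

st.title("Used-car price estimator")

# Stakeholders submit a sample (car) through simple form inputs.
sample = pd.DataFrame([{
    "age_years":      st.number_input("Age (years)", 0, 30, 3),
    "mileage_km":     st.number_input("Mileage (km)", 0, 500_000, 45_000),
    "engine_kw":      st.number_input("Engine power (kW)", 30, 500, 110),
    "has_navigation": int(st.checkbox("Navigation system", value=True)),
}])

st.metric("Estimated price (EUR)", f"{model.predict(sample)[0]:,.0f}")

# Show why: the local Shapley explanation for the submitted car.
fig = plt.figure()
shap.plots.waterfall(explainer(sample)[0], show=False)
st.pyplot(fig)
```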

3. Enhanced change management by making sure everyone understood the models

Based on the previous visualizations, we organized question-and-answer sessions with the stakeholders to clarify all their remaining concerns. We also presented the library visualizations with detailed explanations.

They were happy to be able to interpret the drivers behind the model’s results and understand the details of the calculations. They could grasp the essence of how the models work and why they predict the way they do.

This greatly improved the transparency of the models developed and the confidence of the teams.

Improvement of the model itself

Through the information sessions and by applying these advanced visualizations to the existing models, we did indeed detect certain biases in the model that pointed us to issues in the underlying data.

After addressing these issues, we incorporated XAI (explainable AI) as a fundamental part of our model development lifecycle, with the objective of preventing this type of bias in the future and ensuring the quality of the models we deploy.
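
The article does not detail which biases surfaced. As one example of what such a check can look like, a SHAP dependence scatter (continuing the earlier sketch) plots a feature’s values against its contributions, which makes implausible patterns, and thus potential data issues, easy to spot.

```python
# Dependence view: mileage vs. its Shapley value for every car.
# Contributions that jump where training data is sparse or inconsistent
# are a hint at issues in the underlying data.
shap.plots.scatter(shap_values[:, "mileage_km"])
```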

Benefits

Teams involved in this project

One Data Scientist & XAI Engineer collaborated with our client for four months on this project.

Technologies and Partners

Technologies used for this XAI project: SHAP, Databricks, and MLflow.
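
As a sketch of how these pieces can fit together, the trained model and its SHAP summary plot can be versioned with MLflow, so that every deployed model ships with its explanation artifacts. The run name and artifact paths are placeholders; `model` and `shap_values` are reused from the earlier sketch.

```python
import matplotlib.pyplot as plt
import mlflow
import mlflow.sklearn
import shap

with mlflow.start_run(run_name="used-car-price-model"):
    # Version the trained regressor (reused from the earlier sketch).
    mlflow.sklearn.log_model(model, "price_model")

    # Attach the global SHAP summary as an artifact of this run.
    fig = plt.figure()
    shap.plots.beeswarm(shap_values, show=False)
    mlflow.log_figure(fig, "shap_beeswarm.png")
```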

¹ More details on Shapley values: https://towardsdatascience.com/the-shapley-value-for-ml-models-f1100bff78d1
