Liat Friedman Antwarg

Roundtable – Explainable AI

Ben-Gurion University

Bio

I am a data scientist and Ph.D. student in the Department of Software and Information Systems Engineering at BGU. I worked in industry for a few years on a text analytics research team, and I currently lead research projects in different domains at the BGU Innovation Labs.

My research field is the explainability of machine and deep learning models, in both supervised and unsupervised settings.

Abstract

Machine and deep learning algorithms are used for a wide variety of problems. While such algorithms can be effective at saving experts’ time or serving as decision-support tools, they have a major drawback: when complex models are used, their output is hard to explain. This shortcoming can make it challenging to convince managers and domain experts to trust and adopt new, potentially beneficial, intelligent systems that are based on such algorithms.

The need to provide an explanation per instance (as opposed to an explanation of the whole model) has come to the fore fairly recently, as models have become more complex. An explanation of why a particular instance was flagged as anomalous, for example, can increase a domain expert’s trust in the algorithm, so the task of providing such explanations is extremely important.

In the roundtable we will discuss explainability methods such as LIME, DeepLift, and SHAP, a game-theory-based framework that has been shown to be effective in explaining various supervised learning models. We will also discuss which metrics for explainability are appropriate and how to measure them.
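
To make the discussion concrete, below is a minimal sketch of a per-instance SHAP explanation. It assumes the open-source shap and scikit-learn packages; the dataset and model are illustrative stand-ins, not the ones used in our research.

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Train a simple supervised model on a toy regression dataset.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

    # TreeExplainer computes Shapley values (game-theoretic feature attributions)
    # efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

    # Per-instance explanation: each feature's contribution to the prediction
    # for the first test instance, relative to the model's expected output.
    for name, contribution in zip(X.columns, shap_values[0]):
        print(f"{name}: {contribution:+.4f}")
    print("expected value (baseline):", explainer.expected_value)

The attributions for one instance sum, together with the baseline, to that instance’s prediction, which is exactly the per-instance (rather than whole-model) view discussed above.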

Discussion Points

– What is Explainable AI?
– Why do we need to explain a model’s output?
– How is it different from feature importance?
– Who are the users of the explanations?
– Methods for explainability/interpretability (LIME, DeepLift, SHAP, and more) – a minimal LIME sketch follows this list
– How can explainability help us improve our models?
– Is the explainability model really explaining and helping the end users? How can we measure it?
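
As a companion to the SHAP sketch above, here is a hedged sketch of a local surrogate explanation with LIME, again using an illustrative dataset and model; it assumes the open-source lime and scikit-learn packages.

    import lime.lime_tabular
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

    # LIME explains one prediction by perturbing the instance and fitting a
    # weighted linear surrogate model around it.
    explainer = lime.lime_tabular.LimeTabularExplainer(
        training_data=X_train.values,
        feature_names=list(X.columns),
        mode="regression",
    )
    explanation = explainer.explain_instance(
        X_test.values[0], model.predict, num_features=5
    )

    # Each tuple is (human-readable feature condition, local weight).
    for condition, weight in explanation.as_list():
        print(f"{condition}: {weight:+.4f}")

Comparing such local surrogate explanations with SHAP’s game-theoretic attributions on the same instances is one practical way to frame the metrics question above, for example in terms of the explanation’s fidelity to the underlying model and its stability across similar instances.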

Planned Agenda
