Panel title: Why is explainable AI still a challenge?
Trustworthiness and explainability are increasingly recognised as key to the widespread adoption of AI systems. But can, and should, all AI models be explainable? Does making a model explainable affect its performance? Must a model be explainable to be trustworthy? In human-to-human interactions, trust can be gained without full explanation, through transparency, consistent performance and estimates of uncertainty. Can the same not apply to AI?
Different problems have different requirements, so solutions must be engineered to fit the specific problem rather than adopting a one-size-fits-all approach. The participants in this session will discuss these challenges and elaborate on what trustworthy, explainable, interpretable, and transparent AI systems might mean in different scenarios, and how such systems might enable more trustworthy and explainable solutions while still offering state-of-the-art performance.
Chair:
Prof Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, University of Edinburgh
Panellists:
Dr Shane Burns, Lead Data Scientist, Lenus Health
Dr Dimitra Gkatzia, Associate Professor, Edinburgh Napier University
Denis Hamill, Chief Data Officer, Police Scotland
Dr Georgios Leontidis, Director of Interdisciplinary Centre for Data and AI, University of Aberdeen
Brian Mullins, CEO, Mind Foundry
Date posted
30 March 2022