TY - GEN
T1 - xxAI - Beyond Explainable Artificial Intelligence
AU - Holzinger, Andreas
AU - Goebel, Randy
AU - Fong, Ruth
AU - Moon, Taesup
AU - Müller, Klaus-Robert
AU - Samek, Wojciech
N1 - Publisher Copyright:
© 2022, The Author(s).
PY - 2022
Y1 - 2022
N2 - The success of statistical machine learning from big data, especially of deep learning, has made artificial intelligence (AI) very popular. Unfortunately, especially with the most successful methods, the results are very difficult to comprehend by human experts. The application of AI in areas that impact human life (e.g., agriculture, climate, forestry, health, etc.) has therefore led to a demand for trust, which can be fostered if the methods can be interpreted and thus explained to humans. The research field of explainable artificial intelligence (XAI) provides the necessary foundations and methods. Historically, XAI has focused on the development of methods to explain the decisions and internal mechanisms of complex AI systems, with much initial research concentrating on explaining how convolutional neural networks produce image classification predictions by producing visualizations that highlight which input patterns are most influential in activating hidden units, or are most responsible for a model’s decision. In this volume, we summarize research that outlines and takes next steps towards a broader vision for explainable AI in moving beyond explaining classifiers via such methods, to include explaining other kinds of models (e.g., unsupervised and reinforcement learning models) via a diverse array of XAI techniques (e.g., question-answering systems, structured explanations). In addition, we also intend to move beyond simply providing model explanations to directly improving the transparency, efficiency, and generalization ability of models. We hope this volume presents not only exciting research developments in explainable AI but also a guide for which areas to focus on next within this fascinating and highly relevant research field as we enter the second decade of the deep learning revolution. This volume is an outcome of the ICML 2020 workshop on “XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.”
AB - The success of statistical machine learning from big data, especially of deep learning, has made artificial intelligence (AI) very popular. Unfortunately, especially with the most successful methods, the results are very difficult to comprehend by human experts. The application of AI in areas that impact human life (e.g., agriculture, climate, forestry, health, etc.) has therefore led to a demand for trust, which can be fostered if the methods can be interpreted and thus explained to humans. The research field of explainable artificial intelligence (XAI) provides the necessary foundations and methods. Historically, XAI has focused on the development of methods to explain the decisions and internal mechanisms of complex AI systems, with much initial research concentrating on explaining how convolutional neural networks produce image classification predictions by producing visualizations that highlight which input patterns are most influential in activating hidden units, or are most responsible for a model’s decision. In this volume, we summarize research that outlines and takes next steps towards a broader vision for explainable AI in moving beyond explaining classifiers via such methods, to include explaining other kinds of models (e.g., unsupervised and reinforcement learning models) via a diverse array of XAI techniques (e.g., question-answering systems, structured explanations). In addition, we also intend to move beyond simply providing model explanations to directly improving the transparency, efficiency, and generalization ability of models. We hope this volume presents not only exciting research developments in explainable AI but also a guide for which areas to focus on next within this fascinating and highly relevant research field as we enter the second decade of the deep learning revolution. This volume is an outcome of the ICML 2020 workshop on “XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.”
KW - Artificial intelligence
KW - Explainability
KW - Explainable AI
KW - Machine learning
UR - http://www.scopus.com/inward/record.url?scp=85128906717&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85128906717&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-04083-2_1
DO - 10.1007/978-3-031-04083-2_1
M3 - Conference contribution
AN - SCOPUS:85128906717
SN - 9783031040825
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 10
BT - xxAI - Beyond Explainable AI - International Workshop, Held in Conjunction with ICML 2020, Revised and Extended Papers
A2 - Holzinger, Andreas
A2 - Goebel, Randy
A2 - Fong, Ruth
A2 - Moon, Taesup
A2 - Müller, Klaus-Robert
A2 - Samek, Wojciech
PB - Springer Science and Business Media Deutschland GmbH
T2 - International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, xxAI 2020, held in Conjunction with ICML 2020
Y2 - 18 July 2020 through 18 July 2020
ER -