TY - GEN
T1 - Interactivity x Explainability
T2 - 2025 CHI Conference on Human Factors in Computing Systems, CHI EA 2025
AU - Panigrahi, Indu
AU - Kim, Sunnie S.Y.
AU - Liaqat, Amna
AU - Jinturkar, Rohan
AU - Russakovsky, Olga
AU - Fong, Ruth
AU - Abtahi, Parastoo
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/4/26
Y1 - 2025/4/26
N2 - Explanations for computer vision models are important tools for interpreting how the underlying models work. However, they are often presented in static formats, which pose challenges for users, including information overload, a gap between semantic and pixel-level information, and limited opportunities for exploration. We investigate interactivity as a mechanism for tackling these issues in three common explanation types: heatmap-based, concept-based, and prototype-based explanations. We conducted a study (N=24) using a bird identification task with participants of diverse technical and domain expertise. We found that while interactivity enhances user control, facilitates rapid convergence to relevant information, and allows users to expand their understanding of the model and explanation, it also introduces new challenges. To address these, we provide design recommendations for interactive computer vision explanations, including carefully selected default views, independent input controls, and constrained output spaces.
AB - Explanations for computer vision models are important tools for interpreting how the underlying models work. However, they are often presented in static formats, which pose challenges for users, including information overload, a gap between semantic and pixel-level information, and limited opportunities for exploration. We investigate interactivity as a mechanism for tackling these issues in three common explanation types: heatmap-based, concept-based, and prototype-based explanations. We conducted a study (N=24) using a bird identification task with participants of diverse technical and domain expertise. We found that while interactivity enhances user control, facilitates rapid convergence to relevant information, and allows users to expand their understanding of the model and explanation, it also introduces new challenges. To address these, we provide design recommendations for interactive computer vision explanations, including carefully selected default views, independent input controls, and constrained output spaces.
KW - Computer Vision
KW - Explainable AI (XAI)
KW - Explanations
KW - Human-Centered AI
KW - Interpretability
UR - http://www.scopus.com/inward/record.url?scp=105005744009&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105005744009&partnerID=8YFLogxK
U2 - 10.1145/3706599.3719730
DO - 10.1145/3706599.3719730
M3 - Conference contribution
AN - SCOPUS:105005744009
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI EA 2025 - Extended Abstracts of the 2025 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
Y2 - 26 April 2025 through 1 May 2025
ER -