Even experts are too quick to rely on AI explanations, study finds
As AI systems increasingly inform decision-making in health care, finance, law, and criminal justice, it’s critical that they provide human-understandable justifications for their behavior. “Explainable AI” as a field has gained momentum as regulators turn a critical eye toward black-box AI systems — and their creators. But how a person’s background can shape perceptions of AI explanations is a question that remains underexplored.
A new study coauthored by researchers at Cornell, IBM, and the Georgia Institute of Technology aims to shed light on the intersection of interpretability and explainable AI. Focusing on two groups — one with and one without an AI background — they found that both tended to over-trust AI systems and misinterpret explanations for how AI systems arrived at their decisions.
“These insights have potential negative implications like susceptibility to harmful manipulation of user trust,” the researchers wrote. “By bringing conscious awareness to how and why AI backgrounds shape perceptions of potential creators and consumers in explainable AI, our work takes a formative step in advancing a pluralistic human-centered explainable AI discourse.”
Although there’s a lack of consensus in the AI community on the meaning of explainability and interpretability, work in explainable AI shares the common goal of making systems’ predictions and behaviors understandable to people. For example, explanation generation methods, which leverage a simpler version of the model being explained or meta-knowledge about the model, aim to elucidate a model’s decisions by providing plain-English rationales that non-AI experts can understand.
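To make the idea concrete, here is a minimal sketch of one such method. The black-box model, the loan-scoring scenario, and the feature names are all invented for illustration and are not taken from the study; the sketch simply probes the model locally and turns its strongest influence into a plain-English rationale.

```python
# Hypothetical black-box model: scores a loan applicant.
# (The formula and features are illustrative assumptions, not a real system.)
def black_box_score(income, debt, age):
    return 0.6 * income - 0.9 * debt + 0.1 * age

def explain(point, feature_names, model, eps=1.0):
    """Estimate each feature's local influence by finite differences,
    then emit a plain-English rationale from the strongest one."""
    base = model(*point)
    influences = []
    for i, name in enumerate(feature_names):
        bumped = list(point)
        bumped[i] += eps  # perturb one feature at a time
        influences.append((name, (model(*bumped) - base) / eps))
    # Pick the feature with the largest absolute influence on the score.
    name, weight = max(influences, key=lambda t: abs(t[1]))
    direction = "raises" if weight > 0 else "lowers"
    return f"'{name}' most strongly {direction} the score (weight ~ {weight:.2f})"

print(explain((50.0, 20.0, 30.0), ["income", "debt", "age"], black_box_score))
# -> 'debt' most strongly lowers the score (weight ~ -0.90)
```

The rationale string is the kind of output a non-expert can read directly, which is precisely what makes such explanations persuasive even when, as the study found, readers over-trust them.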
Building on prior research, the coauthors hypothesized that factors like cognitive load and general trust in AI could affect how users perceive AI explanations. For example, a 2020 study published in the Proceedings of the ACM on Human-Computer Interaction found that explanations can create a false sense of security and over-trust in AI. And in another paper, researchers found that data scientists and business analysts perceived an AI system’s accuracy score differently, with analysts incorrectly treating the score as a measure of overall performance.
To test their theory, the Cornell, IBM, and Georgia Institute of Technology coauthors designed an experiment in which participants watched virtual robots carry out identical sequences of actions which differed only in the way the robots “thought out loud” about their actions. In the video game-like scenario, the robots had to navigate through a field of rolling boulders and a river of flowing lava, retrieving essential food supplies for trapped space explorers.
Above: The video game-like environment the researchers created for their experiment.
One of the robots explained the “why” behind its actions in plain English, providing a rationale. Another robot stated its actions without justification — for example, “I will move right” — while a third only gave numerical values describing its current state.
Participants in the study — 96 college students enrolled in computer science and AI courses and 53 Amazon Mechanical Turk users — were asked to imagine themselves as the space explorers. Stuck on a different planet, they had to remain inside a protective dome, their only source of survival a remote supply depot with the food supplies.
The researchers found that participants in both groups tended to place “unwarranted” faith in numbers. The AI group participants often ascribed more value to numbers than was justified, while the non-AI group participants believed the numbers signaled intelligence even when they couldn’t parse their meaning. In other words, even among the AI group, people associated the mere presence of mathematical representations with logic, intelligence, and rationality.
“The AI group overly ascribed diagnostic value in [the robot’s] numbers even when their meaning was unclear,” the researchers concluded in the study. “Such perceptions point to how the modality of expression … impacts perceptions of explanations from AI agents, where we see projections of normative notions (e.g., objective versus subjective) in judging intelligence.”
Both groups preferred the robots that communicated with language, particularly the robot that gave a rationale for its actions. But this more human-like style of communication caused participants to attribute emotional intelligence to the robots, even in the absence of evidence that the robots were making the right decisions.
The takeaway, according to the researchers, is that the design of AI explanations is as much in the eye of the beholder as it is in the mind of the designer. People’s explanatory intent and common heuristics matter just as much as the designer’s intended goal, and as a result, people may find explanatory value where designers never intended it and use explanations in ways that serve their own purposes.
“Contextually understanding the misalignment between designer goals and user intent is key to fostering effective human-AI collaboration, especially in explainable AI systems,” the coauthors wrote. “As people learn specific ways of doing, it also changes their own ways of knowing — in fact, as we argue in this paper, people’s AI background impacts their perception of what it means to explain something and how … The ‘ability’ in explain-ability depends on who is looking at it and emerges from the meaning-making process between humans and explanations.”
Importance of explanations
The results are salient in light of efforts by the European Commission’s High-Level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” Explainability continues to present major hurdles for companies adopting AI: according to FICO, 65% of employees can’t explain how AI models arrive at their decisions or predictions.
Absent carefully designed explainability tools, AI systems have the potential to inflict real-world harm. For example, a Stanford study suggests that clinicians are misusing AI-powered medical devices for diagnosis, leading to outcomes that differ from what would be expected. A more recent report from The Markup uncovered biases in U.S. mortgage-approval algorithms that led lenders to turn down applicants of color more often than white applicants.
The coauthors advocate taking a “sociotechnically informed” approach to AI explainability, incorporating factors such as socio-organizational context into the design process. They also suggest investigating ways to mitigate manipulation of perceptual differences in explanations, as well as educational efforts to ensure that experts take a more critical view of AI systems.
“Explainability of AI systems is crucial to instil appropriate user trust and facilitate recourse. Disparities in AI backgrounds have the potential to exacerbate the challenges arising from the differences between how designers imagine users will appropriate explanations versus how users actually interpret and use them,” the researchers wrote.