The Feedback Problem
You just made a decision. Wrong.
An AI system shows you the correct answer but not why it was correct. You try again. Still wrong. The AI gives you another answer. You're learning nothing except that the machine knows more than you do.
This scenario plays out millions of times daily as artificial intelligence moves from recommending movies to making consequential decisions in medicine, finance, and education. But there's a problem: AI systems are notoriously opaque. They deliver answers without explanation, like an unhelpful teacher who simply marks answers wrong without showing the work.
Now researchers have discovered something surprising about how people learn from AI feedback—and it hinges on a deceptively simple intervention: explanations.
The Google Street View Experiment
The research team recruited 573 people online for what seemed like a straightforward game. Match photographs from Google Street View to their city of origin: Berlin, Hamburg, Tel Aviv, or Jerusalem. Four cities, ten rounds, immediate feedback from an AI system after each guess.
Half the participants received only the AI's answer. The other half got something extra—visual highlights showing which parts of each image the AI had deemed most important for its decision. Red brick buildings typical of Hamburg. Distinctive architectural features. Street patterns.
The task was deliberately non-routine. You couldn't simply apply a memorized rule. Success required observation, pattern recognition, and the ability to extract meaning from ambiguous visual information. Precisely the kind of challenge where feedback matters most.
What emerged from the data revealed a more nuanced story than anyone expected.
Knowledge Gaps and Learning Curves
Users with less prior knowledge—those who'd never visited the countries in question—reported dramatically higher learning outcomes when they received explanations alongside AI decisions. These novices found the explanations informative. Relevant. Useful. The visual highlights revealed patterns they might have missed, providing concrete guidance for future decisions.
More experienced users showed a different pattern. They didn't perceive the explanations as particularly informative. Many felt they already knew what to look for. The AI was simply confirming what they'd figured out themselves.
Yet here's where it gets interesting: both groups improved their task performance when given explanations. Novices learned consciously, building new mental models from the information provided. Experts appeared to learn unconsciously, benefiting from the additional cognitive engagement even when they didn't subjectively value it.
The mechanisms differed by experience level, but the outcomes converged.
The Informativeness Puzzle
Why do explanations feel more helpful to novices than to experts, even when both groups benefit?
The answer lies in what researchers call informativeness—the perceived value of information in AI feedback. When you already understand the factors driving a decision, additional explanation feels redundant. When you're struggling to identify relevant patterns, the same explanation feels revelatory.
Think of it as the difference between someone telling you something you already know versus solving a mystery you couldn't crack. Same information, vastly different cognitive impact.
The study found that informativeness fully mediated the effect of explanations on learning outcomes for novices. Translation: explanations helped them learn specifically because they made the feedback more valuable from the user's perspective. Remove that perceived value and the learning benefit disappears.
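For readers unfamiliar with mediation analysis, here is a toy sketch of what "fully mediated" means, using synthetic data and plain least-squares regression. The variable names and effect sizes are invented for illustration; they are not taken from the study. The signature of full mediation is that the total effect of explanations on learning shrinks toward zero once perceived informativeness is controlled for.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Synthetic data with a pure explanation -> informativeness -> learning chain.
explanation = rng.integers(0, 2, n).astype(float)        # 0 = AI decision only, 1 = decision + explanation
informativeness = 0.8 * explanation + rng.normal(0, 0.5, n)
learning = 0.9 * informativeness + rng.normal(0, 0.5, n)  # learning depends ONLY on informativeness

def ols(y, *xs):
    """Ordinary least squares; returns [intercept, coef_1, coef_2, ...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Total effect of explanations on learning (path c).
c = ols(learning, explanation)[1]

# Direct effect of explanations, controlling for informativeness (path c').
c_prime = ols(learning, explanation, informativeness)[1]

# Under full mediation: c is clearly positive, c' collapses toward zero.
print(f"total effect c = {c:.2f}, direct effect c' = {c_prime:.2f}")
```

In the study's terms: for novices, the explanation condition behaves like `explanation` here, and once their perceived informativeness is accounted for, the remaining direct effect on subjective learning is negligible.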
For experts, performance improved through a direct path that bypassed their conscious appreciation of the information. They engaged with the explanations, processed them, incorporated subtle adjustments—all while maintaining they didn't really need the help.
The Black Box Problem
This matters because AI opacity represents one of the field's central challenges. Deep learning models can identify subtle patterns invisible to human observers, but they struggle to articulate their reasoning in ways humans find intelligible. The result: powerful tools that function as black boxes, delivering answers without justification.
Explainable AI attempts to crack open these boxes. The most common approach—post-hoc explanations—works backward from an AI decision to highlight which inputs most influenced the output. For image classification, this typically means highlighting image regions. For text analysis, specific words or phrases. For medical diagnosis, particular symptoms or test results.
The researchers used LIME, a model-agnostic explanation method that generates visual overlays showing relevant image regions. They fine-tuned parameters through iterative testing with real users, balancing robustness with interpretability. Not every explanation method works equally well.
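The core idea behind LIME can be sketched in a few lines. This is a simplified, from-scratch illustration of the general technique (perturb the input, query the black box, fit a locally weighted linear surrogate), not the study's actual pipeline or the `lime` library's API; the black-box model and all parameters here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: only the first feature actually matters.
def black_box(X):
    return (X[:, 0] > 0.5).astype(float)

def lime_style_importance(x, predict, n_samples=2000, kernel_width=0.75):
    """Explain predict(x) by fitting a locally weighted linear surrogate."""
    # 1. Perturb the instance with Gaussian noise to sample its neighborhood.
    X_pert = x + rng.normal(scale=0.3, size=(n_samples, x.size))
    y_pert = predict(X_pert)
    # 2. Weight neighbors by proximity to x (exponential kernel).
    dists = np.linalg.norm(X_pert - x, axis=1)
    w = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 3. Weighted least squares: fit an interpretable linear model locally.
    Xd = np.column_stack([np.ones(n_samples), X_pert])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * Xd, sw * y_pert, rcond=None)
    return coef[1:]  # per-feature local importance (intercept dropped)

x = np.array([0.5, 0.5, 0.5])
importance = lime_style_importance(x, black_box)
# The surrogate should assign the largest weight to the first feature.
```

For images, the same recipe applies with superpixels toggled on and off in place of Gaussian noise, and the resulting coefficients rendered as the visual overlays the participants saw.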
Focus Groups Unpack the Findings
To validate and extend their quantitative findings, the researchers conducted focus groups with AI experts and users. The conversations revealed mechanisms the experiment couldn't capture.
Several experts noted that explanations activate existing knowledge structures. Even experienced users sometimes overlook relevant features. A visual highlight can trigger recall: "Oh right, I should be looking at the street signs too." This knowledge activation happens rapidly, often below conscious awareness.
Users described a parallel phenomenon. Explanations forced them to slow down and think deliberately rather than relying on intuition. The mere act of examining highlighted features encouraged more systematic processing, independent of whether the explanation revealed genuinely novel information.
One particularly insightful comment captured the expert paradox: "Users with more prior knowledge think they already know what to pay attention to. But subconsciously, they still learn from the explanations."
When Feedback Meets Theory
The findings align with classical learning theory while extending it in new directions. Feedback Theory posits that effective feedback helps learners reduce the gap between their current knowledge and a reference standard. Prior knowledge determines the size of that gap.
For novices, the gap is large. Explanations provide crucial information for closing it. For experts, the gap is small. Explanations appear redundant, yet they still drive performance improvements through mechanisms that don't require conscious recognition of their value.
This suggests Feedback Theory should more carefully distinguish between subjective and objective learning outcomes. Perceived learning and actual performance improvements don't always correlate. People don't always recognize when they're learning, particularly when the learning contradicts their self-image as already knowledgeable.
The research also highlights the importance of local versus global explanations. Local explanations clarify individual decisions—why this particular image is Hamburg. Global explanations describe overall system behavior—what features generally distinguish Hamburg from Berlin. The study used local explanations, appropriate for novices learning specific patterns. Experts might benefit more from global explanations that reveal broader decision strategies.
Real-World Implications
Organizations introducing AI systems face a choice. Provide AI decisions alone or invest in developing explanation capabilities alongside those decisions.
The research suggests that investment in explanations pays dividends across user populations. Less experienced employees gain the most subjective value, perceiving the feedback as more helpful and learning more consciously from it. But even experienced employees improve their performance, albeit through subtler mechanisms.
This has particular relevance for training and onboarding. AI-powered learning systems could accelerate skill acquisition among novices by providing rich, interpretable feedback that highlights decision-relevant features. The same systems could maintain or enhance expert performance by stimulating continued engagement and unconscious refinement.
The findings also matter for contexts where understanding AI reasoning is legally required or professionally essential. Medical diagnosis. Credit decisions. Hiring recommendations. In these domains, explanations become necessary regardless of user experience level.
The Boundary Conditions
Not every task benefits equally from explanations in AI feedback.
The study focused on non-routine inference tasks—problems with definite right answers but no straightforward procedure for reaching them. Routine tasks with clear algorithms don't create the same learning opportunities. The explanations become superfluous once the procedure is mastered.
Focus group participants highlighted another boundary condition: task complexity. Overly simple tasks make explanations feel unnecessary. Users quickly learn the pattern without AI assistance. The sweet spot for explanations lies in tasks complex enough to challenge users but not so opaque that explanations can't clarify the relevant features.
Explanation quality matters enormously. Poorly designed explanations confuse rather than clarify. The researchers invested substantial effort optimizing their visual highlights for interpretability. Organizations can't simply add any explanation method and expect positive results.
The Unconscious Learning Paradox
Perhaps the study's most intriguing finding is the dissociation between perceived informativeness and actual performance improvement for experienced users. They insist the explanations aren't particularly helpful while simultaneously benefiting from them.
This phenomenon—unconscious learning from feedback perceived as uninformative—reveals something fundamental about how experts process information. They maintain internal narratives about their expertise that sometimes blind them to ongoing learning opportunities. The explanations help despite, not because of, expert awareness of that help.
It suggests organizations shouldn't rely solely on user satisfaction metrics when evaluating AI explanation systems. Some of the most valuable learning happens beneath conscious recognition. Performance improvements provide a more reliable indicator of explanation effectiveness than subjective ratings alone.
What This Changes
The research demonstrates that AI systems can teach, not just assist. The distinction matters.
AI assistance provides answers. AI teaching provides understanding that transfers to future decisions. Assistance creates dependency. Teaching builds capability. The difference is explanation.
As AI becomes more prevalent across industries, this distinction will shape how organizations implement these systems. Will they optimize purely for decision accuracy, treating humans as endpoints who accept or reject AI recommendations? Or will they design systems that treat humans as learners who grow more capable through interaction?
The data argues for the second approach. Explanations cost little to implement once the underlying AI model exists. The benefits accrue across user populations, improving both subjective learning experiences and objective task performance.
For researchers, the findings open new directions. What types of explanations work best for different user populations? How do explanation needs change as users gain experience? Can AI systems adapt their explanations dynamically based on inferred user knowledge? When do explanations become counterproductive, overwhelming users with irrelevant detail?
The Path Forward
The future of human-AI collaboration depends partly on whether AI systems can communicate their reasoning effectively. This research shows that such communication enhances learning, but it also reveals how much we still don't understand about the mechanisms.
Why do experts improve unconsciously? What cognitive processes occur during that unconscious learning? How long do the benefits persist after users stop receiving explained feedback? Do different types of explanations activate different learning mechanisms?
The answers will shape how we design AI systems for education, professional development, and decision support. They'll determine whether AI amplifies human capability or merely substitutes for it.
One thing is clear: explanations matter. They transform opaque decisions into learning opportunities, helping novices build expertise and experts maintain their edge. In a world increasingly shaped by algorithmic decisions, that transformation isn't optional.
It's essential.
Credit & Disclaimer: This article is a popular science summary written to make peer-reviewed research accessible to a broad audience. All scientific facts, findings, and conclusions presented here are drawn directly and accurately from the original research paper. Readers are strongly encouraged to consult the full research article for complete data, methodologies, and scientific detail. The article can be accessed through https://doi.org/10.1080/0960085X.2024.2404028