How many times can a company afford to be wrong about the future? For most businesses, the answer is not many. Yet when artificial intelligence promises to predict customer behavior, optimize operations, and boost profits, executives rush to adopt it, often without understanding why most AI initiatives end in expensive failure.
A team of researchers embedded themselves inside a Swiss clothing company for months, watching as the firm struggled to transform raw data into smart decisions. What they discovered challenges everything we think we know about putting AI to work. The problem was not the algorithms. It was not the data quality. It was something far more fundamental about how humans and machines work together.
THE PROMISE THAT KEEPS BREAKING
Every year, businesses pour billions into AI systems designed to make better decisions faster. The pitch sounds irresistible. Feed historical data into sophisticated algorithms, and they will spot patterns invisible to human eyes. They will predict which customers will buy, which products will succeed, which strategies will win.
Reality tells a different story. Study after study shows that somewhere between 60% and 85% of AI projects never make it past the experimental stage. Those that do often get abandoned within months because they fail to deliver real value. Managers find the recommendations confusing. Employees do not trust the outputs. The systems become expensive digital paperweights.
The clothing company at the center of this research, TBô, seemed like an ideal candidate for AI success. Founded in 2019 and operating entirely online, it collected mountains of data about customer purchases, preferences, and behavior. It built its entire business model around co-creation, letting customers design products through surveys and feedback. The data was rich, relevant, and ready to analyze.
The leadership team envisioned AI systems that would segment customers with precision, predict who might stop buying, and identify which product ideas would become bestsellers. They imagined transforming their manual process of reading thousands of survey responses into automated intelligence that could guide every major business decision.
But having good intentions and good data was not enough.
THE HIDDEN OBSTACLES
The research team spent months working alongside TBô employees, developing three AI systems from scratch. Along the way, they documented every challenge, setback, and breakthrough. What emerged was a detailed map of why AI decision systems fail and what organizations must do differently.
The first shock came during the planning phase. TBô struggled to articulate exactly what problems they wanted AI to solve. Should the system prioritize sales or customer engagement? Both mattered, but AI algorithms demand precise objectives. You cannot optimize for everything simultaneously. The leadership team found themselves making difficult choices they had never confronted before.
Meanwhile, employees reacted with skepticism. Some worried about losing their jobs. Others questioned whether algorithms could truly understand customer nuances the way experienced staff could. A few simply did not believe AI would work. Building the technology proved easier than building trust.
Then came the resource crunch. TBô posted job listings for AI specialists and data scientists. The positions sat empty for nine months. Nobody with the right skills wanted to join a small startup. The company could not afford the salaries that tech giants offered. Without expertise, progress stalled.
Data presented its own maze of problems. TBô collected information through multiple platforms: their website, email surveys, social media campaigns. Each system used different customer identifiers. Joining datasets required finding common threads. If someone used different email addresses across platforms, the AI saw them as separate people. The resulting fragmentation corrupted predictions.
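The identifier mismatch is easy to reproduce. Here is a hypothetical pandas sketch (invented names and numbers, not TBô's actual data) in which one customer who uses two email addresses splits into two incomplete profiles after a join:

```python
# Hypothetical illustration: joining customer records from two platforms on
# email address. If a customer uses different addresses, the join treats
# them as two separate people, fragmenting their history.
import pandas as pd

web_orders = pd.DataFrame({
    "email": ["ana@mail.com", "ben@mail.com"],
    "orders": [3, 1],
})
survey_responses = pd.DataFrame({
    "email": ["ana@mail.com", "ben@work.com"],  # Ben used a second address
    "responses": [2, 5],
})

# An outer join keeps unmatched rows, so the fragmentation is visible:
# Ana's records line up, while Ben appears twice with incomplete profiles.
merged = web_orders.merge(survey_responses, on="email", how="outer")
print(merged)
```

Every downstream prediction then sees "Ben" as two weakly engaged customers instead of one active one, which is exactly the corruption the team had to untangle.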
Building the actual AI models demanded constant collaboration between business experts who understood customers and data scientists who understood algorithms. The data scientists needed to learn about fashion, co-creation, and seasonal trends. The business team needed to understand what AI could and could not do. Bridging this knowledge gap took weeks of patient explanation and iteration.
THE SYSTEMS TAKE SHAPE
Eventually, three AI decision models emerged. The first predicted which customers were most likely to participate in product co-creation based on their purchase history. Instead of sending surveys to everyone equally, TBô could now target recent buyers who showed strong engagement patterns. Response rates jumped from 0.2% to 4.4% in experiments.
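The idea behind this first model can be sketched as a propensity score. Everything below is invented for illustration (the feature names, the synthetic labels, and the choice of a logistic-regression classifier); the study's actual model may differ:

```python
# Sketch: score each customer's likelihood of joining co-creation from
# purchase-history features, then survey only the top scorers instead of
# blasting the whole list. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: days since last purchase, number of past orders.
recency = rng.integers(1, 365, n)
orders = rng.integers(1, 10, n)
X = np.column_stack([recency, orders])
# Synthetic labels: recent, frequent buyers participate more often.
p = 1 / (1 + np.exp(0.01 * recency - 0.3 * orders))
y = rng.random(n) < p

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Target only the top 10% of customers by predicted propensity.
targeted = np.argsort(scores)[-n // 10:]
print(f"targeting {len(targeted)} of {n} customers")
```

Concentrating outreach on high-propensity customers is what lets a fixed survey budget produce a much higher response rate.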
The second model compared purchasing behavior between customers who actively co-created products and those who just bought. It revealed that co-creators had significantly higher lifetime value. This insight justified investing more resources in the co-creation community, even though running surveys cost money.
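The second model's core comparison reduces to a grouped average, shown here with made-up numbers rather than TBô's figures:

```python
# Sketch of the lifetime-value comparison: average total spend for
# co-creators versus buy-only custombers. The figures are invented.
import pandas as pd

customers = pd.DataFrame({
    "co_creator": [True, True, True, False, False, False],
    "lifetime_value": [420.0, 380.0, 510.0, 190.0, 240.0, 160.0],
})

# Mean lifetime value per group; in this toy data, co-creators come out
# well ahead, mirroring the direction of the study's finding.
ltv = customers.groupby("co_creator")["lifetime_value"].mean()
print(ltv)
```

A gap like this is what let TBô argue that the cost of running surveys was an investment rather than an expense.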
The third model analyzed thousands of text responses explaining why customers did not make repeat purchases. Using topic modeling, it automatically grouped complaints into themes: too expensive, limited product variety, poor customer service. Each theme pointed to specific improvements TBô could make.
These systems worked. The predictions proved accurate. The recommendations made business sense. Yet implementation revealed another layer of challenges.
TRUST AND TRANSPARENCY
When the AI suggested that customers who spent more money were actually less likely to participate in surveys, managers balked. It contradicted their intuition. Surely the most engaged customers would be big spenders, right? But the data showed otherwise. Budget-conscious shoppers who felt invested in the brand through co-creation were more vocal.
This mismatch between algorithmic insight and human expectation created tension. Should managers trust the AI or their experience? The research team added visualization tools showing exactly how the model reached its conclusions. Seeing the logic mapped out helped, but trust remained fragile.
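One lightweight form of such transparency, sketched here with a purely synthetic model: expose a linear model's coefficients so managers can see which inputs push a prediction up or down. The data below is fabricated to echo the counterintuitive finding, and real explanation tooling goes well beyond coefficient tables:

```python
# Sketch: fit a logistic model on synthetic data and print its
# coefficients as a minimal "how did it decide?" view. The labels are
# generated so that engagement helps participation while higher spend
# slightly hurts it, mirroring the surprising pattern in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
spend = rng.normal(100, 30, n)       # hypothetical: total spend
engagement = rng.normal(5, 2, n)     # hypothetical: survey engagement
logit = 0.8 * engagement - 0.02 * spend - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([spend, engagement]), y
)
for name, coef in zip(["spend", "engagement"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

Even this crude view makes the model's logic arguable rather than opaque: a manager can see that engagement, not spend, drives the score, and challenge that if it seems wrong.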
Another problem emerged around fairness. The AI effectively divided customers into groups receiving different treatment. Some got targeted surveys. Others got ignored. Was this ethical? What if the algorithm was biased in ways nobody could detect? TBô had to establish clear guidelines about what the AI could and could not do.
The COVID pandemic struck during testing, throwing another wrench into validation. Consumer behavior shifted dramatically. People shopped more online. Savings rates increased. Fashion priorities changed. Did performance gains come from the AI or from pandemic-induced trends? Separating these effects proved nearly impossible.
THE SIX PRINCIPLES
From this messy, complicated, frustrating, and ultimately successful project, the researchers distilled six design principles that any organization can follow when building AI decision systems.
First, alignment matters more than technology. Before writing a single line of code, companies need a clear strategic roadmap connecting AI initiatives to business objectives. This roadmap must specify measurable use cases, identify required expertise, assess technical feasibility, and acknowledge likely obstacles. TBô created a detailed data roadmap that secured buy-in from employees and investors alike.
Second, synergy between components determines success. The best AI algorithm will fail if it uses wrong data or produces incomprehensible outputs. Organizations must iteratively refine input data, model selection, and presentation format until all three work together seamlessly. TBô merged multiple datasets, tested different algorithms, and created visualizations that made predictions understandable.
Third, ethics cannot be an afterthought. Companies need governance frameworks defining acceptable AI uses before deployment. This includes adopting regulatory guidelines, developing internal AI principles, and establishing auditing processes. TBô followed emerging European Union regulations and created their own ethical standards.
Fourth, humans must remain in the loop. AI excels at processing vast amounts of data, but humans bring contextual knowledge, ethical judgment, and accountability. The most effective systems combine algorithmic recommendations with human decision authority. TBô built interfaces allowing managers to adjust parameters and reject recommendations when circumstances warranted.
Fifth, continuous learning beats perfect launches. AI systems improve over time as they accumulate more data and receive feedback. Organizations should embrace iterative development, accept initial imperfections, and plan for ongoing refinement. TBô updated models quarterly based on new information and changing business conditions.
Sixth, openness accelerates progress. Developing AI from scratch costs too much for most organizations. Using open-source code, pre-trained models, and industry partnerships dramatically reduces expenses while providing access to cutting-edge capabilities. TBô leveraged free machine learning libraries and collaborated with university researchers.
WHAT THIS MEANS FOR BUSINESS
These principles matter because AI is not going away. It is becoming more powerful, more accessible, and more necessary for competitive survival. Companies that master AI-augmented decision making will outmaneuver rivals who rely solely on human intuition or who abandon AI after early failures.
But mastery requires understanding that AI transformation is fundamentally organizational, not just technological. It demands cultural change, skill development, process redesign, and mindset shifts. The technology itself is often the easiest part.
For small businesses and startups, these findings are particularly encouraging. You do not need million-dollar budgets or teams of PhDs to use AI effectively. Open-source tools, cloud computing, and academic partnerships level the playing field. What you do need is clarity about objectives, commitment to ethical use, and patience for iterative improvement.
For policymakers, the research highlights the need for balanced AI regulation. Rules should protect against genuine harms like bias and privacy violations without stifling beneficial innovation. The most effective approach combines broad ethical principles with industry-specific guidance, allowing organizations to adapt frameworks to their contexts.
For workers worried about AI replacing them, the findings offer cautious optimism. The most successful AI systems augment human decision making rather than automate it away. Algorithms provide recommendations. Humans evaluate those recommendations using judgment, ethics, and contextual knowledge that machines lack. This partnership leverages the complementary strengths of both.
LOOKING AHEAD
As AI capabilities expand from simple predictions to generating text, images, and strategies, the challenges around trust, ethics, and human-machine collaboration will only intensify. Organizations implementing AI today are pioneering approaches that will shape how future systems work.
The fashion company in this study started with basic customer analytics. Within two years, they were using AI to guide major business decisions about marketing, product development, and customer engagement. The journey was neither quick nor easy, but it was transformative.
Other industries face similar opportunities and obstacles. Healthcare providers use AI to diagnose diseases but struggle with liability when algorithms err. Financial institutions deploy AI for fraud detection while navigating fairness regulations. Manufacturers optimize supply chains but grapple with explaining automated decisions to workers.
What ties these diverse applications together is a common truth: AI works best when organizations treat it not as magic but as a powerful tool requiring thoughtful design, ethical oversight, continuous refinement, and human wisdom. The algorithms themselves are neutral. How we choose to develop and deploy them determines whether AI enhances human capability or diminishes it.
The gap between AI hype and AI reality remains vast. Closing that gap requires moving beyond simplistic narratives of automation and superintelligence toward practical understanding of what makes AI useful in real organizations facing real constraints. It requires acknowledging failures alongside successes, documenting challenges alongside breakthroughs, and sharing lessons learned across industries and contexts.
This research provides exactly that honest accounting. By spending months inside one company watching AI systems succeed and struggle, the researchers captured insights that surveys and interviews never could. They saw how theoretical principles collide with practical realities. They documented the gap between what AI promises and what it delivers. Most importantly, they showed that closing that gap is possible with the right approach.
For businesses standing at the threshold of AI adoption, wondering whether to take the plunge, this research offers both encouragement and caution. Yes, AI can transform decision making. Yes, the benefits can be substantial. But no, it will not be easy. And no, technology alone will not be enough. Success requires equal parts algorithmic sophistication and organizational wisdom.
The choice is not whether to use AI but how to use it well. That distinction makes all the difference.
PUBLICATION DETAILS: Year of Publication: 2025; Journal: European Journal of Information Systems; Publisher: Taylor & Francis Group; DOI: https://doi.org/10.1080/0960085X.2024.2330402
CREDIT & DISCLAIMER: This article is based on original research conducted by an international team of researchers at ETH Zurich and the University of Lausanne, both in Switzerland. Readers are strongly encouraged to consult the full research article for complete details, comprehensive data, methodology, and factual information. The original paper provides in-depth technical analysis and should be referenced for academic or professional purposes.






