Picture a mathematician staring at a complex knot, the kind sailors tie but rendered in mathematical space. For decades, experts have tried to connect what this knot looks like geometrically with its algebraic properties, the equations that describe it. These two views of the same object stubbornly refused to align. Then something unexpected happened: artificial intelligence spotted a pattern that human minds had missed.
This isn't science fiction. It's the story of how machine learning just helped solve real mathematical problems that have stumped experts for years, marking a turning point in one of humanity's oldest intellectual pursuits.
When Patterns Hide in Plain Sight
Mathematics advances through a cycle of discovery. First, mathematicians notice patterns. Then they formulate conjectures: educated guesses about what might always be true. Finally, they prove these conjectures, transforming them into theorems. For centuries, this process relied entirely on human intuition, supplemented more recently by computers that crunch numbers and test examples.
But here's the problem: some mathematical objects are so complex, with so many interacting parts, that patterns become nearly impossible for humans to spot. Imagine trying to find a specific melody hidden within an orchestra of a thousand instruments, where you don't know which instruments matter or even what you're listening for.
This is where the collaboration between DeepMind (a leading artificial intelligence research company) and mathematicians from the University of Oxford and University of Sydney becomes groundbreaking. Rather than using AI to replace mathematicians, they developed a framework where AI guides human intuition, acting like a powerful lens that brings hidden patterns into focus.
The Framework: A Partnership Model
The approach works like this. A mathematician suspects two mathematical properties might be related but can't see how. They feed thousands of examples into a machine learning system, which learns to predict one property from the other. If the AI succeeds far better than random chance, there's likely a real connection worth exploring.
But here's the clever part: the researchers don't stop at prediction. They use attribution techniques to understand what the AI is paying attention to. Think of it like asking the AI, "Which parts of this problem matter most for your answer?" This reveals which aspects of the mathematical object are most relevant, giving mathematicians crucial hints about where to focus their efforts.
Armed with these insights, mathematicians can formulate precise conjectures and work toward proofs. The process is iterative and collaborative. The AI doesn't do the mathematics; it guides human mathematicians to look in the right places.
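The loop described above — train a predictor, check it beats chance, then ask which inputs it relied on — can be made concrete with a toy sketch. Everything below is invented for illustration (the actual research used neural networks and saliency-style attribution on real mathematical datasets): we generate synthetic "invariants" in which only two of ten input features determine the target, train a simple classifier, confirm it outperforms chance, and then rank features by how much the model relies on them.

```python
# Toy sketch of the predict-then-attribute loop (illustrative only).
import math
import random

random.seed(0)

N_FEATURES = 10          # stand-ins for measured invariants
# Hidden ground truth: only features 2 and 7 matter.

def make_example():
    x = [random.uniform(-1, 1) for _ in range(N_FEATURES)]
    y = 1 if x[2] - x[7] > 0 else 0   # the pattern we hope to rediscover
    return x, y

data = [make_example() for _ in range(2000)]
train, test = data[:1500], data[1500:]

# Minimal logistic regression trained by stochastic gradient descent.
w = [0.0] * N_FEATURES
b = 0.0
lr = 0.1
for _ in range(30):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))
        g = p - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# Does the model beat the 50% chance baseline?
accuracy = sum(
    (sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (y == 1)
    for x, y in test
) / len(test)

# Crude attribution: for a linear model, |weight| ranks feature relevance.
ranked = sorted(range(N_FEATURES), key=lambda i: -abs(w[i]))
print(accuracy, ranked[:2])
```

If the top-ranked features match the ones that truly drive the target, the attribution step has told us where to look — which is exactly the role it played for the mathematicians.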
Victory in Knot Theory
The first breakthrough came in topology, the mathematical study of shapes and spaces. Knots, in the mathematical sense, are closed loops in three-dimensional space. Unlike the knots in your shoelaces, mathematical knots have no loose ends, so they can't be untied.
Mathematicians describe knots using two completely different languages. Geometric invariants measure properties of the knot's shape when you imagine it living in a curved hyperbolic space. Algebraic invariants are numbers and polynomials derived through abstract calculations. These two approaches come from entirely different mathematical traditions.
The team hypothesized that geometric properties might predict an algebraic property called the signature. They trained a neural network on data from about a million different knots, feeding it geometric measurements and asking it to predict the signature.
The AI succeeded with 78% accuracy, far above the 25% expected by chance. This strongly suggested a real connection existed. But what was it?
Using attribution techniques, the researchers discovered the AI was focusing on three specific geometric properties related to the "cusp shape" of the knot. These involved complex numbers called the meridional and longitudinal translations, which describe how certain curves wrap around the knot's geometry.
Guided by this discovery, the mathematicians defined a new quantity they called the "natural slope." Imagine the meridian of the knot as a curve on a donut-shaped surface. If you start at a point on this curve and travel perpendicular to it, you'll eventually return to the curve somewhere else. The natural slope measures how far along the curve you land from where you started. This simple geometric idea turned out to be the key.
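Concretely, the slope can be read off from the cusp translations mentioned earlier. The sketch below is hedged: it assumes the commonly quoted relation that the natural slope is the real part of the ratio of longitudinal to meridional translation (up to the paper's sign convention — consult the original for the exact definition), and the input numbers are invented for illustration, not taken from any actual knot.

```python
# Hedged sketch: computing a natural-slope-style quantity from the
# cusp's complex translation numbers. Sign convention and exact
# definition should be checked against the original paper.
def natural_slope(mu: complex, lam: complex) -> float:
    """Real part of the longitudinal/meridional translation ratio."""
    return (lam / mu).real

mu = complex(1.0, 0.9)    # hypothetical meridional translation
lam = complex(-3.2, 5.1)  # hypothetical longitudinal translation
print(natural_slope(mu, lam))
```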
The team proved a theorem: there exists a constant such that, for any hyperbolic knot, the difference between twice the signature and the natural slope is bounded by a quantity that depends only on the knot's volume and another geometric property called the injectivity radius.
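For readers who want the shape of the bound, the inequality can be written as follows. This is a paraphrase of the commonly quoted form; consult the paper for the precise constants and exponents.

```latex
\exists\, c > 0 \;\text{such that for every hyperbolic knot } K:\quad
\bigl|\, 2\,\sigma(K) - \operatorname{slope}(K) \,\bigr|
  \;\le\; c \cdot \operatorname{vol}(K) \cdot \operatorname{inj}(K)^{-3}
```

Here σ(K) is the signature, slope(K) the natural slope, vol(K) the hyperbolic volume, inj(K) the injectivity radius, and c an absolute constant.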
This is one of the first results ever to connect algebraic and geometric knot properties, opening new avenues for research that has already yielded additional insights about knot behavior.
Cracking the Kazhdan-Lusztig Code
The second breakthrough came in representation theory, a field that studies symmetry through linear algebra. At its heart are objects called Kazhdan-Lusztig polynomials, which encode deep information about symmetric groups (the mathematical formalization of all possible ways to rearrange a set of objects).
For 40 years, mathematicians have pursued the combinatorial invariance conjecture. This conjecture states that you should be able to calculate these polynomials from an unlabeled graph called a Bruhat interval, without needing to know which specific elements of the symmetric group you started with.
The problem? For non-trivial cases, these graphs are enormous and their structure is difficult to grasp. A Bruhat interval in the symmetric group on 9 elements can contain hundreds of thousands of nodes. Spotting patterns in such complexity exceeds human visual processing.
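To make the object concrete, here is a small illustrative sketch (not the researchers' code) that lists the elements of a Bruhat interval in a tiny symmetric group. It uses a classical comparison test, Ehresmann's tableau criterion: a permutation u lies below v in Bruhat order exactly when, for every prefix length k, the sorted first k values of u are entrywise at most the sorted first k values of v.

```python
# Illustrative sketch: enumerate a Bruhat interval [identity, w] in a
# small symmetric group via Ehresmann's tableau criterion.
from itertools import permutations

def bruhat_le(u, v):
    """True if u <= v in Bruhat order (tableau criterion)."""
    n = len(u)
    for k in range(1, n):
        if any(a > b for a, b in zip(sorted(u[:k]), sorted(v[:k]))):
            return False
    return True

def bruhat_interval(w):
    """All permutations u with identity <= u <= w."""
    n = len(w)
    return [u for u in permutations(range(1, n + 1)) if bruhat_le(u, w)]

# The interval from the identity up to the longest element of S4
# is the whole group: 24 permutations.
print(len(bruhat_interval((4, 3, 2, 1))))
```

Even this brute-force enumeration hints at the combinatorial explosion: the same computation for the symmetric group on 9 elements would have to sift through 362,880 permutations per interval.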
The research team trained a graph neural network to predict the coefficients of Kazhdan-Lusztig polynomials from Bruhat intervals. The model achieved reasonable accuracy, suggesting the conjecture might be correct. More importantly, experimenting with different ways to represent the graphs revealed that certain subgraphs might contain all the necessary information.
Using attribution techniques, the researchers noticed something curious. When they identified which parts of the graph the AI found most important, certain types of edges appeared far more often than expected. These were "extremal reflections," edges of a specific algebraic type that the network couldn't directly see because the graphs were unlabeled.
This observation led to a key insight: every Bruhat interval can be decomposed into two parts. One part is a hypercube (a high-dimensional generalization of a cube), and the other part is isomorphic to an interval in a smaller symmetric group. The team proved that the Kazhdan-Lusztig polynomial can be computed directly from these two components through an elegant formula.
Even more remarkably, they conjectured that any such decomposition works, not just the canonical one. If proven true, this would completely solve the 40-year-old combinatorial invariance conjecture for symmetric groups. The conjecture has been computationally verified for all intervals in symmetric groups on up to 7 elements (about 3 million cases) and for over 130,000 samples from larger groups.
Why This Changes Everything
These results matter far beyond the specific theorems proved. They demonstrate a fundamentally new way for humans and AI to work together in mathematics.
Previous attempts to automate mathematical discovery fell into two camps. Some systems generated genuinely useful conjectures but used methods so specific they couldn't generalize to other areas of mathematics. Others demonstrated general methods but produced conjectures that mathematicians found trivial or uninteresting.
This new framework bridges that gap. It's general enough to apply across different mathematical fields (as demonstrated by successes in both topology and representation theory), yet it produces results that mathematicians recognize as deep and valuable.
The key is keeping the mathematician in control. The AI doesn't generate conjectures directly. Instead, it serves as a pattern detection tool and an intuition guide. Mathematicians still choose what questions to ask, interpret the results, formulate conjectures, and construct proofs. But they do so with enhanced perception, able to see patterns in data that would otherwise remain invisible.
The Human Element
Mathematics has always relied heavily on intuition. The legendary Indian mathematician Ramanujan was called the "Prince of Intuition" for his uncanny ability to perceive deep truths. But intuition is limited by what the human mind can process and visualize.
The framework doesn't replace intuition. It augments it, much like a telescope augments vision. Telescopes don't make astronomers obsolete; they reveal celestial objects that would otherwise remain invisible. Similarly, this AI framework reveals mathematical patterns that exceed unaided human perception.
This is very different from how AI has succeeded in other domains. AlphaGo, DeepMind's famous Go-playing system, learned to play better than any human. But Go is a competitive game with a clear winner and loser. Mathematics is collaborative, with the goal of discovering truth. The role of AI is therefore not to compete with mathematicians but to assist them.
Looking Forward
The implications extend in several directions. First, this methodology can likely be applied to many other areas of mathematics where large datasets of mathematical objects exist and where patterns might be hiding in high-dimensional spaces.
Second, it offers a productive model for human-AI collaboration in other scientific fields. The core idea of using machine learning for pattern detection, attribution techniques for understanding, and human expertise for interpretation could work anywhere complex data contains hidden structure.
Third, it challenges us to rethink what tools mathematicians might use in the future. For centuries, mathematicians worked with pen and paper. Computers added computational power. Now machine learning adds pattern recognition at a scale and dimensionality beyond human capability. The next generation of mathematicians may routinely use these tools as naturally as current mathematicians use computer algebra systems.
There are limitations, of course. The approach requires generating large datasets of mathematical objects, which isn't always possible. The patterns must be detectable in examples small enough to compute, which excludes some types of problems. And in some domains, the relevant functions might be too complex for current machine learning methods to learn effectively.
The Deeper Meaning
Step back from the technical details for a moment. What we're witnessing is a new chapter in the ancient human quest to understand patterns and structure in the world.
Mathematics is unique among human endeavors. It deals with eternal truths. The Pythagorean theorem was true before Pythagoras discovered it and will remain true forever. Mathematical discovery is not invention but rather the uncovering of what already exists in the abstract realm of logical relationships.
For most of human history, we've explored this realm using only our minds, occasionally aided by physical tools for calculation. The introduction of computers in the mid-20th century expanded our reach but didn't fundamentally change our role. We still needed to tell the computer exactly what to do.
Machine learning changes this. For the first time, we have tools that can find patterns we didn't know to look for, in ways we didn't program explicitly. This doesn't diminish the human role; it enhances it. Mathematicians bring creativity, judgment, taste, and the ability to construct rigorous proofs. AI brings computational power and pattern recognition across vast datasets.
The knot theory result is particularly poetic. Knots have fascinated humans since prehistoric times. We've tied them for practical purposes (securing boats, binding objects) and aesthetic ones (decorative knots in art and culture). Mathematical knot theory, developed over the past century, seemed to reveal all their secrets. Yet here, in 2021, we discover a fundamental connection that was overlooked. It was hiding in plain sight, waiting for the right tool to illuminate it.
The representation theory result is equally significant. The combinatorial invariance conjecture has stood for 40 years. Partial progress was made, but the general case remained elusive. Now we have a conjectured solution with a beautiful form, verified across enormous numbers of test cases. Even if the final proof takes years, we have a clear target and strong evidence we're on the right track.
A Model for the Future
Perhaps the most important contribution of this work is the framework itself. The paper provides a template that other mathematicians can follow. The code has been made publicly available. The methodology is clearly described. This isn't a one-off result but a reproducible approach.
Other mathematicians are already beginning to explore applications. Could similar techniques reveal patterns in number theory, algebraic geometry, or combinatorics? Could they help crack other long-standing conjectures? The possibilities feel genuinely open.
It's worth noting what this is not. This is not AI solving mathematical problems independently. It's not computers proving theorems without human oversight. It's not the replacement of mathematical intuition with brute force computation.
Instead, it's the recognition that humans and AI have complementary strengths. Humans excel at creativity, high-level reasoning, judging what's interesting, and constructing rigorous arguments. AI excels at processing vast amounts of data, detecting subtle patterns in high-dimensional spaces, and tirelessly checking countless examples.
By combining these strengths thoughtfully, we achieve results that neither could reach alone. The mathematician benefits from enhanced perception. The AI benefits from guidance about what problems matter and how to interpret results.
The Path Ahead
Where does this lead? In the near term, we can expect more applications of this framework to open mathematical questions. Collaborations between AI researchers and mathematicians, once rare, will likely become more common.
In the longer term, these tools might become standard parts of mathematical training and practice. Graduate students might learn to use machine learning for pattern detection alongside traditional proof techniques. Research papers might routinely include AI-assisted discovery in their methodology sections.
There are interesting questions about how mathematical culture might adapt. Mathematics has traditionally been a highly individual pursuit, with most theorems attributed to single mathematicians or small groups. How do we credit AI contributions? If a machine learning system provides the key insight leading to a theorem, how should this be acknowledged?
These questions echo broader societal debates about AI and human achievement. But in mathematics, they may be easier to resolve. Mathematics cares ultimately about truth, not credit. A theorem is either correct or incorrect, regardless of how it was discovered. The value lies in the knowledge gained, not the method of gaining it.
Why You Should Care
You might be thinking: this is fascinating, but why does it matter to me if I'm not a mathematician?
Here's why. Mathematics underpins nearly every aspect of modern life. The phone in your pocket, the bridge you drive across, the encryption protecting your financial data, the algorithms recommending your next video—all rely on mathematics.
Every advance in pure mathematics, no matter how abstract, has potential applications. Knot theory, which seems purely theoretical, has applications in molecular biology (understanding how DNA knots and unknots), physics (studying quantum field theories), and computer science (analyzing algorithms).
Representation theory has connections to particle physics, cryptography, and quantum computing. The mathematical structures studied here aren't mere abstractions; they describe symmetries fundamental to the universe.
More broadly, this work demonstrates that AI can augment human intelligence in domains requiring creativity and insight, not just pattern matching. If AI can help mathematicians discover new theorems, what else might human-AI collaboration achieve? Could it accelerate drug discovery, materials science, climate modeling, or economic theory?
The framework shown here—using AI to detect patterns, attribution techniques to understand them, and human expertise to interpret and act on insights—is a template that could work far beyond mathematics.
The Wonder of Discovery
There's something deeply moving about mathematical discovery. Unlike scientific experiments that might yield different results, or artistic creations that reflect individual vision, mathematical truths are universal and eternal. When you prove a theorem, you're uncovering something that has always been true and will always remain true, across all times and places.
The researchers working on these problems experienced something that has driven mathematicians for millennia: the thrill of seeing something no one has seen before. The knot theorists saw a connection between algebra and geometry that had eluded generations of experts. The representation theorists saw structure in enormous graphs that pointed toward resolving a 40 year old conjecture.
These moments of insight feel like magic, even to those experiencing them. The sensation of suddenly understanding something previously opaque, of seeing how pieces fit together in a way that now seems obvious but wasn't moments before—this is the joy of mathematics.
What this work shows is that AI can facilitate these moments without diminishing them. The human mathematicians still experienced that joy of discovery. They still had to exercise creativity, judgment, and rigor. The AI simply helped them look in the right places.
In a sense, this mirrors how mathematics has always worked. Mathematicians build on each other's work. They use theorems proved by predecessors as stepping stones to new results. They consult tables of data, study examples, and seek advice from colleagues. AI is just a new, powerful tool in this collaborative endeavor.
A Glimpse of Things to Come
This research offers a glimpse of a future where human-AI collaboration becomes routine and transformative. It shows that AI need not threaten human expertise but can enhance it in ways that expand what's possible.
The theorems discovered here are real contributions to mathematics. They will be cited in future papers, built upon by other researchers, and may lead to further unexpected insights. They join the great chain of mathematical knowledge stretching back thousands of years.
But beyond the specific results, this work establishes a new way of doing mathematics. It proves that machine learning can be more than a computational tool; it can be a partner in the creative process of discovery.
As these methods mature and spread, we may see an acceleration in mathematical progress. Problems that would have taken decades to crack might yield to human-AI collaboration in years. Patterns invisible to unaided human perception might become routinely discoverable.
The ancient art of mathematics, which has driven human intellectual progress since the first person counted on their fingers, is entering a new era. The questions will still come from human curiosity. The judgment of what's interesting will still come from human aesthetic sense. The rigorous proofs will still come from human logical reasoning.
But the journey of discovery will be enhanced by AI companions that help us see further and clearer than ever before. And that, ultimately, is what tools have always done for humanity—extended our reach, enhanced our capabilities, and helped us uncover truths that were always there, waiting to be found.
Publication Details
Published online: December 1, 2021
Journal: Nature
Publisher: Springer Nature
DOI: https://doi.org/10.1038/s41586-021-04086-x
Credit and Disclaimer
This article is based on original research published in Nature by scientists from DeepMind (London, UK), the University of Oxford (UK), and the University of Sydney (Australia). The content has been adapted for general audiences while maintaining complete scientific accuracy. Readers are strongly encouraged to consult the full research article for comprehensive technical details, complete mathematical proofs, detailed methodologies, and supplementary information via the DOI link provided above. All scientific findings, data, and conclusions presented here are derived directly from the original publication, and full credit belongs to the research team and their institutions.