Invited Talks
- Reinforcement Learning for Theorem Proving
Alex Best
Harmonic, Palo Alto, USA

I'll discuss recent advances in training language models to prove theorems at a very advanced level via reinforcement learning on formal languages. I will cover the progression of these systems from proving standalone statements to building up whole new theories. I'll give a demo of the system we have been developing with these techniques, called Aristotle, and survey interesting use cases so far. Additionally, I will discuss the potential ways that a mathematician's work will change when incorporating these tools.
- Vibe Coding, Vibe Proving, Vibe Mathematics …?
Bruno Buchberger
Johannes Kepler University Linz, Austria

I discuss two recent research areas combining symbolic computation and machine learning:
- learning algorithms for mathematical problems from training data
- proving by generating proof ideas through large language models and checking the ideas with a proof checker.
The first area is concerned with replacing provenly correct algorithms for algebraic problems (as simple as integer addition or as complex as Gröbner bases construction or Cylindrical Algebraic Decomposition) with algorithms (for example, neural networks) that have been trained on large sets of input-output examples for the respective problem. Why should one do that, knowing that the trained algorithm may well be incorrect for certain inputs? Well, this is practically interesting if the computation times of the trained algorithm are drastically better. Recent work by H. Kera and others has shown this to be the case.
In the talk, I will discuss whether it is sufficient just to compare the computation times of a provenly correct algebraic algorithm and an algorithm trained by machine learning from examples. I argue that one must also take into account the computation time for checking the results generated by the trained algorithm. In the extreme case, this may be as costly as generating the output with an algebraic algorithm. I will analyze this in particular for the Gröbner bases problem, where we not only have to prove that the result is a Gröbner basis but also that the generated ideal does not change. Both questions may need considerable effort; the second may need the application of what is called an "extended Gröbner bases algorithm".
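As a minimal sketch of the first of these checks (the polynomials and the use of SymPy are illustrative, not taken from the talk), one can verify that every input generator reduces to zero modulo a candidate basis, which establishes one of the two ideal inclusions; the reverse inclusion and the full Buchberger criterion are the parts that can become as costly as the classical computation:

```python
from sympy import symbols, groebner, reduced

x, y = symbols("x y")
F = [x**2 + y**2 - 1, x*y - 1]                # input generators
G = list(groebner(F, x, y, order="lex"))      # candidate basis (here produced classically)

# Check that ideal(F) is contained in ideal(G): every input generator
# must have remainder 0 when reduced modulo the candidate basis.
for f in F:
    _, remainder = reduced(f, G, x, y, order="lex")
    assert remainder == 0

# A complete verification would also require the Buchberger criterion
# (all S-polynomials of G reduce to 0) and the reverse inclusion
# ideal(G) ⊆ ideal(F), which is where the extra cost discussed above arises.
```

In a real comparison, G would come from a trained model rather than from `groebner`, and the time spent in these reductions would be charged to the learned approach.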
The second area is concerned with building up large mathematical knowledge bases (definitions, theorems, algorithms) together with formally checked proofs. This area has existed for a long time under the name "Mathematical Knowledge Management" (MKM). The practical impact of work in this area on the daily work of mathematicians, or for generating mathematical archives, however, was relatively low. The reason was that proof checking, as the only algorithmic tool in MKM, still left enormous work to the mathematicians who added knowledge to such knowledge bases. With the THEOREMA system (around 2000), I tried to improve the situation by coming up with proof-generating algorithms. However, much of the work, namely finding essential proof ideas, was still left to the mathematicians. With the new approach (of which the Aristotle system is a prominent example), much of the creative work of finding proof ideas (and interesting definitions, lemmata, theorems, and algorithms) is automated by "looking at the literature for relevant text expressed in the usual semi-formal mathematical language" with a specifically trained LLM and submitting the results to a proof checker (in the case of Aristotle, this is Lean). This step can be viewed as a kind of "vibe proving" (generalizing the recent "vibe coding" approach to programming), or even "vibe mathematics".
In the talk, I will discuss whether, in this approach, it would make sense to replace the proof checker with a proof generator like THEOREMA.
- 36 Years of AI and Math
Martin Charles Golumbic
University of Haifa, Israel

Research considered to be “artificial intelligence” constantly redefines itself. As the disciplines within AI have evolved and changed over many decades, mathematical methods have provided the solid foundation of these diverse areas. The range of topics appearing in the literature has been impressive.
To help bridge the gap between new applications of technology and foundational mathematical research, we explore research in a variety of disciplines, with a particular emphasis on the foundations of AI and mathematical methods. Artificial intelligence (AI) has contributed to mathematical discovery by guiding conjecture generation, assisting in formalizing mathematics, and exploiting its power to identify patterns in data and refine relationships between properties.
There has always been a strong relationship between mathematics and artificial intelligence, including applications of each discipline in the other. Computational geometry is central to much of motion planning in robotics. Combinatorial optimization is a primary tool in search, and a main ingredient of machine problem solving. Mathematical logic is arguably the appropriate language for AI, and the theoretical underpinnings of intelligent systems. Automated reasoning systems have provided a further motivation for work on non-standard, non-monotonic logics and belief deduction. Mathematical probability is the foundation of dealing with uncertainty. The mathematical theory of games is the basis for AI game-playing and models of strategy and fairness.
The state of the art and current challenges provide a wealth of opportunities to advance research in the interactions between AI and mathematics through interdisciplinary collaboration. We present a selection of topics from recent symposia as an indication of this, along with some early history of interaction between Math and AI and its development over the years.
- From π to QFT: Symbolic Discovery at Scale
Ido Kaminer
Technion - Israel Institute of Technology, Israel

For centuries, formulas for mathematical constants such as π and e appeared sporadically, discovered by figures like Newton, Euler, Gauss, and Ramanujan. Inspired by experimental physics, we established the Ramanujan Machine collaboration, a program of experimental mathematics that discovers formulas at scale, including new identities for π and for Riemann ζ values. This effort runs on a global volunteer network contributing CPU time, enabling the discovery of thousands of formulas—now surpassing the total found by humans over centuries.
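As a toy illustration of this style of experimental mathematics (this is the classical continued fraction of e known since Euler, not one of the collaboration's discoveries), a conjectured formula can be validated by evaluating it exactly and matching it against the target constant to high precision:

```python
import math
from fractions import Fraction

def cf_value(terms):
    """Evaluate a simple continued fraction [a0; a1, a2, ...] exactly."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# Classical pattern for e: e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, ...]
terms = [2]
for k in range(1, 8):
    terms += [1, 2 * k, 1]

# The truncated fraction already matches e beyond double precision.
assert abs(float(cf_value(terms)) - math.e) < 1e-12
```

At scale, the same match-and-verify loop runs over machine-generated candidate formulas rather than a known pattern.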
In this talk, I will show how combining these symbolic mathematical capabilities with large language models yielded the first example of automated unification of mathematical knowledge, revealing deeper organizing structure among constants. I will then outline potential applications to areas outside of number theory: efficient computation of hypergeometric functions and broader families of D-finite functions, with implications for computing Feynman integrals.
As another application of massive symbolic computation, I will discuss our recent contribution to modern toolkits for automated effective field theory. As a proof of concept, I will present our massive automated scan for quantum completions of gravity, showing that one-loop contributions can reproduce the curvature of gravity from flat-space quantum field theory.
I will conclude with a broader perspective on AI4Physics vs. Physics4AI, with recent examples from my lab.
- G. Raayoni, et al., The Ramanujan Machine: Automatically Generated Conjectures on Fundamental Constants, Nature 590, 67–73 (2021)
- O. Razon, et al., Automated Search for Conjectures on Mathematical Constants using Analysis of Integer Sequences, ICML, PMLR 202, 28809–28842 (2023)
- R. Elimelech, et al., Algorithm-assisted discovery of an intrinsic order among mathematical constants, PNAS 121, e2321440121 (2024)
- M. Shalyt†, U. Seligmann†, et al., Unsupervised Discovery of Formulas for Mathematical Constants, NeurIPS 37, 113156 (2024)
- Beit Halachmi and I. Kaminer, The Ramanujan Library - Automated Discovery on the Hypergraph of Integer Relations, ICLR (2025); arXiv:2412.12361
- T. Raz, et al., From Euler to AI: Unifying Formulas for Mathematical Constants, NeurIPS (2025); arXiv:2502.17533
- Computational Algebra with Transformers: What Deep Learning Adds to Computational Algebra
Hiroshi Kera
Chiba University, Japan

Computer algebra has advanced hand in hand with computing. Computing enables experimenting with the theories, algorithms, and heuristics of computer algebra, supporting their verification and the study of their limitations. This survey overviews recent applications of deep learning, particularly Transformer models, to arithmetic and symbolic computation, and presents a new axis that deep learning can add to computer algebra. While not always offering interpretable or guaranteed computation, the learning-based approach can extract patterns of computation by observing instances at enormous scale; exploiting this may lead to faster algorithms and the discovery of new heuristics. In addition, generating training sets for deep learning models gives rise to new symbolic computation tasks. I will also briefly introduce the CALT library as a convenient entry point for interested audiences.
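The training-set generation mentioned above can be sketched as follows (the task, polynomial expansion, and all names here are illustrative, not taken from the talk): a symbolic engine samples problem instances and produces (input, target) string pairs for a sequence-to-sequence model.

```python
import random
from sympy import symbols, expand

x = symbols("x")

def make_pair(rng):
    """Sample a random factored polynomial and return (input, target) strings
    for a sequence-to-sequence task: factored form -> expanded form."""
    factors = [x + rng.randint(-5, 5) for _ in range(rng.randint(2, 3))]
    product = 1
    for factor in factors:
        product *= factor
    return str(product), str(expand(product))

# A few training examples with a fixed seed for reproducibility.
rng = random.Random(0)
for src, tgt in (make_pair(rng) for _ in range(3)):
    print(src, "->", tgt)
```

Note that even this toy generator raises the kind of question the abstract alludes to: sampling "random" polynomials with a useful distribution of coefficients and degrees is itself a symbolic computation problem.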
- Reason with SAT for Rule Learning
Martina Seidl
Johannes Kepler University Linz, Austria

Propositional logic is one of the most successful formalisms in symbolic artificial intelligence. Many hard combinatorial problems can be naturally formulated as decision problems of propositional logic (SAT) and efficiently solved using modern SAT solvers. When using SAT, problems are described symbolically in a compact manner, in contrast to machine learning approaches, which typically rely on large volumes of data to characterize a problem.
In recent work, we have investigated how SAT techniques can be applied to learning tasks that involve examples. In this talk, we report on our experiences, examine the gap between symbolic and subsymbolic approaches, and outline potential directions for future research.
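To make the symbolic-encoding idea concrete, here is a minimal sketch (not from the talk, and deliberately brute-force rather than a modern CDCL solver) that encodes graph 2-coloring as SAT in DIMACS-style clauses:

```python
from itertools import product

def solve(clauses, n_vars):
    """Toy brute-force SAT solver; clauses are lists of DIMACS-style
    signed integers (positive = variable, negative = its negation)."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

def two_coloring_clauses(edges):
    """Variable i is True iff vertex i gets color A; for each edge (u, v),
    the clauses (u or v) and (not u or not v) force the endpoints to differ."""
    return [c for u, v in edges for c in ([u, v], [-u, -v])]

# A triangle is not 2-colorable; a path is.
assert solve(two_coloring_clauses([(1, 2), (2, 3), (1, 3)]), 3) is None
assert solve(two_coloring_clauses([(1, 2), (2, 3)]), 3) is not None
```

The compactness contrast in the abstract is visible even here: six short clauses characterize the triangle instance completely, with no training data involved.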
- Learning-Based Complexity of PDE Solutions
Juan Esteban Suarez
Ludwig-Maximilians-Universität München, Germany

Partial Differential Equations (PDEs) are fundamental tools for modeling phenomena across physics, biology, and engineering, yet their numerical approximation becomes challenging in the presence of singularities, discontinuities, or complex geometries. While classical numerical methods—such as finite element, finite volume, and spectral schemes—offer strong theoretical guarantees, they typically rely on high regularity assumptions and scale poorly in non-smooth or high-dimensional settings. Recent advances in scientific machine learning have introduced new approximation paradigms based on variational formulations, neural representations, and hybrid surrogate models, raising fundamental questions about the computability and algorithmic complexity of PDE solutions.
In this talk, we explore how modern learning-based models can be used as computational structures for studying the computability and complexity of broad classes of PDE solutions, including those with singularities and discontinuities. We present a variational framework for analyzing the algorithmic complexity of a class of continuous PDE solutions using polynomial approximation schemes, and discuss extensions based on hybrid surrogate models that enable the analysis of discontinuous solutions. This framework provides constructive insights into the design of novel computational architectures for solving PDEs, addressing both the limitations of classical solvers and the energy-efficiency challenges of state-of-the-art AI models.
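The regularity dependence of polynomial approximation schemes can be seen in a minimal numerical experiment (illustrative only, not from the talk): Chebyshev fits of a smooth function converge rapidly with the degree, while a function with a kink stalls at algebraic accuracy.

```python
import numpy as np

# Chebyshev least-squares fits on [-1, 1] for a smooth function vs. |x|;
# the error decay rate reflects the regularity of the target.
x = np.linspace(-1.0, 1.0, 2001)
for f, name in [(np.exp, "exp(x)"), (np.abs, "|x|   ")]:
    for deg in (8, 16, 32):
        p = np.polynomial.chebyshev.Chebyshev.fit(x, f(x), deg)
        err = np.max(np.abs(p(x) - f(x)))
        print(f"{name} degree {deg:2d}: max error {err:.2e}")
```

For exp(x) the error reaches machine precision well before degree 32, whereas for |x| it shrinks only in proportion to the degree, which is the kind of gap that motivates the hybrid surrogate models mentioned above for discontinuous solutions.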
Please consider subscribing to the SCML Mailing List in order to stay informed about the SCML-2026 conference.

