About Unsolved

Mission & Motivation

We aim to be a venue where important unsolved problems from across the mathematical sciences are collected. Historically, lists of open problems, such as Hilbert's 23 problems, the Millennium problems, and the Erdős problems, have had a significant impact on the development of the mathematical sciences.

More recently, such problem lists have taken on a new role as live benchmarks for AI mathematical reasoning. This has led to the emergence of additional lists such as FrontierMath, First Proof, and Optimizing Constants in Mathematics. Competition problems from the Putnam and the IMO have also been used as day-zero benchmarks.

However, existing problem lists and benchmarks have significant limitations:

  1. Certain areas are vastly under-represented. To our knowledge, there is no repository of significant unsolved problems in areas adjacent to the mathematics of machine learning and AI, including mathematical statistics and learning theory. These areas therefore form the initial focus of our collection.
  2. Existing benchmarks miss the “working research” difficulty level. The problems posed either have known solutions (FrontierMath, First Proof, competition problems) or sit at a level where they may require major breakthroughs from humans over an unknown timeline, perhaps years, perhaps centuries (Millennium problems). In our view, neither represents everyday research work in areas related to the mathematics of machine learning and AI. In those areas, novel mathematical research often consists of solving problems whose solutions are not yet available in the literature, but for which researchers have enough ideas that they are willing to spend time trying to resolve them. We believe this category of problems is not well represented in existing benchmarks. The Erdős problems are an exception, but they cover a relatively small area and certainly do not touch on statistics, machine learning, or most areas of applied mathematics.
  3. The broad expertise of the community is not well leveraged. Individual researchers know about important and interesting problems, and they discuss them in their papers, in their talks, and with their colleagues. They are motivated to do so because they want their problems solved, and they would often welcome the community's help in solving them. Yet existing benchmarks typically draw from a relatively narrow set of sources. We believe that, by leveraging the expertise of human researchers, it is possible to collect a much broader, more representative, and more realistic set of unsolved problems.
  4. There is no real, efficient marketplace connecting problem posers and solvers. Researchers (often senior) who have problems they would like to see solved typically share them in their papers or with colleagues and mentees, which naturally limits their reach. Meanwhile, researchers (often more junior) who are looking for problems to solve in order to build their reputation rely on the good fortune of interacting with the right colleagues and reading the right papers. We think this process can be made far more efficient through a standardized, centralized marketplace of ideas where problems can be posed and discovered with ease.

We aim to address these issues and to become the venue where such interactions take place.

What Makes a Good Problem Submission?

  • Precisely stated: The problem should be completely and unambiguously formulated, with all notation defined and assumptions stated.
  • Verifiably unsolved: The problem should be referenced in the mathematical literature and have no known solution.
  • Significance explained: Include context about why the problem matters — what would a solution imply?
  • References provided: Cite the original source and any papers with partial results.
  • Partial results summarized: What is already known? This helps researchers and AI systems understand the current frontier. A sketch of what a complete submission might look like follows this list.
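
For illustration, here is a minimal sketch, in LaTeX, of how a submission meeting the criteria above might be structured. The generic minimax-risk question and every field shown are placeholders invented for this example; they are not an actual entry in the collection or a required template.

  \documentclass{article}
  \usepackage{amsmath,amssymb}
  \begin{document}

  \section*{Example submission (hypothetical)}

  % Statement: completely formulated, all notation defined, assumptions stated.
  \textbf{Problem.} Let $X_1,\dots,X_n$ be i.i.d.\ samples from a distribution
  $P_\theta$, $\theta \in \Theta$, where the parameter set $\Theta$ and the loss
  $\ell$ are specified precisely. Determine, up to constant factors, the minimax risk
  \[
    \inf_{\hat\theta} \sup_{\theta \in \Theta}
    \mathbb{E}_\theta\bigl[\ell\bigl(\hat\theta(X_1,\dots,X_n),\theta\bigr)\bigr].
  \]

  % Significance: what a solution would imply.
  \textbf{Why it matters.} One or two sentences on what a resolution would imply
  for the surrounding theory or practice.

  % Partial results: the current frontier.
  \textbf{Known partial results.} The best known upper and lower bounds, with the
  remaining gap stated explicitly.

  % References: original source and papers with partial results.
  \textbf{References.} The work in which the problem was posed, plus any papers
  containing partial progress.

  \end{document}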

Categories

Our categorization follows the Mathematics Subject Classification (MSC 2020), the standard taxonomy used by MathSciNet and Zentralblatt. We group problems into broad areas, with particular emphasis on applied mathematics and statistics — areas that are underrepresented in existing unsolved problem collections.

AI Benchmark

In addition to serving the community by collecting interesting problems, we aim to be a live AI benchmark. Powerful AI tools can already help humans solve certain open mathematical problems, and they are expected to become even more capable. However, the real value of these tools is measured by how much they accelerate progress on the problems that researchers actually want to solve. We believe that by collecting such problems together, we can both accelerate that progress and get a better sense of the capabilities of AI tools.

Related Resources

Contributing

We welcome contributions from researchers at all levels. If you know of an unsolved problem that should be listed here — especially in applied mathematics, statistics, or theoretical computer science — please submit it. The codebase is open source.