
The cultural divide between mathematics and AI
by rfurmani

22 Comments

  • mistrial9
    Posted March 12, 2025 at 4:30 pm

    > Throughout the conference, I noticed a subtle pressure on presenters to incorporate AI themes into their talks, regardless of relevance.

    This is well-studied and not unique to AI, to the English-speaking USA, or even to Western traditions. Here is what I mean: a book called Diffusion of Innovations by Rogers lays out a history of technology introduction. If the results are tallied in population, money, or other prosperity, the civilizations and their language groups that have systematic ways to explore and apply new technology are "winners" in the global context.

    AI is a powerful lever. The meta-conversation here might be around concepts of cancer, imbalance, and chairs on the deck of the Titanic... but this is getting off-topic for maths.

  • golol
    Posted March 12, 2025 at 4:37 pm

    Nice article. I didn't read every section in detail, but I think it makes a good point: AI researchers maybe focus too much on the thought of creating new mathematics, while being able to reproduce, index, or formalize existing mathematics is really the key goal, imo. That will then also lead to new mathematics.

    I think the further you advance in mathematical maturity, the bigger the "brush" becomes with which you make your strokes. As an undergrad, a stroke can be a single argument in a proof, or a simple lemma. As a professor, it can be a good guess for a well-posedness strategy for a PDE. I think AI will help humans find new mathematics with much bigger brush strokes. If you need to generalize a specific inequality on the whole space to Lipschitz domains, perhaps AI will give you a dozen pages, perhaps even of formalized Lean, in a single stroke. If you are a scientist and consider an ODE model, perhaps AI can give you formally verified error and convergence bounds using your specific constants. You switch to a probabilistic setting? Do not worry.

    All of these are examples of not very deep but tedious and non-trivial mathematical busywork that can take days or weeks. The mathematical ability necessary to do this has, in my opinion, already been demonstrated by o3 in rare cases. It cannot piece things together yet, though. But GPT-4 could not piece together proofs for undergrad homework problems, while o3 now can. So I believe improvement is quite possible.
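
    As a concrete flavor of such a "brush stroke", here is a minimal sketch of what one verified step might look like, in Lean 4 and assuming Mathlib (abs_add is Mathlib's triangle-inequality lemma; the example is a toy stand-in, not anything from the article):

        import Mathlib

        -- Toy stand-in for a routine, tedious-but-nontrivial step an
        -- assistant might emit in bulk: the triangle inequality on the
        -- reals, discharged by a library lemma and checked by the kernel.
        example (a b : ℝ) : |a + b| ≤ |a| + |b| := abs_add a b

    The point is not the lemma itself but that every such step comes back already machine-verified, so a dozen pages of them can be trusted wholesale.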

  • esafak
    Posted March 12, 2025 at 4:46 pm

    AI is young, and at the center of the industry spotlight, so it attracts a lot of people who are not in it to understand anything. It's like when the whole world got on the Internet, and the culture suddenly shifted. It's a good thing; you just have to dress up your work in the right language, and you can get funding, like when Richard Bellman coined the term "dynamic programming" to make it palatable to the Secretary of Defense, Charles Wilson.

  • invalidOrTaken
    Posted March 12, 2025 at 4:47 pm

    [flagged]

  • nicf
    Posted March 12, 2025 at 4:55 pm

    I'm a former research mathematician who worked for a little while in AI research, and this article matched up very well with my own experience with this particular cultural divide. Since I've spent a lot more time in the math world than the AI world, it's very natural for me to see this divide from the mathematicians' perspective, and I definitely agree that a lot of the people I've talked to on the other side of this divide don't seem to quite get what it is that mathematicians want from math: that the primary aim isn't really to find out whether a result is true but why it's true.

    To be honest, it's hard for me not to get kind of emotional about this. Obviously I don't know what's going to happen, but I can imagine a future where some future model is better at proving theorems than any human mathematician, like the situation, say, chess has been in for some time now. In that future, I would still care a lot about learning why theorems are true — the process of answering those questions is one of the things I find the most beautiful and fulfilling in the world — and it makes me really sad to hear people talk about math being "solved", as though all we're doing is checking theorems off of a to-do list. I often find the conversation pretty demoralizing, especially because I think a lot of the people I have it with would probably really enjoy the thing mathematics actually is much more than the thing they seem to think it is.

  • meroes
    Posted March 12, 2025 at 5:21 pm

    My take is a bit different. I only have a math undergrad and only worked as an AI trainer so I’m quite “low” on the totem pole.

    I have listened to Colin McLarty talk about the philosophy of math, and there was a contingent of mathematicians who solely cared about solving problems via "algorithms". The period was the one just preceding modern math, roughly the late 1800s, when the algorithmists, the intuitionists, and the logically oriented mathematicians coalesced into a combination of all three, leading to the modern way we do proofs and our focus on proofs.

    These algorithmists didn't care about the so-called "meaningless" operations that got an answer; they just cared that they got useful results.

    I think the article downplays this side of math, which is the side AI will be best at, or most useful for. Having read AI proofs, they are terrible in my opinion. But if AI can prove something useful, even if the proof is grossly unappealing to the modern mathematician, there should be nothing to clamor about.

    This is the talk I have in mind https://m.youtube.com/watch?v=-r-qNE0L-yI&pp=ygUlQ29saW4gbWN…

  • throw8404948k
    Posted March 12, 2025 at 5:24 pm

    > This quest for deep understanding also explains a common experience for mathematics graduate students: asking an advisor a question, only to be told, "Read these books and come back in a few months."

    With an AI advisor I do not have this problem. It explains the parts I need, in a way I understand. If I study some complicated topic, AI shortens it from months to days.

    I was somewhat mathematically gifted when younger; sadly, I often reinvented my own math because I did not even know that part of math existed. Watching how DeepSeek thinks before answering is REALLY beneficial. It gives me many hints and references. Human teachers are like black boxes while teaching.

  • m0llusk
    Posted March 12, 2025 at 5:36 pm

    > The last mathematicians considered to have a comprehensive view of the field were Hilbert and Poincaré, over a century ago.

    Henri Cartan of the Bourbaki group had not only a more comprehensive view, but also a greater sense of the potential of mathematical modeling and description.

  • woah
    Posted March 12, 2025 at 5:50 pm

    > Perhaps most telling was the sadness expressed by several mathematicians regarding the increasing secrecy in AI research. Mathematics has long prided itself on openness and transparency, with results freely shared and discussed. The closing off of research at major AI labs—and the inability of collaborating mathematicians to discuss their work—represents a significant cultural clash with mathematical traditions. This tension recalls Michael Atiyah's warning against secrecy in research: "Mathematics thrives on openness; secrecy is anathema to its progress" (Atiyah, 1984).

    Engineering has always involved large amounts of both math and secrecy; what's different now?

  • xg15
    Posted March 12, 2025 at 5:53 pm

    > One question generated particular concern: what would happen if an AI system produced a proof of a major conjecture like the Riemann Hypothesis, but the proof was too complex for humans to understand? Would such a result be satisfying? Would it advance mathematical understanding? The consensus seemed to be that while such a proof might technically resolve the conjecture, it would fail to deliver the deeper understanding that mathematicians truly seek.

    I think this is an interesting question. In a hypothetical SciFi world where we somehow provably know that AI is infallible and the results are always correct, you could imagine mathematicians grudgingly accepting some conjecture as "proven by AI" even without understanding the why.

    But for real-world AI, we know it can produce hallucinations and its reasoning chains can have massive logical errors. So if it came up with a proof that no one understands, how would we even be able to verify that the proof is indeed correct and not just gibberish?

    Or more generally, how do you verify a proof that you don't understand?
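
    One partial answer: require the system to emit its proof in a proof assistant, so that a small trusted kernel can certify it even when no human follows the argument. A minimal Lean 4 sketch of the idea (toy theorems standing in for an unreadable machine-generated proof):

        -- The kernel accepts whatever type-checks against the stated
        -- proposition, regardless of whether a reader finds it illuminating.
        theorem add_comm_nat (a b : Nat) : a + b = b + a :=
          Nat.add_comm a b

        -- Even an opaque, machine-generated proof script is accepted,
        -- as long as it elaborates to a valid proof term.
        theorem two_add_two : 2 + 2 = 4 := by decide

    Trust then rests on the kernel and on whether the formal statement matches the informal conjecture, not on the proof's readability.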

  • kkylin
    Posted March 12, 2025 at 6:16 pm

    As Feynman once said [0]: "Physics is like sex. Sure, it may give some practical results, but that's not why we do it." I don't think it's any different for mathematics, programming, a lot of engineering, etc.

    I can see a day might come when we (research mathematicians, math professors, etc) might not exist as a profession anymore, but there will continue to be mathematicians. What we'll do to make a living when that day comes, I have no idea. I suspect many others will also have to figure that out soon.

    [0] I've seen this attributed to the Character of Physical Law but haven't confirmed it

  • tech_ken
    Posted March 12, 2025 at 6:36 pm

    Mathematics is, IMO, not the axioms, proofs, or theorems. It's the human process of organizing these things into conceptual taxonomies that appeal to what is ultimately an aesthetic sensibility (what "makes sense"), updating those taxonomies as human understanding and aesthetic preferences evolve, along with practical considerations ("application"). Generating proofs of a statement is like a biologist identifying a new species: critical, but also just the start of the work. It's the macropatterns connecting the organisms that lead to the really important science, not just the individual units of study alone.

    And it's not that AI can't contribute to this effort. I can certainly see how a chatbot research partner could be super valuable for lit review, brainstorming, and even "talking things through" (much like mathematicians get value from talking aloud). This doesn't even touch on the ability to generate potentially valid proofs, which I do think has a lot of merit. But the idea that we could totally outsource the work to a generative model seems impossible by definition. The point of the labor is to develop human understanding; removing the human from the loop changes the nature of the endeavor entirely (basically to algorithm design).

    Similar stuff holds about art (at a high level, and glossing over 'craft art'); IMO art is an expressive endeavor. One person communicating a hard-to-express feeling to an audience. GenAI can obviously create really cool pictures, and this can be grist for art, but without some kind of mind-to-mind connection and empathy the picture is ultimately just an artifact. The human context is what turns the artifact into art.

  • EigenLord
    Posted March 12, 2025 at 7:01 pm

    Is it really a culture divide, or is it an economic-incentives divide? Many AI researchers are mathematicians. Any theoretical AI research paper will typically be filled with eye-wateringly dense math. AI dissolves into math the closer you inspect it. It's math all the way down. What differs are the incentives. Math rewards openness because there's no real concept of a "competitive edge"; you're incentivized to freely publish and share your results, as that is how you get recognition and, hopefully, a chance to climb the academic ladder. (There might be a competitive spirit between individual mathematicians working on the same problems, but this is different from systemic market competition.) AI is split between being a scientific and a capitalist pursuit; sharing advances can mean the difference between making a fortune and being outmaneuvered by competitors. It contaminates the motives. This is where the AI researcher's typical desire for "novel results" comes from as well: they are inheriting industry's mandate to produce economic innovations. It's a tidier explanation to tie the culture differences to material motive.

  • mcguire
    Posted March 12, 2025 at 8:27 pm

    Fundamentally, mathematics is about understanding why something is true or false.

    Modern AI is about "well, it looks like it works, so we're golden".

  • nothrowaways
    Posted March 12, 2025 at 10:15 pm

    You can't fake influence

  • Sniffnoy
    Posted March 12, 2025 at 10:16 pm

    > As Gauss famously said, there is "no royal road" to mathematical mastery.

    This is not the point, but the saying "there is no royal road to geometry" is far older than Gauss! It goes back at least to Proclus, who attributes it to Euclid.

  • NooneAtAll3
    Posted March 12, 2025 at 10:30 pm

    I feel like this rumbling can be summarized as "AI is engineering, not math" – and suddenly a lot of things make sense.

    Why is the AI field so secretive? Because it's all trade secrets – and maybe soon to become patents. You don't give away precisely how semiconductor fabs work, only base-level research of the form "this direction is promising".

    Why is everyone pushed to add AI in? Because that's where the money is; that's where the product is.

    Why does AI need results fast? Because it's a production line, and you create and design stuff.

    Even the core distinction mentioned – that AI is about "speculation and possibility" – is all about tool experimenting and prototyping. It's all about building and constructing, aka the Engineering/Technology letters of STEM.

    I guess the next step is to ask: "what to do next?" IMO, the math and AI fields should realise the divide and slowly diverge, leaving each other alone at arm's length. Just as engineers and programmers (not computer scientists) already do.

  • umutisik
    Posted March 12, 2025 at 10:36 pm

    If AI can prove major theorems, it will likely be by employing heuristics similar to those the mathematical community employs when searching for proofs and understanding. Studying AI-generated proofs, with the help of AI to decipher their contents, will help humans build that "understanding", if that is desired.

    An issue in these discussions is that mathematics is at once an art, a sport, and a science. And the development of AI that can build "useful" libraries of proven theorems means different things for each. The sport of mathematics will be basically over. The art of mathematics will thrive as it becomes easier to explore the mathematical world. For the science of mathematics, it's hard to say; it's been kind of shaky for ~50 years anyway, but it can only help.

  • tylerneylon
    Posted March 12, 2025 at 10:52 pm

    I agree with the overt message of the post — AI-first folks tend to think about getting things working, whereas math-first people enjoy deeply understood theory. But I also think there's something missing.

    In math, there's an urban legend that the first Greek who proved sqrt(2) is irrational (sometimes credited to Hippasus of Metapontum) was thrown overboard to drown at sea for his discovery. This is almost certainly false, but it does capture the spirit of a mission in pure math. The unspoken dream is this:

    ~ "Every beautiful question will one day have a beautiful answer."

    At the same time, ever since the pure and abstract nature of Euclid's Elements, mathematics has gradually become a more diverse culture. We've accepted more and more kinds of "numbers": negative, irrational, transcendental, complex, surreal, hyperreal, and beyond those into group theory and category theory. Math was once focused on the measurement of shapes or distances, and went beyond that into things like graph theory and probabilities and algorithms.

    In each of these evolutions, people are implicitly asking the question:

    "What is math?"

    Imagine the work of introducing the sqrt() symbol into ancient mathematics. It's strange because you're defining a symbol as the answer to a previously hard question (what x has x^2 = something?). The same might be said of integration as the opposite of a derivative, or of sine defined in terms of geometric questions. Over and over again, new methods become part of the canon by proving to be both useful and rich in properties beyond their definition.
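
    To make that pattern concrete, here is the shape of such a definition written out in LaTeX (a generic illustration, assuming amsmath; not from the post):

        % A symbol defined as the answer to a prior question:
        % \sqrt{a} names the unique nonnegative solution of x^2 = a.
        \[
          \sqrt{a} := \text{the unique } x \ge 0 \text{ with } x^2 = a,
          \qquad a \ge 0.
        \]
        % Integration, likewise, enters as the inverse of differentiation
        % (for continuous f, by the fundamental theorem of calculus):
        \[
          F(x) = \int_a^x f(t)\,dt \quad\Longrightarrow\quad F'(x) = f(x).
        \]

    In each case the symbol earns canonical status afterward, through properties (algebraic identities, the fundamental theorem of calculus) that go beyond the defining question.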

    AI may one day fall into this broader scope of math (or may already be there, depending on your view). If an LLM can give you a verified but unreadable proof of a conjecture, it's still true. If it can give you a crazy counterexample, it's still false. I'm not saying math should change, but that there's already a nature of change and diversity within what math is, and that AI seems likely to feel like a branch of this in the future; or a close cousin, the way computer science already is.

  • lmpdev
    Posted March 12, 2025 at 11:00 pm

    I did a fair bit of applied mathematics at uni

    What I think mathematicians should remind themselves is that a lot of prestigious mathematicians, the likes of Cantor or Erdős, often employed only a handful of "tricks"/heuristics in their proofs over their careers. They repeatedly and successfully applied these strategies to unsolved problems.

    I argue it would not take a tremendous jump in performance for an AI to begin its own journey, similar in kind to the greats'; the only thing standing in its way (as with all contemporary mathematicians) is the extreme specialisation required to reach the boundary of unsolved problems.

    AI need not be Euler to be an important tool and figure within mathematics

  • lairv
    Posted March 12, 2025 at 11:24 pm

    > A revealing anecdote shared at one panel highlighted the cultural divide: when AI systems reproduced known mathematical results, mathematicians were excited, while AI researchers were disappointed

    This seems very caricatural; one thing I've often heard in the AI community is that it'd be interesting to train models with an old data cutoff date (say 1900) and see whether the model is able to reinvent modern science.

  • j2kun
    Posted March 12, 2025 at 11:32 pm

    This is written in the first person, but there is no listed author and the website does not suggest an author…
