
Translating natural language to first-order logic for logical fallacy detection by ColinWright

23 Comments

  • Post Author
    rahimnathwani
    Posted March 4, 2025 at 6:00 pm

    The paper links the code repo: https://github.com/lovishchopra/NL2FOL

    But I don't see a pretrained model in there, so I'm not sure what to pass as `your_nli_model_name`:

      python3 src/nl_to_fol.py --model_name <your_model_name> --nli_model_name <your_nli_model_name> --run_name <run_name> --dataset --length

  • Post Author
    FloorEgg
    Posted March 4, 2025 at 6:04 pm

    Not familiar with FOL as a formalism, and would love to see this in action. I feel like it's a big part of the solution to propaganda.

    The other part seems to be values obfuscation, and I wonder if this would help with that too.

    If Joe says that nails are bad, it can mean very different things if Joe builds houses for a living and prefers screws, or if Joe is anti development and thinks everyone should live in mud huts.

    Propaganda will often cast a whole narrative that can be logically consistent, but entirely misrepresents a person or people's values (their motivations and the patterns that explain their actions), and there will be logical fallacies at the boundaries of the narrative.

    We need systems that can detect logical fallacies, as well as value system inconsistencies.

  • Post Author
    ColinWright
    Posted March 4, 2025 at 6:19 pm

    It was Gottfried Leibniz who envisaged the end of philosophic disputes, replacing argument with calculation.

    "if controversies were to arise, there would be no more need of disputation between two philosophers than between two calculators. For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate."

  • Post Author
    languagehacker
    Posted March 4, 2025 at 6:30 pm

    It sounds like the data set they use is designed to teach what logical fallacies are, so it makes sense that the system does fine on it. I doubt it would do well against real-world language, with things like structural ambiguity, anaphora resolution, and dubious intent.

  • Post Author
    MortyWaves
    Posted March 4, 2025 at 6:42 pm

    Trolls that knowingly engage in bad arguments with flawed logic are going to be in shambles.

  • Post Author
    mike_hearn
    Posted March 4, 2025 at 6:49 pm

    Love the idea in theory and would like such a tool to exist, but the use cases they present aren't convincing. This would be useful in much more specific cases like drafting contracts, laws or technical documentation: places where unusually precise language without corner cases is mutually desired by everyone, and the set of fallacies that occur is small and specific.

    This paper doesn't target such use cases. Instead it's trying to tackle "pop misinformation" type claims, mostly related to climate change. Unfortunately the Logic and LogicClimate datasets that the paper is using as a benchmark have serious problems that should disqualify them from being considered a benchmark. If we check the paper that introduced them, Jin et al open by asserting that "She is the best because she is better than anyone else" is an example of circular reasoning. It's actually a tautology. Then they try again with "Global warming doesn’t exist because the earth is not getting warmer" which is also not circular reasoning, it's another tautological restatement (you may say it's false, but disagreement over facts isn't a disagreement over logic – if either clause is true so is the other). Circular reasoning often involves a mis-definition and would be something like this real-world example from a few years ago:

    1. A positive test means you have COVID.

    2. Having COVID is defined as having a positive test.

    Their second example is "Extreme weather-related deaths in the U.S. have decreased by more than 98% over the last 100 years … Global warming saves lives" which they classed as "false causality" (they mean non-sequitur). My experience has been that climate skeptics are surprisingly logical so this would be an odd statement for them to make, and indeed if we check the original Washington Times op-ed then we find Jin et al are engaging in malicious quoting. It actually says:

    > "Contrary to sensational media reports, extreme weather-related deaths in the U.S. have decreased more than 98% over the last 100 years. Twenty times as many people die from cold as from heat, according to a worldwide review of 74 million temperature-related deaths by Dr. Antonio Gasparrini and a team of physicians. Global warming saves lives."

    The saves lives claim is based on cold being more dangerous than heat. Warmer weather = fewer deaths from cold isn't a logical fallacy, which is why they had to delete that part to make their example. It might sound like a weird or disingenuous argument to you, but it's logical in the sense that an SMT solver would approve of it. If you disagree it's probably due to prior beliefs e.g. that perhaps extreme weather has increased even as society got orders of magnitude better at reducing the impacts, or perhaps the positive effects of warmer air on the elderly are offset by other effects of climate change, or that the future will be different to the past due to compounding effects. Such rebuttals aren't identifications of a logical fallacy though, just of different priors that could maybe be addressed with additional rounds of debate.
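    The tautology-vs-circularity point above can be checked mechanically. Here is a minimal propositional truth-table sketch (plain Python, not the paper's pipeline): under the reading where both clauses reduce to the same proposition, the "because" statement is a valid implication, which is exactly why a formal checker would not flag it as a fallacy.

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Check whether a propositional formula holds under every truth assignment."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# "Global warming doesn't exist because the earth is not getting warmer":
# let W = "the earth is getting warmer"; both clauses reduce to (not W),
# so the implication (not W) -> (not W) holds in every row of the truth table.
# (For booleans, a <= b encodes the implication a -> b.)
restatement = lambda w: (not w) <= (not w)

print(is_tautology(restatement, 1))  # True: a tautology, not circular reasoning
```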

  • Post Author
    analog31
    Posted March 4, 2025 at 6:52 pm

    Doesn't Goedel's Theorem forbid building a logic checker?

  • Post Author
    nico
    Posted March 4, 2025 at 7:09 pm

    Is this just another form of the same concept behind smart contracts?

  • Post Author
    zozbot234
    Posted March 4, 2025 at 7:17 pm

    I'm pretty sure that the semantics of natural language are a lot more complex than can be accounted for by these seemingly very ad-hoc translations into comparatively straightforward FOL formulas, as are given in this paper. A common approach to understanding NL semantics from a strictly formal POV is Montague semantics https://en.wikipedia.org/wiki/Montague_grammar https://plato.stanford.edu/entries/montague-semantics/ – even a cursory look at these references is enough to clarify the level of complexity that's involved.

    Very loosely speaking, one generally has to work with multiple "modalities" at the same time, each of which, when understood from the POV of ordinary FOL, introduces its own separate notion of abstract "possible worlds" (representing, e.g., an agent's set of beliefs) and ways in which these "worlds" can relate to one another. More complex cases will usually degenerate into some sort of very generic "game semantics" https://en.wikipedia.org/wiki/Game_semantics https://plato.stanford.edu/entries/logic-games/ where any given use of natural language is merely seen as a "game" (in the abstract, game-theoretical sense) with its own set of possibly very ad-hoc 'rules'.

    The philosopher Ludwig Wittgenstein https://en.wikipedia.org/wiki/Ludwig_Wittgenstein https://plato.stanford.edu/entries/wittgenstein/ gave quite a good description of both of these approaches (from a very naïve one based on a supposedly straightforward translation to some kind of abstract logic, to a far richer one based on notions of strategies and games) to a "formal" understanding of natural language, throughout his extensive philosophical inquiry.

    Which is to say, I'm not sure how this paper's results are generally expected to be all that useful in practice.
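    For readers unfamiliar with the "possible worlds" machinery mentioned above, a toy Kripke-style model can be sketched in a few lines (all world names and facts here are invented for illustration; real Montague-style semantics stacks many such modalities at once):

```python
# Toy Kripke model: a set of worlds, an accessibility relation encoding which
# worlds an agent considers possible, and per-world valuations of atomic facts.
worlds = {"w0", "w1", "w2"}
believes = {"w0": {"w1", "w2"}}  # from w0 the agent considers w1 and w2 possible
facts = {"w0": {"raining"}, "w1": {"raining"}, "w2": {"raining"}}

def holds(prop, world):
    """Atomic proposition prop is true at world."""
    return prop in facts.get(world, set())

def agent_believes(prop, world):
    """Box modality: prop holds in every world the agent considers possible."""
    return all(holds(prop, w) for w in believes.get(world, set()))

print(agent_believes("raining", "w0"))  # True: raining in every accessible world
print(agent_believes("sunny", "w0"))    # False: no accessible world has "sunny"
```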

  • Post Author
    pixelpoet
    Posted March 4, 2025 at 7:19 pm

    Oh man, where was this back in the 90s arguing with proto-trolls on IRC and usenet who shamelessly moved goalposts, strawmanned, appealed to authority, resorted to ad hominem, …

    Imagine if you could click on a stupid internet discussion thread and make it give you a Lean proof of each argument where possible :D This thing would be hated even more than, say, vaccines, by the same sorts of people who deliberately choose to not understand things.

  • Post Author
    shortrounddev2
    Posted March 4, 2025 at 7:27 pm

    I believe this is something Immanuel Kant tried to do in the 18th century

  • Post Author
    qgin
    Posted March 4, 2025 at 7:36 pm

    I don't know how much potential this has to solve propaganda / bad faith arguments because you can just say "that logic program is biased" and handwave the entire thing away.

    But you could imagine a role for this in arbitration or legal settings.

  • Post Author
    giardini
    Posted March 4, 2025 at 7:47 pm

    Prolog has always had DCGs (Definite Clause Grammars) that allow you to write rules that resemble natural language grammar structures to parse and generate English sentences:

    https://www.metalevel.at/prolog/dcg
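    For readers without Prolog, here is a rough Python analogue of what a small DCG-style rule set does for the NL→FOL theme of the paper (illustrative vocabulary and grammar only, not an actual DCG):

```python
# Parse a tiny three-word fragment of English into a FOL-style string,
# mirroring grammar rules of the form: sentence --> determiner, noun, verb.
def parse(tokens):
    det, noun, verb = tokens
    if det == "every":
        # Universal: "every man sleeps" => forall x. (man(x) -> sleeps(x))
        return f"forall x. ({noun}(x) -> {verb}(x))"
    if det == "some":
        # Existential: "some dog barks" => exists x. (dog(x) & barks(x))
        return f"exists x. ({noun}(x) & {verb}(x))"
    raise ValueError(f"unknown determiner: {det}")

print(parse(["every", "man", "sleeps"]))  # forall x. (man(x) -> sleeps(x))
```

A real DCG additionally runs the same rules in reverse to generate sentences from logical forms, which this one-way sketch does not attempt.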

  • Post Author
    svnt
    Posted March 4, 2025 at 8:08 pm

    This is already something that e.g. Claude 3.7 Sonnet appears to be able to do very well, with the added benefit of explaining why if you let it — what is the benefit of this model?:

    > "Sometimes flu vaccines don't work; therefore vaccines are useless." – Hasty generalization

    > "Every time I wash my car, it rains. Me washing my car has a definite effect on the weather." – Post hoc, ergo propter hoc

    > "Everyone should like coffee: 95% of teachers do!" – Appeal to popularity and hasty generalization

    > "I don't want to give up my car, so I don't think I can support fighting climate change." – False dilemma

  • Post Author
    EigenLord
    Posted March 4, 2025 at 8:16 pm

    This is very cool and definitely a step in the right direction, however, the question remains where exactly this formalizing module should be placed in the stack. As an external api, it's clear that the model is not "thinking" in these logical terms, it just provides a translation step. I'd argue it would be better placed during inference test-time compute (as seen in these so-called reasoning models). Better yet, this formalizing step would happen at a lower level entirely, internal to the model, but that would probably require totally new architectures.

  • Post Author
    Geee
    Posted March 4, 2025 at 8:31 pm

    Yes, this is exactly what I've been dreaming about. It might finally be possible to beat the bullshit asymmetry law, i.e. Brandolini's law: "The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it."

    If LLMs can debunk bullshit as easily as it's generated, the world will instantly turn into a better place.

    Bad ideas which sound good are the root of all evil.

  • Post Author
    tiberius_p
    Posted March 4, 2025 at 8:47 pm

    First-order logic can only detect formal logic fallacies. Informal logic fallacies like ad hominem, strawman, red herring, etc. are cast in language. They can't be defined and resolved mathematically. The model should be fine-tuned with examples of these informal fallacies and counter-arguments to them. Even so it won't be able to detect them in all cases, but it will at least have some knowledge about them and how to reply to them. This knowledge could be further refined with in-context learning and other prompt-engineering strategies.
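    To make the formal/informal distinction concrete, here is a minimal propositional sketch (not from the paper) of the kind of fallacy logic-based tools *can* catch: affirming the consequent has a countermodel, which a truth-table check finds mechanically, while modus ponens does not.

```python
from itertools import product

def entails(premises, conclusion):
    """Premises entail conclusion iff no truth assignment makes all premises
    true and the conclusion false (exhaustive truth-table check)."""
    for p, q in product([False, True], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a countermodel
    return True

implies = lambda a, b: (not a) or b

# Valid modus ponens: P -> Q, P  |=  Q
print(entails([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q))  # True
# Affirming the consequent: P -> Q, Q  |/=  P  (formal fallacy, caught mechanically)
print(entails([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p))  # False
```

No comparable mechanical check exists for ad hominem or strawman, which is the commenter's point.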

  • Post Author
    hackandthink
    Posted March 4, 2025 at 9:01 pm

    [dead]

  • Post Author
    sergix
    Posted March 4, 2025 at 9:06 pm
  • Post Author
    booleandilemma
    Posted March 4, 2025 at 11:30 pm

    This is a threat to my company's product managers.

  • Post Author
    CJefferson
    Posted March 5, 2025 at 12:52 am

    Turning English into logic basically requires understanding the language and context.

    If you are told “we will go to the zoo or swimming pool tomorrow, if it is windy or rainy”, most readers would know the first “or” is exclusive (we aren’t going to both), while the second is inclusive (we will go if it is windy, rainy, or both).

    This is annoying when teaching logic, from experience.
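    The two readings can be written out directly; this sketch just restates the zoo example as boolean operators:

```python
def exclusive_or(a, b):
    """'Zoo or swimming pool': one destination, not both."""
    return a != b

def inclusive_or(a, b):
    """'Windy or rainy': either condition (or both) triggers the trip."""
    return a or b

print(exclusive_or(True, True))  # False: we don't go to both places
print(inclusive_or(True, True))  # True: windy *and* rainy still means we go
```

Any NL→FOL translator has to pick between these operators from context alone, which is exactly the difficulty the commenter describes.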

  • Post Author
    talles
    Posted March 5, 2025 at 12:55 am

    The entire analytic philosophy movement is nowhere to be seen in the paper (?)

  • Post Author
    grandempire
    Posted March 5, 2025 at 2:03 am

    It’s a nerd fantasy to imagine argumentation is a logical formula, and by memorizing all the bad forms you will win arguments and detect falsehood.


© 2025 HackTech.info. All Rights Reserved.
