Okay, you have a moderately complex math problem you need to solve. You give the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

  • Rentlar@lemmy.ca

    I wouldn’t bother. If I really had to ask a bot, Wolfram Alpha is there as long as I can ask it without an AI meddling with my question.

    E: To clarify, just because one AI (or six) gets the same answer that I can independently verify as correct for a simpler question doesn’t mean I can trust it for any arbitrary math question, no matter how many AIs arrive at the same answer. There’s always the possibility that the AI stumbles into a logical flaw, as the “number of r’s in strawberry” example shows.

  • AmericanEconomicThinkTank@lemmy.world

    Nope. Language models, by their inherent nature, cannot be used to calculate. Sure, theoretically you could have the input parsed, with proper training, to find the specific variables, feed those into a separate calculation step, and have the result transformed back into language data.

    No LLM does actual math; it only produces the most likely output for a given input based on its training data. If I input: “What is 1 plus 1?”

    Then, because the model has most likely seen that question repeatedly followed by “1 + 1 = 2” in its training data, that will be the output. If it had instead been trained on data saying “1 + 1 = 5”, then that would be the output.
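
    As a toy illustration of the parse-then-compute pipeline described above (the regex and operator table here are made up for the example, not how any real product works), actually extracting the numbers and doing integer math guarantees the right answer regardless of what the training data said:

    ```python
    import re

    # Toy parse-then-compute pipeline: extract the operands, do real arithmetic,
    # then render the result back into language.
    OPS = {
        "plus": lambda a, b: a + b,
        "minus": lambda a, b: a - b,
        "times": lambda a, b: a * b,
    }

    def answer(question: str) -> str:
        m = re.search(r"(-?\d+)\s+(plus|minus|times)\s+(-?\d+)", question)
        if not m:
            return "Sorry, I can't parse that."
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        # The result comes from integer math, not from predicting likely text.
        return f"{a} {op} {b} = {OPS[op](a, b)}"

    print(answer("What is 1 plus 1?"))  # -> "1 plus 1 = 2"
    ```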

  • gedaliyah@lemmy.world

    Here’s an interesting post that gives a pretty good quick summary of when an LLM may be a good tool.

    Here’s one key:

    Machine learning is amazing if:

    • The problem is too hard to write a rule-based system for or the requirements change sufficiently quickly that it isn’t worth writing such a thing and,
    • The value of a correct answer is much higher than the cost of an incorrect answer.

    The second of these is really important.

    So if your math problem is unsolvable by conventional tools, or sufficiently complex that designing an expression is more effort than the answer is worth… AND ALSO the value of a correct answer far outweighs the cost of an incorrect one (being wrong carries no real cost), THEN go ahead and trust it.

    If it is important that the answer is correct, or if another tool can be used, then you’re better off without the LLM.

    The bottom line is that the LLM is not making a calculation. It could end up with the right answer. Different models could end up with the same answer. It’s very unclear how much underlying technology is shared between models anyway.

    For example, if the problem is something like, “Here is all of our sales data and market indicators for the past 5 years. Project how much of each product we should stock in the next quarter.” Sure, an LLM may be appropriately close to a professional analysis.

    If the problem is like “given these bridge schematics, what grade steel do we need in the central pylon?” Then, well, you are probably going to be testifying in front of congress one day.

  • Rhaedas@fedia.io

    How trustworthy the answer is depends on knowing where the answers come from, which is unknowable. If the probability of those answers being generated from the original problem is high because it occurred in many different places in the training data, then maybe it’s correct. Or maybe everyone who came up with the answer is wrong in the same way, and that’s why there is so much correlation. Or perhaps the probability match is simply because lots of math problems tend toward similar answers.

    The core issue is that the LLM is not thinking or reasoning about the problem itself, so trusting it with anything really amounts to assuming that the likelihood of it being right rather than wrong is high. In some areas that’s a safe assumption to make; in others it’s a terrible one.

    • Farmdude@lemmy.worldOP

      I’m a little confused after listening to a podcast with… Damn I can’t remember his name. He’s English. They call him the godfather of AI. A pioneer.

      Well, he believes that GPT-2 through GPT-4 were major breakthroughs in artificial intelligence. He specifically said ChatGPT is intelligent, that some type of reasoning is taking place, and that the end of humanity could come anywhere from a year to 50 years away. If the fellow who imagined a neural net modeled on the human brain says it is doing much more, who should I listen to? He didn’t say some hidden AI, he said ChatGPT. Honestly, no offense, I just don’t understand this epic scenario on one side and totally nothing on the other.

      • Rhaedas@fedia.io

        One step might be to try and understand the basic principles behind what makes a LLM function. The Youtube channel 3blue1brown has at least one good video on transformers and how they work, and perhaps that will help you understand that “reasoning” is a very broad term that doesn’t necessarily mean thinking. What is going on inside a LLM is fascinating and amazing in what does manage to come out that’s useful, but like any tool it can’t be used for everything well, if at all.

          • Rhaedas@fedia.io

            Funny, but also not a bad idea, as you can ask it to clarify things as you go. I just referenced that YT channel because he has a great ability to show things visually and help them make sense.

      • groet@feddit.org

        Anyone with a stake in the development of AI is lying to you about how good models are and how soon they will be able to do X.

        They have to be lying, because the truth is that LLMs are terrible. They can’t reason at all. When they perform well on benchmarks, it’s because every benchmark contains questions that are in the LLM’s training data. If you burn trillions of dollars and have nothing to show for it, you lie so people keep giving you money.

        https://arxiv.org/html/2502.14318

        However, the extent of this progress is frequently exaggerated based on appeals to rapid increases in performance on various benchmarks. I have argued that these benchmarks are of limited value for measuring LLM progress because of problems of models being over-fit to the benchmarks, lack of real-world relevance of test items, and inadequate validation for whether the benchmarks predict general cognitive performance. Conversely, evidence from adversarial tasks and interpretability research indicates that LLMs consistently fail to learn the underlying structure of the tasks they are trained on, instead relying on complex statistical associations and heuristics which enable good performance on test benchmarks but generalise poorly to many real-world tasks.

  • OwlPaste@lemmy.world

    No. Once I tried to do binary calc with ChatGPT and it kept giving me wrong answers. Good thing I had some unit tests around that part, so I realised quickly it was lying.

    • Farmdude@lemmy.worldOP

      But if you gave the problem to all the top models and got the same answer, is it still likely an incorrect answer? I checked 6 models. I checked a bunch of times, on different accounts. I was testing it, seeing if it’s possible. With all of that, in others’ opinions… I actually checked over a hundred times, and each one got the same numbers.

      • OwlPaste@lemmy.world

        My use case was, I expect, easier and simpler, so I was able to write automated tests to validate the logic of incrementing specific parts of a binary number, and I found that the expected test values the LLM produced were wrong.

        So if it’s possible to use some kind of automation to verify the LLM’s results for your problem, you can be confident in your answer. But generally, LLMs tend to make up shit and sound confident about it.
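
        As a minimal sketch of that kind of check (the bit-field helper and the expected values below are hypothetical stand-ins, not the commenter’s actual code), a unit test makes it immediately obvious when an LLM-suggested expected value is wrong:

        ```python
        import unittest

        def increment_field(value: int, shift: int, width: int) -> int:
            """Increment the width-bit field of value starting at bit shift, wrapping on overflow."""
            mask = (1 << width) - 1
            field = ((value >> shift) & mask) + 1
            return (value & ~(mask << shift)) | ((field & mask) << shift)

        class TestIncrementField(unittest.TestCase):
            def test_low_nibble_wraps(self):
                # 0b0000_1111: the low nibble is full, so it wraps to 0
                self.assertEqual(increment_field(0b0000_1111, shift=0, width=4), 0b0000_0000)

            def test_high_nibble_increments(self):
                # 0b0001_0000: the high nibble goes from 1 to 2
                self.assertEqual(increment_field(0b0001_0000, shift=4, width=4), 0b0010_0000)

        if __name__ == "__main__":
            unittest.main()
        ```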

      • porcoesphino@mander.xyz

        What if there is a popular joke that relies on bad math and it happens to be your question? Then the agreement is understandable and no indication of accuracy. Why use a tool with known issues, plus the overhead of querying six of them, instead of a decent tool like Wolfram Alpha?

      • Denjin@feddit.uk

        They could get the right answer 9999 times out of 10000 and that one wrong answer is enough to make all the correct answers suspect.

    • dan1101@lemmy.world

      Yes, more people need to realize it’s just a search engine with natural language input and output. LLM output should at least include citations.

    • Pika@sh.itjust.works

      Just yesterday I was fiddling around with a logic test in Python. I wanted to see how well DeepSeek could analyze the opening line of a for loop. It properly identified what the line did in its description, but when it moved on to giving examples it contradicted itself, and it took 3 or 4 replies before it realized the contradiction.
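
      For illustration only (this is not the commenter’s actual test), claims about a loop header like the one below are trivial to check by simply running it and comparing against whatever examples the model gives:

      ```python
      # Hypothetical loop header of the kind being analyzed.
      # range(2, 10, 3) starts at 2, steps by 3, and stops before 10.
      for i in range(2, 10, 3):
          print(i)   # prints 2, 5, 8 -- any model-generated example can be checked against this
      ```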

  • zxqwas@lemmy.world

    Even using a calculator, Wolfram Alpha, or similar tools, I don’t trust the answer unless it passes a few sanity checks. Frequently I am the source of the error, and no LLM can compensate for that.
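
    As a made-up example of what such a sanity check can look like: if a tool (or an LLM) hands back a root of an equation, substituting it back in costs one line:

    ```python
    # Hypothetical check: verify a claimed root of x**2 - 5*x + 6 = 0 by substitution.
    claimed_root = 3.0
    residual = claimed_root**2 - 5 * claimed_root + 6
    assert abs(residual) < 1e-9, f"claimed root fails the check (residual={residual})"
    print("substitution check passed")
    ```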

      • EpeeGnome@feddit.online

        If all 6 got the same answer multiple times, then that means your query very strongly correlated with that reply in the training data used by all of them. Does that mean it’s therefore correct? Well, no. It could mean that there were a bunch of incorrect examples of your query that they used to come up with that answer. It could mean that the examples it’s working from seem to follow a pattern your problem fits into, but the correct answer doesn’t actually fit that seemingly obvious pattern. And yes, there’s a decent chance it could actually be correct. The problem is that the only way to rule out the other, still quite likely possibilities is to actually do the problem, at which point asking the LLM accomplished nothing.

      • zxqwas@lemmy.world

        Don’t know. I’ve never asked any of them a maths question.

        How costly is it to be wrong? You seem to care enough to ask people on the Internet so it suggests that it’s fairly costly. I’d not trust them.

      • pinball_wizard@lemmy.zip

        Yes. All six are likely to be incorrect.

        Similarly, you could ask a subtle quantum mechanics question to six psychologists, and all six may well give you the same answer. You still should not trust that answer.

        The way that LLMs correlate and gather answers is particularly unsuited to mathematics.

        Edit: In contrast, the average psychologist is much better prepared to answer a quantum mechanics question than an average LLM is to answer a math or counting question.

  • General_Effort@lemmy.world

    Probably, depending on the context. It is possible that all 6 models were trained on the same misleading data, but not very likely in general.

    Number crunching isn’t an obvious LLM use case, though. Depending on the task, having it create code to crunch the numbers, or a step-by-step tutorial on how to derive the formula, would be my preference.

  • qaz@lemmy.world

    Most LLMs now call functions in the background; most calculations are just simple Python expressions that get evaluated for them.
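
    A rough sketch of what that hand-off can look like on the host side, assuming the model was prompted to reply with a bare arithmetic expression (`model_reply` below is a stand-in, not a real API call):

    ```python
    import ast
    import operator as op

    # Hypothetical model output: the host doesn't trust the model's arithmetic,
    # only the expression it proposes, which is evaluated locally.
    model_reply = "(17 * 23) + 4**5"

    ALLOWED = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
               ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

    def safe_eval(expr: str) -> float:
        """Evaluate a simple arithmetic expression without using eval()."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED:
                return ALLOWED[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.UnaryOp) and type(node.op) in ALLOWED:
                return ALLOWED[type(node.op)](walk(node.operand))
            raise ValueError("disallowed expression")
        return walk(ast.parse(expr, mode="eval"))

    print(safe_eval(model_reply))  # 1415, computed by Python rather than by the model
    ```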

  • bunchberry@lemmy.world

    I’ve used LLMs quite a few times to find partial derivatives / gradient functions for me, and I know it’s correct because I plug them into a gradient descent algorithm and it works. I would never trust anything an LLM gives blindly no matter how advanced it is, but in this particular case I could actually test the output since it’s something I was implementing in an algorithm, so if it didn’t work I would know immediately.
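
    A sketch of that kind of check, with a made-up function: the analytic gradient an LLM hands you can be compared against a finite-difference estimate before it goes anywhere near your algorithm.

    ```python
    # Example function f(x, y) = x**2 * y + y**3 and an LLM-supplied gradient to verify.
    def f(x, y):
        return x**2 * y + y**3

    def llm_gradient(x, y):
        # Suppose the LLM claimed: df/dx = 2*x*y, df/dy = x**2 + 3*y**2
        return (2 * x * y, x**2 + 3 * y**2)

    def numeric_gradient(x, y, h=1e-6):
        # Central finite differences as an independent check.
        dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
        dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
        return (dfdx, dfdy)

    x, y = 1.3, -0.7
    analytic = llm_gradient(x, y)
    numeric = numeric_gradient(x, y)
    assert all(abs(a - n) < 1e-4 for a, n in zip(analytic, numeric)), (analytic, numeric)
    print("gradient check passed:", analytic)
    ```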

  • Professorozone@lemmy.world

    Well, I wanted to know the answer and the formula for the future value of a present amount. The AI answer that came up was clear, concise, and thorough. I was impressed and put the formula into my spreadsheet. My answer did not match the AI answer, so I kept looking for what I did wrong. Finally I just put the values into a regular online calculator, and it matched the answer my spreadsheet was returning.

    So AI gave me the right equation and the wrong answer. But it did it in a very impressive way. This is why I think it’s important for AI to only be used as a tool and not a replacement for knowledge. You have to be able to understand how to check the results.
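
    For reference, the standard formula here is FV = PV * (1 + r)^n, and a couple of lines of code (or a plain calculator) are enough to cross-check whatever number an AI produces. The figures below are made up:

    ```python
    # Future value of a present amount: FV = PV * (1 + r) ** n
    pv = 1000.00   # present value (made-up example)
    r = 0.05       # interest rate per period
    n = 10         # number of periods

    fv = pv * (1 + r) ** n
    print(f"FV = {fv:.2f}")   # 1628.89 -- check the AI's number against this, not the other way around
    ```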