• Owl@lemm.ee
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    2
    ·
    2 months ago

    well, it only took 2 years to go from the cursed will smith eating spaghetti video to veo3 which can make completely lifelike videos with audio. so who knows what the future holds

    • Mose13@lemmy.world
      link
      fedilink
      arrow-up
      12
      ·
      edit-2
      2 months ago

      Hot take, today’s AI videos are cursed. Bring back will smith spaghetti. Those were the good old days

    • Trainguyrom@reddthat.com
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      2 months ago

      The cursed Will Smith eating spaghetti wasn’t the best video AI model available at the time, just what was available for consumers to run on their own hardware at the time. So while the rate of improvement in AI image/video generation is incredible, it’s not quite as incredible as that viral video would suggest

      • wischi@programming.dev
        link
        fedilink
        arrow-up
        1
        ·
        2 months ago

        But wouldn’t your point still be true today: the best AI video models are the ones that are not available to consumers?

        • Trainguyrom@reddthat.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 months ago

          That’s probably still true, but I’ve not been paying close attention to the AI market in the last couple of years. The point I was trying to make was that it’s an apples-to-oranges comparison.

    • wischi@programming.dev
      link
      fedilink
      arrow-up
      2
      arrow-down
      3
      ·
      edit-2
      2 months ago

      There actually isn’t really any doubt that AI (especially AGI) will surpass humans on all thinking tasks unless we have a mass extinction event first. But current LLMs are nowhere close to actual human intelligence.

    • Saleh@feddit.org
      link
      fedilink
      arrow-up
      51
      arrow-down
      1
      ·
      2 months ago

      My uncle. Very smart very neuronal. He knows the entire Internet, can you imagine? the entire internet. Like the mails of Crooked Hillary Clinton, that crook. You know what stands in that Mails? my uncle knows. He makes the best code. The most beautiful code. No one has ever seen code like it, but for him, he’s a genius, like i am, i have inherited all his genius genes. It is very easy. He makes the best code. Sometimes he calls me and asks me: you are even smarter than i am. Can you look at my code?

        • CanadaPlus@lemmy.sdf.org
          link
          fedilink
          arrow-up
          2
          ·
          edit-2
          2 months ago

          You know, I’d be interested to know what critical size you can get to with that approach before it becomes useless.

          • ByteOnBikes@slrpnk.net
            link
            fedilink
            arrow-up
            4
            ·
            edit-2
            2 months ago

            It can get pretty bad quickly, even on a small project of only 15-20 files. I’ve been using the Cursor IDE, building out flow charts & tests manually, and just seeing where it goes.

            And while it’s incredibly impressive watching it create all the steps, it then goes into chaos mode and starts ignoring all the rules. It’ll start changing tests, start pulling in random libraries, not at all thinking holistically about how everything fits together.

            Then you try to reel it in, and it keeps running rampant. For me, that’s when I either take the wheel or roll back.

            I highly recommend every programmer watch it in action.

            • Blackmist@feddit.uk
              link
              fedilink
              English
              arrow-up
              2
              ·
              2 months ago

              I’d rather recommend every CEO see it in action…

              They’re the ones who would be cock-a-hoop to replace us and our expensive wages with kids and bots.

              When they’re sitting around rocking back and forth and everything is on fire like that Community GIF, they’ll find my consultancy fees to be quite a bit higher than my wages used to be.

            • Aeri@lemmy.world
              link
              fedilink
              arrow-up
              2
              arrow-down
              1
              ·
              2 months ago

              I think Generative AI is a genuinely promising and novel tool with real, valuable applications. To appreciate it, however, you have to mentally set aside the irresponsible, low-effort ways people mostly use it, because yeah, it’s very easy to churn that stuff out, so it’s most of what you see when you hear “Generative AI”, and it’s become its reputation…

              Like I’ve had interesting “conversations” with Gemini and ChatGPT, I’ve actually used them to solve problems. But I would never put it in charge of anything critically important that I couldn’t double check against real data if I sensed the faintest hint of a problem.

              I also don’t think it’s ready for primetime. Does it deserve to be researched and innovated upon? Absolutely, but like, by a few nerds who manage to get it running, and universities training it on data they have a license to use. Not “Crammed into every single technology object on earth for no real reason”.

              I have brain not very good sometimes disease, and I find real value in being able to “talk” to a “person” who can get me out of a creative rut just by exploring my own feelings a bit. GPT can actually listen to music, which surprised me. I consider it scientifically interesting. It doesn’t get bored or angry at you unless you, like, tell it to? I’ve asked it for help with a creative task in the past and not actually used any of its suggestions at all, but being able to talk it through with someone (when a real human who cared was not available) was a valuable resource.

              To be clear I pretty much just use it as a fancy chatbot and don’t like, just copy paste its output like some people do.

            • CanadaPlus@lemmy.sdf.org
              link
              fedilink
              arrow-up
              3
              ·
              edit-2
              2 months ago

              Is there a chance that’s right around the time the code no longer fits into the LLM’s input window of tokens? The basic technology doesn’t actually have a long-term memory of any kind (at least outside of the training phase).

              • MaggiWuerze@feddit.org
                link
                fedilink
                arrow-up
                1
                ·
                2 months ago

                That was my first thought as well. These things really need to find a way to store a larger context without ballooning past the VRAM limit.
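
                For a rough sense of why context balloons memory, here’s a back-of-envelope sketch of how the KV cache (the per-token state a transformer keeps for its context) grows with context length. The hyperparameters are assumptions in the ballpark of a 7B-class model at fp16, not any specific product:

                ```python
                # Back-of-envelope KV-cache size for a transformer (assumed ~7B-class, fp16).
                # These hyperparameters are illustrative assumptions, not any specific model.
                n_layers = 32        # transformer blocks
                n_kv_heads = 32      # key/value heads (no grouped-query attention assumed)
                head_dim = 128       # dimension per head
                bytes_per_value = 2  # fp16

                def kv_cache_gib(context_tokens: int) -> float:
                    # 2x because both keys and values are cached for every layer and head.
                    total = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * context_tokens
                    return total / 1024**3

                for ctx in (4_096, 32_768, 131_072):
                    print(f"{ctx:>7} tokens -> ~{kv_cache_gib(ctx):.0f} GiB of KV cache")
                ```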

                • CanadaPlus@lemmy.sdf.org
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  2 months ago

                  The thing being, it’s kind of an inflexible blackbox technology, and that’s easier said than done. In one fell swoop we’ve gotten all that soft, fuzzy common sense stuff that people were chasing for decades inside a computer, but it’s ironically still beyond our reach to fully use.

                  From here, I either expect that steady progress will be made in finding more clever and constrained ways of using the raw neural net output, or we’re back to an AI winter. I suppose it’s possible a new architecture and/or training scheme will come along, but it doesn’t seem imminent.

  • sturger@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    12
    ·
    2 months ago

    Honest question: I haven’t used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don’t mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn’t change them, only the correct versions.

    • Derpgon@programming.dev
      link
      fedilink
      arrow-up
      10
      ·
      2 months ago

      IntelliJ IDEA: if it knows it’s the same variable, it will rename it. That usually works in any codebase that isn’t fucked up with eval or obscure constructs like saving a variable name into a variable as a string and dynamically invoking it.
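
      For anyone wondering what that last construct looks like, here’s a tiny contrived Python sketch of the kind of dynamic access that no rename refactoring can follow, because the reference only exists as a string at runtime:

      ```python
      # Contrived example: the attribute name only exists as a runtime string,
      # so an IDE's rename refactoring has no way to know this refers to retry_count.
      class Config:
          def __init__(self):
              self.retry_count = 3

      cfg = Config()
      field = "retry_" + "count"   # name assembled dynamically at runtime
      print(getattr(cfg, field))   # rename retry_count and this silently breaks
      ```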

    • killabeezio@lemm.ee
      link
      fedilink
      arrow-up
      6
      ·
      2 months ago

      IntelliJ is actually pretty good at this. Besides that, Cursor or Windsurf should be able to. I was using Cursor for a while, and when I needed to refactor something, it was pretty good at picking that up. It kept crashing on me though, so I am now trying Windsurf and some other options. I do miss Cursor’s autocomplete features though, as I would use them all the time to fill out boilerplate stuff as I write.

      The one key difference between Cursor and Windsurf and other products is that they will look at the entire context again (or at least part of it) for any changes. You make a change, and they check whether changes are needed elsewhere.

      I still don’t trust AI to do much though, but it’s an excellent helper

    • pinball_wizard@lemmy.zip
      link
      fedilink
      arrow-up
      4
      arrow-down
      2
      ·
      2 months ago

      Okay, I realize I’m that person, but for those interested:

      tree, cat and sed get the job done nicely.
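
      For the curious, here’s a minimal Python sketch of that same blunt, text-only approach (the identifiers are hypothetical). It matches text, not scopes, which is exactly the limitation discussed in the replies:

      ```python
      # Naive project-wide rename: pure text substitution, no understanding of scope.
      # It will happily rewrite shadowed locals, strings and comments too - use with care.
      import re
      from pathlib import Path

      OLD, NEW = "old_name", "new_name"   # hypothetical identifiers
      pattern = re.compile(rf"\b{re.escape(OLD)}\b")

      for path in Path(".").rglob("*.py"):
          text = path.read_text()
          if pattern.search(text):
              path.write_text(pattern.sub(NEW, text))
              print(f"rewrote {path}")
      ```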

      And… it’s my nap time, now. Please keep the Internet working, while I’m napping. I have grown fond of parts of it. Goodnight.

      • sturger@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        ·
        2 months ago

        Yeah, I’m looking for something that would understand the semantics of the language well enough to rename intelligently.

    • lapping6596@lemmy.world
      link
      fedilink
      arrow-up
      11
      ·
      2 months ago

      I use pycharm for this and in general it does a great job. At work we’ve got some massive repos and it’ll handle it fine.

      The “find” tab shows where it’ll make changes and you can click “don’t change anything in this directory”

      • setVeryLoud(true);@lemmy.ca
        link
        fedilink
        arrow-up
        4
        arrow-down
        1
        ·
        2 months ago

        Yes, all of JetBrains’ tools handle project-wide renames practically perfectly, even in weirder things like Angular projects where templates may reference variables.

    • barsoap@lemm.ee
      link
      fedilink
      arrow-up
      24
      ·
      2 months ago

      Not reliably, no. Python is too dynamic to do that kind of thing without solving general program equivalence which is undecidable.

      Use a static language, problem solved.

    • trolololol@lemmy.world
      link
      fedilink
      arrow-up
      19
      ·
      2 months ago

      I’m going to laugh in Java, where this has always been possible and reliable. Not AI-reliable, but expert-reliable. Because of static types.

    • LeroyJenkins@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      2 months ago

      Most IDEs are pretty decent at it if you configure them correctly. I use IntelliJ and it knows the difference. Use the refactor feature and it’ll crawl references, not just rename all instances.

  • chunes@lemmy.world
    link
    fedilink
    arrow-up
    6
    arrow-down
    7
    ·
    2 months ago

    Laugh it up while you can.

    We’re in the “haha it can’t draw hands!” phase of coding.

    • Soleos@lemmy.world
      link
      fedilink
      arrow-up
      4
      arrow-down
      2
      ·
      2 months ago

      AI bad. But also, video AI started with will Will Smith eating spaghetti just a couple years ago.

      We keep talking about AI doing complex tasks right now and its limitations, then extrapolating its development linearly. It’s not linear and it’s not in one direction. It’s an exponential and rhizomatic process. Humans always over-estimate (ignoring hard limits) and under-estimate (thinking linearly) how these things go. With rocket ships, with the internet/social media, and now with AI.

    • GreenKnight23@lemmy.world
      link
      fedilink
      arrow-up
      10
      arrow-down
      3
      ·
      2 months ago

      someone drank the koolaid.

      LLMs will never code for two reasons.

      one, because they only regurgitate facsimiles of code. this is because the models are trained to ingest content and provide an interpretation of the collection of their content.

      software development is more than that and requires strategic thought and conceptualization, both of which are decades away from AI at best.

      two, because the prevalence of LLM generated code is destroying the training data used to build models. think of it like making a copy of a copy of a copy, et cetera.

      the more popular it becomes the worse the training data becomes. the worse the training data becomes the weaker the model. the weaker the model, the less likely it will see any real use.

      so yeah. we’re about 100 years from the whole “it can’t draw its hands” stage because it doesn’t even know what hands are.

      • chunes@lemmy.world
        link
        fedilink
        arrow-up
        3
        arrow-down
        4
        ·
        edit-2
        2 months ago

        This is just your ego talking. You can’t stand the idea that a computer could be better than you at something you devoted your life to. You’re not special. Coding is not special. It happened to artists, chess players, etc. It’ll happen to us too.

        I’ll listen to experts who study the topic over an internet rando. AI model capabilities as yet show no signs of slowing their exponential growth.

        • GreenKnight23@lemmy.world
          link
          fedilink
          arrow-up
          6
          arrow-down
          3
          ·
          2 months ago

          you’re a fool. chess has rules and is boxed into those rules. of course it’s prime for AI.

          art is subjective, I don’t see the appeal personally, but I’m more of a baroque or renaissance fan.

          I doubt you will do this, but if you believe in what you say, then it will only prove you right and me wrong.

          what is this?

          [attached image]

          once you classify it, why did you classify it that way? is it because you personally have one? did you have to rule out what it isn’t before you could identify what it could be? did you compare it to other instances of similar subjects?

          now, try to classify it as someone who doesn’t have these. someone who has never seen one before. someone who hasn’t any idea what it could be used for. how would you identify what it is? how it’s used? are there more than one?

          now, how does AI classify it? does it comprehend what it is, even though it lacks a physical body? can it understand what it’s used for? how it feels to have one?

          my point is, AI is at least 100 years away from instinctively knowing what a hand is. I doubt you even had to think about it; your brain automatically identified it as a hand, one of the most basic and fundamentally important features of being human.

          if AI cannot even instinctively identify a hand as a hand, it’s not possible for it to write software, because writing is based on human cognition and is entirely driven by instinct.

          like a master sculptor, we carve out the words from the ether to perform tasks that not only are required, but unseen requirements that lay beneath the surface that are only known through nuance. just like the sculptor that has to follow the veins within the marble.

          the AI you know today cannot do that, and frankly the hardware of today can’t even support AI in achieving that goal, and it never will because of people like you promoting a half baked toy as a tool to replace nuanced human skills. only for this toy to poison pill the only training data available, that’s been created through nuanced human skills.

          I’ll just add, I may be an internet rando to you but you and your source are just randos to me. I’m speaking from my personal experience in writing software for over 25 years along with cleaning up all this AI code bullshit for at least two years.

          AI cannot code. AI writes regurgitated facsimiles of software based on its limited dataset. it’s impossible for it to make decisions based on human nuance; it can only make calculated assumptions based on the available dataset.

          I don’t know how much clearer I have to be at how limited AI is.

        • wischi@programming.dev
          link
          fedilink
          arrow-up
          4
          ·
          edit-2
          2 months ago

          Coding isn’t special, you’re right, but it’s a thinking task, and LLMs (including reasoning models) don’t know how to think. LLMs are knowledgeable because they memorized a lot of the data and patterns in their training data, but they didn’t learn to think from that. That’s why LLMs can’t replace humans.

          That certainly doesn’t mean that software can’t be smarter than humans. It will be, it’s just a matter of time, but to get there we likely need AGI first.

          To see that LLMs can’t think, try to play ASCII tic-tac-toe (XXO) against any of those models. They are completely dumb, even though they “saw” the entire Wikipedia article during training: how XXO works, that it’s a solved game, the different strategies, and how to consistently draw. But they still can’t do it. They lose most games against my four-year-old niece, and she doesn’t even play perfect XXO.

          I wouldn’t trust anything that is claimed to do thinking tasks, but can’t even beat my niece at XXO, with writing firmware for cars or airplanes.

          LLMs are great if used like search engines or interactive versions of Wikipedia/Stack Overflow. But they certainly can’t think, at least for now, and we’ll likely need different architectures for real thinking models than LLMs have.

  • LanguageIsCool@lemmy.world
    link
    fedilink
    arrow-up
    49
    ·
    2 months ago

    I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

    • MonkeMischief@lemmy.today
      link
      fedilink
      arrow-up
      14
      ·
      2 months ago

      It will have consumed the GigaWattHours capacity of a few suns and all the moisture in our solar system, but by Jeeves, we’ll get there!

      …but it won’t be that impressive once we remember concepts like “monkey, typing, Shakespeare” were already embedded in the training data.

    • Match!!@pawb.social
      link
      fedilink
      English
      arrow-up
      42
      arrow-down
      1
      ·
      2 months ago

      llms are systems that output human-readable natural language answers, not true answers

    • zurohki@aussie.zone
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      3
      ·
      2 months ago

      It generates an answer that looks correct. Actual correctness is accidental. That’s how you wind up with documents with references that don’t exist, it just knows what references look like.

      • snooggums@lemmy.world
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        3
        ·
        edit-2
        2 months ago

        It doesn’t ‘know’ anything. It is glorified text autocomplete.

        The current AI is intelligent like how Hoverboards hover.

            • capybara@lemm.ee
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              2 months ago

              You could claim that it knows the pattern of how references are formatted, depending on what you mean by the word know. Therefore, 100% uninteresting discussion of semantics.

              • irmoz@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                arrow-down
                3
                ·
                edit-2
                2 months ago

                The theory of knowledge (epistemology) is a distinct and storied area of philosophy, not a debate about semantics.

                There remains to this day strong philosophical debate on how we can be sure we really “know” anything at all, and thought experiments such as the Chinese Room illustrate that “knowing” is far, far more complex than we might believe.

                For instance, is it simply following a set path like a river in a gorge? Is it ever actually “considering” anything, or just doing what it’s told?

                • capybara@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  1
                  ·
                  2 months ago

                  No one cares about the definition of knowledge to this extent except for philosophers. The person who originally used the word “know” most definitely didn’t give a single shit about the philosophical perspective. Therefore, you shitting yourself over a word not being used exactly as you’d like, instead of understanding its usage in context, is very much semantics.

        • malin@thelemmy.club
          link
          fedilink
          arrow-up
          5
          arrow-down
          9
          ·
          edit-2
          2 months ago

          This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.

          • ItsMeForRealNow@lemmy.world
            link
            fedilink
            arrow-up
            1
            arrow-down
            1
            ·
            2 months ago

            Dude… the point is I don’t have to be. I just have to be human and use it. If it sucks, I am gonna say that.

            • malin@thelemmy.club
              link
              fedilink
              arrow-up
              3
              arrow-down
              4
              ·
              edit-2
              2 months ago

              I can tell you’re a member of the next generation.

              Gonna ignore you now.

              • snooggums@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                2
                ·
                2 months ago

                At first I thought that might be a Pepsi reference, but you are probably too young to know about that.

          • frezik@midwest.social
            link
            fedilink
            arrow-up
            5
            arrow-down
            1
            ·
            2 months ago

            Insulting, but also correct. What “knowing” something even means has a long philosophical history.

            • snooggums@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              3
              ·
              2 months ago

              Trying to treat the discussion as a philosophical one is giving more nuance to ‘knowing’ than it deserves. An LLM can spit out a sentence that looks like it knows something, but it is just pattern-matching the frequency of word associations, which is mimicry, not knowledge.

              • irmoz@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                edit-2
                2 months ago

                I’ll preface by saying I agree that AI doesn’t really “know” anything and is just a randomised Chinese Room. However…

                Acting like the entire history of the philosophy of knowledge is just some attempt make “knowing” seem more nuanced is extremely arrogant. The question of what knowledge is is not just relevant to the discussion of AI, but is fundamental in understanding how our own minds work. When you form arguments about how AI doesn’t know things, you’re basing it purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can’t just take it for granted that our perceptions are a perfect example of knowledge, we have to interrogate that and see what it is that we can do that AIs can’t- or worse, discover that our assumptions about knowledge, and perhaps even of our own abilities, are flawed.

                • snooggums@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  2 months ago

                  Acting like the entire history of the philosophy of knowledge is just some attempt make “knowing” seem more nuanced is extremely arrogant.

                  That is not what I said. In fact, it is the opposite of what I said.

                  I said that treating the discussion of LLMs as a philosophical one is giving ‘knowing’ in the discussion of LLMs more nuance than it deserves.

  • antihumanitarian@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    1
    ·
    2 months ago

    I’ve used it extensively, almost $100 in credits, and generally it could one-shot everything I threw at it. However, I gave it architectural instructions, told it to use test-driven development, and specified which test suite to use. Without the tests, yeah, it wouldn’t work, and a decent amount of the time goes into cleaning up mistakes the tests caught. The same can be said for humans, though.

    • Lyra_Lycan@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 months ago

      How can it pass if it hasn’t had lessons… Well said. Ooh, I wonder if lecture footage would be able to teach AI, or audio from tutors…

  • haui@lemmy.giftedmc.com
    link
    fedilink
    arrow-up
    80
    arrow-down
    2
    ·
    2 months ago

    Welp. It’s actually very much in line with the late-stage capitalist system. All polish, no innovation.

  • Pennomi@lemmy.world
    link
    fedilink
    English
    arrow-up
    80
    arrow-down
    9
    ·
    2 months ago

    To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

    LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.

    • Opisek@lemmy.world
      link
      fedilink
      arrow-up
      8
      ·
      2 months ago

      Perhaps 5 LOC. Maybe 3. And even then I’ll analyze every single character it wrote. And then I will in fact find bugs. Most often it hallucinates some functions that would be fantastic to use - if they existed.

      • Buddahriffic@lemmy.world
        link
        fedilink
        arrow-up
        6
        ·
        2 months ago

        My guess is that there’s tons of pseudocode out there that looks like a real language but uses functions that don’t exist as placeholders, and the LLM noticed the pattern to the point where it just makes up functions, not realizing they need to be implemented (because LLMs don’t realize things, they just pattern-match very complex patterns).

    • Avicenna@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      2 months ago

      I am with you on this one. It is also very helpful with argument-heavy libraries like plotly. If I ask a simple question like “in plotly how do I do this and that to the xaxis”, it generally gives correct answers, saving me 5-10 minutes of internet research or reading documentation for functions with 1000 inputs. I even managed to get it to render a simple scene of a cloud of points with some interactivity in three.js after about 30 minutes of back and forth. Not knowing much JavaScript, that would have taken me at least a couple of hours. So yeah, it can be useful as an assistant to someone who already knows how to code (so the person can vet and debug the code).
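
      For context, this is the kind of short answer I mean (the data and the axis tweaks here are just made-up examples):

      ```python
      import plotly.graph_objects as go

      fig = go.Figure(go.Scatter(x=[1, 2, 3, 4], y=[10, 4, 7, 2]))
      # Typical "do this and that to the xaxis" tweaks:
      fig.update_xaxes(title_text="time (s)", tickangle=45, showgrid=False)
      fig.show()
      ```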

      Though if you weigh the pros and cons of how LLMs are used (tons of fake internet garbage, tons of energy used, very convincing disinformation bots), I am not convinced the benefits are worth the damage.

        • Avicenna@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          edit-2
          2 months ago

          If you do it through AI you can still learn. After all, I go through the code to understand what is going on. And for not-so-complex tasks, LLMs are good at commenting the code (though they can bullshit from time to time, so you have to approach it critically).

          But anyways the stuff I ask LLMs are generally just one off tasks. If I need to use something more frequently, I do prefer reading stuff for more in depth understanding.

        • wischi@programming.dev
          link
          fedilink
          arrow-up
          9
          arrow-down
          3
          ·
          2 months ago

          Play ASCII tic tac toe against 4o a few times. A model that can’t even draw a tic tac toe game consistently shouldn’t write production code.

        • Boomkop3@reddthat.com
          link
          fedilink
          arrow-up
          8
          arrow-down
          3
          ·
          edit-2
          2 months ago

          I tried; it can’t get through four lines without messing up. Unless I give it tasks that are so stupendously simple that I’m faster typing them myself while watching TV.

          • Sl00k@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            ·
            2 months ago

            Four lines? Let’s have realistic discussions, you’re just intentionally arguing in bad faith or extremely bad at prompting AI.

            • Boomkop3@reddthat.com
              link
              fedilink
              arrow-up
              1
              ·
              2 months ago

              You can prove your point easily: show us a prompt that produces a decent amount of code that isn’t stupidly simple or so common that I could just copy-paste the first Google result.

              • Sl00k@programming.dev
                link
                fedilink
                English
                arrow-up
                1
                ·
                2 months ago

                I have nothing to prove to you if you wish to keep doing everything by hand that’s fine.

                But there are plenty of engineers, L3 and beyond, including myself, using this to lighten their workload daily, and acting like that isn’t the case is just arguing in bad faith, or you don’t work in the industry.

                • Boomkop3@reddthat.com
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  2 months ago

                  I do use it; it’s handy for some sloppy CSS, for example. Emphasis on sloppy. I was kinda hoping you actually had something there.

      • Pennomi@lemmy.world
        link
        fedilink
        English
        arrow-up
        38
        arrow-down
        10
        ·
        edit-2
        2 months ago

        Uh yeah, like all the time. Anyone who says otherwise really hasn’t tried recently. I know it’s a meme that AI can’t code (and still in many cases that’s true, eg. I don’t have the AI do anything with OpenCV or complex math) but it’s very routine these days for common use cases like web development.

        • GreenMartian@lemmy.dbzer0.com
          link
          fedilink
          arrow-up
          9
          ·
          2 months ago

          They have been pretty good on popular technologies like python & web development.

          I tried to do Kotlin for Android, and they kept tripping over themselves; it’s hilarious and frustrating at the same time.

          • doktormerlin@feddit.org
            link
            fedilink
            arrow-up
            5
            arrow-down
            1
            ·
            2 months ago

            I use ChatGPT for Go programming all the time and it rarely has problems, I think Go is more niche than Kotlin

            • Opisek@lemmy.world
              link
              fedilink
              arrow-up
              2
              ·
              2 months ago

              I get a bit frustrated at it trying to replicate everyone else’s code in my code base. Once my project became large enough, I felt it necessary to implement my own error handling instead of Go’s standard errors, which were not sufficient for me anymore. Copilot will respect that for a while, until I switch to a different file. At that point it will try to force standard Go errors everywhere.

              • doktormerlin@feddit.org
                link
                fedilink
                arrow-up
                1
                ·
                2 months ago

                Yes, you can’t use Copilot to generate files following your code structure if you start from scratch. I usually start by coding a scaffold and then use Copilot to complete the rest, which works quite well most of the time. Another possibility is to create comment templates that give instructions to Copilot, so every new Go file starts with coding-structure comments and Copilot will respect that. Junior devs might also respect that, but I am not so sure about them.

          • Pennomi@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            2
            ·
            2 months ago

            Not sure what you mean, boilerplate code is one of the things AI is good at.

            Take a straightforward Django project for example. Given a models.py file, AI can easily write the corresponding admin file, or a RESTful API file. That’s generally just tedious boilerplate work that requires no decision making - perfect for an AI.
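
            As a concrete (made-up) example of the kind of boilerplate meant here: given a model like the one below, the matching admin registration is pure mechanical work that an LLM can fill in reliably. The model and fields are hypothetical:

            ```python
            # models.py (hypothetical example model)
            from django.db import models

            class Article(models.Model):
                title = models.CharField(max_length=200)
                body = models.TextField()
                published = models.DateTimeField(auto_now_add=True)


            # admin.py - the decision-free counterpart an LLM can fill in
            from django.contrib import admin
            from .models import Article

            @admin.register(Article)
            class ArticleAdmin(admin.ModelAdmin):
                list_display = ("title", "published")
                search_fields = ("title", "body")
            ```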

            More than that and you are probably babysitting the AI so hard that it is faster to just write it yourself.

        • Maalus@lemmy.world
          link
          fedilink
          arrow-up
          17
          arrow-down
          1
          ·
          2 months ago

          I recently tried it for scripting simple things in Python for a game. Y’know, change a character’s color if they are targeted. It output a shitton of word salad and code about my specific use case in the specific scripting jargon for the game.

          It was all based on “Misc.changeHue(player)”: a function that doesn’t exist and never has, because the game can’t color other mobs/players like that from scripts.

          Anything I tried with AI ends up the same way. Broken code in a 10-line script, hallucinations, and bullshit spewed as the absolute truth. Anything out of the ordinary is met with “yes this can totally be done, this is how”, but the “how” doesn’t work, and after sifting through forums / asking devs you find out “sadly that’s impossible” or “we don’t actually use CPython so libraries don’t work like that”, etc.

          • Sl00k@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            2 months ago

            It’s possible the library you’re using doesn’t have enough training data attached to it.

            I use AI with Python for data engineering tasks hundreds of lines long, and it nails them frequently.

          • Pennomi@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            12
            ·
            2 months ago

            Well yeah, it’s working from an incomplete knowledge of the code base. If you asked a human to do the same they would struggle.

            LLMs work only if they can fit the whole context into their memory, and that means working only in highly limited environments.

            • Maalus@lemmy.world
              link
              fedilink
              arrow-up
              14
              arrow-down
              1
              ·
              2 months ago

                No, a human would just find an API that is publicly available. And the fact that it knew the static class “Misc” means it knows the API. It just hallucinated and responded with bullcrap. The entire concept can be summarized as “I want to color a player’s model in GAME using python and SCRIPTING ENGINE”.

    • wischi@programming.dev
      link
      fedilink
      arrow-up
      15
      arrow-down
      4
      ·
      edit-2
      2 months ago

      Practically no LLM is good at logic. Try to play ASCII tic-tac-toe against one. All the GPT models lost against my four-year-old niece, and I wouldn’t trust her to write production code 🤣

      Once a single model (it doesn’t have to be an LLM) can beat Stockfish in chess, AlphaGo in Go, and my niece in tic-tac-toe, and can one-shot (scratch-pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.

      Just take a look at the dotnet runtime source code, where Microsoft employees are currently trying to work with Copilot, which writes PRs with errors like forgetting to add files to projects, writing code that doesn’t compile, fixing symptoms instead of underlying problems, etc. (just take a look yourself).

      I’m not saying that AI (especially AGI) can’t replace humans. It definitely can and will, it’s just a matter of time, but state-of-the-art LLMs are basically just extremely good “search engines” or interactive versions of Stack Overflow, not good enough to do real “thinking tasks”.

      • MonkeMischief@lemmy.today
        link
        fedilink
        arrow-up
        7
        arrow-down
        1
        ·
        2 months ago

        extremely good “search engines” or interactive versions of “stack overflow”

        Which is such a decent use of them! I’ve used it on my own hardware a few times just to say “Hey give me a comparison of these things”, or “How would I write a function that does this?” Or “Please explain this more simply…more simply…more simply…”

        I see it as a search engine that connects nodes of concepts together, basically.

        And it’s great for that. And it’s impressive!

        But all the hype monkeys out there are trying to pedestal it like some kind of techno-super-intelligence, completely ignoring what it is good for in favor of “It’ll replace all human coders” fever dreams.

      • Pennomi@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        8
        ·
        2 months ago

        Cherry picking the things it doesn’t do well is fine, but you shouldn’t ignore the fact that it DOES do some things easily also.

        Like all tools, use them for what they’re good at.

        • wischi@programming.dev
          link
          fedilink
          arrow-up
          9
          arrow-down
          5
          ·
          2 months ago

          I don’t think it’s cherry picking. Why would I trust a tool with way more complex logic, when it can’t even prevent three crosses in a row? Writing pretty much any software that does more than render a few buttons typically requires a lot of planning and thinking and those models clearly don’t have the capability to plan and think when they lose tic tac toe games.

          • Pennomi@lemmy.world
            link
            fedilink
            English
            arrow-up
            8
            arrow-down
            12
            ·
            2 months ago

            Why would I trust a drill press when it can’t even cut a board in half?

              • wischi@programming.dev
                link
                fedilink
                arrow-up
                4
                arrow-down
                2
                ·
                2 months ago

                I can’t speak for Lemmy but I’m personally not against LLMs and also use them on a regular basis. As Pennomi said (and I totally agree with that) LLMs are a tool and we should use that tool for things it’s good for. But “thinking” is not one of the things LLMs are good at. And software engineering requires a ton of thinking. Of course there are things (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/intellisense, macros, code snippets/templates can help with that and never was I bottle-necked by my typing speed when writing software.

                It was always the time I needed to plan the structure of the software, design good and correct abstractions and the overall architecture. Exactly the things LLMs can’t do.

                Copilot even fails to stick to coding style from the same file, just because it saw a different style more often during training.

                • Zexks@lemmy.world
                  link
                  fedilink
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  2 months ago

                  “I’m not against LLMs, I just never say anything useful about them and constantly point out how I can’t use them.” The other guy is right and you just proved his point.

            • wischi@programming.dev
              link
              fedilink
              arrow-up
              14
              arrow-down
              2
              ·
              edit-2
              2 months ago

              A drill press (or its inventors) doesn’t claim it can do that, but LLM makers claim their models can replace humans on a lot of thinking tasks. They even brag about test benchmarks, claim Bachelor’s, Master’s and PhD-level intelligence, and call them “reasoning” models, yet the models still fail to beat my niece at tic-tac-toe, who by the way doesn’t have a PhD in anything 🤣

              LLMs are typically good at things that appeared a lot during training. If you are writing software, there certainly are things the LLM saw a lot of during training. But this is actually the biggest problem: it will happily generate code that might look OK, even during PR review, but might blow up in your face a few weeks later.

              If they can’t handle things they did see during training (but sparsely, like tic-tac-toe), they can’t be trusted to produce code you should use in production. I wouldn’t trust any junior dev who doesn’t set their O right next to the two Xs.

              • Pennomi@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                2 months ago

                Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.

                I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)

                • wischi@programming.dev
                  link
                  fedilink
                  arrow-up
                  2
                  ·
                  2 months ago

                  Totally agree with that, and I don’t think anybody would see it as controversial. LLMs are actually good at a lot of things, but not at thinking, and typically not if you are an expert. That’s why LLMs know more about the anatomy of humans than I do, but probably not more than most people with a medical degree.

    • petey@aussie.zone
      link
      fedilink
      arrow-up
      2
      arrow-down
      1
      ·
      2 months ago

      It needs good feedback. Agentic systems like Roo Code and Claude Code run compilers and tests until it works (just gotta make sure to tell it to leave the tests alone)
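
      Roughly, those agentic loops boil down to something like this sketch. ask_model and apply_patch are hypothetical stand-ins, not the actual Roo Code or Claude Code internals, and pytest is just an example test runner:

      ```python
      import subprocess

      def ask_model(prompt: str) -> str:
          """Hypothetical LLM call; stands in for whatever backend the agent uses."""
          raise NotImplementedError

      def apply_patch(patch: str) -> None:
          """Hypothetical helper that writes the model's proposed edit to disk."""
          raise NotImplementedError

      def agent_loop(task: str, max_rounds: int = 5) -> bool:
          feedback = ""
          for _ in range(max_rounds):
              patch = ask_model(f"Task: {task}\nLast test output:\n{feedback}")
              apply_patch(patch)
              # The key idea: run the real tests and feed failures back to the model.
              result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
              if result.returncode == 0:
                  return True  # tests pass, stop iterating
              feedback = result.stdout + result.stderr
          return False  # gave up; a human takes over
      ```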

    • kkj@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      46
      arrow-down
      2
      ·
      2 months ago

      And that’s what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it’s imitating, but with zero understanding of why the original looked that way.

      • CanadaPlus@lemmy.sdf.org
        link
        fedilink
        arrow-up
        6
        arrow-down
        16
        ·
        edit-2
        2 months ago

        I mean, there’s about a billion ways it’s been shown to have actual coherent originality at this point, and so it must have understanding of some kind. That’s how I know I and other humans have understanding, after all.

        What it’s not is aligned to care about anything other than making plausible-looking text.

        • Jtotheb@lemmy.world
          link
          fedilink
          arrow-up
          14
          arrow-down
          1
          ·
          2 months ago

          Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

          Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

          And none of these tech companies even pretend that they’ve invented a caring machine that they just haven’t inspired yet. Don’t ascribe further moral and intellectual capabilities to server racks than do the people who advertise them.

          • CanadaPlus@lemmy.sdf.org
            link
            fedilink
            arrow-up
            2
            arrow-down
            2
            ·
            edit-2
            2 months ago

            Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

            You got the “originality” part there, right? I’m talking about tasks that never came close to being in the training data. Would you like me to link some of the research?

            Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

            Given that both biological and computer neural nets vary by orders of magnitude in size, that means pretty little. It’s true that one is based on continuous floats and the other is dynamic peaks, but the end result is often remarkably similar in function and behavior.

              • CanadaPlus@lemmy.sdf.org
                link
                fedilink
                arrow-up
                1
                ·
                2 months ago

                I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that’s not good enough, it’s easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you’re more interested in ignoring any empirical evidence, though.

                • Jtotheb@lemmy.world
                  link
                  fedilink
                  arrow-up
                  2
                  ·
                  2 months ago

                  That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

            • borari@lemmy.dbzer0.com
              link
              fedilink
              arrow-up
              1
              ·
              2 months ago

              It’s true that one is based on continuous floats and the other is dynamic peaks

              Can you please explain what you’re trying to say here?

              • CanadaPlus@lemmy.sdf.org
                link
                fedilink
                arrow-up
                1
                ·
                2 months ago

                Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating point number, and outgoing synapses are calculated from incoming synapses all at once (there’s no notion of time, it’s not dynamic). Biological neurons are binary: they either fire or they don’t. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it’s dynamic; they can peak at any time, and downstream neurons can begin to fire “early”.

                They do seem to be equivalent in some way, although AFAIK it’s unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
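
                To make the artificial half of that comparison concrete, a single unit in such a network is just this kind of arithmetic (a minimal sketch, not any particular framework):

                ```python
                import math

                def artificial_neuron(inputs, weights, bias):
                    """Weighted sum of incoming activations squashed by an activation function.
                    The output is a continuous float computed in one step - no spikes, no timing."""
                    z = sum(x * w for x, w in zip(inputs, weights)) + bias
                    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

                print(artificial_neuron([0.2, -1.0, 0.5], [0.8, 0.1, -0.4], bias=0.05))
                ```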

                • borari@lemmy.dbzer0.com
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  2 months ago

                  Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain though.

                  In a neural network, the neuron receives an input, performs a mathematical formula, and returns an output right?

                  Like you said we have no understanding of what exactly a neuron in the brain is actually doing when it’s fired, and that’s not considering the chemical component of the brain.

                  I understand why terminology was reused when experts were designing an architecture that was meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology is making it harder for laypeople to understand what a neural network is and what it is not now that those networks are a part of the zeitgeist thanks to the explosion of LLM’s and stuff.

  • Xerxos@lemmy.ml
    link
    fedilink
    arrow-up
    90
    arrow-down
    3
    ·
    2 months ago

    All programs can be written with one less line of code. All programs have at least one bug.

    By the logical consequence of these axioms, every program can be reduced to one line of code that doesn’t work.

    One day AI will get there.

    • gmtom@lemmy.world
      link
      fedilink
      arrow-up
      13
      ·
      2 months ago

      All programs can be written with one less line of code. All programs have at least one bug.

      The humble “Hello world” would like a word.

      • phx@lemmy.ca
        link
        fedilink
        arrow-up
        9
        ·
        2 months ago

        You can fit an awful lot of Perl into one line too if you minimize it. It’ll be completely unreadable to most anyone, but it’ll run

      • Amberskin@europe.pub
        link
        fedilink
        arrow-up
        21
        ·
        2 months ago

        Just to boast my old timer credentials.

        There is a utility program in IBM’s mainframe operating system, z/OS, that has been there since the 60s.

        It has just one assembly code instruction: a BR 14, which means basically ‘return’.

        The first version was bugged and IBM had to issue a PTF (patch) to fix it.

        • Rose@slrpnk.net
          link
          fedilink
          arrow-up
          3
          ·
          2 months ago

          Reminds me of how in some old Unix system, /bin/true was a shell script.

          …well, if it needs to just be a program that returns 0, that’s a reasonable thing to do. An empty shell script returns 0.

          Of course, since this was an old proprietary Unix system, the shell script had a giant header comment that said this is proprietary information and if you disclose this the lawyers will come at ya like a ton of bricks. …never mind that this was a program that literally does nothing.

        • DaPorkchop_@lemmy.ml
          link
          fedilink
          arrow-up
          10
          ·
          2 months ago

          Okay, you can’t just drop that bombshell without elaborating. What sort of bug could exist in a program which contains a single return instruction?!?

          • Amberskin@europe.pub
            link
            fedilink
            arrow-up
            2
            ·
            2 months ago

            It didn’t clear the return code. In mainframe jobs, successful executions are expected to return zero (in the machine R15 register).

            So in this case, fixing the bug required adding an instruction instead of removing one.