I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?

  • PenisDuckCuck9001@lemmynsfw.com · ↑5 ↓1 · edited · 15 days ago

    One of the few things they’re good at is academic “cheating”. I’m not a fan of how the education industry has become a massive pyramid scheme intended to force as many people into debt as possible, so I see AI as the lesser evil and a way to fight back.

    Obviously no one is using AI to successfully do graduate research or anything. I’m just talking about how they take boring, easy subjects and load you up with pointless homework and assignments to waste your time rather than teach you anything. My homework is obviously AI-generated, and there’s a lot of it. I’m using every resource available to get by.

  • Lauchs@lemmy.world · ↑5 ↓2 · 15 days ago

    I think there’s a lot of armchair simplification going on here. It’s easy to call investors dumb, but it’s probably a bit more complex.

    AI might not get better than it is now, but if it does, it has the power to be a societally transformative tech, which means there is a boatload of money to be made. (Consider early investors in Amazon, Microsoft, Apple, and even the much-derided Bitcoin.)

    Then consider that until incredibly recently, the Turing test was the yardstick for intelligence. We now have to move that goalpost after what was previously unthinkable happened.

    And in the limited time with AI, we’ve seen scientific discoveries, terrifying advancements in war and more.

    Heck, even consider AI getting better at code (not unreasonable: these are sets of problems with defined goals, outputs, etc.). Even if it gets parts wrong, shrinking a dev team of obscenely well-paid engineers to maybe a handful of supervisory roles… well, like Wu-Tang said, Cash Rules Everything Around Me.

    Tl;dr: huge possibilities. Even if there’s only a small chance of an almost infinite payout, that’s a risk well worth taking.

  • kitnaht@lemmy.world · ↑9 ↓8 · 15 days ago

    Holy BALLS are you getting a lot of garbage answers here.

    Have you seen all the other things that generative AI can do? From bone rigging for 3D models, to animations recreated from a simple video, to recreations of voices, to art created by people without the talent for it. Many times these generative AIs are very quick at creating boilerplate that only needs some basic tweaks to be correct. This speeds up production work a hundredfold in a lot of cases.

    Plenty of simple answers come back correct, it’s breaking entrenched monopolies like Google’s on search, and I’ve even had these GPTs take input text and summarize it quickly, at different granularities for quick skimming. There are a lot of worthwhile things that can come out of these AIs. They can speed up workflows significantly.
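
    That summarize-at-a-chosen-granularity trick is also easy to script yourself. Here’s a minimal sketch, assuming the official openai Python client and an API key in your environment; the model name and prompt wording are my own assumptions, not anything from this thread:

    ```python
    # Hypothetical helper: summarize text at whatever granularity you ask for.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def summarize(text: str, granularity: str = "three bullet points") -> str:
        """Ask the model for a summary at the requested level of detail."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; swap in whichever you use
            messages=[{
                "role": "user",
                "content": f"Summarize the following in {granularity}:\n\n{text}",
            }],
        )
        return response.choices[0].message.content

    # summarize(article, "one sentence") for a skim,
    # summarize(article, "a detailed outline") when you need more depth.
    ```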

    • Kintarian@lemmy.world (OP) · ↑6 ↓3 · 15 days ago

      I’m a simple man. I just want to look up a quick bit of information. I ask the AI where I can find a setting in an app. It gives me the wrong information and the wrong links. That’s great that you can do all that, but for the average person, it’s kind of useless. At least it’s useless to me.

      • Feathercrown@lemmy.world · ↑2 · 15 days ago

        You aren’t really using it for its intended purpose. It’s supposed to be used to synthesize general information. It only knows what people talk about; if the subject is particularly specific, like the settings in one app, it will not give you useful answers.

      • kitnaht@lemmy.world · ↑2 ↓4 · edited · 15 days ago

        So you got the wrong information about an app once. When a GPT is scoring higher than 97% of human test takers on the SAT and other standardized tests, what does that tell you about average human intelligence?

        The thing about GPTs is that they are just word predictors. A lot of the time, when asked super specific questions about small subjects that people aren’t talking about, yeah, they’ll hallucinate. But they’re really good at condensing, categorizing, and regurgitating a wide range of topics quickly, which is amazing for most people.
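
        “Word predictor” is literal, by the way. Stripped of all the scale, the core mechanic is “given the words so far, pick a likely next word.” Here’s a toy sketch of that idea as a bigram model; the training sentence is made up, and real LLMs use neural networks over tokens rather than a lookup table:

        ```python
        # Toy "word predictor": count which word follows which in some training
        # text, then generate by repeatedly sampling a likely next word.
        import random
        from collections import Counter, defaultdict

        training_text = "the cat sat on the mat and the dog sat on the rug"
        words = training_text.split()

        # next_words["the"] -> Counter({"cat": 1, "mat": 1, "dog": 1, "rug": 1})
        next_words = defaultdict(Counter)
        for current, following in zip(words, words[1:]):
            next_words[current][following] += 1

        def generate(start, length=8):
            out = [start]
            for _ in range(length):
                candidates = next_words.get(out[-1])
                if not candidates:
                    break  # dead end: this word never had a successor
                choices, counts = zip(*candidates.items())
                out.append(random.choices(choices, weights=counts)[0])
            return " ".join(out)

        print(generate("the"))  # e.g. "the cat sat on the rug"
        ```

        An LLM is the same idea scaled up, with a neural network in place of the count table, which is also why it can confidently “predict” plausible-sounding text that isn’t true.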

        • Kintarian@lemmy.world (OP) · ↑4 ↓1 · 15 days ago

          It’s not just once. It has become such an annoyance that I quit using it and asked what the big deal is. I’m sure it’s great for creative and computer-nerd stuff, but for regular people sitting at home, listening to how awesome AI is and being underwhelmed, it’s not great. They keep shoving it down our throats, and plain old people are bailing.

          • Feathercrown@lemmy.world · ↑2 · 15 days ago

            tl;dr: It’s useful, but not necessarily for what businesses are trying to convince you it’s useful for

          • kitnaht@lemmy.world · ↑1 ↓1 · 15 days ago

            Yeah, see that’s the kicker. Calling this “computer nerd stuff” just gives away your real thinking on the matter. My high school daughters use this to finish their essay work quickly, and they don’t really know jack about computers.

            You’re right that old people are bailing - they tend to. They’re ignorant, they don’t like to learn new and better ways of doing things, they’ve raped our economy and expect everything to be done for them. People who embrace this stuff will simply run circles around those who don’t. That’s fine. Luddites exist in every society.

    • Feathercrown@lemmy.world · ↑2 · 15 days ago

      Yeah, I feel like people who have very strong opinions about what AI should be used for also tend to ignore the facts of what it can actually do. It’s possible for something to be both potentially destructive and used to excess for profit, and also an incredible technical achievement that could transform many aspects of our life. Don’t ignore facts about something just because you dislike it.

  • Tyrangle@lemmy.world · ↑10 ↓8 · 15 days ago

    This is like saying that automobiles are overhyped because they can’t drive themselves. When I code up a new algorithm at work, I’m spending an hour or two whiteboarding my ideas, then the rest of the day coding it up. AI can’t design the algorithm for me, but if I can describe it in English, it can do the tedious work of writing the code. If you’re just using AI as a Google replacement, you’re missing the bigger picture.
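
    To make that concrete, here’s the flavor of trade I mean; the English spec and the code are a made-up illustration, not output from any particular model:

    ```python
    # English spec I'd hand to the model:
    # "Given (timestamp_seconds, value) samples that may arrive out of order,
    #  return the mean value per whole minute, sorted by minute."
    from collections import defaultdict

    def resample_per_minute(samples):
        buckets = defaultdict(list)
        for timestamp, value in samples:
            buckets[int(timestamp // 60)].append(value)
        return [(minute, sum(vals) / len(vals))
                for minute, vals in sorted(buckets.items())]

    print(resample_per_minute([(130, 2.0), (65, 1.0), (70, 3.0)]))
    # [(1, 2.0), (2, 2.0)]
    ```

    The whiteboard part, deciding that bucketing by minute is the right design, is still on me; the model just saves the typing.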

      • Tyrangle@lemmy.world · ↑2 ↓2 · 15 days ago

        A lot of people are doing work that can be automated in part by AI, and there’s a good chance that they’ll lose their jobs in the next few years if they can’t figure out how to incorporate it into their workflow. Some people are indeed out of the workforce or in industries that are safe from AI, but that doesn’t invalidate the hype for the rest of us.

  • ProfessorScience@lemmy.world · ↑4 · 15 days ago

    When ChatGPT first started to make waves, it was a significant step forward in the ability for AIs to sound like a person. There were new techniques being used to train language models, and it was unclear what the upper limits of these techniques were in terms of how “smart” of an AI they could produce. It may seem overly optimistic in retrospect, but at the time it was not that crazy to wonder whether the tools were on a direct path toward general AI. And so a lot of projects started up, both to leverage the tools as they actually were, and to leverage the speculated potential of what the tools might soon become.

    Now we’ve gotten a better sense of what the limitations of these tools actually are, and of where these techniques can ultimately lead. But a lot of momentum remains. Projects that started up when the limits were unknown don’t just have the plug pulled the minute it seems like expectations aren’t matching reality. I mean, maybe some do. But most of the projects try to make the best of the tools as they are to keep the promises they made, for better or worse. And of course new ideas keep coming, and new entrepreneurs want a piece of the pie.

  • Feathercrown@lemmy.world · ↑10 ↓1 · 15 days ago

    Disclaimer: I’m going to ignore all moral questions here

    Because it represents a potentially large leap in the types of problems we can solve with computers. Previously, the only comparable tool we had to solve problems was the algorithm: fast, well-defined, and repeatable, but unable to deal with arbitrary or fuzzy inputs in a meaningful way. AI excels at dealing with fuzzy inputs (including natural language, which was a huge barrier previously), at the expense of speed and reliability. It’s basically an entire missing half of our toolkit.

    Be careful not to conflate AI in general with LLMs. AI is usually implemented as machine learning, which is a method of fitting an output to training data. LLMs are a specific instance of this, trained on language (hence, large language models). I suspect that if AI becomes more widely adopted, most users will be interacting with LLMs like you are now, but most of the business benefit will come from classifiers that have a more restricted input/output space. As an example, you could use ML to train a model that detects potentially suspicious bank transactions. The more data you have to sort through, the better AI can learn from it*, so I suspect the companies that have been collecting terabytes of data will start using AI to try to analyze it. I’m curious whether that will be effective.

    *technically it depends a lot on the training parameters
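
    To make the bank-transaction example concrete, here’s a minimal sketch of that kind of restricted input/output classifier. It assumes scikit-learn, and the features, labeling rule, and numbers are entirely synthetic, invented for illustration:

    ```python
    # Toy fraud flagger: restricted inputs (three numeric features) and a
    # restricted output (suspicious or not), unlike an LLM's open-ended text.
    # Assumes: pip install numpy scikit-learn. All data below is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    amount = rng.uniform(1, 10_000, n)   # transaction size in dollars
    hour = rng.integers(0, 24, n)        # hour of day
    abroad = rng.integers(0, 2, n)       # 1 if the card was used abroad
    X = np.column_stack([amount, hour, abroad])

    # Invented ground truth: large purchases abroad at odd hours are "suspicious".
    y = ((amount > 5_000) & (abroad == 1) & ((hour < 6) | (hour > 22))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    ```

    The asterisked caveat applies here too: with a labeling rule this clean the model looks great, while real transaction data is messier and far more imbalanced.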

    • Kintarian@lemmy.world (OP) · ↑2 · 15 days ago

      I suppose it depends on the data you’re using it for. I can see a computer looking through stacks of data in no time.

  • SpaceNoodle@lemmy.world · ↑82 ↓3 · 15 days ago

    Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.

    • Kintarian@lemmy.world (OP) · ↑14 ↓1 · 15 days ago

      I can see that. That guy over there has the new shiny toy. I want a new shiny toy. Give me a new shiny toy.

    • pimeys@lemmy.nauk.io · ↑37 ↓1 · 15 days ago

      And LLMs are mostly for investors, not for users. Investors see that you “do AI”, even if you just repackage GPT or Llama, and your Series A is 20% bigger.

  • Tylerdurdon@lemmy.world · ↑7 · 15 days ago

    • automation by companies so they can “streamline” their workforces.

    • innovation by “teaching” it enough to solve bigger problems (cancer, climate, etc.).

    • creating a sentient species that is the next evolution of life and watching it systematically eradicate every last human to save the planet.

  • aesthelete@lemmy.world · ↑3 · 14 days ago

    Tech company management loves the idea of ridding themselves of programmers and other knowledge workers, and AI companies love selling the idea of non-productivity-impacting layoffs to unsavvy companies (tech and otherwise).

  • xia@lemmy.sdf.org · ↑12 · 15 days ago

    The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.

    There is also an unnatural hype: the idea that with one breakthrough will come another, and that the next one might yield a technocratic singularity for the first mover: money, market dominance, and control.

    Which brings us to the tertiary effect (closer to your question)… companies are so quickly and blindly eating so many billions of dollars of first-mover costs that the corporate copium wants to believe there will be a return (or at least cost defrayal)… so you get a bunch of shitty AI products, and pressure towards them.

      • xia@lemmy.sdf.org · ↑3 · 15 days ago

        I’m not talking about one-offs and the assessment noise floor, more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPTs seem human… we actually train them to say otherwise, lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.

  • Kintarian@lemmy.world (OP) · ↑3 ↓1 · 15 days ago

    OK, I am working on a legal case. I asked Copilot to write a demand letter for me, and it is pretty damn good.

  • Kramkar@lemmy.world · ↑6 ↓6 · 15 days ago

    It’s understandable to feel frustrated when AI systems give incorrect or unsatisfactory responses. Despite these setbacks, there are several reasons why AI continues to be heavily promoted and integrated into various technologies:

    1. Potential and Progress: AI is constantly evolving and improving. While current models are not perfect, they have shown incredible potential across a wide range of fields, from healthcare to finance, education, and beyond. Developers are working to refine these systems, and over time, they are expected to become more accurate, reliable, and useful.

    2. Efficiency and Automation: AI can automate repetitive tasks and increase productivity. In areas like customer service, data analysis, and workflow automation, AI has proven valuable by saving time and resources, allowing humans to focus on more complex and creative tasks.

    3. Enhancing Decision-Making: AI systems can process vast amounts of data faster than humans, helping in decision-making processes that require analyzing patterns, trends, or large datasets. This is particularly beneficial in industries like finance, healthcare (e.g., medical diagnostics), and research.

    4. Customization and Personalization: AI can provide tailored experiences for users, such as personalized recommendations in streaming services, shopping, and social media. These applications can make services more user-friendly and customized to individual preferences.

    5. Ubiquity of Data: With the explosion of data in the digital age, AI is seen as a powerful tool for making sense of it. From predictive analytics to understanding consumer behavior, AI helps manage and interpret the immense data we generate.

    6. Learning and Adaptation: Even though current AI systems like Gemini, ChatGPT, and Microsoft Copilot make mistakes, feedback from user interactions feeds back into their training. That continuous feedback and retraining improves their performance over time, helping them better respond to queries and challenges.

    7. Broader Vision: The development of AI is driven by the belief that, in the long term, AI can radically improve how we live and work, advancing fields like medicine (e.g., drug discovery), engineering (e.g., smarter infrastructure), and more. Developers see its potential as an assistive technology, complementing human skills rather than replacing them.

    Despite their current limitations, the goal is to refine AI to a point where it consistently enhances efficiency, creativity, and decision-making while reducing errors. In short, while AI doesn’t always work perfectly now, the vision for its future applications drives continued investment and development.

    • 5gruel@lemmy.world · ↑3 ↓1 · 14 days ago

      When will people finally stop parroting this sentence? It completely misses the point and answers nothing.

      • Kanda@reddthat.com · ↑1 · 13 days ago

        Where’s the intelligence in suggesting glue on pizza? Or is it just copying random stuff and guessing what comes next, like a huge phone keyboard app?

    • Kintarian@lemmy.world (OP) · ↑8 ↓1 · 15 days ago

      It’s easier for the marketing department. According to an article, it’s neither artificial nor intelligent.

        • Kintarian@lemmy.world (OP) · ↑3 · 15 days ago

          Artificial intelligence (AI) is not “artificial” in the sense of being fake or counterfeit; rather, it is a human-created form of intelligence. AI is a real and tangible technology that uses algorithms and data to simulate human-like cognitive processes.

            • Kintarian@lemmy.world (OP) · ↑1 · 14 days ago

              Well, using the definition that artificial means man-made, then no. Human intelligence wasn’t made by humans, therefore it isn’t artificial.

              • canadaduane@lemmy.ca · ↑2 · 1 day ago

                I wonder if some of our intelligence is artificial. Being able to drive directly to any destination, for example, with a simple cell-phone lookup. Reading lifetimes’ worth of experience in books that doesn’t naturally come at birth. Learning incredibly complex languages that are inherited not through genes but through environment, and, depending on the language, being able to distinguish different colors.

                • Kintarian@lemmy.world (OP) · ↑1 · 1 day ago

                  From the day I was born, my environment shaped what I thought and felt. Entering the school system, I was indoctrinated into whatever society I was born to. All of the things that I think I know are shaped by someone else. I read a book and I regurgitate its contents to other people. I read a post online and I start pretending that it’s the truth, when I don’t actually know. How often do humans actually have an original thought? Most of the time we’re just regurgitating things that we’ve experienced, read, or heard from external forces rather than coming up with thoughts on our own.