Article text, to avoid the paywall:

The Food and Drug Administration is planning to use artificial intelligence to “radically increase efficiency” in deciding whether to approve new drugs and devices, one of several top priorities laid out in an article published Tuesday in JAMA.

Another initiative involves a review of chemicals and other “concerning ingredients” that appear in U.S. food but not in the food of other developed nations. And officials want to speed up the final stages of a drug or medical device approval decision to mere weeks, citing the success of Operation Warp Speed during the Covid pandemic, when workers raced to curb a spiraling death count.

“The F.D.A. will be focused on delivering faster cures and meaningful treatments for patients, especially those with neglected and rare diseases, healthier food for children and common-sense approaches to rebuild the public trust,” Dr. Marty Makary, the agency commissioner, and Dr. Vinay Prasad, who leads the division that oversees vaccines and gene therapy, wrote in the JAMA article.

The agency plays a central role in pursuing the agenda of the U.S. health secretary, Robert F. Kennedy Jr., and it has already begun to press food makers to eliminate artificial food dyes. The new road map also underscores the Trump administration’s push to smooth the way for major industries, with an array of efforts aimed at getting products to pharmacies and store shelves quickly.

Some aspects of the proposals outlined in JAMA were met with skepticism, particularly the idea that artificial intelligence is up to the task of shearing months or years from the painstaking work of examining applications that companies submit when seeking approval for a drug or high-risk medical device.

“I don’t want to be dismissive of speeding reviews at the F.D.A.,” said Stephen Holland, a lawyer who formerly advised the House Committee on Energy and Commerce on health care. “I think that there is great potential here, but I’m not seeing the beef yet.”

  • ALoafOfBread@lemmy.ml · 8 months ago

    This could be a good use of AI. Since this regime is doing it, and since some of their claims are pretty unrealistic, it probably won’t be. But ML has been used for a while to help identify new drug compounds, find interactions, and so on. It could be very useful in the FDA’s work; I’m honestly surprised to hear that they’re only just now considering using it.

    The Four Thieves Vinegar Collective uses software from MIT, ASKCOS, which uses neural networks to help identify reactions and retrosynthesis chains for producing chemical compounds with cheap, homemade bioreactors. Famously, they are doing this to make mifepristone available to people in areas of the US without access to abortion care.

    You can check it out here. It’s a good example of a very positive use case for an AI/ML tool in medicine; a rough sketch of the kind of step these tools automate follows below.
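
    To make that concrete, here is a minimal sketch of the single template-application step that neural-net-guided retrosynthesis tools such as ASKCOS run and rank thousands of times. It uses RDKit rather than ASKCOS’s own interface, and the ester-disconnection template and aspirin target are illustrative stand-ins, not anything from ASKCOS or Four Thieves:

    ```python
    # Minimal sketch: apply one retrosynthesis template with RDKit.
    # The template disconnects an ester into an acid plus an alcohol or
    # phenol; neural-net-guided tools score and chain thousands of such
    # templates to propose full synthesis routes.
    from rdkit import Chem
    from rdkit.Chem import AllChem

    # Target molecule: aspirin (an illustrative stand-in)
    target = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

    # Retro-template written as an RDKit reaction:
    # ester pattern >> acid + alcohol/phenol precursors
    retro = AllChem.ReactionFromSmarts(
        "[C:1](=[O:2])O[#6:3]>>[C:1](=[O:2])[OH].[OH][#6:3]"
    )

    for precursors in retro.RunReactants((target,)):
        parts = []
        for mol in precursors:
            Chem.SanitizeMol(mol)  # products come back unsanitized
            parts.append(Chem.MolToSmiles(mol))
        print(" + ".join(parts))  # acetic acid + salicylic acid
    ```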

    • Dasus@lemmy.world · 8 months ago

      Properly implemented machine learning, sure.

      These dimwits are genuinely just gonna feed everything to a second-rate LLM and treat the output as the word of God.

  • SocialMediaRefugee@lemmy.world · 8 months ago

    Sure, put testing and verification in the hands of a new and unproven technology just to save a few bucks. Don’t worry, the ramifications are trivial; it’s just drug safety.

  • Treczoks@lemmy.world · 8 months ago

    Oh my God. The reasons I’m happy not to be an American stack higher every week.

  • SocialMediaRefugee@lemmy.world · 8 months ago

    The same AI that, time after time, even when I tell it the version of the app and OS I’m using, keeps giving me commands that are incompatible with my version? If I tell it the command doesn’t work, it eventually loops back to its original suggestion.

  • postmateDumbass@lemmy.world · 8 months ago

    Final-stage capitalism: purging all the experts (at catching bullshit from applicants) before the agencies train the AI with newb-level inputs.

    • Lost_My_Mind@lemmy.world · 8 months ago

      Wait…only one? I’ve been eating several, to help break down foods inside my gizzard.

      BAAAAWWWWKKKKKK

  • OCATMBBL@lemmy.world · 8 months ago

    So we’re going to depend on AI, which can’t reliably remember how many fingers humans have, to take over medical science roles. Neat!

    • 3abas@lemm.ee · 8 months ago

      Different types of AI, different training data, different expectations and outcomes. Generative AI is but one use case.

      It’s already been proven a useful tool in research when directed and used correctly by an expert. It’s a tool to give to scientists to assist them, not replace them.

      If your goal is to use AI to replace people, you’ve got a bad surprise coming.

      If you’re not equipping your people with the skills and tools of AI, your people will become obsolete in short order.

      Learn AI and how to utilize it as a tool. You can train your own model on your own private data and locally interrogate it to do analysis that typically isn’t possible in real time (a rough sketch of the local-and-private idea is below). Learn the goods and bads of the technology, and let your ethics guide how you use it, but stop dismissing revolutionary technology because the earlier generative models weren’t reinforced enough to get fingers right.
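
      As a minimal sketch of that local-and-private idea (the lighter-weight retrieval variant rather than full training): embed your documents with a small open model on your own machine and answer queries by similarity search. The model name, documents, and query helper here are illustrative assumptions, not anything from this thread:

      ```python
      import numpy as np
      from sentence_transformers import SentenceTransformer

      # Small open embedding model that runs entirely on the local machine
      model = SentenceTransformer("all-MiniLM-L6-v2")

      # Stand-in "private data" that never leaves this process
      documents = [
          "Batch 42 stability testing showed degradation above 40 C.",
          "Adverse event reports for compound X rose in Q3.",
          "Placebo arm retention was 94% at week 12.",
      ]
      doc_vecs = model.encode(documents, normalize_embeddings=True)

      def query(text: str, k: int = 2) -> list[str]:
          """Return the k documents most similar to the query, by cosine score."""
          q = model.encode([text], normalize_embeddings=True)[0]
          scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
          return [documents[i] for i in np.argsort(scores)[::-1][:k]]

      print(query("Which batches had stability problems?"))
      ```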

      • OCATMBBL@lemmy.world · 8 months ago

        I’m not dismissing its use. It is a useful tool, but it cannot replace experts at this point, or maybe ever (and I’m gathering you agree on this).

        If it ever does get to that point, we also need to remedy the massive social consequences of revoking those same experts’ ability to earn a reasonable living.

        I was being a little silly for effect.

      • cley_faye@lemmy.world · 8 months ago

        “when directed and used correctly by an expert”

        They’re also likely to fire the experts.

  • RememberTheApollo_@lemmy.world · 8 months ago

    Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.

    • ZILtoid1991@lemmy.world · 8 months ago

      That’s an underestimate, since it doesn’t factor in the knock-on effect of the laxer regulations: people will try to sell all kinds of crap as “medicine”.

    • 800XL@lemmy.world · 8 months ago

      If it actually ends up being an AI, and not just some Trump cuck stooge masquerading as AI picking whichever drug comes from the company that gave the largest bribe to Trump, I 100% guarantee this AI will be trained only on papers written by non-peer-reviewed, drug-company-paid “scientists” full of made-up narratives.

      Those of us prescribed the drugs will be the guinea pigs, because R&D costs money and hits the bottom line. The many deaths will be conveniently scapegoated onto “the AI” that the morons in charge promised is smarter and more efficient than a person.

      Fuck this shit.

  • cley_faye@lemmy.world · 8 months ago

    Things LLMs can’t do well on a large corpus of data without extensive checking:

    • summarizing
    • providing informed opinions

    What is it they want to make “more efficient” again? Digesting thousands of documents, filtering an extremely specific subset of data, and shortening the output?

    Oh.

  • NocturnalMorning@lemmy.world · 8 months ago

    Eventually a “utopian” society will just be filled with A.I. talking to other A.I. and training more A.I. to do A.I. things. No need for humans; those don’t have any value.

    • gcheliotis@lemmy.world · 8 months ago

      Or maybe that is part of the allure of automation: the eschewing of human responsibility, such that any bias in decision-making appears benign (the computer deemed it so; no one’s at fault) and any errors, if recognized as such at all, become simply a matter of bug-fixing or model fine-tuning. The more inscrutable the model, the better, in that sense. The computer becomes an oracle, and no one’s to blame for its divinations.

      • AnarchistArtificer@lemmy.world · 8 months ago

        I saw a paper a while back that argued that AI systems are being used as “moral crumple zones”. For example, an AI used for health insurance claims allows the company to reject medically necessary procedures without employees incurring as much moral injury (even low-level customer service reps are likely to find comfort in being able to defer to the system). It’s an interesting concept that I’ve thought about a lot since I found it.

        • gcheliotis@lemmy.world · 8 months ago

          I can absolutely see that. And I don’t think it’s AI-specific; it’s got to do with delegating responsibility to a machine. Of course, AI in the guise of LLMs can make things worse with its low interpretability, where it might be even harder to trace anything back to an executive or clerical decision.

      • 2d4_bears@lemmy.blahaj.zone · 8 months ago

        I am convinced that law enforcement wants intentionally biased AI decision makers so that they can justify doing what they’ve always done with the cover of “it’s not racist because a computer said so!”

        The scary part is most people are ignorant enough to buy it.