- cross-posted to:
- [email protected]
My thoughts are summarized by this quote from the article:
> Casey Fiesler, Associate Professor of Information Science at University of Colorado Boulder, told me in a call that while it’s good for physicians to be discouraged from putting patient data into the open-web version of ChatGPT, how the Northwell network implements privacy safeguards is important—as is education for users. “I would hope that if hospital staff is being encouraged to use these tools, that there is some significant education about how they work and how it’s appropriate and not appropriate,” she said. “I would be uncomfortable with medical providers using this technology without understanding the limitations and risks.”
It’s good to have an AI model running on the internal network to help with emails and such. A tool such as Perplexity could be good for parsing research articles, as long as the user clicks the links to follow up on the sources.
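To make "running on the internal network" concrete, here's a minimal sketch (not Northwell's actual setup) of how a hospital could query a locally hosted model so prompts never leave its own infrastructure. It assumes a model served through Ollama's chat API at a hypothetical internal hostname; the model name and endpoint are placeholders.

```python
# Minimal sketch: query a locally hosted model over the internal network.
# Assumes an Ollama server at a hypothetical internal host; nothing here
# is Northwell's actual architecture.
import requests

OLLAMA_URL = "http://llm.internal.example:11434/api/chat"  # hypothetical host

def draft_email_reply(message: str) -> str:
    """Ask the local model to draft a reply; no data leaves the network."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3.1",  # whatever model the hospital self-hosts
            "stream": False,
            "messages": [
                {"role": "system",
                 "content": "Draft a brief, professional email reply."},
                {"role": "user", "content": message},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    print(draft_email_reply("Can we move Thursday's staff meeting to 3pm?"))
```

The point of the design is simply that the endpoint resolves only inside the hospital network, so the privacy question becomes one of internal access controls rather than trusting a third-party web service.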
It’s not good to use it for tasks that traditional “AI” was already handling, because those narrow, purpose-built models don’t hallucinate and don’t require nearly as much processing power.
It absolutely should not be used for diagnosis or insurance claims.