doodle967@lemdro.id to Privacy@lemmy.ml · English · 5 months ago
Snowden: "They've gone full mask-off: do not ever trust OpenAI or its products" (twitter.com)
cross-posted to: [email protected]
classic@fedia.io · 5 months ago
Is there a magazine or site that breaks this down for the less tech savvy? And is the quality of the AI on par?
utopiah@lemmy.ml · 5 months ago
Check my notes at https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence but, as others suggested, a good way to start is probably https://github.com/ollama/ollama/ and, if you need a GUI, https://gpt4all.io
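For anyone who would rather script against a local model than use a GUI, here is a minimal sketch of calling Ollama's local HTTP API from Python. It assumes a default install serving on port 11434 and that a model (here "mistral") has already been pulled with `ollama pull mistral`; the helper name is just for illustration.

```python
# Minimal sketch: query a locally running Ollama server from Python.
# Assumes Ollama is serving on its default port (11434) and that
# `ollama pull mistral` has already been run.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "mistral") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_ollama("Explain self-hosted LLMs in one paragraph."))
```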
irreticent@lemmy.world · 5 months ago
I'm not the person who asked, but still, thanks for the information. I might give this a try soon.
classic@fedia.io · 5 months ago
Ditto, thanks to everyone for their suggestions.
Knock_Knock_Lemmy_In@lemmy.world · 5 months ago
> You should have at least 16 GB of RAM available to run the 13B models

Is this GPU RAM or CPU RAM?
KillingTimeItself@lemmy.dbzer0.com · 5 months ago
Likely GPU RAM. There is some tech that can offload to system RAM, but generally it's all hosted in VRAM. This requirement will likely fade as NPUs start becoming a thing, though.
reddithalation@sopuli.xyz · 5 months ago
Pretty sure it can run on either, but CPUs are slow compared to GPUs, often to the point of being impractical.
MalReynolds@slrpnk.net · 5 months ago
Either works, but system RAM is at least an order of magnitude slower; more play-by-mail than chat…
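For context on the 16 GB figure discussed above, here is a back-of-the-envelope sketch of the memory needed just to hold a 13B model's weights at different precisions. Real usage adds overhead (KV cache, activations), so treat these as rough lower bounds.

```python
# Back-of-the-envelope memory estimate for a 13B-parameter model.
# Rough numbers only; real usage adds overhead for the KV cache, activations, etc.
PARAMS = 13e9

for label, bytes_per_param in [
    ("fp16 (unquantized)", 2.0),
    ("8-bit quantized", 1.0),
    ("4-bit quantized", 0.5),
]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{label:20s} ~{gib:5.1f} GiB just for the weights")

# Prints roughly: fp16 ~24.2 GiB, 8-bit ~12.1 GiB, 4-bit ~6.1 GiB,
# which is why 16 GB is enough for a quantized 13B model but not an fp16 one.
```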
JPAKx4@lemmy.blahaj.zone · 5 months ago
On par? No. Good enough? Definitely. Ollama, baby.
Possibly linux@lemmy.zip · 5 months ago
Ollama with LLaVA and Mistral.
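A sketch of what the LLaVA half of that combo looks like in practice, using the same local Ollama API as above. It assumes `ollama pull llava` has been run; "photo.jpg" is a placeholder path.

```python
# Sketch: ask a local LLaVA model about an image via Ollama's generate API.
# Assumes `ollama pull llava` has been run and the server is on its default port.
# "photo.jpg" is a placeholder path.
import base64
import json
import urllib.request

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = json.dumps({
    "model": "llava",
    "prompt": "Describe this image.",
    "images": [image_b64],  # multimodal models accept base64-encoded images
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```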
aStonedSanta@lemm.ee · 5 months ago
Your best bet is YouTubing Ollama.