Hayao Miyazaki is the co-founder of Studio Ghibli, a Japanese animation studio known worldwide for its stunning, emotional films. At the core of Studio Ghibli’s work is a deep engagement with questions of humanity: what it means to be human, and how to care for one another and the world […]
Have you heard of ollama? You can run DeepSeek and other models locally super easily. I know it’s not a complete replacement, but it feels nice to use an LLM guilt-free. I compared DeepSeek’s 14B distilled model against the paid version of ChatGPT, and it made me cancel my account.
I would prefer to run my AIs locally, but my brain glazes over if I see GitHub. I found a program called “gpt4all”, but it’s very limited in which models it can run, and what I could get just wasn’t as good for my use case as OpenAI’s 4o model. Also, being able to generate images in the same conversation as text work is a feature that I’m fairly certain no other AI model offers (yet).
I think what’s really happening behind the scenes is that the model you’re talking to makes a function call to another model that generates the image.
I haven’t seen it either, so if you want that and don’t want to code, it might be best to stick with paid, but something like that could easily exist somewhere else.
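That “function call” idea can be sketched in a few lines of Python. To be clear, every name here is a hypothetical stand-in, not any real provider’s API; it just shows the dispatch pattern:

```python
# Hypothetical sketch: a chat model that hands image requests off to a
# separate image model via a "tool call". None of these functions are real.

def image_model(prompt):
    # stand-in for a dedicated image-generation model
    return f"<image generated for: {prompt}>"

def chat_model(message):
    # if the chat model decides the user wants a picture, it emits a
    # structured tool call instead of answering directly
    if message.lower().startswith("draw"):
        return {"tool": "generate_image", "prompt": message}
    return {"text": "plain chat reply"}

def handle(message):
    # the surrounding app routes tool calls to the right model, so the
    # user only ever sees one conversation
    result = chat_model(message)
    if result.get("tool") == "generate_image":
        return image_model(result["prompt"])
    return result["text"]
```

The user never sees the hand-off, which is why it feels like one model doing both.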
What do you use to run it locally? If there were something that could do speech-to-text reliably enough to make an open-source option usable, I’d consider switching.
FWIW speech to text works really well on Apple stuff.
I’m not exactly sure what info you’re looking for, but: my gaming PC is headless and sits in a closet. I run ollama on it and connect using a client called “ChatBox”. It’s got an RTX 3060, which fits the whole model, so it’s reasonably fast. I’ve tried the 32b model and it does work, but slowly.
Honestly, ollama was so easy to set up that if you have any experience with computers I recommend giving it a shot. (Could be a great excuse to get a new GPU 😉)
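For anyone curious, the setup is roughly the following. This is a minimal sketch assuming a Linux box and the `deepseek-r1:14b` tag in the ollama library (check ollama.com/library for current model names):

```shell
# install ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# pull and chat with the 14B distilled DeepSeek model
ollama pull deepseek-r1:14b
ollama run deepseek-r1:14b

# to reach it from another machine (e.g. a client like ChatBox),
# bind the server to all interfaces instead of just localhost
OLLAMA_HOST=0.0.0.0 ollama serve
```

A client on another machine can then point at `http://<pc-ip>:11434`, ollama’s default port.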
I bet you’re right, but the fact that I never see it is a feature worth paying for, especially for a smooth-brain like myself.