I've been using GPT4All locally, and it works pretty well. It's worth noting that a lot of these tools can run the same underlying models, so the difference is mostly in the UI ergonomics.
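For what it's worth, here's roughly what that looks like from the Python side, a quick sketch assuming the `gpt4all` Python bindings (the model filename is just an example; any model from the catalog should behave the same way):

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filename is an example; other GGUF models load the same way.
from gpt4all import GPT4All

# Downloads the model on first use, then loads it from the local cache.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Summarize why local LLMs are useful.", max_tokens=200)
    print(reply)
```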
Good point. I've been dabbling with Khoj myself, which connects to Obsidian, the Markdown note-taking software I use. That lets the AI draw from my notes on the fly.
I'm waiting to build a PC with an RTX 5090 or something before diving deeper into local AI. The integrated GPU I'm using right now is just too slow.
That's a pretty neat use case. And yeah, a dedicated GPU is pretty much a must for running models locally.