SurpriZe@lemm.ee to Asklemmy@lemmy.ml · 25 days ago
What model do you use in your GPT4all?
3 comments
Curious what model is best to use on my RTX 3080 + Ryzen 5 3600, since I've just found out about this.
geneva_convenience@lemmy.ml · 1 point · 24 days ago
Llama 3.1 8B; the other versions are too big to run on a GPU.
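For reference, loading that model from GPT4All's Python bindings looks roughly like the sketch below. This is a minimal example, not a definitive recipe: the exact model filename must match an entry in GPT4All's current model catalog, so the `Meta-Llama-3.1-8B-Instruct` GGUF name used here is an assumption to verify against the in-app model list.

```python
# Sketch: running a quantized Llama 3.1 8B through GPT4All's Python bindings
# (pip install gpt4all). The model filename below is an assumption -- check
# GPT4All's model catalog for the exact current name before using it.
from gpt4all import GPT4All

model = GPT4All(
    "Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf",  # assumed catalog name
    device="gpu",  # offload to the RTX 3080; GPT4All falls back to CPU if GPU init fails
)

# chat_session keeps conversation context between generate() calls
with model.chat_session():
    reply = model.generate(
        "Summarize why 8B-parameter models suit 10 GB of VRAM.",
        max_tokens=200,
    )
    print(reply)
```

The sizing logic behind the comment: a 4-bit (Q4_0) quantization of an 8B model is roughly 4-5 GB on disk, which fits in the 3080's 10 GB of VRAM with room for the KV cache, while 70B-class variants do not.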