What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot etc., which are basically creepy privacy black boxes. How can you be sure that local LLMs A) don't "phone home", B) don't build a profile on you, and C) keep their analysis restricted to the scope of your machine? As far as I can see, #ollama and #lmstudio do not provide privacy statements.
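On A), one thing you can do instead of trusting a privacy statement is watch which sockets the runtime actually opens while it's generating. Here's a minimal sketch using Python with the third-party psutil package; the process name "ollama" is just an assumption, swap in whatever your runtime is called, and note you may need elevated privileges to inspect processes owned by other users:

```python
# Hypothetical spot-check: list network connections opened by a local LLM
# runtime while it runs. Requires the psutil package (pip install psutil).
import psutil

TARGET = "ollama"  # assumed process name; could be "lm-studio", "gpt4all", etc.

for proc in psutil.process_iter(["pid", "name"]):
    name = proc.info["name"] or ""
    if TARGET in name.lower():
        try:
            conns = proc.connections(kind="inet")  # TCP/UDP sockets of this process
        except psutil.AccessDenied:
            print(f"pid {proc.info['pid']}: access denied")
            continue
        if not conns:
            print(f"pid {proc.info['pid']} ({name}): no sockets open")
        for c in conns:
            remote = f"{c.raddr.ip}:{c.raddr.port}" if c.raddr else "-"
            print(f"pid {proc.info['pid']}: {c.laddr.ip}:{c.laddr.port} -> {remote} [{c.status}]")
```

Seeing no connections (or only 127.0.0.1 listeners) while you prompt the model is a decent, if not airtight, signal that nothing is phoning home; a per-application firewall rule is the stricter option.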
I’m running gpt4all on AMD. Had to figure out which packages to install, which took a while, but since then it has run just fine.
Good to know. Is there a particular guide that you followed to get it running on AMD?
The Arch Wiki, plus the gpt4all GitHub repo and its issues.