What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot etc., which are basically creepy privacy black boxes. How can you be sure that local LLMs A) don't "phone home", B) don't build a profile on you, and C) keep their processing confined to your own machine? As far as I can see, #ollama and #lmstudio do not provide privacy statements.
Ollama and Open WebUI, as far as I know, are just open-source software projects created to run pre-trained models, with the same kind of business model as many other open-source projects on GitHub.
The models themselves come from Google, Meta and others; have a look at all the models available on Hugging Face. A model is just a binary file of pre-trained weights. Once downloaded, there are no ongoing costs to use it apart from the energy your computer spends running it.
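On the "phone home" worry: since the runtime is ordinary local software, you can audit it like any other process, e.g. by listing its open sockets with a tool like `lsof -i -P -n` and checking whether any remote address is actually external. Here is a minimal, hedged sketch of that last step, a small helper that classifies an IP address as loopback/private versus externally routable. The helper name and the workflow around it are my own illustration, not part of Ollama or LM Studio:

```python
import ipaddress

def is_external(addr: str) -> bool:
    """Return True if addr is an externally routable IP address --
    the kind of endpoint a purely local tool has no obvious reason
    to talk to. Loopback (127.0.0.1, ::1), private-range (10.x,
    192.168.x, etc.) and link-local addresses all return False."""
    ip = ipaddress.ip_address(addr)
    return not (ip.is_loopback or ip.is_private or ip.is_link_local)

# Example classifications (addresses are illustrative):
print(is_external("127.0.0.1"))    # loopback -> False
print(is_external("192.168.1.5"))  # private LAN -> False
print(is_external("8.8.8.8"))      # public internet -> True
```

In practice you would feed this the remote addresses you see in the socket listing for the model server's process; by default Ollama serves its API on 127.0.0.1:11434, so everything should stay on loopback. A blunter test is to simply cut the machine's network connection and confirm that inference still works.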
Thank you!