What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot etc., which are basically creepy privacy black boxes. How can you be sure that local LLMs A) don’t “phone home”, B) don’t build a profile on you, and C) keep their analysis restricted to your own machine? As far as I can see, #ollama and #lmstudio do not provide privacy statements.
The English word “free” actually carries two meanings: “free as in free food” (gratis) and “free as in free speech” (libre).
Ollama is both gratis and libre.
And about the money stuff: Ollama used to be Facebook’s proprietary model, an answer to ChatGPT and Bing Chat/Copilot. Facebook lagged behind the other players and they just said “fuck it, we’re going open-source”. That’s how and why it’s free.
Even though the models themselves are by design binary blobs, the code that loads and runs them is open-source. If it were connecting to the Internet and phoning home to Facebook, chances are the community would have found that out by now, given the open nature of the project.
Even if it weren’t open-source, since it runs locally you could at least block (or monitor) its Internet access yourself, as in the sketch below.
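For the “monitor” part, here is a minimal sketch using Python and psutil, assuming the process name contains “ollama” (adjust the filter for LM Studio or whatever you run); it just lists any non-loopback connections the process currently holds open. By default Ollama’s API only listens on 127.0.0.1:11434, so anything else showing up here would be worth a closer look.

```python
# Sketch: list outbound (non-loopback) connections held by any running
# "ollama" process. Assumes psutil is installed (pip install psutil) and
# that the process name contains "ollama"; adjust for your own setup.
import psutil


def outbound_connections(name_fragment="ollama"):
    findings = []
    for proc in psutil.process_iter(["pid", "name"]):
        if name_fragment not in (proc.info["name"] or "").lower():
            continue
        try:
            for conn in proc.connections(kind="inet"):
                # Skip loopback traffic; the local API talks over 127.0.0.1.
                if conn.raddr and conn.raddr.ip not in ("127.0.0.1", "::1"):
                    findings.append(
                        (proc.info["pid"], conn.raddr.ip, conn.raddr.port, conn.status)
                    )
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
    return findings


if __name__ == "__main__":
    for pid, ip, port, status in outbound_connections():
        print(f"pid {pid} -> {ip}:{port} ({status})")
```

Blocking, as opposed to just watching, is better done at the OS firewall level, but a periodic check like this already tells you whether the thing is talking to anyone.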
Basically, even though this is from Facebook, one of the big bads of privacy on the Internet, it’s all good in the end.
Just to be clear: Llama is the Facebook model; Ollama is the software that lets you run Llama (along with many other models).
Ollama has Internet access (otherwise how could it download models?). The only true privacy solution is to run it in a container with no Internet access after downloading the models, or to air-gap your computer.
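A rough sketch of the container route, using the docker SDK for Python: the “ollama/ollama” image name, the /root/.ollama model path, the host directory, and the model name are assumptions to adapt to your setup, and the image’s default entrypoint is expected to start the Ollama server. With networking set to “none” the container keeps only its loopback interface, so you talk to the model via exec inside the container rather than a published port.

```python
# Sketch: run Ollama in a Docker container with networking disabled,
# using the docker Python SDK (pip install docker). Assumes a host
# directory that already contains pulled models.
import time

import docker

client = docker.from_env()

container = client.containers.run(
    "ollama/ollama",
    detach=True,
    network_mode="none",  # no network interfaces except loopback
    volumes={"/srv/ollama-models": {"bind": "/root/.ollama", "mode": "rw"}},
    name="ollama-offline",
)

# Crude wait for the server inside the container to come up.
time.sleep(5)

# No published ports with network_mode="none", so exec into the container.
result = container.exec_run(["ollama", "run", "llama3", "Say hello in one sentence."])
print(result.output.decode())

container.stop()
container.remove()
```

Pull the models first in a networked run (or copy them into the host directory), then switch to this offline configuration for day-to-day use.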
Could you not just monitor/block outgoing traffic?
Thank you for the correction!
Great, thanks for this background!