What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot, etc., which are basically creepy privacy black boxes. How can you be sure that local LLMs A) don’t “phone home”, B) don’t build a profile on you, and C) keep their analysis restricted to the scope of your terminal? As far as I can see, #ollama and #lmstudio do not provide privacy statements.
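
One way to spot-check A) and C) yourself is to watch which sockets the server process holds while you use it. A minimal sketch with psutil, assuming the process is named “ollama” (this is only a point-in-time snapshot; a firewall rule or packet capture gives stronger guarantees):

```python
import psutil

# Crude "is this remote address local/private?" test; extend with your
# own ranges (e.g. 172.16.0.0/12) if your LAN uses them.
LOCAL_PREFIXES = ("127.", "10.", "192.168.", "::1", "fe80:")

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] != "ollama":
        continue
    # May raise psutil.AccessDenied without root if Ollama runs as a
    # system service under another user.
    for conn in proc.connections(kind="inet"):
        if conn.raddr and not conn.raddr.ip.startswith(LOCAL_PREFIXES):
            print(f"external connection: {conn.raddr.ip}:{conn.raddr.port}")
```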

  • fmstrat@lemmy.nowsci.com · edited 1 day ago

    To add to this, I run the same setup, but add Continue to VSCode. It provides an interface similar to Cursor that uses the Ollama instance (you can sanity-check that connection outside the editor, as sketched below).
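
    Continue only needs the Ollama HTTP endpoint. A minimal sketch, assuming the default port 11434 and that you’ve already pulled a model (“codellama” here is just an example name):

    ```python
    import requests

    # Hit the same local endpoint Continue talks to.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "codellama", "prompt": "Say hello.", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```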

    One thing to be careful of: the Ollama port has no authentication (ridiculous, but it is what it is).
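
    So if you set OLLAMA_HOST=0.0.0.0 to reach the instance from other machines, anyone on the LAN can use it too. A quick sketch, assuming the default port, to check which interfaces actually answer:

    ```python
    import socket

    def is_open(host: str, port: int = 11434) -> bool:
        try:
            socket.create_connection((host, port), timeout=1).close()
            return True
        except OSError:
            return False

    # gethostbyname(gethostname()) is a best-effort guess at the LAN
    # address; on some distros it returns 127.0.1.1, so substitute your
    # real IP in that case.
    for host in ("127.0.0.1", socket.gethostbyname(socket.gethostname())):
        print(host, "open" if is_open(host) else "closed")
    ```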

    You’ll need a card with 12-16 GB of VRAM for the recommended code-generation and autocomplete models, though you may be able to get away with an 8 GB card if it’s a second card in the system. You can also run on the CPU, but it’s very slow that way.
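
    Those numbers track the usual back-of-envelope math: weights at 4-bit quantization take roughly half a gigabyte per billion parameters, plus overhead for the context cache. A rough sketch (the 20% overhead factor is an assumption; real usage grows with context length):

    ```python
    def vram_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
        # Weights at the quantized width, plus ~20% for KV cache/buffers.
        return params_billion * bits / 8 * overhead

    for p in (7, 13, 34):
        print(f"{p}B @ 4-bit ≈ {vram_gb(p):.1f} GB")
    ```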

    @[email protected]