I agree. (Digital artist here) I hate the AI hype machine madness, but I also understand trying to find ways to make it ethical and useful for eliminating tedium. As it should have been in the first place.
In the tech sphere, they likely have to play along to stay relevant since everybody’s all hyped about it.
How’s this local LLM of yours? Does it tend to be more accurate than Recognize? Is it integrated into Nextcloud?
Recognizing objects and people in pictures locally is one of the best uses, I think.
…and maybe if it can stop auto-tagging random EDM songs as “country” or “rap” when they sound nothing like those genres, I’d be excited about that! 😂
I’ve set up OpenWebUI with the Docker containers, which include Ollama in API mode and, optionally, Playwright if you want to add web scraping to your RAG queries. This gives you a ChatJippity-style webpage where you can manage your models for Ollama, and add OpenAI usage as well if you want. You can manage all the users too.
On top of that, you get API access to your own Ollama instance, and you can also configure GPU usage for your local AI if available. Honestly, it’s the easiest way to get local AI.
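If you’re curious what talking to that Ollama API looks like once the containers are up, here’s a minimal sketch in Python. It assumes Ollama’s default port 11434 and uses “llama3” as a placeholder model name; swap in whatever you’ve actually pulled. It hits Ollama directly, not through Open WebUI.

```python
# Minimal sketch: query your own Ollama instance over its REST API.
# Assumes Ollama is reachable on the default port 11434 and that a
# model has already been pulled ("llama3" here is just a placeholder).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Summarize why local LLMs are useful, in one sentence."))
```

Point that URL at any box on your LAN that runs Ollama (GPU or not) and the same call works unchanged.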
The AI support doesn’t hurt you if you don’t use it - and they’ve done the right thing by making sure you can do things locally instead of in the cloud.
Here’s what AI does for me (self-hosted, my own scripts) on NC 9:
When our phones sync photos to Nextcloud, a local LLM creates image descriptions for all the photos, as well as five tags for each.
It is absolutely awesome.
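The core loop of such a script could look roughly like this (a minimal sketch only: llava as a stand-in vision model, the watched folder path, the prompt wording, and the JSON-sidecar output are all assumptions here, not the exact setup):

```python
# Minimal sketch: describe and tag newly synced photos with a local
# vision model via Ollama. The model name, folder path, prompt, and
# JSON-sidecar output are illustrative assumptions.
import base64
import json
from pathlib import Path
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
PHOTO_DIR = Path("/srv/nextcloud/data/user/files/Photos")  # hypothetical path

def describe(image_path: Path) -> dict:
    img_b64 = base64.b64encode(image_path.read_bytes()).decode()
    payload = json.dumps({
        "model": "llava",  # any multimodal model pulled into Ollama
        "prompt": ("Describe this photo in one sentence, then give exactly "
                   "five comma-separated tags on a second line."),
        "images": [img_b64],  # Ollama accepts base64 images for vision models
        "stream": False,
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        text = json.loads(resp.read())["response"]
    description, _, tag_line = text.partition("\n")
    tags = [t.strip() for t in tag_line.split(",")][:5]
    return {"description": description.strip(), "tags": tags}

for photo in PHOTO_DIR.glob("*.jpg"):
    sidecar = photo.with_suffix(".json")
    if not sidecar.exists():  # skip photos we've already processed
        sidecar.write_text(json.dumps(describe(photo), indent=2))
```

Pushing the tags back into Nextcloud itself (so Photos and Files can search on them) would additionally mean talking to Nextcloud’s system-tags endpoints, which this sketch skips.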
@troed: That sounds awesome, and it’s something on my todo list. Do you mind sharing how your upload-and-tagging AI scripts work?