Cool, thanks for sharing!
I see you prompt it to “Make sure to only use knowledge found in the following audio transcription”. Have you found that sufficient to eliminate hallucination and going off track?
True. I guess they will require you to enter your own OpenAI/Anthropic/whatever API token, because there’s no way they can afford to do that centrally. Hopefully you can point it at whatever server you like (such as a self-hosted ollama or similar).
In my experiments, the Whisper models I can run locally are comparable to YouTube’s — which is to say, not production-quality, but certainly better than nothing.
I’ve also had some success cleaning up the output with a modest LLM. I suspect the VLC folks could do a good job with this, though I’m put off by the mention of cloud services. Depends on how they implement it.
Also works on Twitch with the added benefit of NOT playing ads (you still get breaks, just with a placeholder screen instead of the commercial).
mpv has yt-dlp support built in, so it can just play the streams directly.
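For anyone who hasn’t tried it, the usage is about as simple as it gets — you hand mpv the page URL and it shells out to yt-dlp to resolve the actual stream (channel/video names below are just placeholders):

```shell
# mpv resolves the stream via yt-dlp, so the page URL works directly
# (the channel name and video ID here are placeholders):
mpv https://www.twitch.tv/somechannel
mpv 'https://www.youtube.com/watch?v=VIDEO_ID'

# Cap the quality to save bandwidth; this format string is passed
# through to yt-dlp:
mpv --ytdl-format='best[height<=720]' https://www.twitch.tv/somechannel
```

yt-dlp just needs to be somewhere on your PATH for this to work.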
vd (VisiData) is a wonderful TUI spreadsheet program. It can read lots of formats, like csv, sqlite, and even nested formats like json. It supports Python expressions and replayable commands.
I find it most useful for large CSV files from various sources. Logs and reports from a lot of the tools I use can easily be tens of thousands of rows, and it can take many minutes just to open them in GUI apps like Excel or LibreOffice.
I frequently need to re-export fresh data, so I find myself re-processing and re-arranging it every time, which VisiData makes easy (well, easier) with its replayable command files. So e.g. I can write a script that opens a raw csv, adds a formula column, resizes all columns to fit their content, sets the column types as appropriate, and sorts it the way I need. That way I can go directly from exporting the data to reading it, with no preprocessing in between.
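A sketch of that workflow, in case it isn’t obvious (filenames here are hypothetical): you do the cleanup interactively once while VisiData records a command log, save the log, then replay it headlessly on every fresh export:

```shell
# Record once: do the cleanup interactively, then save the command
# log (Ctrl+D in VisiData) to something like cleanup.vdj.
vd raw_export.csv

# On each fresh export, replay the saved log in batch mode and
# write the processed result straight to a new file:
vd --batch --play cleanup.vdj raw_export.csv --output cleaned.csv
```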
The ideal amount of storage is enough that I literally never need to think about it, never need to delete anything, and never need to use cloud services for things that could realistically be local.
It’s hard to say what that would be because I’ve never had a phone that even came close.
The largest phone I’ve owned was 256GB. That was “fine”, but it was NOT big enough that I could fundamentally change my habits. For example, I don’t carry my entire music collection on my phone. I don’t even do that on my laptop anymore since the advent of SSDs.
I have a 128GB phone now and it sucks. I’ve set up a one-way copy to my home desktop with Syncthing so I can safely delete photos, videos, and screen recordings from my phone. I need to do this frequently.
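If anyone wants to replicate that setup: the part that makes deleting on the phone safe is, I believe, Syncthing’s advanced ignoreDelete option on the desktop’s copy of the folder — deletions from the phone are then never propagated. A rough sketch of the relevant fragment of the desktop’s config.xml (folder id and path are just examples; this option is also discouraged upstream, so treat it as use-at-your-own-risk):

```xml
<!-- Desktop side: accept new photos/videos from the phone, but
     ignore delete operations, so removing a file on the phone
     never removes the desktop copy. -->
<folder id="phone-camera" path="/home/me/phone-backup">
    <ignoreDelete>true</ignoreDelete>
</folder>
```

Setting the phone’s side to “Send Only” keeps the copy strictly one-way.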
With the standard price-gouging in the industry, I will probably settle for 256 with my next phone. If prices were reasonable, I’d go for 1TB at least.
I miss SD cards but there are no viable options with slots anymore.
Orbit currently uses a version of Mistral LLM (Mistral 7B) that is locally hosted on Mozilla’s Google Cloud Platform instance.
Hmm.
>locally hosted
>Google Cloud
Hmmmmmmmmmmmmmmmm.
Are we so desperate that we want what is basically malware ported to Linux? Ew. I didn’t tolerate that shit when I was running Windows, and I’m sure not going to start now.
I’ll just keep on voting with my wallet, and not pay money for such user-hostile products.