• TheImpressiveX@lemm.ee · 62 points · 11 hours ago

    Et tu, Brute?

    VLC automatic subtitles generation and translation based on local and open source AI models running on your machine working offline, and supporting numerous languages!

    Oh, so it’s basically like YouTube’s auto-generated subtitles. Never mind.

    • Nemeski@lemm.ee (OP) · 47 points · 11 hours ago

      Hopefully better than YouTube’s; those are often pretty bad, especially for non-English videos.

      • wazzupdog (they/them)@lemmy.blahaj.zone · 15 points · 9 hours ago

        They’re awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone except those with accents similar to the team that developed the auto-captioning), it makes egregious errors; it’s exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and North Eastern US accents. In my experience “using” it, I find it nigh unusable.

      • MoSal@lemm.ee · 8 points · 10 hours ago

        I’ve been working on something similar-ish on and off.

        There are three (good) solutions involving open-source models that I came across:

        • KenLM/STT
        • DeepSpeech
        • Vosk

        Vosk has the best models, but they are large. You can’t use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices because of the memory requirements. So my guess is that whatever VLC provides will probably suck to an extent, because it will have to be fast/lightweight enough.

        What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is usually used).

        One core idea in my tool is to combine all alternatives into one text. So suppose the model predicts text to be either “… still he …” or “… silly …”. My tool can give you “… (still he|silly) …” instead of 50/50 chancing it.
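
        Here’s a rough sketch of the merging idea using vosk-api and Python’s difflib for the word alignment. It only merges the top two alternatives and is a simplified illustration, not exactly what my tool does:

            # Sketch: merge Vosk n-best alternatives into a hedged transcript.
            # Assumes a downloaded Vosk model and a 16-bit mono PCM WAV input.
            import difflib
            import json
            import wave

            from vosk import KaldiRecognizer, Model

            def hedge(alt_a: str, alt_b: str) -> str:
                """Merge two alternative transcripts, marking disagreements as (a|b)."""
                a, b = alt_a.split(), alt_b.split()
                out = []
                for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
                    if op == "equal":
                        out.extend(a[i1:i2])
                    else:
                        left, right = " ".join(a[i1:i2]), " ".join(b[j1:j2])
                        out.append(f"({left}|{right})" if left and right else left or right)
                return " ".join(out)

            model = Model("vosk-model-en-us-0.22")   # path to a downloaded model
            wf = wave.open("audio.wav", "rb")
            rec = KaldiRecognizer(model, wf.getframerate())
            rec.SetMaxAlternatives(10)               # ask Vosk for n-best hypotheses

            while True:
                data = wf.readframes(4000)
                if not data:
                    break
                if rec.AcceptWaveform(data):
                    alts = json.loads(rec.Result()).get("alternatives", [])
                    if len(alts) >= 2:
                        print(hedge(alts[0]["text"], alts[1]["text"]))
                    elif alts:
                        print(alts[0]["text"])

            # hedge("still he went home", "silly went home") -> "(still he|silly) went home"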

        • fartsparkles@sh.itjust.works · 6 points · 10 hours ago

          I love the approach you’re taking! So many times, even in shows with official subs, the subs are wrong because of homonyms, and I’d really appreciate a hedged transcript.

      • moosetwin@lemmy.dbzer0.com · 3 points · 4 hours ago

        YouTube’s removal of community captions was the first time I really started to hate YouTube’s management. They removed an accessibility feature for no good reason, making my experience significantly worse. I still haven’t found a replacement for it (at least, one that actually works).

        • moosetwin@lemmy.dbzer0.com · 3 points · 4 hours ago

          And if you are forced to use the auto-generated ones, remember: no [__] swearing either! As we all know, disabled people are small children who need to be coddled!

    • GenderNeutralBro@lemmy.sdf.org · 9 points · 9 hours ago

      In my experiments, the Whisper models I can run locally are comparable to YouTube’s, which is to say not production-quality, but certainly better than nothing.

      I’ve also had some success cleaning up the output with a modest LLM. I suspect the VLC folks could do a good job with this, though I’m put off by the mention of cloud services. Depends on how they implement it.
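
      For what it’s worth, the kind of local pipeline I mean looks roughly like this, using the openai-whisper package plus a local Ollama server for the cleanup pass (the model names and prompt are just examples, not anything VLC has committed to):

          # Rough sketch of a local transcribe-then-clean-up pipeline.
          # Assumes openai-whisper is installed and an Ollama server runs on localhost.
          import requests
          import whisper

          model = whisper.load_model("small")             # runs fully locally
          result = model.transcribe("episode_audio.wav")  # returns text plus timed segments

          raw_text = result["text"]

          # Second pass: ask a small local LLM to fix casing/punctuation only.
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3.2",   # whatever model is pulled locally
                  "prompt": "Fix punctuation and obvious transcription errors, "
                            "changing as little as possible:\n\n" + raw_text,
                  "stream": False,
              },
              timeout=300,
          )
          print(resp.json()["response"])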

        • GenderNeutralBro@lemmy.sdf.org · 1 point · 8 hours ago

          Cool, thanks for sharing!

          I see you prompt it to “Make sure to only use knowledge found in the following audio transcription”. Have you found that sufficient to eliminate hallucinations and keep it from going off track?

          • troed@fedia.io · 2 points · 8 hours ago

            Yes, I’ve been impressed with how well the summaries keep to the content. I have seen rare attribution errors, though, where who said what got mixed up in unfortunate ways.

      • IrateAnteater@sh.itjust.works · 4 points · 9 hours ago

        Since VLC runs on just about everything, I’d imagine that the cloud service will be best for the many devices that just don’t have the horsepower to run an LLM locally.

        • GenderNeutralBro@lemmy.sdf.org · 2 points · 8 hours ago

          True. I guess they’ll require you to enter your own OpenAI/Anthropic/whatever API token, because there’s no way they can afford to do that centrally. Hopefully you can point it at whatever server you like (such as a self-hosted Ollama or similar).
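
          Something like this would be enough on the client side if they expose a configurable OpenAI-compatible endpoint (Ollama already serves one at /v1). Purely hypothetical, nothing VLC has announced:

              # Hypothetical "bring your own endpoint" setup: any OpenAI-compatible
              # client can point at a self-hosted Ollama server instead of a paid API.
              from openai import OpenAI

              client = OpenAI(
                  base_url="http://localhost:11434/v1",  # local Ollama, not api.openai.com
                  api_key="unused",                      # Ollama ignores it; the client requires one
              )

              resp = client.chat.completions.create(
                  model="llama3.2",  # any locally pulled model
                  messages=[{"role": "user", "content":
                             "Clean up this subtitle line: 'i cant here you'"}],
              )
              print(resp.choices[0].message.content)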

        • zurohki@aussie.zone · 1 point · 8 hours ago

          It’s not just computing power - you don’t always want your device burning massive amounts of battery.