• TheImpressiveX@lemm.ee · 60 points · 10 hours ago

    Et tu, Brute?

    “VLC automatic subtitles generation and translation based on local and open source AI models running on your machine working offline, and supporting numerous languages!”

    Oh, so it’s basically like YouTube’s auto-generated subtitles. Never mind.

    • Nemeski@lemm.ee (OP) · 45 points · 10 hours ago

      Hopefully better than YouTube’s; those are often pretty bad, especially for non-English videos.

      • wazzupdog (they/them)@lemmy.blahaj.zone · 15 points · 9 hours ago

        They’re awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone whose accent differs from the team that developed the auto-captioning), it makes egregious errors; it’s exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and North Eastern US accents. In my experience “using” it, I find it nigh unusable.

      • MoSal@lemm.ee · 8 points · 9 hours ago

        I’ve been working on something similar-ish on and off.

        There are three (good) solutions involving open-source models that I came across:

        • KenLM/STT
        • DeepSpeech
        • Vosk

        Vosk has the best models, but they are large. You can’t use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices because of the memory requirements. So my guess is that whatever VLC provides will probably suck to an extent, because it will have to be fast and lightweight enough.

        What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is usually used).

        One core idea in my tool is to combine all alternatives into one text. So suppose the model predicts text to be either “… still he …” or “… silly …”. My tool can give you “… (still he|silly) …” instead of 50/50 chancing it.
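
        Roughly, the gist looks like this minimal sketch, assuming vosk-api’s Python bindings and a short 16 kHz mono WAV clip. The model path, file name, and the naive two-way merge are simplified placeholders, not exactly what my tool does:

        ```python
        import difflib
        import json
        import wave

        from vosk import Model, KaldiRecognizer

        wf = wave.open("speech.wav", "rb")  # short 16 kHz mono PCM clip
        rec = KaldiRecognizer(Model("model"), wf.getframerate())
        rec.SetMaxAlternatives(10)  # ask Vosk for up to 10 hypotheses

        while True:
            data = wf.readframes(4000)
            if len(data) == 0:
                break
            rec.AcceptWaveform(data)  # a real tool would collect rec.Result() per utterance

        alts = [a["text"] for a in json.loads(rec.FinalResult())["alternatives"]]

        def merge(a: str, b: str) -> str:
            """Keep words both hypotheses agree on; hedge the rest as "(a|b)"."""
            aw, bw = a.split(), b.split()
            out = []
            for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, aw, bw).get_opcodes():
                if op == "equal":
                    out.extend(aw[i1:i2])
                else:
                    left, right = " ".join(aw[i1:i2]), " ".join(bw[j1:j2])
                    out.append(f"({left}|{right})" if left and right else left or right)
            return " ".join(out)

        print(merge(alts[0], alts[1]) if len(alts) > 1 else alts[0])
        ```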

        • fartsparkles@sh.itjust.works · 6 points · 9 hours ago

          I love that approach you’re taking! So many times, even in shows with official subs, they’re wrong because of homonyms and I’d really appreciate a hedged transcript.

      • moosetwin@lemmy.dbzer0.com · 3 points · 3 hours ago

        YouTube’s removal of community captions was the first time I really started to hate YouTube’s management. They removed an accessibility feature for no good reason, making my experience significantly worse. I still haven’t found a replacement for it (at least, one that actually works).

        • moosetwin@lemmy.dbzer0.com · 3 points · 3 hours ago

          And if you are forced to use the auto-generated ones, remember: no [__] swearing either! As we all know, disabled people are small children who need to be coddled!

    • GenderNeutralBro@lemmy.sdf.org · 9 points · 9 hours ago

      In my experiments, the Whisper models I can run locally are comparable to YouTube’s — which is to say, not production-quality, but certainly better than nothing.

      I’ve also had some success cleaning up the output with a modest LLM. I suspect the VLC folks could do a good job with this, though I’m put off by the mention of cloud services. Depends on how they implement it.
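
      For the curious, a minimal sketch of the kind of local setup I mean, assuming the open-source openai-whisper package; the model size and file name are placeholders:

      ```python
      import whisper  # pip install openai-whisper

      model = whisper.load_model("small")       # weights are downloaded once, then cached
      result = model.transcribe("episode.mp3")  # runs fully offline after that

      # Timestamped segments, ready for LLM cleanup or SRT output.
      for seg in result["segments"]:
          print(f"{seg['start']:7.2f} -> {seg['end']:7.2f}  {seg['text'].strip()}")
      ```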

        • GenderNeutralBro@lemmy.sdf.org · 1 point · 7 hours ago

          Cool, thanks for sharing!

          I see you prompt it to “Make sure to only use knowledge found in the following audio transcription”. Have you found that sufficient to eliminate hallucinations and keep it from going off track?

          • troed@fedia.io · 2 points · 7 hours ago

            Yes, I have been impressed with how well the summaries keep to the content. I have seen rare attribution errors, though, where who said what got mixed up in unfortunate ways.

      • IrateAnteater@sh.itjust.works · 4 points · 8 hours ago

        Since VLC runs on just about everything, I’d imagine that the cloud service will be best for the many devices that just don’t have the horsepower to run an LLM locally.

        • GenderNeutralBro@lemmy.sdf.org · 2 points · 7 hours ago

          True. I guess they will require you to enter your own OpenAI/Anthropic/whatever API token, because there’s no way they can afford to run that centrally. Hopefully you can point it at whatever server you like (such as a self-hosted Ollama or similar).
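
          Something like this sketch would cover the self-hosted case, assuming Ollama’s OpenAI-compatible endpoint on its default port; the model name and prompt are placeholders:

          ```python
          from openai import OpenAI  # pip install openai

          # Any OpenAI-compatible server works; local Ollama ignores the API key.
          client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

          resp = client.chat.completions.create(
              model="llama3.2",
              messages=[{
                  "role": "user",
                  "content": "Fix the punctuation in this subtitle line: 'well thats not good'",
              }],
          )
          print(resp.choices[0].message.content)
          ```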

        • zurohki@aussie.zone · 1 point · 8 hours ago

          It’s not just computing power - you don’t always want your device burning massive amounts of battery.

  • FundMECFSResearch@lemmy.blahaj.zone · 98 points · 10 hours ago

    I know people are gonna freak out about the AI part in this.

    But as a person with hearing difficulties, this would be revolutionary. So much shit I usually just can’t watch because OpenSubtitles doesn’t have any subtitles for it.

    • kautau@lemmy.world · 50 points · edited · 7 hours ago

      The most important part is that it’s a local LLM running on your machine. The problem with AI is less about LLMs themselves and more about their control and application by unethical companies and governments in a world driven by profit and power. And this is none of those things; it’s just some open-source code running on your device. So that’s cool and good.

    • mormund@feddit.org · 23 points · 8 hours ago

      Yeah, transcription is one of the only good uses for LLMs, IMO. Of course they can still produce nonsense, but bad subtitles are better than none at all.

    • hushable@lemmy.world · 13 points · 8 hours ago

      Indeed. YouTube has had auto-generated subtitles for a while now, and they are far from perfect, yet I still find them useful.

  • pastaPersona@lemmy.world · 38 points · 9 hours ago

    I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.

    In most cases, people will go for (manually) written subtitles rather than autogenerated ones, so the use case here would most often be where better, human-created subs aren’t available.

    I just can’t see AI / autogenerated subtitles of any kind taking jobs from humans because they will always be worse/less accurate in some way.

    • x00z@lemmy.world · 13 points · 8 hours ago

      Autogenerated subtitles are pretty awesome for subtitle editors, I’d imagine.

        • glimse@lemmy.world · 4 points · 7 hours ago

          We started doing subtitling near the end of my time as an editor, and I had to create the initial English ones (god forbid we give the translation company another couple hundred bucks to do it), and yeah…the timestamps are the hardest part.

          I can type at 120 wpm, but that’s not very helpful when you can only write a sentence at a time.

          • Kazumara@discuss.tchncs.de · 3 points · edited · 5 hours ago

            “and yeah…the timestamps are the hardest part.”

            So, if you can tell us, how did the process work?

            Do you run the video and type the subtitles in some program at the same time, and it keeps track of the time at which you typed, which you then manually adjust for the best timing of the subtitle appearance? Or did you manually note down timestamps from the start?

            • glimse@lemmy.world · 3 points · 5 hours ago

              We were an Adobe house, so I did it inside of Premiere. I can’t remember if it was built in or a plugin, but there were two ways, depending on whether the shoot was scripted or ad-libbed. If it was scripted, I’d import a txt file into Premiere and break it apart as needed with markers on the timeline. It was tedious, but by far better than the alternative: manually typing it at each marker.

              I initially tried making all the markers first, but I kept running into issues with the timing. Subtitles have both a beginning and an end timestamp, and I often wouldn’t leave enough room to actually read them.
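
              For anyone who hasn’t seen one, that beginning/end pair is exactly what an SRT cue stores (timestamps made up for illustration):

              ```srt
              1
              00:00:12,500 --> 00:00:15,000
              It was tedious but by far better
              than the alternative.
              ```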

              This was over a decade ago; I’ll bet it’s gotten easier. I know Premiere has a transcription feature that’s pretty good.

              • Kazumara@discuss.tchncs.de · 3 points · edited · 5 hours ago

                That’s interesting, thank you.

                I only did it once, for a school project involving translation of a film scene (also over a decade ago), but we just manually wrote an SRT file; that was miserable 😄

          • DerArzt@lemmy.world · 3 points · 6 hours ago

            Is there a cross-section of people who do live subtitles and people who have experience as stenographers? Asking because I’d imagine a stenographic keyboard would let them keep up with what’s being said.

    • ArgentRaven@lemmy.world · 9 points · 8 hours ago

      Yeah, this is exactly what we should want from AI: filling an immediate need while recognizing it won’t be as good as a pro translation.

    • Kilgore Trout@feddit.it · 4 points · 5 hours ago

      “I can’t see how this could possibly be interpreted as a net negative here”

      Not judging this as bad or good, but if it’s generated offline, the bundled models will certainly bloat the size of the program.

  • Alice@beehaw.org · 22 points · 7 hours ago

    My experience with generated subtitles is that they’re awful. Hopefully these are better, but I wish human beings with brains would make them.

    • lime!@feddit.nu · 12 points · 5 hours ago

      subtitling by hand takes sooooo fucking long :( people who do it really are heroes. i did community subs on youtube when that was a thing and subtitling + timing a 20 minute video took me six or seven hours, even with tools that suggested text and helped align it to sound. your brain instantly notices something is off if the subs are unaligned.

      • Alice@beehaw.org · 4 points · 4 hours ago

        Oh shit, I knew it was tedious but it sounds like I seriously underestimated how long it takes. Good to know, and thanks for all you’ve done.

        Sounds to me like big YouTubers should pay subtitlers, but that’s still a small fraction of audio/video content in existence. So yeah, I guess a better wish would be for the tech to improve. Hopefully it’s on the right track.

        • lime!@feddit.nu · 1 point · 4 hours ago

          i just did it for one video :P it really is tedious and thankless though so it would be a great application of ml.

      • Nate@programming.dev · 1 point · 5 hours ago

        I did this for a couple of videos too. It’s actually still a thing; it was just so time-consuming for no pay that almost nobody did it, so creators don’t check the box to allow people to contribute subs.

    • OsrsNeedsF2P@lemmy.ml · 12 points · 4 hours ago

      IIRC this is because of how they’ve optimized the file-reading process; it genuinely might be more work to add efficient frame-by-frame backwards seeking than this AI subtitle feature.

      That said, jfc, please just add backwards seeking. It is so painful to use VLC for reviewing footage. I don’t care how “inefficient” it is; my computer can handle any operation on a 100 MB file.

  • moosetwin@lemmy.dbzer0.com · 14 points · 3 hours ago

    I don’t mind the idea, but I would be curious where the training data comes from. You can’t just train off of users’ (unsubtitled) videos, because you need subtitles to know whether the output is right or wrong. I checked their Twitter post, but it didn’t seem to help.

  • qyron@sopuli.xyz · 7 points · 5 hours ago

    Fuck no. Leave the subtitles alone. Make people learn something, like searching for and applying subtitle files, or actually have them write their own and give back, for a change.

        • qaz@lemmy.world · 2 points · 6 hours ago

          And what is AI? Is it being able to read text from images, or isn’t that AI anymore because it’s proven to be useful?

          • GrammarPolice@lemmy.world · 3 points · 5 hours ago

            Dude, what? My point is that this community loves to ramble on about how trash AI is, especially when used by big corporations. But when our beloved FOSS incorporates it, this community is head over heels with the idea.

            Y’all just look for different ways to shit on corporations, and AI was an avenue for that.