• ArchRecord@lemm.ee · 2 months ago

    This man doesn’t even know the difference between AGI and a text generation program, so it doesn’t surprise me he couldn’t tell the difference between that program and real, living human beings.

    He also seems to have deleted his LinkedIn account.

    • lemmydividebyzero@reddthat.com · 2 months ago

      AGI is currently just a buzzword anyway…

      Microsoft defines AGI in its contracts in terms of dollars of earnings…

      If you travelled back in time five years and showed the current best GPT to someone, they would probably accept it as AGI.

      I’ve seen multiple experts on German television explaining that LLMs will reach AGI within a few years…

      (That doesn’t mean the CEO guy isn’t a fool. Let’s wait for the first larger problem that requires not writing new code, but dealing with a bug, something undocumented, or similar…)

      • cynar@lemmy.world · 2 months ago

        LLMs can’t become AGIs. They have no ability to actually reason. What they can do is use predigested reasoning to fake it. It’s particularly obvious with certain classes of problems, where they fall down. I think the fact that they fake it so well tells us more about human intelligence than about AI.

        That being said, LLMs will likely be a critical part of a future AGI. Right now, they are a lobotomised speech centre. Different groups are already starting to tie them to other forms of AI. If we can crack building a reasoning engine, then a full AGI is possible. An LLM might even serve as its internal communication method, akin to our internal monologue.
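
        As a purely hypothetical sketch of that division of labour (every name below is invented, not any real system’s API): a non-LLM reasoning engine produces structured steps, and an LLM-like component only verbalises them, playing the role of the internal monologue.

        ```python
        # Hypothetical sketch, not a real architecture: reasoning happens in
        # a separate engine; the "LLM" only turns structured steps into
        # language, like an internal monologue.

        def reasoning_engine(goal: str) -> list[str]:
            # Stand-in for a real planner/solver; it just decomposes a goal.
            return [f"identify the sub-problems of '{goal}'",
                    "solve each sub-problem",
                    "combine the partial solutions"]

        def llm_verbalise(step: str) -> str:
            # Stand-in for an LLM call that renders a step as language.
            return f"Okay, next I should {step}."

        def agent(goal: str) -> None:
            for step in reasoning_engine(goal):  # reasoning lives here...
                print(llm_verbalise(step))       # ...verbalisation here

        agent("plan the experiment")
        ```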

        • Mikina@programming.dev · 2 months ago

          While I haven’t read the paper, the comment’s explanation seems to make sense. It supposedly contains a mathematical proof that creating AGI from a finite dataset is an NP-hard problem. I’ll have to read it and parse out the reasoning; if true, it would make for a great argument in cases like these.

          https://lemmy.world/comment/14174326
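
          For what it’s worth, here is a toy illustration (my own, not the paper’s actual proof) of why “find a function consistent with a finite dataset” blows up combinatorially when approached naively: over n input bits there are 2^(2^n) candidate Boolean functions, so brute-force search through the hypothesis space becomes hopeless almost immediately.

          ```python
          # Toy illustration only, not the paper's construction. Brute-force
          # "learning": enumerate every Boolean function over n input bits and
          # count those consistent with a finite dataset. The hypothesis space
          # has 2**(2**n) candidates, so this explodes almost immediately.
          from itertools import product

          def consistent_functions(dataset, n_bits):
              """Count truth tables over n_bits inputs agreeing with all examples."""
              inputs = list(product([0, 1], repeat=n_bits))
              count = 0
              for table in product([0, 1], repeat=len(inputs)):
                  f = dict(zip(inputs, table))  # one candidate function
                  if all(f[x] == y for x, y in dataset):
                      count += 1
              return count

          # 3 input bits: 2**8 = 256 candidate functions; three labelled
          # examples pin down 3 of the 8 outputs, leaving 2**5 = 32.
          data = [((0, 0, 0), 0), ((0, 1, 1), 1), ((1, 0, 1), 1)]
          print(consistent_functions(data, 3))  # -> 32
          ```

          (If the linked comment describes the paper correctly, its result concerns the NP-hardness of the learning problem itself; the snippet above only shows how large the naive search space is.)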

          • Redjard@lemmy.dbzer0.com · 2 months ago

            If that is true, how does the brain work?

            Call everything you have ever experienced the finite dataset.
            Constructing your brain from DNA happens in a timely manner.
            Training it does too: you get visibly smarter over time, on a roughly linear scale.