• moomoomoo309@programming.dev · 4 points · 1 day ago

    Sure, who will it impersonate if you don’t? That’s where the bias comes in.

    And yes, they do need a guide, because the way chatbots behave is not intuitive or clear. There's lots of weird emergent behavior in them that even experts don't fully understand (see OpenAI's 4o sycophancy articles today). Chatbots' behavior looks obvious, and in many cases it is…until it isn't. There are lots of edge cases.

    • nesc@lemmy.cafe · 2 points · 1 day ago

      They will impersonate a 'helpful assistant made by companyname' (following hundreds of lines of invisible rules about what to say and when). Experts who have no incentive to understand, and who are at least partially in the cult: who would have guessed!

      • moomoomoo309@programming.dev · 1 point · 1 day ago

        And you think there's no notable bias in those rules, and that the edge cases I mentioned won't be an issue, or what?

        You seem to have sidestepped what I said to rant about how OpenAI sucks, when that was just meant to be an example of how even the people best informed about AI in the world right now don't really understand it.

        • nesc@lemmy.cafe · 2 points · 23 hours ago

          That's not 'bias', that's intended behaviour; iirc Meta published some research on it. Returning to my initial point: viewing chatbots as 'white male who lacks self-awareness' is dumb as fuck.

          As for not understanding, they are paid to not understand.