• chicken@lemmy.dbzer0.com · 21 hours ago

Not even just because people are idiots, but also because an LLM is going to have quirks you'll need to work around or exploit to get the best results out of it. Like how it's better to edit your question to clarify a misunderstanding and regenerate the response than it is to reply with the correction, because replying leaves the mistake in the context and there's more of a risk the model stays stuck on it. Or how, in some situations (if the interface allows it), it can be useful to manually edit part of the LLM's output to be more in line with what you want it to be saying before generating the rest.
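
To make that concrete, here's a rough Python sketch of both tricks against a generic chat-style API. `generate()` is just a stand-in for whatever client you actually use, and the messages are invented for illustration:

```python
def generate(messages: list[dict]) -> str:
    # Placeholder: swap in a real client call here (OpenAI, Anthropic,
    # a local model, whatever). Most chat APIs take role/content messages.
    return "<model output>"

history = [
    {"role": "user", "content": "Write a function that parses ISO dates."},
]
first_reply = generate(history)  # suppose the model misreads this as US-style dates

# Riskier pattern: appending a correction keeps the wrong answer in
# context, so the model can keep anchoring on its own mistake.
with_correction = history + [
    {"role": "assistant", "content": first_reply},
    {"role": "user", "content": "No, I meant ISO 8601, not MM/DD/YYYY."},
]

# Usually better: edit the original question and regenerate, so the bad
# answer never enters the context at all.
history[0]["content"] = "Write a function that parses ISO 8601 (YYYY-MM-DD) dates."
second_reply = generate(history)

# The other trick: prefill the start of the assistant turn so the model
# continues from text you control. Only some interfaces/APIs allow this.
prefilled = history + [
    {"role": "assistant", "content": "def parse_iso8601(s):"},
]
continuation = generate(prefilled)
```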