Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • Rhaedas@fedia.io · 5 hours ago

    I think Meta and others went open with their models as firewall protection against legal action over their blatant theft of people’s work for training. If the models had stayed commercial and controlled within the company, they could be (probably still wouldn’t be, but could be) forced to shut down or start over properly. But it’s far too late now: the models are everywhere there’s a GPU running, even if they never progress past their current state.

    That being said, not much is getting done about the safety factors. Yes, these are only LLMs and not AGI, but they share a common problem: we aren’t sure what’s going on inside the box or whether it’s really doing what it’s told. Now is the time to set boundaries and do the research, because once something happens (LLM or AGI) it’s too late. So what do I want to see happen? Heavy regulation and transparency at the leading edge of development. And an end to the madness of treating more compute as the only solution, with all its environmental costs. It might turn out to be the only solution, but companies are going that way because it’s the easiest way to throw money at a problem and reap profits, which is all they care about.