It was all fun and games two years ago when most AI videos were obvious (6 fingers, 7 fingers, etc.).

But things are getting out of hand. I'm at the point where I'm questioning whether Lemmy, Reddit, YouTube comments, etc. are even real. I wouldn't even be surprised if I was playing Overwatch 5v5 with 9 AIs, three of them programmed to act like kids, four to be non-toxic, and so on…

This whole place could just be an illusion.

I can't prove it. It's really less fun now.

The upside is I go to the gym more frequently and just hang out with people I know are 100% real. Nothing is worse than having a conversation with an AI person. They were just an average 7/10, and I'm an average 5/10, so I thought it could be a real thing, but it turned out I was chatting with an AI. A 7/10 AI. The creator made the persona look less than perfect to make it more realistic.

Nice. What is the point of the internet when everything is fake and either can't be identified as fake at all, or only with deep research?

I'm 32, and I know many young people who also hate it. To be fair, I only know people who hate AI nowadays. This has to end.

  • Whats_your_reasoning@lemmy.world · 7 hours ago

    critical thinking is tough

    To preface, I don't know a whole lot about AI bots. But we already see posts about the limits of what AI can do or will allow, like bots refusing to repeat a given phrase. But what about actual critical thinking? If most bots are trained on human behavior, and most people don't run on logical arguments, doesn't that create a gap?

    Not that it’s impossible to program such a bot, and again, my knowledge on this is limited, but it doesn’t seem like the aim of current LLMs is to apply critical thought to arguments. They can repeat what others have said, or mix words around to recreate something similar to what others have said, but are there any bots actively questioning anything?

    If there are bots that question societal narratives, they risk being unpopular amongst both the ruling class and the masses that interact with them. As long as those that design and push for AI do so with an aim of gaining popular traction, they will probably act like most humans do and “not rock the boat.”

    If the AI we interact with were instead to push critical thinking, without applying the biases that constrain people from applying it perfectly, that’d be awesome. I’d love to see logic bots that take part in arguments on the side of reason - it’s something a bot could do all day, but a human can only do for so long.

    Which is why, when I see a comment that argues a cogent point against a popular narrative, I'm more likely to believe it was written by a human. For now.