Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.
I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.
Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
Every step of any deductive process needs to be citable and traceable.
Their creators can’t even keep them from deliberately lying.
Exactly.
Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.
Nothing else you listed matters: that one requirement reduces to "Ban all generative AI." Actually worse than that, it's "Ban all machine learning models."
If “they have to use good data and actually fact check what they say to people” kills “all machine learning models,” then it’s a death they deserve.
The fact is that you can do the above. It’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the “answer to everything machine.”