Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I’d love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.
I don’t love the environmental effects but I think the carbon output of OpenAI is probably less than TikTok’s, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I’m not sure it’s enough to really matter. I think we need bigger “what if” scenarios to handle that.
I’d like for it to be forgotten, because it’s not AI.
They have to pay for every piece of copyrighted material used in the entire model whenever the AI is queried.
They are only allowed to use data that people opt into providing.
What about models folks run at home?
Careful, that might require a nuanced discussion that reveals the inherent evil of capitalism and neoliberalism. Better off just ensuring that wealthy corporations can monopolize the technology and abuse artists by paying them next-to-nothing for their stolen work rather than nothing at all.
Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.
There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation, to go into in a single post. Suffice it to say I think AI is an existential threat to humanity, mainly because of who’s controlling it and who’s not.
We have no regulation on AI. We have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies. We only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies. And even worse, we get nothing in exchange except for the promise of increased productivity and quality, and that increase in productivity and quality is a lie. AI currently gives you the wrong answer, some half-truth, or some abomination of someone else’s artwork really, really fast… that is all it does, at least for the public sector currently.
For the private sector, at best it alienates people as chatbots, and at worst it is being utilized to infer data for surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever uses them, and AI is one tool amongst many at the disposal of these tech giants.
AI is exacerbating a knowledge crisis that was already in full swing, as both educators and students become less curious about subjects that don’t inherently relate to making profits or consolidating power. And because knowledge is seen solely as a way to gather more resources/power and survive in an increasingly hostile socioeconomic climate, people will always reach for the lowest-hanging fruit to get to that goal, rather than actually learning how to solve a problem that hasn’t been solved before, or truly understanding a problem that has been solved before, or just knowing something relatively useless because it’s interesting to them.
There are too many good reasons AI is fucking shit up, and in all honesty what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.
Here are further resources if you didn’t get enough ranting.
If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.
Failing that, any models built on stolen work should be released to the public for free.
This is the best solution. Also, any use of AI should have to be stated and watermarked. If they used someone’s art, that artist has to be listed as a contributor and you have to get permission. Just like they do for every film, they have to give credit. This includes music, voice and visual art. I don’t care if they learned it from 10,000 people, list them.
If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.
I’m going to ask the tough question: Why?
Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.
Copyright law lets you download whatever TF you want. It isn’t until you distribute said copyrighted material that you violate copyright law.
Before generative AI, Google screwed around internally with all those copyrighted works in dozens of different ways. They never asked permission from any of those copyright holders.
Why is that OK but doing the same with generative AI is not? I mean, really think about it! I’m not being ridiculous here, this is a serious distinction.
If OpenAI did all the same downloading of copyrighted content as Google and screwed around with it internally to train AI then never released a service to the public would that be different?
If I’m an artist that makes paintings and someone pays me to copy someone else’s copyrighted work, that’s on me to make sure I don’t do that. It’s not really the problem of the person that hired me to do it, unless they distribute the work.
However, if I use a copier to copy a book then start selling or giving away those copies that’s my problem: I would’ve violated copyright law. However, is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?
If you believe that it’s not Xerox’s problem then you’re on the side of the AI companies. Because those companies that make LLMs available to the public aren’t actually distributing copyrighted works. They are, however, providing a tool that can do that (sort of). Just like a copier.
If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?
My argument is that there’s absolutely nothing illegal about it. They’re clearly not distributing copyrighted works. Not intentionally, anyway. That’s on the user. If someone constructs a prompt with the intention of copying something as closely as possible… To me, that is no different than walking up to a copier with a book. You’re using a general-purpose tool specifically to do something that’s potentially illegal.
So the real question is this: Do we treat generative AI like a copier or do we treat it like an artist?
If you’re just angry that AI is taking people’s jobs say that! Don’t beat around the bush with nonsense arguments about using works without permission… Because that’s how search engines (and many other things) work. When it comes to using copyrighted works, not everything requires consent.
Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.
No, they don’t. They index the content of the page and score its relevance and reliability, and still provide the end user with the actual original information.
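To make that distinction concrete, here’s a toy sketch of what indexing looks like (the URLs and page text are made up, and this is nothing like a real engine’s pipeline): terms map back to the pages that contain them, and the result sends the user to the original source rather than serving the work itself.

```python
from collections import defaultdict

# Toy stand-ins for crawled pages (URLs and text are invented).
pages = {
    "https://example.com/a": "copyright law and fair use in search",
    "https://example.com/b": "how web crawlers index page content",
}

# Inverted index: term -> set of URLs containing that term.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.lower().split():
        index[term].add(url)

def search(query):
    """Return source URLs matching every query term; the engine points the
    user at the original page rather than reproducing the work."""
    hits = [index[t] for t in query.lower().split() if t in index]
    return set.intersection(*hits) if hits else set()

print(search("index content"))  # -> {'https://example.com/b'}
```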
However, if I use a copier to copy a book then start selling or giving away those copies that’s my problem: I would’ve violated copyright law. However, is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?
This is a false equivalence.
LLMs do not wholesale reproduce an original work in its original form; they make it easy to mass-produce a slightly altered form without any way to identify the original attribution.
If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?
I think this is intentionally missing the point.
LLMs don’t actually think, or produce original ideas. If the human artist produces a work that too closely resembles a copyrighted work, then they will be subject to those laws. LLMs are not capable of producing new works, by definition they are 100% derivative. But their methods in doing so intentionally obfuscate attribution and allow anyone to flood a space with works that require actual humans to identify the copyright violations.
Stop selling it at a loss.
When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody’s going to want it.
I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.
I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.
Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
Every step of any deductive process needs to be citable and traceable.
Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
Their creators can’t even keep them from deliberately lying.
Exactly.
Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.
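For example, with entirely made-up numbers (the real figures are exactly what such reporting would have to disclose), the arithmetic would look something like this:

```python
# Every value here is a hypothetical placeholder; the point is the arithmetic.
training_energy_kwh = 10_000_000      # one-time "invested" cost of training
lifetime_queries = 1_000_000_000      # queries the model is expected to serve
per_query_energy_kwh = 0.003          # incremental cost of a single query

amortized_training_kwh = training_energy_kwh / lifetime_queries
total_per_query_kwh = per_query_energy_kwh + amortized_training_kwh

print(f"amortized training energy per query: {amortized_training_kwh:.4f} kWh")
print(f"total energy attributable to a query: {total_per_query_kwh:.4f} kWh")
```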
… I want clear evidence that the LLM … will never hallucinate or make something up.
Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.
I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That’s what I want.
AI people always want to ignore the environmental damage as well…
Like all that electricity and water are just super abundant things humans have plenty of.
Every time some idiot asks AI instead of googling it themselves, the planet gets a little more fucked.
Are you not aware that Google also runs on giant data centers that eat enormous amounts of power too?
This is like saying a giant truck is the same as a Civic for a 2 hr commute …
Multiple things can be bad at the same time, they don’t all need to be listed every time any one bad thing is mentioned.
I wasn’t listing other bad things, this is not a whataboutism, this was a specific criticism of telling people not to use one thing because it uses a ton of power/water when the thing they’re telling people to use instead also uses a ton of power/water.
Yeah, you’re right. I think I misread your/their comment initially or something. Sorry about that.
And ai is in search engines now too, so even if asking chatfuckinggpt uses more water than google searching something used to, google now has its own additional fresh water resource depletor to insert unwanted ai into whatever you look up.
We’re fucked.
So your argument against AI is that it’s making us dumb? Just like people have claimed about every technology since the invention of writing? The essence of the human experience is change: we invent new tools, and then those tools change how we interact with the world. That’s how it’s always been, but there have always been people saying the internet is making us dumb, or the TV, or books, or whatever.
Get back to me after you have a few dozen conversations with people who openly say “Well I asked ChatGPT and it said…” without providing any actual input of their own.
Oh, you mean like people have been saying about books for 500+ years?
People haven’t ”thought for themselves” since the printing press was invented. You gotta be more specific than that.
Ah, yes, the 14th century. That renowned period of independent critical thought and mainstream creativity. All downhill from there, I tell you.
Independent thought? All relevant thought is highly dependent on other people and their thoughts.
That’s exactly why I bring this up. Having systems that teach people to think in a similar way enables us to build complex stuff and have a modern society.
That’s why it’s really weird to hear this ”people should think for themselves” criticism of AI. It’s a similar justification to antivaxxers saying you ”should do your own research”.
Surely there are better reasons to oppose AI?
The usage of “independent thought” has never been “independent of all outside influence”, it has simply meant going through the process of reasoning–thinking through a chain of logic–instead of accepting and regurgitating the conclusions of others without any of one’s own reasoning. It’s a similar lay meaning as being an independent adult. We all rely on others in some way, but an independent adult can usually accomplish activities of daily living through their own actions.
I agree on the sentiment, it was just a weird turn of phrase.
Social media has done a lot to temper my techno-optimism about free distribution of information, but I’m still not ready to flag the printing press as the decay of free-thinking.
Things are weirder than they seem on the surface.
A math professor colleague of mine calls extremely restrictive use of language ”rigor”, for example.
The point isn’t that it’s restrictive, the point is that words have precise technical meanings that are the same across authors, speakers, and time. It’s rigorous because of that precision and consistency, not just because it’s restrictive. It’s necessary to be rigorous with use of language in scientific fields where clear communication is difficult but important to get right due to the complexity of the ideas at play.
Speak for yourself.
I think Meta and others went open with their models as firewall protection against legal action due to their blatant stealing of people’s work to train with. If the models had stayed commercial and controlled within the company, they could be (probably still wouldn’t be, but could be) forced to shut down or start over properly. But it’s far too late now, since it’s everywhere there is a GPU running, even if models don’t progress past their current state.
That being said, not much is getting done about the safety factors. Yes, they are only LLMs and not AGI, but there’s commonality in regards to not being sure what’s going on inside the box and whether it’s really doing what it’s told to do. Now is the time boundaries should be set and research done, because once something happens (LLM or AGI) it’s too late. So what do I want to see happen? Heavy regulation and transparency on the leading edge of development. And stop the madness of more compute being the only solution, with its environmental effects. It might be the only solution, but companies are going that way because it’s the easiest way to throw money at a problem and reap profits, which is all they care about.
I’m not anti-AI, but I wish the people who are would describe what they are upset about a bit more eloquently and decipherably. The environmental impact I completely agree with. Making every Google search run a half-cooked beta LLM isn’t the best use of the world’s resources. But every time someone gets on their soapbox in the comments it’s like they don’t even know the first thing about the math behind it. Like just figure out what you’re mad about before you start an argument. It comes across as childish to me.
It feels like we’re being delivered the sort of stuff we’d consider flim-flam if a human did it, but lapping it up because the machine did it.
“Sure, boss, let me write this code (wrong) or outline this article (in a way that loses key meaning)!” If you hired a human who acted like that, we’d have them on an improvement plan in days and sacked in weeks.
So you dislike that the people selling LLMs are hyping up their product? They know the models are dumb and hallucinate; their business model is enough people thinking it’s useful that someone pays them to host it. If the hype dies, Sam Altman is back in a closet office at Microsoft, so he hypes it up.
I actually don’t use any LLMs, I haven’t found any smart ones. Text to image and image to image models are incredible though, and I understand how they work a lot more.
I expect the hype people to do hype, but I’m frustrated that the consumers are also being hypemen. So much of this stuff, especially at the corporate level, is FOMO rather than actually delivered value.
If it was any other expensive and likely vendor-lockin-inducing adventure, it would be behind years of careful study and down-to-the-dime estimates of cost and yield. But the same people who historically took 5 years to decide to replace an IBM Wheelwriter with a PC and a laser printer are rushing to throw AI at every problem up to and including the men’s toilet on the third floor being clogged.
But every time someone gets on their soapbox in the comments it’s like they don’t even know the first thing about the math behind it. Like just figure out what you’re mad about before you start an argument.
The math around it is unimportant, frankly. The issue with AI isn’t about GANN networks alone, it’s about the licensing of the materials used to train a GANN and whether or not companies that used materials to train a GANN had proper ownership rights. Again, like the post I made, there’s an easy argument to make that OpenAI and others never licensed the material they used to train the AI, making the whole model poisoned by copyright theft.
There are plenty of uses of GANNs that are not problematic: bespoke solutions for predicting the outcomes of certain equations, or data science uses that involve rough predictions on publicly sourced (or privately owned) statistics. The problem is that these are not the same uses that we call “AI” today – we’re actually sleeping on much better uses of neural networks by focusing on pie-in-the-sky AGI nonsense being pushed by companies that are simply pushing highly malicious, copyright-infringing products to make a quick buck on the stock market.
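As a rough illustration of that kind of unproblematic use (everything below is synthetic, invented purely for the example): a tiny neural network fit to a made-up statistic to make rough numeric predictions, with nothing generative and no one’s creative work involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for some public statistic (all values invented):
# x could be a normalized year, y a noisy measured rate.
x = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = 0.5 * np.sin(3 * x) + 0.1 * rng.normal(size=x.shape)

# One hidden layer, trained by plain gradient descent on squared error.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    h = np.tanh(x @ W1 + b1)      # hidden activations
    pred = h @ W2 + b2            # network output
    err = pred - y
    # Backpropagate the squared-error gradient through both layers.
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

x_new = np.array([[0.8]])
print("rough prediction at x=0.8:", (np.tanh(x_new @ W1 + b1) @ W2 + b2).item())
```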
I’m not super bothered by the copyright issue - the copyright system is barely serving people these days anyway. Blow it up.
I’m deeply troubled by the obscene power use. It might be worth it if it was a good tool. But it’s not.
I haven’t gone out of my way to use AI anything, but it’s been stuffed into everything. And it’s truly bad at its job. AI is like a precocious 8-year-old, butting into every conversation. And it gives the right answer at about the rate an 8-year-old does. When I do a web search, I then need to do another one to check the AI’s answer. Or scroll down a page to get past the AI answers to real sources. When someone uses it to summarize a meeting, I then need to read through that summary to make sure the notes are accurate. And it doesn’t know to ask when it doesn’t understand something, like a proper secretary would. When I go looking for reference images, I have to check to make sure they’re real and not hallucinations.
It gets in my way and slows me down. It needed at least another decade of development before being deployed at all, never mind at the scale it has, and it needs to be opt-in, not crammed into everything. And until it can be relied on, it shouldn’t be allowed to suck down as much electricity as it does.
Training data needs to be 100% traceable and licensed appropriately (see the sketch below).
Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).
Any model whose training includes data in the public domain should itself become public domain.
And while we’re at it, we should look into deliberately taking more time at lower clock speeds to try to reduce or eliminate the water that goes to cooling these facilities.
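As a hypothetical sketch of what per-record traceability could look like in practice (the field names and the permitted-license list are invented here, not any existing standard):

```python
from dataclasses import dataclass

# Licenses a hypothetical regulator might accept for training use.
PERMITTED_LICENSES = {"CC0", "CC-BY-4.0", "public-domain", "explicit-opt-in"}

@dataclass
class TrainingRecord:
    source_url: str      # where the item was obtained
    license: str         # terms it was obtained under
    rights_holder: str   # who granted permission ("n/a" for public domain)

def audit(records):
    """Return every record whose license is not on the permitted list."""
    return [r for r in records if r.license not in PERMITTED_LICENSES]

corpus = [
    TrainingRecord("https://example.org/essay", "CC-BY-4.0", "J. Doe"),
    TrainingRecord("https://example.org/novel", "all-rights-reserved", "BigPublisher"),
]
print(audit(corpus))  # only the all-rights-reserved record fails the audit
```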
Part of what makes me so annoyed is that there’s no realistic scenario I can think of that would feel like a good outcome.
Emphasis on realistic, before anyone describes some insane turn of events.
If we’re talking realm of pure fantasy: destroy it.
I want you to understand this is not my sentiment about AI as a whole: I understand why the idea is appealing, how it could be useful, and why in some ways it may seem inevitable.
But a lot of sci-fi doesn’t really address the run up to AI, in fact a lot of it just kind of assumes there’ll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how corporations are pushing the development and not academia.
Put it out of its misery.
How do you “destroy it”? I mean, you can download an open source model to your computer right now in like five minutes. It’s not Skynet, you can’t just physically blow it up.
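For a sense of how low that bar is, here’s a minimal sketch using the Hugging Face transformers library (distilgpt2 is just one example of a small, openly downloadable checkpoint; any similar model works):

```python
# pip install transformers torch
from transformers import pipeline

# First run downloads the weights to the local cache; after that the model
# runs entirely on this machine, no remote service involved.
generator = pipeline("text-generation", model="distilgpt2")
out = generator("Open-weight models run locally because", max_new_tokens=20)
print(out[0]["generated_text"])
```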
OP asked what people wanted to happen, and even listed “destroy gen AI” as an option. I get it is not realistically feasible, but it’s certainly within the realm of options provided for the discussion. No need to police their pie-in-the-sky dream. I’m sure they realize it’s not realistic.