

So, are they going to learn anything or just do it right again?
No need to keep going when you compare today to 2008. Shifting the goalposts from interactive NPCs to animated cutscenes only confirms that.
Mate, LLMs are really good for creative stuff. And they improved SO much over the last 2 years. How could you even think to say “they will never”? You can already have conversations with AI, you could a year ago. Now the context window is MASSIVE, you are going to talk for a long time before that runs out at 200k tokens. Let alone methods to condense this down to only relevant information, which would essentially give it infinite memory about what you talk about, easily in the millions of words.
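The "condense this down to only relevant information" idea is basically rolling summarization of chat history. A minimal sketch of that, with all names hypothetical and a trivial stand-in summarizer (a real system would call an LLM to summarize old turns instead):

```python
# Hypothetical sketch: shrink old chat turns so the history fits a
# token budget, keeping recent turns verbatim. Token counting is
# approximated by whitespace-separated words for illustration only.

def summarize(turn: str) -> str:
    """Stand-in for an LLM summarizer: keep only the final sentence."""
    sentences = [s.strip() for s in turn.split(".") if s.strip()]
    return (sentences[-1] + ".") if sentences else ""

def condense_history(turns: list[str], budget_tokens: int) -> list[str]:
    """Summarize from the oldest turn forward until the rough token
    count of the whole history fits within the budget."""
    def tokens(text: str) -> int:
        return len(text.split())

    history = list(turns)
    i = 0
    while sum(tokens(t) for t in history) > budget_tokens and i < len(history):
        history[i] = summarize(history[i])
        i += 1
    return history
```

The point of the sketch: memory is bounded by the budget, not by how long the conversation runs, which is why people describe this as "effectively infinite" memory of the relevant bits.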
You forgot all the colorful tape we put above the camo to not get hit by friendlies.
Obviously you get downvoted, but just as obviously this is the future. Nonsense static responses are useless, having actual responses that match what happens would take immersion to a whole new level.
Handcrafted can still use any sort of (power)tool. The actual difference is that it is low volume instead of mass production, even if the same tools are used.
How dare you say something positive about LLMs on Lemmy? At least 2 downvotes, people here are a joke.
No.
There are so many “unlocked” ones, and they too end up where they belong instead of randomly in the parking lot.
As your article says, they understood them. How could they make toys with them otherwise?
They obviously understood the wheel; an engine would be a better comparison. You are correct otherwise.
I can only hope that this was generated instead of actually typed by someone.
Ahahahahaahahhahahahahahahhahaha
Ahahahahahhaha what?
TIG? Holy cow yes please, let’s stack some dimes!
It is pretty much like taking a normal picture of a mirror.
o3 yes perhaps, we will see then. Would be amazing.
Since version 4 it has no problem generating working code. The question is how complex the code can get etc. But currently with o1 (o3 mini perhaps a bit less) a dozen functions with 1000 lines of code are really possible without a flaw.
You mean o3 mini? Wasn’t it on the level of o1, just much faster and cheaper? I noticed no increase in code quality, perhaps even a decrease. For example, it forgets things far more often, like variables that have a different name. It also easily ignores a bunch of my very specific and enumerated requests.
Can you elaborate what you mean?