You mean, it doesn’t already?
So… just today I had this realization that a lot of the software engineering I grew up with is not something most people now know how to do.
When I was growing up, you had to make your own data structures. This was during the time when almost the whole of this chart was C. You had to do linked lists, sometimes you had to do hash tables. If you knew B-trees and graph algorithms, you were a super-fancy person and probably could make the big bucks if you wanted to.
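For anyone who never had to do this, here is a rough sketch of the sort of hash table we’d hand-roll for each project, written in C since that’s what nearly everything was back then. The names and sizes are illustrative, not from any particular codebase.

```c
/* A sketch of the kind of hash table we used to hand-roll for every
 * C project: fixed bucket count, string keys, separate chaining for
 * collisions. Error handling is omitted to keep the sketch short. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256

struct entry {
    char *key;
    int value;
    struct entry *next;          /* collision chain */
};

static struct entry *buckets[NBUCKETS];

/* djb2-style string hash, reduced to a bucket index */
static unsigned hash(const char *s)
{
    unsigned h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

static void put(const char *key, int value)
{
    struct entry *e;
    unsigned h = hash(key);

    for (e = buckets[h]; e; e = e->next) {
        if (strcmp(e->key, key) == 0) {   /* existing key: update */
            e->value = value;
            return;
        }
    }
    e = malloc(sizeof *e);                /* new key: push onto chain */
    e->key = malloc(strlen(key) + 1);
    strcpy(e->key, key);
    e->value = value;
    e->next = buckets[h];
    buckets[h] = e;
}

/* returns 1 and fills *out if key is present, else 0 */
static int get(const char *key, int *out)
{
    struct entry *e;

    for (e = buckets[hash(key)]; e; e = e->next) {
        if (strcmp(e->key, key) == 0) {
            *out = e->value;
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    int v;

    put("linked-lists", 1);
    put("hash-tables", 2);
    if (get("hash-tables", &v))
        printf("hash-tables -> %d\n", v);
    return 0;
}
```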
I don’t think that is what we need to go back to. I like being able to do big things with code super-quickly (including with LLM help) and not have to worry about trivial details or craft my own hash tables for every project. But also, I worry a little bit that a lot of the scope of what software is able to accomplish, outside of special environments or projects, is starting to be limited to “what can I accomplish by pasting together some preexisting libraries in a pretty straightforward way”. And, because so much of what’s out there is assembled with that approach, we get these teetering mountains of dependencies underneath every single project.

I have a strong feeling that there is some kind of mathematics which implies that the number of dependencies attached to the average project is growing exponentially year by year, up to the limit of what people have the patience to put up with in their build process, which keeps increasing as computers get faster.
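I can’t prove the exponential part, but the shape of it is easy to see with made-up numbers: if the average package pulls in k direct dependencies and the chains run d levels deep, you end up with roughly k + k² + … + k^d packages in the transitive closure. A toy calculation (pure assumption, no real ecosystem data):

```c
/* Back-of-the-envelope for the dependency hunch: if each package
 * pulls in K direct dependencies and the chains run D levels deep,
 * the transitive closure grows like K + K^2 + ... + K^D. The
 * numbers below are made up, not measured from any real ecosystem. */
#include <stdio.h>

int main(void)
{
    int k, d;

    for (k = 2; k <= 4; k++) {
        long level = 1, total = 0;
        for (d = 1; d <= 6; d++) {
            level *= k;          /* packages added at this depth */
            total += level;
        }
        printf("%d direct deps each, 6 levels deep: ~%ld transitive deps\n",
               k, total);
    }
    return 0;
}
```

Even at only 3 dependencies per package and six levels of nesting, that’s over a thousand packages underneath one project, which is roughly the teetering mountain I’m describing.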
I’m not even trying to say whether it’s good or bad, although there are aspects that worry me. I’m just saying it didn’t occur to me (again, literally until earlier today when I had this random realization) how much “being a software engineer” has changed. This idea, that most of what someone does can be duplicated by the fairly stupid capabilities of the currently available LLMs, is a big confirmation of that.
I’ve said it before, but I’ll say it again: this sounds like complete bullshit - at least for now. Having played with multiple generative models for code generation myself, I find that 4 times out of 5 there are profound problems with the code they spit out. And sometimes it’s complete crap.
Sure, you can refine your prompts to improve the quality. But at that point it’s usually quicker, easier, and more accurate to just write the code yourself.
And ‘vibe coding’ sounds like conceptual vaporware. Unless you feed a shitload of often-proprietary data into the LLM, chances are it will not be able to capture enough of your business rules. And as a result, the code it outputs is deeply flawed. And I don’t really see a way around that, at least while there are experienced developers around who can bridge the gap better than AIs can.
ETA:
There was a point in the late 1970s to early ’80s when many people thought you needed programming skills to use a computer effectively, because there were very few pre-built applications for all the various computer platforms available. School systems worldwide launched computer-literacy efforts to teach people to code.
Before too long, people built useful applications that let non-coders use computers easily, no programming required. Even so, programmers didn’t disappear; instead, they used those applications to create better and more complex programs. Perhaps the same will happen with AI coding tools.
This is an interesting analogy. I’m not sure the two concepts (using a computer vs creating software on a computer) are as close as the author thinks. And I still think this underestimates the importance of accurately implementing proprietary business rules. But they might be on to something. Maybe.
Future?