blindhippo | 6 days ago
But writing code was never much more than 35-40% of my job while working for companies/others. Most of my time has always gone toward communication, design, and validation. None of those three is particularly vulnerable to mass AI automation except in the most trivial scenarios, and I have not seen evidence that this has changed in over 2 years of so-called "improvements".
My "exit plan" ultimately is to be one of the engineers capable of using these tools to scale my impact accordingly, so I can focus on higher-order problem solving, which ultimately is what is most valuable. I would be more concerned if I were in marketing/sales or, frankly, middle management.
Maybe this is just "copium" on my part, who knows, this sector is moving fast.
df2dd | 6 days ago
If LLMs provide substantial material that lets one produce what one envisions faster, that is great. But LLMs will not be doing the envisioning. Most humans are already poor at that, which is why there are very few real "visionaries" in history.
Envisioning always requires deep thinking. If LLMs eat away at a human's ability to sit and think, envisioning solutions will become harder. So you'll see more stuff produced, but largely more crap.