top | item 45083197

kaptainscarlet | 6 months ago

I've also had a similar experience. I have become too lazy since I started vibe-coding. My coding has transitioned from coder to code reviewer/fixer very quickly. Overall I feel like it's a good thing because the last few years of my life have been a repetition of frontend components and API endpoints, which to me has become too monotonous, so I am happy to have AI take over that grunt work while I supervise.

latexr|6 months ago

> My coding has transitioned from coder to code reviewer/fixer very quickly. Overall I feel like it's a good thing

Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

> the last few years of my life has been a repetition of frontend components and api endpoints, which to me has become too monotonous

It’s a surprise that so many people have this problem/complaint. Why don’t you use a snippet manager?! It’s lightweight, simple, fast, predictable, offline, and includes the best version of what you learned. We’ve had the technology for many many years.

the_real_cher|6 months ago

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

I never remembered those keywords to begin with.

Checkmate!

onion2k|6 months ago

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

Devs shouldn't be blindly accepting the output of an LLM. They should always be reviewing it, and only committing the code that they're happy to be accountable for. Consequently your coding and syntax knowledge can't really atrophy like that.

Algorithms and data structures on the other hand...

TuringTest|6 months ago

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

You can locally run pretty decent coding models such as Qwen3 Coder on an RTX 4090 GPU through LM Studio or Ollama with Cline.

It's a good idea even if they give slightly worse results on average, as you can stop spending expensive tokens on trivial grunt work and save them for the really hard questions where Claude or ChatGPT 5 will excel.

realharo|6 months ago

>Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

Realistically, that's probably never going to happen. Expecting it is just like the prepper mindset.

stavros|6 months ago

Yeah, exactly the same for me. It's tiring writing the same CRUD endpoints a thousand times, but that's how useful products are made.

foolserrandboy|6 months ago

I wonder why it’s not the norm to use code generation or some other form of meta programming to handle this boring repetitive work?
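The kind of meta programming being suggested can be as simple as template-based source generation. A minimal sketch, assuming a hypothetical declarative spec (`RESOURCES`) and a made-up `db_lookup` helper in the emitted stubs, that stamps out one CRUD handler per resource instead of writing each by hand:

```python
from string import Template

# Hypothetical declarative spec: resource name -> fields.
RESOURCES = {
    "user": ["id", "name", "email"],
    "post": ["id", "title", "body"],
}

# Template for one GET endpoint stub (framework-agnostic sketch).
ENDPOINT_TEMPLATE = Template('''\
def get_${name}(${name}_id):
    """Fetch a single ${name} by id. Fields: ${fields}."""
    return db_lookup("${name}", ${name}_id)
''')

def generate_endpoints(resources):
    """Render one handler stub per resource from the template."""
    return "\n".join(
        ENDPOINT_TEMPLATE.substitute(name=name, fields=", ".join(fields))
        for name, fields in resources.items()
    )

print(generate_endpoints(RESOURCES))
```

Adding a new resource then means adding one line to the spec rather than another hand-written endpoint; real projects usually do the same thing with OpenAPI generators or ORM scaffolding.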

therein|6 months ago

The lazy reluctance you feel is atrophy in the making. LLMs induce that.

kaptainscarlet|6 months ago

That's my biggest worry, atrophy. But I will cross that bridge when I get to it.