sottol | 7 months ago
I ended up shoehorned into backend dev in Ruby/Py/Java and don't find it improves my day-to-day a lot.
Specifically in C, it can bang out complicated but mostly common data structures without fault, where I would surely make off-by-one errors. I guess since I do C as a hobby I tend to solve more interesting and complicated problems, like generating a whole array of dynamic C dispatchers from a UI-library spec in JSON, which allows parsing and rendering a UI specified in YAML. Gemini Pro even spat out a YAML-dialect parser after a few attempts/fixes.
Maybe it's a function of familiarity and the problems you end up using the AI for.
freeone3000 | 7 months ago
Brendinooo | 7 months ago
Yes.
>in domains where you have trouble judging the quality
Sure, possibly. Kind of like how you think the news is accurate until you read a story that's in your field.
But not necessarily. It might just be more "I don't know how to do <basic task> in <domain that I don't spend a lot of time in>", and LLMs are good at doing basic tasks.