sgarland | 3 days ago
The TFA author (and I) have wildly different motivations from you. I don't know the author, but I have said verbatim much of what they wrote, so I feel like I can speak on this.
Beyond the fact that I recognize the company has to continue to exist for me to be employed, none of those hold the slightest bit of interest for me. What motivates me are interesting technical challenges, full stop. As an example, recently at my job we had a forced AI-only week, where everyone had to use Claude Code, zero manual coding. This was agony to me, because I could see it making mistakes that I could fix in seconds, but instead I had to try to patiently explain what I needed done, and then twiddle my thumbs while cheerful nonsense words danced around the screen. One of the things I produced from that was a series of linters to catch sub-optimal schema decisions in PRs. This was praised, but I got absolutely no joy from it, because I didn't write it. I have written linters that parse code using its AST before, and those did bring me joy, because they were an interesting technical challenge. Instead, all I did was (partially) solve a human challenge; to me, that's just frustration manifest, because in my mind, if you don't know how to use a DB, you shouldn't be allowed to use the DB (in prod - you have to learn, obviously).
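To make the AST-linter part concrete (a minimal sketch with one hypothetical rule, not the linters I actually wrote): Python's `ast` module lets you walk the parse tree and flag patterns, e.g. bare `except:` handlers.

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` handlers in the source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # An ExceptHandler with no exception type is a bare `except:`
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # -> [4]
```

The same walk-and-match pattern extends to any rule you can express over the tree, which is what makes these fun to write.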
I am fully aware that this is largely incompatible with most workplaces, and that my expectations are unrealistic, but that doesn't change the fact that it is how I feel.
jonstewart | 3 days ago
I also share some of your philosophy — life is too short for us not to find joy at work, if we can. It’s a lot easier to find that joy when the team’s shipping valuable software, of course.
sgarland | 3 days ago
What's frustrating (I've said that a lot, I know) to me is that my skills are seen as valued, but my opinions aren't. I also have a pathological need to help people, and so when someone asks me, I can't help but patiently explain for the Nth time how a B+tree works (I include docs! I've written internal docs at varying levels!) and why their index design won't work. This is usually met with "Thanks!" because I've solved their problem, until the next problem occurs. When I then point out that they have a systemic issue, and point to the incidents proving this, they don't want to hear it, because that turns "I made an error, and have fixed it" into "I have made a deep architectural mistake," and people apparently cannot stand to be wrong.
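For anyone curious what "why their index design won't work" tends to mean in practice, here's a generic sketch (hypothetical table and columns, using SQLite just to keep it self-contained): a composite index on (a, b) can seek on `a`, but a query filtering only on `b` can't use it to seek and falls back to a scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_ab ON t (a, b)")

# The index is sorted by a first, then b, so it can seek on a...
plan_a = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1"
).fetchall()
# ...but filtering on b alone can't seek; it has to scan.
plan_b = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE b = 1"
).fetchall()

print(plan_a)  # SEARCH using idx_ab
print(plan_b)  # SCAN - the leftmost column of the index isn't constrained
```

This is the leftmost-prefix rule that B+tree-backed composite indexes impose, and it's the kind of thing that keeps resurfacing until the systemic issue is acknowledged.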
That also baffles me - I don't think I'm arrogant or conceited; when I'm wrong, I publicly say so, and explain precisely where I was mistaken, what the correct answer is, and provide references. Being wrong isn't a moral failing, or even necessarily an indictment on your skills, but for some reason, people are deathly afraid to admit they were wrong.
miningape | 3 days ago
sgarland | 3 days ago
Re: AI, that's not to say I don't use it; I just view it as a sometimes-useful tool that you have to watch very closely. I also often view its use as an XY problem.
Another recent example: during the same AI week, someone made an AI Skill (I'm not sure how that counts as software, but I digress) that connects to Buildkite to find failed builds, then matches the symptoms back to commit[s]. In their demo, they showed it successfully doing so for something that "took them hours to solve the day before." The root cause was code having been deployed before its sibling schema migration.
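That failure mode is easy to guard against, by the way. A minimal sketch (hypothetical table name, SQLite standing in for the real database): have the new code verify at startup that the migration it depends on has actually landed.

```python
import sqlite3

def table_exists(conn, name: str) -> bool:
    """Check the catalog for a table, i.e. 'has the migration run?'"""
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?",
        (name,),
    ).fetchone()
    return row is not None

conn = sqlite3.connect(":memory:")
assert not table_exists(conn, "orders")   # migration not applied yet: refuse to serve
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")  # the migration runs
assert table_exists(conn, "orders")       # now the dependent code can start
```

Failing fast at startup turns "hours of debugging a live incident" into a deploy that refuses to come up, with an obvious reason.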
While I was initially baffled at how they missed the logs that very clearly said "<table_name> not found," after having Claude go do something similar for me later, I realized it's at least partially because our logs are just spamming bullshit constantly. 5000-10000 lines isn't uncommon. Maybe if you weren't mislabeling what are clearly DEBUG messages as INFO, and if you didn't have so many abstractions and libraries that the stack traces are hundreds of lines deep, you wouldn't need an LLM to find the needle in the haystack for you.