top | item 46452975

mulmboy | 2 months ago

> AI seems to have caught up to my own intelligence even in those narrow domains where I have some expertise. What is there left that AI can’t do that I would be able to verify?

The last few days I've been working on some particularly tricky problems — tricky both in the domain and in backwards compatibility with our existing codebase. For both of these problems, GPT 5.2 has been able to arrive at the same ideas as my best, ideas that took me quite a bit of brain-racking to reach. Granted, it's required a lot of steering and context management from me, as well as judgement to discard other options. But it's really getting to the point that LLMs are a good sparring partner for (isolated technical) problems at the 99th percentile of difficulty.

judahmeek|2 months ago

You steered a sycophantic LLM to the same idea that you had already had & think that's worth bragging about?

mulmboy|2 months ago

I'm well aware that they can be sycophantic, and I structure things to avoid that — for example by asking "what do you think of this problem" and seeing the idea fall out on its own, rather than providing anything that would suggest it. In one of these two cases it took an idea that I had an inkling of, fleshed it out, and expanded it into something much better than I had.

And I'm not bragging. I'm expressing awe, and humility that I am finding a machine can match me on things that I find quite difficult. Maybe those things aren't so difficult after all.

By steering I mean steering it to flesh out the context of the problem, to find relevant code, and to perform domain-specific research — not steering toward a specific solution.