top | item 43528522


sansseriff | 11 months ago

I think about the reviewer problem. An AI can write 3000 lines in less than a minute. But it might take me an hour to understand the architecture it's decided on.

There are a couple of possibilities here:

1. Agents become so powerful that a human can't conceivably keep up with them, and it becomes a drain on efficiency for any human to try. The only important questions are whether the prompt fits the desired outcome, and whether the creation is 'safe'. Safe can mean many things. Will not crash, will not leak data, will not take over the world... Atlas Computing is one startup that's taking this view, ensuring an AI can only do 'safe' things as defined by some formal ontology/methods.

2. A human stays in the loop, and tries to stay at least reasonably up to date on the code architecture. For this to work long term, the weak link is human understanding. In that case there are interesting opportunities for AI-generated lessons, animations, and examples used to get the human up to speed as fast as possible. If I see a very nice 3Blue1Brown-style animation generated by AI about how a piece of software functions, then I can probably start working with it more quickly than if I only had the code. At least if the animation links very closely with the code itself.
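The 'only safe actions' idea in possibility 1 could be sketched, very loosely, as a policy gate that sits between the agent and the world: every proposed action is checked against an explicit allowlist before it runs. This is a toy illustration only; the names (`Action`, `POLICY`, `is_safe`) are invented for the sketch and aren't from any real Atlas Computing API.

```python
# Toy sketch of gating agent actions through an explicit safety policy.
# Everything here is hypothetical, for illustration of the general idea.
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    kind: str     # e.g. "read_file", "write_file", "network"
    target: str   # path or host the action touches


# A stand-in for a "formal methods" policy: a whitelist of
# (action kind, allowed target prefix) pairs. Anything not
# explicitly permitted is rejected.
POLICY = [
    ("read_file", "/workspace/"),
    ("write_file", "/workspace/out/"),
]


def is_safe(action: Action) -> bool:
    """Allow an action only if the policy explicitly permits it."""
    return any(
        action.kind == kind and action.target.startswith(prefix)
        for kind, prefix in POLICY
    )


print(is_safe(Action("read_file", "/workspace/src/main.py")))  # True
print(is_safe(Action("write_file", "/etc/passwd")))            # False
print(is_safe(Action("network", "example.com")))               # False
```

A real system would need a far richer action model and a machine-checkable specification of "safe", but the shape is the same: default-deny, with safety defined outside the agent.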
