top | item 35233459

komposit | 2 years ago

Yeah, this is the future right here. A couple of iterations over this paradigm and we can all go to the beach, afaic. Software engineering in our day and age is 1% deep problem solving (coming up with novel algorithms, for example) and 99% writing glue code. The real challenge with the 99%, and the reason we get paid what we do, is that it requires a lot of thoughtful problem decomposition. Once the problem is adequately decomposed it becomes a ticket that can be picked up, and further decomposition follows in situ.

What we should try to do is write a tool which provides a sort of ChatGPT-integrated IDE where a dev can:

- specify an overall goal of varying complexity
- ask ChatGPT to split this up into smaller subtasks
- iterate down the tree until ChatGPT decides a task is specific enough for implementation to start
- ask ChatGPT to write tests verifying task completion
- then initiate a feedback loop where GPT can suggest code, run the tests (in a containerized setting), evaluate if the output is as expected, and amend changes
- once the tests pass, commit, and move on to the next ticket
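The decompose-until-atomic loop in the list above can be sketched as a small recursive driver. This is hypothetical scaffolding, not a real integration: `is_atomic`, `split`, `write_tests`, `suggest_code`, and `run_tests` are placeholder callables standing in for ChatGPT API calls and a sandboxed test runner, so the control flow can be exercised without a model.

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    goal: str
    subtasks: list = field(default_factory=list)


def decompose(goal, is_atomic, split, depth=0, max_depth=5):
    """Recursively split a goal until the model judges it atomic."""
    task = Task(goal)
    if depth >= max_depth or is_atomic(goal):
        return task  # specific enough: implementation can start
    for sub in split(goal):
        task.subtasks.append(decompose(sub, is_atomic, split, depth + 1, max_depth))
    return task


def leaves(task):
    """Yield the implementable tickets at the bottom of the tree."""
    if not task.subtasks:
        yield task
    else:
        for sub in task.subtasks:
            yield from leaves(sub)


def implement(task, write_tests, suggest_code, run_tests, max_attempts=5):
    """Test-first feedback loop: generate tests, then iterate on code until they pass."""
    tests = write_tests(task.goal)
    code = ""
    for _ in range(max_attempts):
        code = suggest_code(task.goal, tests, code)  # amend the previous attempt
        passed, _output = run_tests(code, tests)     # e.g. inside a container
        if passed:
            return code  # commit, move on to the next ticket
    raise RuntimeError(f"gave up on: {task.goal}")
```

The human's role in this sketch is choosing the callbacks and the depth cutoff, which matches the "guided decomposition" framing: the machine proposes splits and patches, the operator bounds and steers the search.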

Programming then becomes a process of guided decomposition with humans mainly guiding the process along.
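The "run the tests in a containerized setting" step from the list might look like the following. This is a minimal sketch assuming Docker is installed and pytest is the test runner; the image name, mount layout, and file names are illustrative choices, not part of any existing tool.

```python
import pathlib
import subprocess
import tempfile


def docker_argv(workdir: str, image: str = "python:3.11-slim") -> list:
    """Build the docker command line: read-only mount, no network, run pytest."""
    return ["docker", "run", "--rm", "--network=none",
            "-v", f"{workdir}:/work:ro", "-w", "/work",
            image, "python", "-m", "pytest", "-q"]


def run_tests_in_container(code: str, tests: str, timeout: int = 60):
    """Write candidate code and its tests to a temp dir, run them in isolation."""
    with tempfile.TemporaryDirectory() as d:
        pathlib.Path(d, "solution.py").write_text(code)
        pathlib.Path(d, "test_solution.py").write_text(tests)
        proc = subprocess.run(docker_argv(d), capture_output=True,
                              text=True, timeout=timeout)
    return proc.returncode == 0, proc.stdout + proc.stderr
```

Disabling the network and mounting the work directory read-only matter here: code the model just wrote is untrusted, so the sandbox should only be able to report pass/fail back to the loop.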

komposit | 2 years ago

On a related note: problem decomposition is also the reason we should all be worried about AI, even with its current capabilities. After all, every nefarious goal, once decomposed into smaller units, is no longer necessarily recognizably nefarious. The challenges of organized crime lie more in the logistics and HR departments than anywhere else, and those problems, once framed in that context, won't make any AI suspicious, however much training OpenAI does on its LLM.