top | item 38260643

Dunati | 2 years ago

I'm probably bad at writing prompts, but in my limited experience, I spend more time reviewing and correcting the generated code than it would have taken to write it myself. And that is just for simple tasks. I can't imagine thinking an LLM could generate millions of lines of bug-free code.

jon-wood|2 years ago

Asking GPT to do a task for me currently feels like asking a talented junior to do so. I have to be very specific about exactly what it is I'm looking for, and maybe nudge it in the right direction a couple of times, but it will generally come up with a decent answer without me having to sink a bunch of time into the problem.

If I'm honest though I'm most likely to use it for boring rote work I can't really be bothered with myself - the other day I fed it the body of a Python method, and an example of another unit test from the application's test suite, then asked it to write me unit tests for the method. GPT got that right on the first attempt.

miiiiiike|2 years ago

That’s where I am too. I think almost everyone has that “this is neat but it’s not there yet” moment.

aleph_minus_one|2 years ago

> I think almost everyone has that “this is neat but it’s not there yet” moment.

I have this moment without the "this is neat" part. :-) I.e. a clear "not there yet" moment, but with serious doubts about whether it will be there anytime in the foreseeable future.

meiraleal|2 years ago

It seems like the problem is with your view of everyone, based on an n=1 experiment. I've been shipping production-ready code for my main job for months, saving hundreds of work-hours.

antupis|2 years ago

Personally, this flow works fine for me: AI does the first version -> I heavily edit it, debug it, and write tests for it -> the code does what I want -> I tell the AI to refactor it -> tests pass and the ticket is done.