item 46957701

fhd2 | 19 days ago

What kills me personally is that I'm constantly 80% there, but the remaining 20% can be just insurmountable. It's really like gambling: Just one more round and it'll be useful, OK, not quite, just one more, for hours.


CSSer | 19 days ago

Do you mean in terms of adding one more feature or in terms of how a feature you're adding almost works but not quite right?

I find it a lot more challenging to cut my losses in the latter case, when it's on a good run (and often even when I know I could just write it by hand), especially because there's as much, if not more, intrigue about whether the tool can accomplish it or not. These are the moments when my mind has drifted to thinking about it exactly the way you describe it here.

fhd2 | 19 days ago

Yeah, the latter. It doesn't happen much with super obvious stuff, but especially when I'm a bit fuzzy on requirements, it can be just endless iterations.

gedy | 19 days ago

And let's get real: AI companies will not be satisfied with you paying $20 or even $200 a month if you can actually develop your product in a few days with their agents. They are either going to charge a lot more or string you along chasing that 20%.

bilekas | 19 days ago

That's an interesting business model, actually: "Oh hey there, I see you're almost finished with your project and ready to launch. Watch these adverts and participate in this survey to get the last 10% of your app completed."

jmkni | 19 days ago

Gambling is a great analogy: my luck's going to turn around, just one more prompt.

co_king_3 | 19 days ago

If you think it's getting 80% right on its own, you're a victim of Anthropic and OpenAI's propaganda.

Melonai | 19 days ago

No, I kind of see this too, but the 80% is very much the simpler stuff. AI genuinely saves me some time, but I always notice it when I try to "finish" a relatively complex task that's a bit unique in some regards. When slightly more complex, maybe somewhat domain-specific work is necessary, I start prompting and prompting and banging my head against the terminal window to make it understand the issue, but somehow it still doesn't turn out well at all, and I end up throwing out most of the work done from that point on.

Sometimes it looks like some of that comes from the AI generally being very, very sure of its initial idea ("The issue is actually very simple, it's because...") and then running around in circles once it tries and fails. You can pull it out with a bit more prompting, but it's tough. The thing is, it sometimes actually is right from the very beginning, but if it isn't...

This is just my own perspective after working with these agents for some time; I've definitely heard of people having different experiences.