(no title)
ivoras | 1 year ago
> In particular, I asked ChatGPT to write a function by knowing precisely how I would have implemented it. This is crucial since without knowing the expected result and what every line does, I might end up with a wrong implementation.
In my eyes, this makes the whole idea of AI coding moot. If I need to explain every step in detail - and it does not "understand" what it's doing; I can virtually see the statistical trial-and-error behind its actions - then what's the point? I might as well write it all myself and be a bit more sure the code ends up the way I like it.
link: https://www.linkedin.com/feed/update/urn:li:activity:7289241...
sneedle|1 year ago
[deleted]
alkonaut|1 year ago
But this is the kind of thing an LLM excels at. It gives you 200 lines of implementation right away, and you already have a good understanding of both what the code should look like and how it should work.
Slow and error-prone to type, but quick and easy to verify once done; that's the key use case for me.
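A minimal sketch of the kind of code alkonaut is describing: tedious to type, trivial to verify. All names here (`ApiUser`, `DbUserRow`, `to_db_row`) are invented for illustration, not from the original post.

```python
from dataclasses import dataclass

# Hypothetical record-mapping boilerplate: slow to write by hand,
# but each line can be checked against the spec at a glance.

@dataclass
class ApiUser:
    user_id: int
    first_name: str
    last_name: str
    email: str

@dataclass
class DbUserRow:
    id: int
    full_name: str
    contact_email: str

def to_db_row(u: ApiUser) -> DbUserRow:
    # Each field mapping is obvious to verify, even if the whole
    # function took a while to type out.
    return DbUserRow(
        id=u.user_id,
        full_name=f"{u.first_name} {u.last_name}",
        contact_email=u.email,
    )
```

Code like this is easy to review because correctness is visible line by line, which is why generating it and then verifying it can beat writing it from scratch.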
unknown|1 year ago
[deleted]