lakySK | 2 years ago

Definitely experienced all 3, but still getting more boost than drag overall.

The large-codebase context problem might be somewhat solvable; I've seen projects that use embeddings to find the relevant bits of code and feed them to GPT as context. No clue how well any of them work, though, as I haven't tested them yet.
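
For illustration, a minimal sketch of that embeddings approach, assuming the OpenAI embeddings API; the model name, chunking, and k are placeholder choices, not taken from any particular project:

    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    def top_k_chunks(question, chunks, k=5):
        # Embed every code chunk and the question, then rank chunks by
        # cosine similarity (OpenAI embeddings come back unit-normalized,
        # so a plain dot product works).
        chunk_vecs = embed(chunks)
        q_vec = embed([question])[0]
        scores = chunk_vecs @ q_vec
        best = np.argsort(scores)[::-1][:k]
        return [chunks[i] for i in best]

The few chunks that come back then get pasted into the prompt as context, instead of the whole codebase.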

I've definitely noticed times when the conversation gets cut off and it can't "remember" the previous messages. Often it turns into a loop: ChatGPT gives me a solution, I get an error and send it back, and ChatGPT is terribly sorry and suggests a new solution. Repeat three times and we've often come full circle back to the first solution...
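
(For context, the cutoff isn't the model spontaneously forgetting; the client has to drop the oldest turns once the history no longer fits the context window. A rough sketch of that trimming, with a crude word count standing in for a real tokenizer:)

    def trim_history(messages, budget=4000):
        # Keep the most recent messages that fit in the token budget;
        # everything older silently falls off, which is what "forgetting"
        # the earlier conversation looks like from the outside.
        kept, used = [], 0
        for msg in reversed(messages):  # walk from newest to oldest
            cost = len(msg["content"].split())  # stand-in for real tokens
            if used + cost > budget:
                break
            kept.append(msg)
            used += cost
        return list(reversed(kept))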

dinvlad | 2 years ago

I know, right? To me it feels like cheating on an exam: I ask a know-it-all who gives me a straight answer without my understanding it, and then it turns out the know-it-all didn't know it very well either and just appeared confident, so we both failed the exam.

lakySK | 2 years ago

If you treat LLMs as know-it-alls and just copy the output, then your expectations of the current generation of LLMs are too high.

That doesn’t mean they’re not useful though.