
aorona | 1 year ago

I have been using LLMs (ChatGPT, Perplexity, Claude) for development for over a year. They are helpful for summary explanations of concepts and for boilerplate for frameworks and library APIs. But they consistently make errors within those.

It's a great tool and saves a great deal of time, but I have yet to get beyond generating snippets I have to vet, typically finding a made-up library API call or a misunderstanding of my natural language prompt.

I find it hard to pare these LLM-evangelizing articles down into takeaways that improve my day-to-day.

nurettin | 1 year ago

I know it is in the nature of probabilistic neural network outputs, but it almost feels like these commercial models are built to make those mistakes (making up functions/parameters), and it is all a big conspiracy to hide the really useful stuff from the general public.

I started giving the models API docs and headers before interacting with them, and it seems to work a lot better.
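In practice that can be as simple as pasting the real headers into the context ahead of the question, so the model grounds its answer in the actual interface instead of inventing one. A minimal sketch — the helper function, instructions, and doc snippet below are made up for illustration, not any particular vendor's API:

```python
def build_prompt(doc_snippets, question):
    """Assemble a prompt that puts reference material before the question.

    doc_snippets: list of strings (headers, docstrings, API excerpts).
    The instruction telling the model to use only the provided API is an
    assumption about what helps; adjust wording to taste.
    """
    docs = "\n\n".join(doc_snippets)
    return (
        "Use ONLY the API described below. If something is not covered, "
        "say so instead of guessing.\n\n"
        "--- API REFERENCE ---\n"
        f"{docs}\n"
        "--- END REFERENCE ---\n\n"
        f"Question: {question}"
    )


# Example: ground the model in a real C header before asking about it.
header = "int png_decode(const uint8_t *buf, size_t len, image_t *out);"
prompt = build_prompt([header], "How do I decode a PNG held in memory?")
print(prompt)
```

The resulting string is then sent as (part of) the user or system message to whatever model you use; the point is only that the reference text precedes the question in the context window.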