top | item 47182835


whoami4041 | 2 days ago

The key to my point is the word "generating": it implies human input and judgement, i.e. actually typing more SQL than the LLM produces. A model's reasoning and code-generation pipelines are typically two separate paths, so the code it emits may not always match what it intended, which can lead to unexpected results.
