top | item 34644724

vacuumcl | 3 years ago

There are ways that LLMs can self-improve; one example is this paper: https://arxiv.org/abs/2210.11610

I would speculate that there are further ways to train on the logical consistency of a model's output and improve models beyond this.
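The linked paper's core loop is roughly: sample many chain-of-thought answers at high temperature, keep only the rationales whose final answer agrees with the majority vote (self-consistency), and fine-tune the model on those. A minimal sketch of the filtering step, assuming a hypothetical list of (rationale, answer) samples already drawn from the model:

```python
from collections import Counter

def majority_vote_filter(question, samples):
    """Self-consistency filtering: keep only the chain-of-thought samples
    whose final answer matches the majority answer across all samples.
    `samples` is a list of (rationale, answer) pairs produced by
    high-temperature sampling from the model."""
    answers = [answer for _, answer in samples]
    majority, _count = Counter(answers).most_common(1)[0]
    # The retained (question, rationale, answer) triples become the
    # unsupervised fine-tuning set; no human labels are needed.
    kept = [(question, r, a) for r, a in samples if a == majority]
    return majority, kept

# Hypothetical sampled outputs for one question:
samples = [("reasoning A", "4"), ("reasoning B", "4"), ("reasoning C", "5")]
majority, kept = majority_vote_filter("2+2?", samples)
# majority is "4"; two of the three samples survive the filter
```

The interesting part is that agreement among samples acts as a noisy proxy for correctness, which is exactly the kind of consistency signal that might be pushed further.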
