youoy | 3 months ago
Very generous of Tao to say it could be a prompting issue. It always surprises me how easy it is for people to say that the problem is not the LLM, but them. With other types of ML/AI algorithms we don't see this. For example, after a failed attempt or a lower score in a comparison table, no one writes "the following benchmark results may be wrong, and our proposed algorithm may not be the best. We may have messed up the hyperparameter tuning, initialization, train/test split..."