jksk61|1 year ago
Funny paper; I still don't know what its goal was. It is evident to anyone that LLMs can't perform any meaningful reasoning, so why even bother building such an infrastructure to test whether one can become a "scientist"?
kkzz99|1 year ago
It's not, and it's pretty evident to anyone who has actually used SotA LLMs for more than five minutes.
somenameforme|1 year ago
---
LLM: The answer is A.
Me: That's wrong. Try again.
LLM: Oh I'm sorry, you're completely right. The answer is B.
Me: That's wrong. Try again.
LLM: Oh I'm sorry, you're completely right. The answer is A.
Me: Time to short NVDA.
LLM: As an AI language learning model without real-time market data or the ability to predict future stock movements, I can't advise on whether it's an appropriate time to short NVIDIA or any other stock.
---