enceladus06 | 1 month ago
Parsing 100 different scientific articles, or even Google search results, is not going to happen before I get bored and move on. This is the value of an LLM.
Even if the LLM data is used in training or sold off, one way to protect oneself is to add knowingly incorrect data to the chat. You know it's incorrect; the LLM will believe it. Then the narrative is substantially changed.
Or wait like 6 months and the open-source Chinese models [Kimi/Qwen/friends] will have caught up to Claude and Gemini, IMO. Then just run those models quantized locally on Apple Silicon or a GPU.