top | item 39953155

raycat7 | 1 year ago

Regarding your last concern: I found that yuxiaw is their COO [1], so it can't be considered a copy, can it?

[1] https://www.librai.tech/team


vinni2 | 1 year ago

Ok, but a bigger issue is that there is evidence LLMs are not better than specialized models for fact-checking: https://arxiv.org/abs/2402.12147

Xudong | 1 year ago

Hello vinni2, thank you for mentioning the paper. However, I noticed that it hasn't gone through peer review yet. Also, the paper suggests that fine-tuning may work better than in-context learning, but that isn't a problem for this framework: you can fine-tune any LLM, such as GPT-3.5, for this purpose and then use it here. Once you have a fine-tuned GPT, for example, trained on specific data, you only need to change the model name (https://github.com/Libr-AI/OpenFactVerification/blob/8fd1da9...). I believe this approach can lead to better results than what the paper suggests.
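The swap Xudong describes might look roughly like this. This is a hedged sketch, not Loki's actual code (its real config lives at the linked file, and the helper name here is hypothetical); it only illustrates that a fine-tuned OpenAI model is addressed by its identifier in the same field as a base model:

```python
# Illustrative sketch (hypothetical helper, not OpenFactVerification's API):
# the framework ultimately passes a model name to a chat-completion call,
# so pointing it at a fine-tuned model is a one-string change.

DEFAULT_MODEL = "gpt-3.5-turbo"  # base model used out of the box

def build_client_config(model_name: str = DEFAULT_MODEL) -> dict:
    """Return the (illustrative) settings a completion call would use."""
    return {
        "model": model_name,
        "temperature": 0.0,  # deterministic output suits fact-checking
    }

# After fine-tuning, OpenAI returns an id of the form
# "ft:gpt-3.5-turbo:<org>::<suffix>"; substituting it is the whole change.
config = build_client_config("ft:gpt-3.5-turbo:my-org::example")
```

The point is that no framework code changes are needed, only the model-name string.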