How do you handle the well-known limits of LLMs in your especially sensitive use case? Hallucinations are the leading example, and health queries are a particularly bad place for even mild "imagining" of responses.
I completely agree with you! LLMs on their own are not reliable for medical queries, and that's exactly why I built this tool. I use a simple RAG mechanism: the LLM is fed only research papers from trusted sources and summarises them, so every answer is grounded in published research. The goal is to cut down the time needed to research everyday medical questions. I'm still refining the answers, though — many new features are coming soon to make the research process easier.
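The grounding idea described above can be sketched roughly like this: retrieve the most relevant trusted papers for a query, then constrain the model to answer only from those excerpts. This is a minimal illustrative sketch, not the author's actual implementation — the function names, the toy corpus, and the token-overlap scoring (a stand-in for a real embedding similarity search) are all assumptions.

```python
def tokenize(text):
    """Lowercase and split into a set of words (toy stand-in for real embeddings)."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank papers by token overlap with the query and return the top k.
    A production RAG system would use vector similarity search instead."""
    q = tokenize(query)
    ranked = sorted(corpus,
                    key=lambda doc: len(q & tokenize(doc["abstract"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, papers):
    """Build a prompt that restricts the LLM to the retrieved sources,
    so every claim in the answer is grounded in a cited paper."""
    context = "\n".join(f"[{p['title']}] {p['abstract']}" for p in papers)
    return ("Answer using ONLY the excerpts below; "
            "cite the paper title for each claim.\n"
            f"Excerpts:\n{context}\n\nQuestion: {query}")

# Hypothetical two-paper corpus of trusted abstracts.
corpus = [
    {"title": "Vitamin D and immunity",
     "abstract": "vitamin d supplementation and immune response in adults"},
    {"title": "Caffeine and sleep",
     "abstract": "caffeine intake delays sleep onset in controlled trials"},
]

papers = retrieve("does caffeine affect sleep", corpus, k=1)
prompt = build_prompt("does caffeine affect sleep", papers)
```

The resulting `prompt` would then be sent to the LLM; because the instruction forbids answering outside the excerpts, hallucinated claims are easier to detect and reject.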
arunbhatia|8 months ago