DGrechko | 2 months ago

Agree with you on that, it was my concern too, but the way I think about it is access to information: the goal is not to serve hallucinations with a straight face (aka GPT), but to use it as a way to extract necessary information fast. For instance, I have a built-in RAG that reads from a growing collection of books on medical, survival, and related topics (https://github.com/dmitry-grechko/waycore-knowledge), which the AI agent uses to answer questions. It also has a built-in safety loop that always informs users about the accuracy of the information, and if the request has an impact on health & safety, it warns users about that too.

So, I certainly see the inherent risks and problems, but I mostly think of it as a means of information extraction.
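A minimal sketch of what a safety loop like the one described could look like. Everything here is illustrative (the retriever, the keyword-based health & safety check, and all names are assumptions, not the actual waycore-knowledge code):

```python
# Illustrative safety loop: answer from retrieved passages, attach an
# accuracy note, and warn when the query touches health & safety.
# The keyword list and toy retriever are placeholders.
HEALTH_SAFETY_TERMS = {"dose", "dosage", "poison", "bleeding", "medication", "wound"}

def retrieve(query: str, corpus: dict[str, str]) -> list[tuple[str, str]]:
    """Toy retriever: return (title, text) pairs sharing a word with the query."""
    words = set(query.lower().split())
    return [(t, x) for t, x in corpus.items() if words & set(x.lower().split())]

def answer_with_safety_loop(query: str, corpus: dict[str, str]) -> str:
    hits = retrieve(query, corpus)
    if not hits:
        # Refuse rather than guess when nothing was retrieved.
        return "No sources found; I can't answer reliably."
    body = " ".join(text for _, text in hits)
    note = f"Accuracy note: drawn from {len(hits)} source(s); verify before acting."
    msg = f"{body}\n{note}"
    if set(query.lower().split()) & HEALTH_SAFETY_TERMS:
        msg += "\nWarning: this topic affects health & safety; consult a professional."
    return msg
```

The point is that the accuracy note and the safety warning are attached outside the model, so they appear even if the generated answer itself is off.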

freeone3000 | 2 months ago

Putting the lookup in the AI means it can hallucinate the lookup. Putting the assessment of risk in the AI means it can fail on the assessment.

Please reconsider using a full text search index instead.
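For reference, a deterministic lookup of this kind can be sketched with SQLite's FTS5 extension: a query either matches the indexed text or returns nothing, so there's nothing to hallucinate. Table and column names are made up, and this assumes an SQLite build with FTS5 compiled in:

```python
import sqlite3

def build_index(docs: list[tuple[str, str]]) -> sqlite3.Connection:
    """Index (title, body) documents in an in-memory FTS5 table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE VIRTUAL TABLE pages USING fts5(title, body)")
    conn.executemany("INSERT INTO pages VALUES (?, ?)", docs)
    return conn

def search(conn: sqlite3.Connection, query: str, limit: int = 5) -> list[tuple[str, str]]:
    """Return (title, snippet) hits ranked by FTS5's built-in relevance."""
    cur = conn.execute(
        "SELECT title, snippet(pages, 1, '[', ']', '...', 8) "
        "FROM pages WHERE pages MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    )
    return cur.fetchall()
```

An AI layer can still sit on top to summarize the hits, but the retrieval step itself stays exact and auditable.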

DGrechko | 2 months ago

Good point. I’ll add it to the roadmap. I still want to experiment with AI features, as I feel they can add value despite hallucinations, but safety and transparency are crucial - completely agree.